The Two Worst Decisions in New Zealand Science Funding Policy

Disclosure statement: George realised, late but a while ago now, that people can’t always tell when he’s joking. Which doesn’t mean things aren’t serious. He has always been a fan of Rudyard Kipling’s Just So Stories, particularly the one about the ‘Stute Fish and the suspenders and unintended consequences.

The biggest problems in New Zealand science for researchers are the appalling career structures and the increasing bidding pressure which has forced down success rates (and left everyone terribly risk averse). To my mind both problems stem from two decisions taken in the 1990s. And no, one of them wasn’t the breakup of the DSIR. The decision to go to full cost funding, followed by the loss of ‘audit trail’, has led, between them, inevitably to where we are now.

Full cost funding is, on the face of it, a very sensible idea. I can see why you would do it. In full cost funding of research, the body funding the research gives a grant which covers the cost of the research itself (salaries of staff employed just for that project, reagents and so on), as well as a proportion of the salaries of permanent staff, usually the principal investigators, and, on top of that, the cost of maintaining the facilities that the research happens in: the so-called overhead. The overhead pays for buildings, power, lights, car parking and all the non-research staff (secretaries, commercialisation staff, administrators and so on) and the facilities they use that support the research (note, O my Best Beloved, that I do not use the phrase ‘are essential to the research’). Marginal funding, on the other hand, just pays the direct cost of the research (including staff employed just for that project) and sometimes the salaries of the permanent research staff, leaving the institution to carry the overhead.

Full cost funding looks good to policy makers because clearly someone has to pay the overhead costs to support the research: if you want the research done then you should pay for it. It also means that the funder only pays for the share of the overhead that contributes to the research they have paid for. There is no cross subsidisation. So in an ideal world the funder pays for all of what they want done and only what they want done. Very tidy.

However, this, although it is undoubtedly the best of all possible worlds, is very far from being an ideal world. The concept of overheads in full cost funding has two fundamental flaws that undermine the policy rationale and lead to these undesirable and unintended consequences. The first flaw, which was identified by the original instigators of full cost funding, is that you cannot simply pay your share of the overhead cost on facilities for the project you fund, stop when you don’t need them and hope they will be there when you need them again. Research is not like water in a tap that can be turned off in one place and turned on in another when you choose. The underlying expertise, experience, equipment and facilities that make for good (or even halfway decent) research need to be built up and then maintained. Redirection, or worse, relocation, of research effort is slow and expensive. This makes sudden changes in funding that includes overheads hugely damaging. Research staff and equipment cannot be mothballed and sit there at no cost waiting for the research funding to come back, and neither can they be seamlessly assigned to another research area and just get on with it.

And so, as I said, that was recognised, and the concept of ‘audit trail’ was introduced at the same time as full cost funding.

Audit trail is where, when a proposal fails and isn’t funded, the amount of money the research institution receives is reduced in steps, to give researchers a chance to redirect their effort or rebid without the institution losing all its capacity in the meantime. Without audit trail, all your good research staff clear off, the rest get fired and the equipment and facilities rot when the funding runs out. In the meantime the successful bidder is hiring different staff and buying new facilities. In a few years they will be unsuccessful in a bidding round and all the good research staff will clear off...

Which is why I took to referring to the removal of audit trail as the ‘second worst decision in the New Zealand science system.’ Again it was done for the best possible reasons. Trying to redirect research funding becomes very slow and cumbersome with audit trail in place, because the funding agency winds up funding a whole lot of research it has just turned down proposals from, which doesn’t make sense on a superficial analysis. Even more annoyingly for the potential funder, in a limited funding environment this is money that could be spent on research which would have been successful in their funding round. So in the name of efficiency (and don’t forget efficiency, Best Beloved) as well as flexibility, audit trail went. This makes perfect sense in that ideal world where research can instantaneously start and stop, but that, sadly, is not the world in which we do research. The decision gave the unintended effects of full cost funding free rein.

The other major flaw in overheads as part of full cost funding is that they aren’t. By which I mean that direct costs plus overheads is almost never the full cost of research. Calculating the difference between the marginal cost of research and the full cost is very difficult, especially when you are estimating the cost of a six year research programme in advance. So the overhead rates tend to be variously a byzantine calculation vaguely connected to reality, a wild guess or, in the worst case scenario, what you can get away with. This seriously undermines the no-cross-subsidisation rationale for having full cost funding in the first place. And even if you could accurately calculate all the costs, how would you, Best Beloved, determine what is essential to the research? Is the VC’s PA (sorry Vicki) essential to the research? Depends who you ask. The VC says “Totally”; the principal investigator says, “Not so much.” This makes overheads very divisive. The researcher sees all that money lost to their project while the administrators are struggling to keep the lights on in labs. The mathematician in their freezing attic with a pencil glares balefully at the NMR spectroscopist in the basement who is desperately longing for a bigger machine. Everybody is grizzling.

And then it gets nasty. The standard way of calculating overheads is to put a multiplier over the salary costs of the research staff involved in the project. This is simple, like a window tax to fund local bodies: rather than figure out the actual costs of keeping the household/research project going, you just count the most prominent feature that gives a convenient proxy and multiply until you get the amount of money you need. The effect of this is that no one wants mid-career people on their research grant. Young researchers are cheap (and fluffy with big eyes so everybody likes to fund them) and they are good to have on your grant to do the work. Older researchers have good publication records and good contacts with industry (and the funding panels) through their old boys (and hopefully one day soon their old- no, we need a better term – folks? well established gender neutral?) networks, so they are good to have on as well. Especially if you keep them down to 0.15 FTE. The people in the middle, building their careers and paying off their mortgages; not so welcome.
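To make the window-tax arithmetic concrete, here is a toy sketch of how a salary-driven multiplier charges a grant. The multiplier, salaries and FTE fractions are invented for illustration; real overhead rates vary wildly between institutions, which is rather the point.

```python
# Toy illustration of salary-driven overheads. The multiplier and the
# salary/FTE figures below are invented for illustration only.

OVERHEAD_MULTIPLIER = 1.15  # hypothetical: overhead billed at 115% of salary


def cost_on_grant(salary: float, fte: float) -> float:
    """Total charge to the grant for one person: direct salary plus
    overhead calculated as a multiple of that salary."""
    direct = salary * fte
    return direct * (1 + OVERHEAD_MULTIPLIER)


# A postdoc full-time, a mid-career researcher full-time, and a senior
# name kept down to 0.15 FTE:
for label, salary, fte in [
    ("postdoc, 1.0 FTE", 80_000, 1.0),
    ("mid-career, 1.0 FTE", 130_000, 1.0),
    ("senior, 0.15 FTE", 180_000, 0.15),
]:
    print(f"{label:>20}: ${cost_on_grant(salary, fte):,.0f}")
```

Because the overhead scales with salary, the full-time mid-career researcher is by far the most expensive line on the grant, which is exactly the incentive described above.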

Which is trouble for scientific career development. It’s relatively (I said relatively) easy to get funding for students and postgrad level staff and the old guys, but there is nothing in the middle. Funding is uncertain and could be cut off at any moment, with the institutions not able to carry staff who don’t bring in overhead. Add to that that the institutions are not choosing who they hold on to. The granting panels are. OK, that is radically simplified, but the people choosing who gets the overhead support needed to fund the institution are picked by funding panels in Wellington. Their aims and levels of information are very different from the institution’s. The panels are aiming to drive the outcomes of a particular programme, informed by 40-odd pages of Arial 9.5 point and shaved margins (in the hope that no one notices) without paragraph breaks. The institution knows the staff (knowledge tempered by the usual internecine warfare and personal animosities) and is trying to build a long term successful research enterprise. The distortion of careers by institutions ending up with large numbers of short term, soft money, early stage researchers that they can drop when the funding runs out is causing problems everywhere. And this at a time when success rates for funding proposals are in the low twenty percents and the gap between substantial funding rounds (see previous post) is increasing.[1] It’s no wonder the career structure for research in New Zealand is, well, buggered.

And why those success rates? The amount of money in the system has been pretty static. Depending on who you ask, of course; politicians say up a bit, administrators and researchers say down a bit but even in real terms it hasn’t plunged like proposal success rates. The number of researchers hasn’t doubled but still success rates have more than halved.

One reason is overheads. If an institution needs more money – not just money to do research but money to keep going – it needs to bring in more overhead. The long term solution in the ideal world is to do research that better matches the funder’s needs and be more successful in the bidding, but the short term solution in this world is to bid more. Overheads make successful research proposals look like income to keep the institution going, not funding to achieve research outcomes.[2] Looking at overheads as income means there is huge pressure on staff to bid for anything going – regardless of the chance of success. I think this has been going on long enough that people have seen the effects and are getting more strategic, but it’s still a problem.

Another reason is desperation. No successful proposal; no job. No chance to regroup and have another go. The system begins to look like a numbers game at a 23% success rate, where you need to put five or six proposals in to get one funded. But that just isn’t the case. If your research bears no relationship to the request for proposals it simply won’t get funded. Don’t bother. But how can you not when...
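For what it is worth, the naive arithmetic behind the numbers-game reading treats every proposal as an independent draw at the quoted 23% success rate. That independence assumption is exactly what’s wrong with the numbers-game view, but the sums are worth making explicit:

```python
# Naive "numbers game" arithmetic: treat every proposal as an
# independent draw with a 23% chance of success. (Proposals are NOT
# independent coin flips; this is the flawed mental model made explicit.)

p = 0.23  # success rate per proposal


def chance_of_a_grant(n: int) -> float:
    """Probability that at least one of n independent proposals is funded."""
    return 1 - (1 - p) ** n


print(f"expected proposals per funded project: {1 / p:.1f}")
for n in (1, 3, 5, 8):
    print(f"{n} proposals -> {chance_of_a_grant(n):.0%} chance of a grant")
```

Under that (wrong) model, five proposals still leave roughly a one-in-four chance of coming away with nothing at all, which is the desperation in a nutshell.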

All this means that the number of researchers in the system is roughly the same, but the turnover and shuffling around is huge. With each funding round the unsuccessful groups downsize and leave the field or the country, and the successful groups employ another crop of the fluffy and large eyed and import expertise from offshore. How can this waste, disproportionately affecting mid-level and female researchers, possibly be a good idea?

Sorry I’m beginning to shout. I’ll sit down again.

So what can be done about it? It’s always dangerous to give policy advice but I will anyway. There needs to be more block funding. Give institutions enough block funding to run themselves and allow them to plan ahead, keeping their costs within a certain clear budget including salaries for principal investigators and permanent research staff. Marginally fund research projects in established research areas, including costs for students, postgrads and especially postdocs. Only let researchers with permanent posts at institutions lead proposals for these research projects (you fluffy, big eyed people hate me now but you’ll thank me if you are still in science when you take out your first mortgage). Then you could hold the institutions to account on the macro-scale, where outcomes make a difference, and not project by project, where they rarely do.

Use full cost funding (probably without audit trail because it is an ugly work-around) to get new areas going, drive new programmes and give new potential principal investigators a go.

This would allow you to save money by cutting the amount of bidding, along with reporting costs, and reducing flux in the system.

The big issue with seeing these types of ideas implemented? So-called ‘efficiency.’

But Best Beloved, research is not by nature efficient. If you know enough about what the result will be to make it ‘efficient’ then it is not research, it is development. Actual research needs to be effective rather than efficient. In research that means taking risks. You need to give good people the chance to get on with it; make mistakes and run into blind ends but have the time and resources to get there in the end. Even if it wasn’t where you meant to get to. Having your best researchers wasting 30% or more of their time bidding is a much greater loss overall than having even 30% of the research apparently misdirected and ‘inefficient’. One of the major issues with New Zealand research is that it is already too ‘efficient’. Why bother being good when you can be cheap? Nice bibliometrics but crap outcomes.

And keep fashion out of it. Dairy prices are cyclical. Political fortunes are cyclical. Research shouldn’t be. But that is another topic.



[1] I’ll footnote this so there is no compulsion to read it. Look, it isn’t rocket science to figure out that if you fund proposals for mixtures of 2, 4 and 6 years big peaks and troughs of funding coming off contract are going to build up. Cicadas understand this. If the funding terms were 3, 5 and 7 years it wouldn’t happen (at this point people who know me roll their eyes. It’s true though).
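The cicada arithmetic checks out in a few lines. A sketch, assuming every contract starts in year 0 and renews on the same cycle:

```python
from math import lcm


def expiry_load(terms, horizon):
    """For each year 1..horizon, count how many of the funding terms
    come off contract that year (all contracts assumed to start in
    year 0 and renew immediately)."""
    return [sum(1 for t in terms if year % t == 0)
            for year in range(1, horizon + 1)]


# 2/4/6-year terms: expiries pile up, with all three terms landing
# together every lcm(2, 4, 6) = 12 years.
print(expiry_load([2, 4, 6], 12))

# 3/5/7-year terms: no year in the first decade sees more than one
# term expire; all three coincide only every lcm(3, 5, 7) = 105 years.
print(expiry_load([3, 5, 7], 12))
```

With 2/4/6-year terms you get double peaks every other renewal and a triple pile-up every 12 years; with 3/5/7-year terms the first triple coincidence is 105 years away. The cicadas, as noted, got there first.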

[2] I pointed out to one of my clients (as they escorted me from the premises) that research is at best an investment and it is only the things the research is applied to that actually bring in money.
