Research grants and agenda shaping

A chapter published in David M. Allen and James W. Howell (eds.), Groupthink in Science: Greed, Pathological Altruism, Ideology, Competition, and Culture (Springer, 2020), pp. 77-83

Brian Martin


Abstract

Grants are essential for some scientists to do their research, and receiving them can be a mark of status. Grants are supposed to be awarded on merit, but there are many deviations from this ideal. There are a few publicized cases in which grants to dissident scientists have been blocked. Far more common, though difficult to prove, is routine bias in grant committees towards favored applicants and dominant views and against dissidents and competitors. This sort of bias can reflect altruism towards those with personal connections or ideological affinity with grant-givers. Most grant systems serve to orient researchers to the agendas of government and industry. This is a systemic process independent of biases against individuals or topics.

Introduction

In 1969, Clyde Manwell was appointed the second professor of zoology at the University of Adelaide. He came with an outstanding research record. In 1971, he and his wife Ann Baker wrote a letter to the newspaper criticizing aspects of the government’s fruit-fly spraying program, triggering commentary in state parliament. The senior professor of zoology wrote to the Vice-Chancellor, leading to an investigation that could have resulted in Manwell’s dismissal. The saga, which lasted four years before resolution in Manwell’s favor, involved media coverage, court cases, and student protest (Baker, 1986).

Manwell later wrote about his experience with Australia’s leading competitive research grants scheme at the time. Prior to the complaint and publicity, Manwell had received a grant. Afterwards, despite a publication record in the top 2% in his field, his grant was terminated without explanation (a rare occurrence), and his subsequent applications were unsuccessful, at a time when most applications in his field were funded. The implication was that the complaint against Manwell, or his challenge to pesticide orthodoxy, influenced grant assessors or panel members against his applications (Manwell, 1979).

Manwell’s case can be considered a manifestation of altruism leading to unfairness: research grant panels are likely to award money to those who are most like themselves, including their ideas. Manwell had challenged conventional views and therefore was henceforth considered unworthy of support: he had become an “other” rather than one of “us.”

Let’s take a step back and look at the purpose of research grants. Researchers need time and resources to carry out their studies. Most commonly, they receive this via an appointment at a university or research institution, which provides a salary, computing and library facilities, and sometimes a laboratory and support staff. In addition, for extra support, they can apply for research grants.

Grants come in all amounts and from various sources. They can be for $1000 or $10 million. They can be provided by a researcher’s employer or can be “external,” offered by some other organization. Two common types are competitive schemes, in which a panel chooses between numerous applications based on merit, and tied schemes, in which an organization provides funds for projects directly related to its interests. In a typical national competitive scheme, researchers from a wide range of disciplines can apply; applications are judged by experts, rankings are made and grants awarded to the highest ranking applicants. In a typical tied scheme, a grant is given to a chosen researcher on a specified topic, for example to carry out studies for the army or a breast cancer charity. There are all sorts of variants of these two types of schemes. Many tied schemes have some level of competition and some competitive schemes have thematic priorities.

Grant applications range from brief to lengthy and from simple to elaborate. Typically they must follow a template that includes an exposition of the research proposed to be undertaken, a budget, and a listing of the applicant’s achievements. For some schemes, writing an application is a major operation, taking weeks of effort (Graves, Barnett, and Clarke 2011). For external competitive grants, applications may be vetted by superiors and administrative staff to ensure compliance with various requirements as well as to improve the quality of the application.

In principle, the grant system sounds sensible. Money to support research should go to those who undertake the most meritorious projects. However, there are various shortcomings, ranging from bias against individuals and projects to systemic problems due to the grant system itself.

Agenda setting

There are a few other documented examples like Manwell’s (e.g., Horrobin 1974, 1996; Martin 1986), though these are hardly enough to make a strong case that there is extensive bias in awarding grants. The methodological obstacles to investigating bias in grant systems are considerable. Deliberations are usually confidential, and committee members rarely speak out about disputes and problems. More fundamentally, if there is bias among expert assessors and panel members, it may be unconscious, so independent means are required to make judgments about the fairness of grant allocations. The challenge is that competitive grants are awarded on the basis of opinions of experts in the field, so claims about bias usually involve questioning expert judgment on the basis of some other experts or criteria.

Some critics of grant systems argue that there is a systemic bias against innovative projects (Nicholson and Ioannidis 2012). On probability alone, a radical or unorthodox proposal is more likely to be read by assessors close to the mainstream than a mainstream proposal is to be read by unorthodox assessors. Whether or not there is any such bias in grant committees, many applicants feel it is better to play safe, so beliefs about bias against unorthodox research can be self-fulfilling.

Over the years, I’ve had many discussions with colleagues about grant applications, theirs and my own. For many academics, applying for a grant is a strategic enterprise, with the topic, methods and goals chosen to maximize the chance of success. Close attention is given to the members of grant panels, especially their areas of interest. If a particular panel member is likely to take carriage of your proposal, then you may be able to improve your odds by making the application appealing to that individual. When, as occurred periodically, a panel member came for a visit to the university to give a talk about grants committee operations and expectations, many academics would attend, seeking insights into how to tailor their applications to win approval. The implication is that many applicants subordinate what they really want to do and think is important to what they think will win favor with grant bodies.

Systematic slanting in topics funded is most obvious in tied grants, typically offered by corporations or government departments. For example, the military funds a wide range of research, thus having an influence over priorities in fields including oceanography, psychology, and computer science. As well as influencing priorities within fields, grant funding can influence the relative emphasis between fields. Military funds give more priority to nuclear physics than to ecology or law.

Researchers who are not reliant on grants have a greater opportunity to explore areas that serve the public interest, or just their own personal interests. However, the influence of grant systems affects them as well. This is because funding priorities influence the questions seen as important in a field. So, for example, if the computational challenges relating to encryption are given plenty of funding, they move higher on the priority list for other researchers too, and influence editors and even the setting up of journals.

Competitive grant schemes seem on the surface to be less tied to special interests. Competitive schemes typically draw on the expertise of top researchers, and these are the very researchers most likely to have succeeded based on their interest in areas that are well funded and are central to the field. So it is plausible that competitive schemes are inherently biased against dissident or unorthodox views and against research that addresses unfashionable topics. More generally, competitive schemes suffer the same shortcomings as any system reliant on peer review (Bartlett 2011; Bornmann 2011; Horrobin 1990).

There are a number of studies of grant systems and the operations of grant bodies (Lamont 2009; Mow 2010; Thorngate et al. 2009; Wessely 1998). Undoubtedly, nearly all members of such bodies attempt to be fair, awarding grants according to the stated criteria. Also undoubtedly, there are cases involving insider bias, in which panel members award grants to each other, collaborators, or allies. This need not be conscious bias, and it is far more insidious when it is unconscious. (On the social psychology of panel peer review, see Olbrecht and Bornmann 2010.)

Another aspect of grant systems is the enormous effort they entail. Part of this effort occurs in the central administration of the system and in peer assessments. Another, and usually larger, part is the effort required of applicants and their employers. If, for example, writing an application for a competitive scheme requires an effort similar to writing a paper for publication, and the success rate for applications is 20%, then for every grant awarded the equivalent of five papers is sacrificed to the system itself. This might be a non-trivial percentage of the additional papers published as a result of the successful grant, especially considering that grant recipients commonly attribute their outputs to the grant even when these outputs might have occurred anyway.
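To make the arithmetic concrete, here is a minimal sketch in Python. The 20% success rate and the assumption that an application costs about as much effort as a paper come from the example above; the figure for papers attributable to each funded grant is invented purely for illustration.

```python
# Illustrative back-of-envelope calculation of the effort consumed by a
# competitive grant scheme, following the reasoning in the text.
# The 20% success rate and the application-equals-paper assumption follow
# the chapter's example; papers_per_funded_grant is a made-up placeholder.

success_rate = 0.20                 # fraction of applications that are funded
effort_per_application = 1.0        # measured in "paper-equivalents" of effort
papers_per_funded_grant = 10        # hypothetical output attributable to a grant

applications_per_grant = 1 / success_rate                          # 5 applications per grant awarded
papers_forgone = applications_per_grant * effort_per_application   # 5 paper-equivalents sacrificed
overhead_share = papers_forgone / papers_per_funded_grant

print(f"Applications written per funded grant: {applications_per_grant:.0f}")
print(f"Paper-equivalents of effort sacrificed per grant: {papers_forgone:.0f}")
print(f"As a share of the papers attributed to the grant: {overhead_share:.0%}")
```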

It is worth noting that for many researchers, especially in fields not requiring laboratories, their main requirements are time, computers, and library facilities. Additional funding is seldom crucial in the humanities. Yet even in such fields, applying for grants has become a necessary ritual for scholars seeking advancement because obtaining a grant is prestigious, both for the scholar and the institution. Grant successes become surrogates for research excellence even though in practical terms grants are inputs to research, not outputs. Institutions can provide incentives to those obtaining lucrative grants, including teaching relief, promotion, and leadership roles. Star researchers are sometimes lured to other institutions by the promise of a research-only position, support staff, travel funding, and other advantages. There is no similar glorification of star teachers.

The grant system can inadvertently cement the status of successful applicants. Obtaining a grant can create more opportunities for research, helping to develop a track record that in turn enables further grant successes. In this way, an initial slight superiority, or simple good luck, can compound over time into entrenched advantage. Those most likely to benefit are scholars who position themselves at the cutting edge of mainstream or fashionable topics.

Alternatives

The systemic biases in the usual sorts of grant schemes are easier to see when a comparison is made to alternatives. One option is ample funding for researchers as part of their appointments, so no grant applications are required. This would eliminate the excessive overheads of grant administration and grant writing. However, it might be argued, this would not provide sufficient incentive to make efficient use of resources, because poor performers would receive as much support as good ones.

A modification of this method is to provide research support based on productivity: the more papers a scientist produces, or the more citations their work receives, the more internal and/or external funding is provided (Roy 1984). This approach rewards those who achieve by conventional criteria. Special support might be provided to those at the beginning of their research careers, or who are making a major shift in research directions, to enable the building of a record of outcomes. However, this model of funding has no simple way of encouraging innovative, unorthodox projects, because typically it is harder to publish findings for such research. Furthermore, projects with a long gestation would not attract long-term funding: instead, the quest for funding might encourage short-term, superficial projects with quick publication turnarounds.
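As a minimal sketch of how such a productivity-based allocation might work, consider the following Python fragment. The allocation rule, the weights, and the researchers' records are all invented for illustration; neither Roy (1984) nor the text specifies a formula.

```python
# Hypothetical productivity-based allocation: each researcher's share of a
# fixed budget is proportional to a weighted score of papers and citations.
# The weights and records below are invented for illustration only.

budget = 1_000_000  # total funds to distribute

researchers = {
    # name: (papers in last 5 years, citations in last 5 years)
    "A": (12, 300),
    "B": (5, 900),
    "C": (20, 150),
}

def score(papers: int, citations: int, w_papers: float = 1.0, w_citations: float = 0.02) -> float:
    """Combine outputs into a single productivity score (weights are arbitrary)."""
    return w_papers * papers + w_citations * citations

scores = {name: score(p, c) for name, (p, c) in researchers.items()}
total = sum(scores.values())

for name, s in scores.items():
    allocation = budget * s / total
    print(f"{name}: score {s:.1f}, allocation ${allocation:,.0f}")
```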

Another model for funding is to introduce an element of randomness (Fang and Casadevall 2016; Gillies 2014). Applications might be received in the usual way. Any application above a specified minimum of quality would be put in a pool, and successful applications chosen by lot. One advantage of such a scheme would be to enable researchers to pursue what they really want to do, including being as creative as they wish, because every application, assuming it satisfies the minimum criteria, would have an equal chance.

A modification of this model involves a combination of peer review and randomness. For example, each application would be peer reviewed and given a score. The score (perhaps a number from 1 to 10) would determine the number of lottery tickets assigned to the application. A top-quality application would have a better chance of being funded, but even applications seen as inferior by peers would have a chance.
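A minimal sketch of how such a lottery might operate is given below, in Python. With equal weights it reduces to the pure lottery described in the previous paragraph; with score-proportional weights it implements the tickets-in-proportion-to-score variant. The quality threshold, scores, and number of grants are invented for illustration.

```python
import random

# Sketch of a funding lottery. Applications below a minimum quality score are
# excluded; the remainder are drawn at random, with each application's chance
# proportional to its weight. Setting all weights to 1 gives the pure lottery;
# using the peer-review score as the weight gives the modified version.
# Scores and parameters are invented for illustration.

applications = {"A": 9, "B": 7, "C": 4, "D": 8, "E": 6}  # peer-review scores, 1-10
MIN_SCORE = 5        # quality threshold for entering the pool
NUM_GRANTS = 2       # how many grants the scheme can fund

def run_lottery(scores, weighted: bool, rng: random.Random):
    pool = {name: s for name, s in scores.items() if s >= MIN_SCORE}
    names = list(pool)
    weights = [pool[n] if weighted else 1 for n in names]
    winners = []
    for _ in range(min(NUM_GRANTS, len(names))):
        # draw one winner, then remove it so it cannot win twice
        pick = rng.choices(names, weights=weights, k=1)[0]
        idx = names.index(pick)
        winners.append(pick)
        del names[idx], weights[idx]
    return winners

rng = random.Random(42)
print("Pure lottery:    ", run_lottery(applications, weighted=False, rng=rng))
print("Weighted lottery:", run_lottery(applications, weighted=True, rng=rng))
```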

Introducing a grant lottery would make formal what is already happening in many grant schemes that nominally operate entirely according to merit (Bornmann and Daniel 2009; Cole, Cole, and Simon 1981; Graves, Barnett, and Clarke 2011). Peer review introduces an element of luck, as the fate of an application depends sensitively on the choice of peer reviewers. For superlative applications and very poor ones, this may make no difference, but for a large number of good applications, the difference between success and failure may come down to tiny score differences, which means that luck plays an important role (Frank 2016). The illusion that outcomes are based only on merit has some undesirable effects: unsuccessful applicants may become unnecessarily demoralized, and successful ones may falsely believe they are greatly superior to their less fortunate colleagues. When a random element is formally introduced to the selection process, it is easier to rationalize failure as bad luck and harder to claim success as conclusive evidence of superiority.
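The role of luck near the funding cutoff can be illustrated with a toy simulation: the same set of applications is scored by two independently drawn panels of noisy reviewers, and the funded sets are compared. All parameters (number of applications, reviewers per application, noise level, funding rate) are assumptions invented for illustration, not estimates from the studies cited.

```python
import random

# Toy simulation of how the choice of reviewers affects funding outcomes.
# Each application has a "true" merit; each review adds reviewer noise;
# the top-scoring fraction is funded. Re-running the panel with a different
# random draw of reviewers shows how many decisions flip near the cutoff.
# All parameters are invented for illustration.

NUM_APPS = 100
REVIEWS_PER_APP = 3
NOISE = 1.0          # spread of reviewer disagreement, in score units
FUND_FRACTION = 0.2  # top 20% are funded

rng = random.Random(0)
true_merit = [rng.gauss(5.0, 1.0) for _ in range(NUM_APPS)]

def panel_outcome(seed: int) -> set:
    r = random.Random(seed)
    scores = [
        sum(m + r.gauss(0.0, NOISE) for _ in range(REVIEWS_PER_APP)) / REVIEWS_PER_APP
        for m in true_merit
    ]
    cutoff = sorted(scores, reverse=True)[int(NUM_APPS * FUND_FRACTION) - 1]
    return {i for i, s in enumerate(scores) if s >= cutoff}

first, second = panel_outcome(1), panel_outcome(2)
flipped = len(first ^ second)
print(f"Applications funded by one panel but not the other: {flipped}")
```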

It is worth noting that despite billions of dollars of research funding being allocated through competitive research schemes, there is relatively little research into how effective they are in achieving their goals (Demicheli and Di Pietrantonj 2007). It would be possible, for example, to specify several funding models, introduce them in well-defined fields, and measure outcomes years down the track: the usual competitive scheme might be compared with grants awarded on the basis of previous publications. Another possibility, involving some deception, would be to award a portion of grants to applications that did not gain peer support and then, years later, to compare their outputs with those of applications that did. The lack of empirical tests of the effectiveness of grant schemes suggests that they may be serving purposes other than improving research performance.

Conclusion

Research grant schemes are ostensibly intended to improve the quality and quantity of research. Schemes are subject to bias against particular individuals or types of projects, as shown by a few documented cases. However, this sort of bias is less important than the general effects of grant schemes and the increasing priority put on obtaining external research funding.

The creation and expansion of grant schemes may be related to their role as disciplining procedures, to subordinate researchers to outside agendas. This is most obvious for grants tied to particular areas. Most of these sorts of grants are provided by corporations and government agencies, thereby creating pressure to investigate topics and use methods amenable to the funders. Funding of this sort offers an incentive to report findings that do not challenge the agendas of the funders. In what is called the “funding effect,” the results of funder-sponsored research favor the funder’s agenda far more often than the results of independently funded research do (Krimsky 2012). One explanation for this is that researchers realize, often unconsciously, that coming up with results unwelcome to the funder will mean that prospects for future funding are reduced.

Another effect of tied grants is that research areas for which there is less funding are less likely to be investigated. Groups that have little money, such as social justice activists, have little capacity to set research agendas.

The funding effect plays relatively little role in competitive grant schemes, in which decisions are typically made by scholars in relevant fields. These schemes can nevertheless provide a disciplining effect. Scholars, when seeking grants, are likely to slant their proposals to what they believe their peers will think is worthwhile in terms of topics and methods, thereby providing a subtle discouragement of unorthodox approaches. The disciplining effect of competitive schemes thus serves to orient research towards mainstream agendas, thereby serving the more prominent and influential figures in the field. Meanwhile, to the degree that these figures are seeking tied funding, mainstream agendas become oriented to the interests of governments and large corporations.

The increasing prestige of obtaining grant money is strange, at least on the surface, considering that grants are inputs rather than research outputs. If scholars were left to their own devices, they might be tempted to carry out investigations that go in a multitude of directions, including those that challenge elite agendas. This does occur to a certain extent, but is constrained to the extent that appointments, promotions, and honors go to those most successful in obtaining grants. Although researchers see themselves as autonomous, the grant process can contribute to maintaining the “ideological discipline” that they developed during their research training (Schmidt 2000).

Acknowledgements

Thanks to David Allen, Steven Bartlett and Lutz Bornmann for useful feedback.

References

Baker, C. M. A. (1986). The fruit fly papers. In B. Martin, C. M. A. Baker, C. Manwell, and C. Pugh (Eds.), Intellectual suppression: Australian case histories, analysis and responses (pp. 87–113). Sydney: Angus & Robertson.

Bartlett, S. J. (2011). The psychology of abuse in publishing: Peer review and editorial bias. In S. J. Bartlett, Normality does not equal mental health: The need to look elsewhere for standards of good psychological health (pp. 137–160). Santa Barbara, CA: Praeger.

Bornmann, L. (2011). Scientific peer review. Annual Review of Information Science and Technology, 45, 199–245.

Bornmann, L., & Daniel, H.-D. (2009). The luck of the referee draw: The effect of exchanging reviews. Learned Publishing, 22, 117–125.

Cole, S., Cole, J. R., & Simon, G. A. (1981). Chance and consensus in peer review. Science, 214 (20 November), 881–886.

Demicheli, V., & Di Pietrantonj, C. (2007). Peer review for improving the quality of grant applications. Cochrane Database of Systematic Reviews, 2, Art. No. MR000003. doi: 10.1002/14651858.MR000003.pub2.

Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. mBio, 7(2).

Frank, R. H. (2016). Success and luck: Good fortune and the myth of meritocracy. Princeton, NJ: Princeton University Press.

Gillies, D. (2014). Selecting applications for funding: Why random choice is better than peer review. RT. A Journal on Research Policy & Evaluation, 2, 1–14.

Graves, N., Barnett, A. G., & Clarke, P. (2011). Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel. BMJ, 343:d4797. doi: 10.1136/bmj.d4797.

Horrobin, D. F. (1974). Referees and research administrators: Barriers to scientific advance? British Medical Journal, 27 April, 216–218.

Horrobin, D. F. (1990). The philosophical basis of peer review and the suppression of innovation. Journal of the American Medical Association, 263(10), 1438–1441.

Horrobin, D. F. (1996). Peer review of grant applications: A harbinger for mediocrity in clinical research? Lancet, 347 (9 November), 1293–1295.

Krimsky, S. (2012). Do financial conflicts of interest bias research? An inquiry into the “funding effect” hypothesis. Science, Technology, & Human Values, 38(4), 566–587.

Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Cambridge, MA: Harvard University Press.

Manwell, C. (1979). Peer review: A case history from the Australian Research Grants Committee. Search, 10(3), 81–86.

Martin, B. (1986). Bias in awarding research grants. British Medical Journal, 293 (30 August), 550–552.

Mow, K. E. (2010). Inside the black box: Research grant funding and peer review in Australian research councils. Germany: Lambert Academic Publishing.

Nicholson, J. M., & Ioannidis, J. P. A. (2012). Conform and be funded. Nature, 492 (6 December), 34–36.

Olbrecht, M., & Bornmann, L. (2010). Panel peer review of grant applications: What do we know from research in social psychology on judgment and decision-making in groups? Research Evaluation, 19(4), 293–304.

Roy, R. (1984). Alternatives to review by peers: A contribution to the theory of scientific choice. Minerva, 22 (Autumn-Winter), 316–328.

Schmidt, J. (2000). Disciplined minds: A critical look at salaried professionals and the soul-battering system that shapes their lives. Lanham, MD: Rowman & Littlefield.

Thorngate, W., Dawes, R. M., & Foddy, M. (2009). Judging merit. New York: Psychology Press.

Wessely, S. (1998). Peer review of grant applications: What do we know? Lancet, 352 (25 July), 301–305.