Scientific misconduct: falsification, fabrication, and misappropriation of credit

Published in Tracey Bretag (editor), Handbook of Academic Integrity (Singapore: Springer, 2016), pp. 895-911

David L. Vaux

The Walter and Eliza Hall Institute
1G Royal Parade Parkville VIC 3052

Department of Medical Biology
The University of Melbourne

vaux@wehi.edu.au




Abstract

Much published science, especially biomedical science, is not reproducible. While most of this is likely due to sloppy research practices, part of it is due to deliberate falsification or fabrication of data, i.e. research misconduct. Plagiarism is also a form of misconduct, and although it might not cause errors to enter the literature, it undermines trust, creates inefficiencies, and deters honest researchers from careers in science. While a growing number of papers are being retracted, and the biggest reason for retractions is misconduct, it is not clear whether there is an increase in the incidence of misconduct, an increase in awareness, or both. Authors, readers, reviewers, editors, publishers, and institutions all have responsibilities in detecting and managing misconduct, and correcting the literature. To improve the situation, the incentives to fabricate need to be reduced, and rewards for authors, readers, reviewers, editors, publishers, and institutions who do the right thing should be increased. Every country needs to establish research integrity bodies to provide advice and oversight, collect data, and improve codes of practice.

I Introduction: The problem

More is invested in research than ever before, with about a million new publications listed on PubMed each year. However, studies from industry that set out to test reproducibility found that as much as 90% of the research published by academic laboratories cannot be reproduced (Begley & Ellis, 2012; Prinz, Schlange, & Asadullah, 2011). Because repeatable experiments and observations are at the heart of the scientific method, this represents an enormous inefficiency. As well as wasting the time and resources of academic researchers, it has led to financial losses by pharmaceutical companies, both through their own actions and because of faulty published information on which they relied. For example, after researchers at Baylor reported that a type of antihistamine called latrepirdine could improve symptoms in Alzheimer's disease (Doody et al., 2008), Pfizer spent $725 million and carried out a clinical trial involving 600 patients, only to find that the drug didn't work.

As John Ioannidis has convincingly argued (Ioannidis, 2005), much of the lack of reproducibility might be due to publication bias and inappropriate statistical analysis. This, together with sloppily conducted science, probably accounts for the vast majority of the problem. However, it is clear that some of the errors in the literature, and some of the failures of research to be reproducible, are due to research misconduct, i.e. the deliberate fabrication or falsification of results. In surveys of researchers, about 2% admitted to having fabricated or falsified data themselves, and a third admitted to other questionable research practices (Fanelli, 2009).

For example, in 2003 the physicist Jan Hendrik Schön retracted no fewer than seven papers from Nature for scientific misconduct ("Retractions' realities," 2003), eight from Science, and a further six from Physical Review journals. In hindsight, there were abundant warning signs, such as his prodigious output. In 2001 alone he was listed as an author on 40 primary papers.

Similarly, it was implausibly high productivity that gave away cardiologist John Darsee. He authored five major studies in his first 15 months at Harvard, in the lab of renowned cardiologist Eugene Braunwald. Once the true story came out, he had to retract 30 papers and abstracts from his time at Harvard, and another 50 from his earlier time at Emory (Knox, 1983). But even these numbers pale beside those of Joachim Boldt, who has about 90 retractions, and Yoshitaka Fujii, who has 183 (see http://retractionwatch.com/category/yoshitaka-fujii/ and http://retractionwatch.com/2014/01/16/another-retraction-for-former-record-holder-joachim-boldt/).

Errors in research, whether due to misconduct or not, waste the money of funding agencies; it has been estimated that each retracted paper represents a loss of about $400,000 (Stern, Casadevall, Steen, & Fang, 2014).

The problem of misconduct is not limited to academic laboratories. In 2005, The New England Journal of Medicine belatedly published an expression of concern about a Merck-sponsored paper on the trial of Vioxx, which had failed to mention heart attacks in three patients in the trial (Curfman, Morrissey, & Drazen, 2005). In 2004, Merck withdrew the drug, and it later settled legal action with a payment of $4.8 billion (Horton, 2004) <http://www.officialvioxxsettlement.com/>. In testimony to a US Senate investigation, an FDA scientist estimated that as many as 55,000 premature deaths might have been caused by Vioxx.

Two aspects of integrity in research

For research to proceed efficiently, two aspects of scientific integrity need to be fostered. First, there is the integrity of the scientific literature, which can accumulate errors due to inadvertent mistakes as well as to deliberate falsification or fabrication of data, i.e. research misconduct. Secondly, there is the integrity of the scientists themselves, who need to act honestly in how they generate and report data, in how they adhere to ethical regulations, and in how fairly they allocate credit. Plagiarism, for example - the use of another's words or ideas without attribution - might not cause scientific errors to enter the literature, but it is classed as research misconduct, because it is dishonesty in the conduct of research. Similarly, self-plagiarism, in which authors publish the same work more than once, does not introduce errors into the literature, but it amounts to unfairly claiming credit for research productivity. Researchers must also act honestly when conducting peer review of papers and grant applications. If research is not perceived to be a fair process, or if cheating is tolerated, confidence in research as a career, and the willingness of people to engage in it and to fund it, will be undermined.

Growing number of retractions

A bellwether of the problems in academic publishing has been the growing number of retractions. Journals can correct errors in the literature, and alert their readers to problems in published papers, in three ways: they can publish a correction, an editorial note of concern, or a retraction, the last either with or without the authors' consent. The number of papers that are retracted can give an indication of the amount of misconduct, but it is only a very crude measure, both because some papers are retracted due to innocent mistakes, and because authors, journals, and institutions are reluctant to publish retractions, feeling that they damage their reputations.

The web site Retraction Watch <http://retractionwatch.com/> and the journal Nature (Van Noorden, 2011) have both commented on the growing number of retractions. Is this due to an increasing incidence of misconduct, to increased detection, or to both? Although a relatively small proportion of retraction statements say that the reason for the retraction was research misconduct, when Fang et al. followed up to determine the actual reason, they found that the majority (67%) of retractions were attributable to misconduct (Fang, Steen, & Casadevall, 2012). In a subsequent paper, they attributed the increase in retractions to lower barriers to the publication of flawed papers, increased detection (particularly of plagiarism), and a growing willingness of journals to retract (Steen, Casadevall, & Fang, 2013).

The difference between poor practices and misconduct (intent)

Errors in the scientific literature, and the poor reproducibility of research findings, most likely occur for three reasons. Firstly, a small number of errors are due to chance alone. If 20 laboratories all perform the same experiment, the one lab with an anomalous positive result might publish its findings, whereas the 19 labs that did not make this observation would not even submit theirs. A much greater source of errors is sloppy research, with poor controls, lack of blinding, reagents that have not been validated, and so on. These are the "flags" that Begley refers to in his commentary (Begley, 2013). Lastly, there are the errors that arise from deliberate falsification or fabrication of data. These, together with plagiarism, are usually used to define "research misconduct", and the critical element is intent, i.e. that it was done in order to deceive.
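The arithmetic behind this publication-bias scenario is easy to check. A minimal sketch, assuming each of the 20 labs independently tests a true null hypothesis at the conventional p < 0.05 threshold:

```python
# Chance that at least one of 20 labs sees a spurious "significant"
# result when the effect being tested does not actually exist.
p_false_positive = 0.05   # conventional significance threshold
n_labs = 20

p_at_least_one = 1 - (1 - p_false_positive) ** n_labs
print(f"P(at least one lab gets a false positive) = {p_at_least_one:.2f}")
# prints ~0.64: it is more likely than not that one lab will have an
# anomalous positive result to publish, while the other labs stay silent
```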

Although all research misconduct shares the common features of being deliberate and dishonest, its seriousness varies enormously, from the very minor, such as deliberately failing to cite competitors, to the extremely serious, such as falsifying data in ways that endanger the lives of human research subjects.

The Singapore Statement

In 2010, the second World Conference on Research Integrity produced The Singapore Statement on Research Integrity <http://www.singaporestatement.org/>. It provides a concise description of how researchers should behave, based on principles of honesty, accountability, fairness, and good stewardship. Among 14 listed responsibilities, it cites the importance of reporting findings fully, maintaining records, including as authors all those, and only those, who meet the criteria applicable to the research field, giving credit to those who have contributed but are not authors, and declaring conflicts of interest.

II Incentives

The main motivations for misconduct are, at base, either financial or reputational. As fewer and fewer researchers hold tenured positions, and more and more rely on competitive grants to fund both their salaries and their laboratory costs, scientists know that if they don't keep publishing, their careers will be at an end. This is compounded when funding is based on non-objective measures, or on simplified metrics such as the volume of publications rather than their quality. Similarly, students and post-doctoral researchers know that if their experiments fail, they won't get publications, and their next career step will be jeopardized. Foreign students and post-docs know that a successful experiment published in a prominent journal can lead to residency and citizenship, and perhaps a tenure-track position, whereas experiments that fail to produce the hoped-for result will mean they have to return to their home country. Thus the temptation to dishonestly generate experimental results is ultimately financial, but the aim is rarely to gain riches; more frequently it is just to keep a job (Kornfeld, 2012).

Amongst more senior researchers, including those who have job security, there are strong incentives to build a reputation: to publish consistently in high-profile journals, to be invited to give plenary talks at international meetings, to be elected to academies, and to be awarded prizes.

Such pressures have not only tempted researchers to fabricate papers, they have also led some to corrupt the peer review process, by tricking editors so that they act as referees for their own manuscripts (Ferguson, Marcus & Oransky, 2014).

The case summaries from the US Office of Research Integrity give some insight into how research misconduct occurs, how it is (sometimes) brought to light, and what sorts of penalties are applied. For example, Dr Jun Fu was a post-doctoral fellow at the University of Texas MD Anderson Cancer Center <https://ori.hhs.gov/content/case-summary-fu-jun>. Having admitted to intentionally falsifying a figure in a research publication, he entered into a two-year voluntary settlement agreement under which his research was to be supervised and certified by his employing institution, and he could not sit on grant review committees. Adam Marcus and Ivan Oransky discuss the penalties handed down to those found guilty in an article in the New York Times: <http://www.nytimes.com/2014/07/11/opinion/crack-down-on-scientific-fraudsters.html?_r=0>

Fabrication/falsification

It is important to realize that there is a wide spectrum of severity of research misconduct. At the less severe end of the scale are practices such as intentionally failing to cite the work of competitors, or citing your own work more frequently than necessary. Similarly, cropping out cross-reactive bands in Western blots, or changing the white threshold of an image to "clean up" the background, should not be done, because it alters the original data, but it is a relatively mild sin. At the other end of the scale is the generation of data by simply making up numbers, or the creation of false images by duplicating, altering, and relabeling existing ones.

In determining the severity of the misconduct, or whether it is misconduct at all, it is important to determine the degree of intent, although this is not always easy.

Figures in papers are often composed of many similar-looking parts, whether photomicrographs, gels and blots, flow cytometry plots, or traces from a patch-clamp amplifier. It is therefore always possible for someone to inadvertently grab the same image file twice, leading to a duplicated and wrongly labelled part of a figure. On the other hand, if many duplications are found in the figures in a paper, and they also involve rotations, differential cropping, or mirror images, and if similar anomalies are also apparent in other works by the same authors, deliberate falsification or fabrication is much more likely. With increased pressure to publish, and the availability of image processing software, the temptation to cut corners and artificially generate the desired result has never been greater (Rossner & Yamada, 2004). Hundreds of examples can be found on the post-publication peer review site PubPeer <https://pubpeer.com/>. However, although sites such as this can alert readers to concerns about research papers, and can provide very strong evidence, they don't provide proof of intent, or reveal which of the authors on a multi-author paper bears responsibility. For this, action is required either by the authors themselves or through the establishment of an inquiry by their institution.

For the last decade or so, many journals have explicitly stated in their guidelines to authors which kinds of image manipulation are acceptable and which are not. The Journal of Cell Biology (JCB) has shown leadership in this area <http://jcb.rupress.org/site/misc/ifora.xhtml>. Currently, however, even those journals that do have clear guidelines vary in how rigorously they ensure compliance, or publish corrections when authors breach them.

III Stealing credit

The importance of obtaining credit for work is illustrated by the frequency and vehemence of authorship disputes. Papers are the primary currency of research, and authorship is therefore the main mechanism for determining how credit is allocated. Authorship therefore gives benefits, but also carries responsibilities (Strange, 2008).

Like other forms of misbehaviour, authorship issues can range from the trivial to the serious, with plagiarism – the taking of another’s words or ideas without attribution – being classified as "research misconduct," along with fabrication and falsification. The reason authorship is so important is because it is the currency that determines not only honours such as prizes and membership of academies, but also the grants and fellowships that pay the researcher’s salary.

In life science publications from academic institutions, the first author is usually the student or post-doc who did most of the hands-on experimental work. The last author is typically the laboratory head. Usually, authors in between will be closer to the first position if they have contributed experimental data, and closer to the last position if they have provided analysis and writing.

Peter Lawrence highlighted the problem of misallocation of credit in a Commentary in Nature in 2002 (Lawrence, 2002). He listed many examples in which senior researchers, who were not even present when discoveries were made, nevertheless received the accolades for the breakthrough, often to the exclusion of the more junior colleagues who actually did the work. This phenomenon has been termed "The Matthew Effect" (Merton, 1968). Merton described the inappropriate flow of credit from the junior researchers who produce the work to senior researchers who do not, citing the Gospel of Matthew:

"For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath." Matthew 25:29.

Although there have been calls for many years to improve the unfair power structures that operate in science to determine how credit is allocated ("On being a scientist. Committee on the Conduct of Science, National Academy of Sciences of the United States of America," 1989), little change has occurred.

Ghost and Honorary Authorship

Two of the unethical ways in which authorship is corrupted are known as "Ghost" and "Honorary" authorship. Ghost authorship is when someone who would fulfil the usual requirements to be listed as an author - namely to have provided substantial intellectual input to a paper – is not named amongst the authors. Pharmaceutical companies have used ghost authorship as a way of hiding their role in a publication.  For example, Merck was accused of using ghost writers for papers published about its anti-arthritic drug Vioxx, which allowed them to avoid disclosing relevant financial relations (Ross, Hill, Egilman, & Krumholz, 2008).

Honorary authorship is when an author is listed without having fulfilled the usual requirements to justify their inclusion, i.e. where they have not made a substantial intellectual contribution to the paper. Sometimes when drug companies write papers, they offer honorary authorships to "opinion leaders" in order to influence clinicians. For example, internal documents released by Wyeth to an inquiry by the US Senate showed that the company commissioned papers about its hormone replacement drug Premarin, and then recruited prominent clinicians to act as the authors, whereas those from DesignWrite, the company that wrote the papers, were not listed <http://www.grassley.senate.gov>.

Honorary inclusion as an author can also be claimed by department or laboratory heads for work that they have not produced themselves, or it can be offered to friends or collaborators to curry favour. The honorary inclusion of a famous person or someone known to the journal’s editors can increase the chances that a paper is sent out for review. Honorary authorship on one paper can be offered by a group leader in exchange for honorary inclusion as an author on another group’s paper.

Whether honorary or ghost authorship is classed as research misconduct varies among nations. Under the Australian Code for the Responsible Conduct of Research, obtaining grant funding, general supervision, or the provision of reagents by third parties do not in and of themselves justify inclusion as an author, and, moreover, inappropriate authorship is listed as an example of research misconduct. In contrast, in the USA, authorship issues (other than plagiarism) are not considered to be misconduct by the Office of Research Integrity.

In a positive move, many journals are now adopting the practice of listing the specific contributions of each of the authors. This discourages honorary authorship, makes it easier to know who should receive the most credit, and, if an error is subsequently found in the paper, helps to determine who might be responsible. For example, in the paper by Kapoor et al. (Kapoor et al., 2014) there are 30 authors listed, but the "Author Contributions" section mentions only three of them, leaving readers to wonder what the other 27 contributed.

IV What to look for (red flags)

People can become aware of accidental errors, or possibly deliberate research misconduct, in two ways. Firstly, they might notice misbehaviour by a colleague or a co-author. Alternatively, they might see something as a third party: when reading a paper, reviewing a manuscript for a journal, or acting as an editor.

Whether it is before a paper is written, or after it is submitted and published, the earlier errors are noticed and corrected the better. When criticising work at lab meetings, during manuscript review, or when reading published papers, there are a number of "red flags" that can signal sloppy science or possible misconduct. 

Similar text, which may amount to plagiarism, can be detected by simple Google searches, or by commercial software available at many institutions (e.g. "iThenticate" <http://www.ithenticate.com/> and "Turnitin" <http://turnitin.com/>).
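Commercial services match submissions against large databases of published text; the underlying idea of scoring textual similarity can be illustrated with a minimal sketch using only the Python standard library (the passages and the 0.8 threshold are invented for illustration):

```python
# Toy text-similarity check: flag passages that are nearly identical.
# Real plagiarism-detection tools use document fingerprinting against
# massive databases; this only shows the core idea of a similarity score.
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    """Return a similarity ratio between 0 (unrelated) and 1 (identical)."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

original = "Apoptosis is a form of programmed cell death that removes damaged cells."
suspect = "Apoptosis is a type of programmed cell death which removes damaged cells."

score = similarity(original, suspect)
print(f"similarity = {score:.2f}")
if score > 0.8:  # illustrative threshold, not an accepted standard
    print("Near-identical text - flag for manual comparison")
```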

Sloppy statistics can also be a giveaway: for example, failing to describe the type of error bars shown in figures, or results that look implausibly consistent.
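One concrete screen for implausibly consistent numbers, a technique forensic statisticians have applied in misconduct investigations, is terminal-digit analysis: in genuinely measured data the final digits of recorded values are usually close to uniformly distributed, whereas invented numbers often show digit preference. A minimal sketch, assuming SciPy is available (the values below are made up for illustration):

```python
# Terminal-digit analysis: test whether the last digits of reported
# values are consistent with a uniform distribution. A very small
# p-value suggests digit preference - a red flag, not proof of fraud.
from collections import Counter
from scipy.stats import chisquare

# Invented example data: note how the values cluster on 1, 3, and 7.
reported = [127, 133, 141, 121, 137, 131, 147, 151, 111, 117,
            123, 143, 157, 113, 137, 127, 131, 141, 121, 117]

counts = Counter(v % 10 for v in reported)
observed = [counts.get(d, 0) for d in range(10)]

stat, p = chisquare(observed)  # default expectation: uniform over 0-9
print(f"chi-square = {stat:.1f}, p = {p:.4f}")
```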

Images should be looked at on a computer screen, rather than on a printed copy, because the resolution is greater, it is possible to zoom in, and the contrast and brightness can be altered. Things that should raise concern include sudden linear changes in brightness of the background of an image, a washed-out or perfectly uniform background, inadequate resolution, or parts of an image that appear to be duplicated. For more examples, see PubPeer <https://pubpeer.com/>, and papers by Vaux and Begley (Begley, 2013; Vaux, 2008).
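Software can assist with some of these checks. The sketch below, assuming numpy and Pillow are installed and using a hypothetical file name, flags two of the simplest warning signs: perfectly uniform background tiles and exactly duplicated regions. Screening by journals is performed by trained staff, and catches manipulations (rotations, rescaling, contrast adjustment) that this naive exact-match check would miss.

```python
# Naive image screen: flag perfectly uniform tiles (possible "washed out"
# background) and byte-identical duplicate tiles within one image.
import numpy as np
from PIL import Image

def screen_image(path: str, tile: int = 32) -> None:
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    seen = {}  # tile bytes -> location where first seen
    h, w = img.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            block = img[y:y + tile, x:x + tile]
            if block.std() == 0.0:
                print(f"Perfectly uniform tile at ({x}, {y})")
            key = block.tobytes()
            if key in seen:
                print(f"Tile at ({x}, {y}) duplicates tile at {seen[key]}")
            else:
                seen[key] = (x, y)

screen_image("figure_panel.tif")  # hypothetical file name
```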

Researchers have a duty to take action if they become aware of errors or possible research misconduct. If they notice a mistake in one of their own publications, they should write to the journal and ask it to publish a correction, or, if the mistake affects the conclusions of the paper, ask for it to be retracted. If a colleague is suspected of error or misconduct, the action to take will depend on the specific circumstances, such as whether a publication is involved, whether the colleague is more senior or more junior, and whether the error is thought to be accidental or deliberate. Well-run institutions have mechanisms in place so that researchers can easily obtain advice on what to do.

If an error in a publication is found by a third party, the options include contacting one or more of the authors, a responsible person at the host institution, or the journal editors; posting a post-publication peer review comment on the web; and/or contacting the national integrity office (if there is one).

V Peer review and the responsibilities of journals

In the general journals (e.g. Nature, Science, PNAS), and most of the life science journals, manuscripts are submitted online, and are first seen by a member of the editorial board. In the high-profile general journals, the editors are full-time paid employees of the publisher. In the other journals, the editors are usually part-time, and may be paid or volunteers, but will usually be prominent researchers with expertise in the field covered by the journal.

The first decision the editor needs to make is whether to send the manuscript out for review. Although in an ideal world this decision would be made on the basis of the scientific content of the paper, editors are often busy, and make the decision without reading the paper, just on the basis of the title and abstract, and of whether the authors are known to them or come from an institution they respect. This arises much more frequently in the high-profile general journals, because there the editors will seldom have deep expertise in the area the paper addresses. In other words, publication bias can arise because editors often do not base their decisions on the science alone. The influence particular authors can have on the decision to consider a paper for publication, and the biases against papers from authors or institutions that are unknown to the editors, are illustrated by the Korean stem cell case.

In the years prior to publication of the papers that were later found to be fabricated, Korean stem cell expert Dr. Woo Suk Hwang had trouble publishing his work in high profile journals. They would usually refuse even to send his manuscripts out for review. When Hwang met Dr. Gerald Schatten, a prominent stem cell researcher from the University of Pittsburgh, Schatten offered to help Hwang get his papers published in journals such as Nature and Science, in exchange for being listed as an author. When the story subsequently broke that the two papers in Science had been fabricated, Schatten's defence was that he had not participated in or overseen any aspect of the work, and had not interacted with most of the scientists who did the experiments. He also claimed to have had minimal involvement with another co-authored paper in Nature (Marris & Check, 2006). The lesson from this episode is that there is bias in what gets published. Acceptance of a paper - especially in the high profile journals - is based more on who the authors are and where they come from than on the quality of the scientific content.

In this single-blind process, which operates in most scientific journals, the same problem arises with the reviewers. If an editor does send a manuscript out for review, knowing who the authors are might influence whether they choose reviewers who they think are extra tough or extra lax. When the reviewers receive the manuscript, the first thing they will look at will be the names of the authors, and if they are known to them, or are collaborators or competitors, it might influence their attitude to the paper.

In the 19 February 2004 edition of Nature, there were 10 papers with figures that showed error bars, but only three of the papers described anywhere what the error bars were (Vaux, 2004). This suggests that seven of the ten papers had not been read carefully by the authors, reviewers, or editors. As the decision to publish had been made without the papers being read carefully, it must have rested on something else; the most likely explanation is reviewer and editor bias.

Journals should screen the data in all accepted papers prior to publication

Journals play several important roles in ensuring the integrity of scientific research. They have the final say in publishing corrections, editorial notes of concern, and retractions. As gate-keepers for what gets published, they can prevent erroneous or falsified papers from appearing, but to do so they must operate a rigorous peer review process. If journals are alerted to potential problems by reviewers or readers, determining the validity of the allegations, and which of the authors is responsible, usually requires cooperation with the authors' institutions; this cooperation might not be requested, and if it is, might not be granted.

With leadership from Dr Mike Rossner, the JCB has been an innovator in adopting methods to prevent publication of erroneous figures (Rossner, 2006). The JCB routinely screens the images and figures in all manuscripts accepted by the reviewers but prior to publication, looking for inadequate resolution, sudden changes in brightness, loss of visibility of the background, over-enhancement of contrast, etc. It finds that for about 25% of papers it needs to ask the authors for the original data and to re-make a figure, and in about 1% of cases it revokes acceptance. Practices at other journals vary: Nature checks the images in just two of the articles in each edition; Science relies mainly on its reviewers to identify problems; the Journal of Biological Chemistry has adopted many of the same author and review guidelines as the JCB, but doesn't routinely ensure compliance (Couzin, 2006). However, almost all journals have now at least published image guidelines, so authors know up-front what minimum resolution is acceptable, whether and how images can be altered, cropped, and spliced, and how statistics should be described.

COPE

COPE, the Committee on Publication Ethics, has been a great source of advice for journal editors since its establishment in 1997. Although its mandate is limited, and it was established by journal editors to help other editors, its efforts have raised the standards of publication integrity, and also provided benefits that have flowed on to authors, publishers and institutions. For example, the COPE flowcharts, which give step by step recommendations on how to handle a variety of misconduct related issues, have been helpful to countless editors, and have also helped whistle-blowers and authors know what to expect <http://publicationethics.org/resources/flowcharts>.

VI Responsibilities of institutions

The way the host institution manages allegations of research misconduct is critical, but is often handled sub-optimally, not least due to conflicts of interest (such as fear of reputational damage), and lack of experience and established protocols.

In trying to avoid reputational damage when a case of research misconduct becomes public, an institution can risk even greater damage by engaging in a cover-up. Yet the institutions play an essential role, because unlike the publishers and readers of research papers, the institutions have access to the authors’ original data, and can individually interview each of the authors to try to determine which ones were responsible for any mistakes or misconduct.

Institutions can hear concerns of possible research misconduct from outsiders, such as journal editors or readers of papers or grant applications, or they can be contacted by a whistle-blower, who might be a member of the same institution, or even a close colleague of the person being accused. In many countries, investigations have two phases. First, there is a preliminary investigation, with collection and securing of evidence. The main goal of this stage is to determine whether the allegations lack substance and can be dismissed. If the case cannot be dismissed, the investigation should continue to a more thorough stage.

Unless the case can be summarily dismissed, e.g. because it is apparent the allegations were mistaken, issues that now need to be addressed on a case-by-case basis include: is it possible to proceed with the allegations anonymous, or does the name of the accuser need to be revealed? When is it best to inform the person against whom the allegation is made? What sort of supporters and advisors should be appointed to counsel and assist both the complainant and the accused? Which people should be interviewed, and who should be present? Should the investigation be external and independent, or internal? At what stage should other interested parties, such as funding bodies and journals, be informed? Do any expert investigators or witnesses need to be consulted? Who should be indemnified? When and what kind of legal advice should be sought?

Much useful advice on conducting investigations can be found at the ORI site <http://ori.hhs.gov/investigations>. As outlined in this website, the key goal of the investigation is to substantiate or refute the allegation. The investigation must be carried out without regard to the motivation or status of the accuser, and the inquiry panel is responsible for gathering and assessing the evidence, and conducting the case. The burden of proof lies with the inquiry panel, not the accuser. The terms of reference for the inquiries should not be set narrowly, so that if, during their investigations, the panel uncovers additional evidence of misconduct, they can extend their investigations until all related instances of misconduct are uncovered.

Once the inquiry panel has made its findings of fact, the host institution has to determine the best means of restitution. The institution bears responsibilities to the scientific public and to the journals, which can be fulfilled by correcting or retracting publications. It also has responsibilities to funding bodies, which might involve alerting them, or returning funds. It has a responsibility to those who have been found to have engaged in misconduct, to provide sanctions that are proportionate and, ideally, a path to reform. If mistreatment of animal or human research subjects was involved, it has a responsibility to determine what went wrong in its governance, so that similar failures won't be repeated.

Unfortunately, when an allegation is forwarded to an institutional official, that official may not have much experience in handling such cases. Where there is a national office or ombudsman for research integrity, the official can seek advice from it, but in countries where there is no such body, institutional officials administering cases of potential misconduct can find themselves alone, which makes mistakes much more likely.

Considering the two aspects of research integrity (integrity of the scientific record, and integrity in the practice of science), the role institutions can play in upholding the former is the more straightforward, as it involves publishing corrections or retractions; here, a cooperative relationship with the journals is essential. COPE has published guidelines for cooperation between research institutions and journals on research integrity cases (Wager & Kleinert, 2012). They recommend that institutions:

• have a research integrity officer (or office) and publish their contact details prominently;
• inform journals about cases of proven misconduct that affect the reliability or attribution of work that they have published;
• respond to journals if they request information about issues, such as disputed authorship, misleading reporting, competing interests, or other factors, including honest errors, that could affect the reliability of published work;
• initiate inquiries into allegations of research misconduct or unacceptable publication practice raised by journals;
• have policies supporting responsible research conduct and systems in place for investigating suspected research misconduct.

The path to upholding integrity in the practice of science is less straightforward. The over-arching principle is that research should be conducted honestly, and credit should be awarded fairly ("On being a scientist. Committee on the Conduct of Science, National Academy of Sciences of the United States of America," 1989). For this, education and classes in research integrity principles will have less impact than researchers and administrators leading by example: having procedures in place to handle allegations of misconduct, managing such cases efficiently, and not tolerating those who cheat. Those in countries with research integrity offices or ombudsmen have a source of advice on how to make allegations of misconduct and how to conduct investigations. Integrity offices also provide oversight to ensure allegations of misconduct are handled appropriately. In some countries, such as the US and UK, the national science academies play an active leadership role in upholding research integrity, for example by publishing articles on research ethics and mentoring, such as that mentioned above, or by discussing research ethics online <http://blogs.royalsociety.org/in-verba/author/elizabethb/>. Others should follow their example, or set even higher standards.

Institutions are wise to have procedures in place that anticipate the occurrence of research misconduct. Education and the promotion of integrity principles are, on their own, unlikely to prevent all occurrences of research misconduct (Kornfeld, 2012); measures to ensure compliance are also required. Heavy-handed, restrictive "big brother" approaches are expensive to implement, and are likely to cause resentment. The "fire alarm" approach to handling misconduct is both cheap and likely to be effective.

In the "fire alarm" model, researchers are not required to know how to investigate and manage cases of misconduct themselves, they are just required to "push the alarm button" to summon help when they see something that causes them concern. The key requirements of this model is that everyone must know how to sound the alarm, and once the alarm is sounded, the institution must have protocols in place to take action. The "fire alarm" model is relatively cheap to operate (for example compared to a surveillance model), empowers whistle-blowers, and is less likely to generate antagonism with administrators than other systems. In addition, as colleagues are most likely to spot problems, have the knowledge to distinguish what is acceptable in their particular field, and may see things early, the fire alarm model is more likely to minimize the amount of damage that occurs. While the fire alarm model, could, like all other models, be abused, for example if opponents of a scientist make multiple complaints as a form of harassment, whether action is taken against the accused person, or whether they are even informed of the allegation, would depend on the nature of the allegation, and the strength of the evidence provided by the whistle-blower or as part of a preliminary investigation.

VII Roles and responsibilities of whistle-blowers/individuals

As written in the Singapore Statement on Research Integrity <http://www.singaporestatement.org/>, researchers have a duty to report to the appropriate authorities any suspected research misconduct, and other irresponsible research practices, that undermine the trustworthiness of research.

The best way for researchers to fulfil this duty is complex, and depends greatly on circumstance. Issues that need to be considered include:

• anonymity (whether the whistle-blower’s name needs to be revealed)
• who to raise concerns with (journal editors, authors, institutional officials, national research integrity offices, department heads, the individual who is suspected, post-publication peer review sites such as PubPeer, PubMed comments, or funding bodies).
• the position of the whistle-blower in the hierarchy.
• whether delay could cause harm to human subjects or experimental animals
• the nature of potential conflicts of interest
• the prevailing legal environment, and whether it protects free speech.

Just as all researchers have a duty to report concerns of possible research misconduct, all would be wise to seek advice first. A search of the web provides links to many national whistle-blower organizations.

In Australia, the Code for the Responsible Conduct of Research states that institutions must appoint one or more "advisers in research integrity", so that those who have concerns can get confidential advice. The advisers inform the individual what options they have, and, for example, how to make a formal allegation. The adviser’s role is one of support; they are not to investigate the case.

VIII     The way forward

The increasing number of retractions indicates a growing awareness of issues of research integrity, and new avenues for reporting concerns. The web has made anonymous post-publication peer review possible, on sites such as PubPeer. Individual scandals have prompted the strengthening of practices that promote research integrity in a number of countries, and this has led to the establishment of offices for research integrity (ORIs) or research integrity ombudsmen. It has culminated in the series of World Conferences on Research Integrity <http://wcri2015.org/>, held every few years since the first in Lisbon, Portugal, in 2007.

The promise of the web

In recent years, alarm about falling integrity in science has prompted a number of positive responses. The growth of the internet has made it possible for bloggers to raise concerns anonymously. For example, it was concerns initially raised in a blog, and then publicised in the popular media, that ultimately led to the retraction of Woo Suk Hwang's stem cell papers (Kennedy, 2006). Blogs reporting allegations of research misconduct, such as the Abnormal Science blog <http://ktwop.com/tag/abnormal-science/>, 11jigen's blog <http://katolab-imagefraud.blogspot.com.au/>, and Paul Brookes's Science Fraud blog (which was closed down following legal threats), have given way to more organised post-publication peer review sites, such as PubPeer <https://pubpeer.com/>. PubPeer allows concerns about any published paper to be raised anonymously, and automatically contacts the authors and invites them to respond. PubPeer has itself been threatened with legal action demanding that it release the names of registered commenters, but the strong freedom-of-speech laws in the US give it more protection than it would have in other countries.

World Conferences and National Offices for Research Integrity

There have been three World Conferences on Research Integrity, with a fourth planned for 2015 in Rio de Janeiro, Brazil <http://www.researchintegrity.org/>. These conferences not only provide an opportunity for researchers, administrators, editors, and publishers to air their concerns and propose possible solutions; they also allow the latest research into scientific integrity to be discussed. The second World Conference on Research Integrity, in Singapore, produced the Singapore Statement <http://www.singaporestatement.org/>, which succinctly describes 14 responsibilities of scientists, and how they flow from a set of four principles.

Several countries have established national offices for research integrity (ORIs) or ombudsmen for research integrity. The ORI in the US <http://ori.hhs.gov/> will oversee any allegation of misconduct involving NIH-funded research in the previous five years. The NSF has a similar office where concerns can be lodged <http://www.nsf.gov/oig/hotline.jsp>. In Germany, the DFG has an ombudsman for research integrity <http://www.ombudsman-fuer-die-wissenschaft.de/>. Denmark has Committees on Scientific Dishonesty <http://ufm.dk/en/research-and-innovation/councils-and-commissions/the-danish-committees-on-scientific-dishonesty>. Countries that do not have a national office that can handle confidential reports of possible research misconduct leave its management in the hands of the research institutions, where serious conflicts of interest almost inevitably arise.

Improving scientific integrity in publishing

Double-blind peer review (DBR) offers one way of reducing publication bias. In DBR, the authors' names and affiliations are submitted on a web page that is presented neither to the editor who decides whether the paper should be sent out for review, nor to the reviewers themselves. Both are left to give their opinions on the merits of the science alone, not on whether they know the authors. Like the double-blind clinical trial, DBR is an innovation that attempts to reduce bias and increase objectivity in scientific publication (Vaux, 2011). Post-publication peer review, whether on a dedicated site such as PubPeer, as part of PubMed Commons <http://www.ncbi.nlm.nih.gov/pubmedcommons/>, or on a site hosted by the publisher, should improve the integrity of the literature, and de-emphasize the published paper as the be-all and end-all of career advancement.

IX        Summary

Although it remains true that science is ultimately self-correcting, society as a whole will benefit more, and progress will be more rapid, if research is conducted efficiently. To do so requires minimising the number of errors that enter the literature, and quickly correcting those that inevitably do. Research will also be performed more efficiently if those who conduct it are fair and honest. However, as a human endeavour, science must be managed actively for its integrity to be upheld. This requires not only a bottom-up, "grass roots" effort based on principles of honesty and fairness, it also requires some top-down mechanisms to ensure compliance. There must be mechanisms in place so that errors and concerns of possible misconduct can be reported. Publishers should try to minimize entry of errors into the literature by screening manuscripts and using unbiased peer review, and should cooperate with institutions when problems arise. Nations and national scientific academies should provide mechanisms to offer advice and oversight for research institutions. Researchers need to have integrity in how they conduct themselves, and whether it is through official channels or anonymously via the web, when they see errors or have concerns about possible misconduct, they should, after seeking advice, speak up.

X Acknowledgements

The author would like to thank Ivan Oransky for constructive comments, and the NHMRC (Grants 1016701 and 1020136) for funding. This work was made possible through Victorian State Government Operational Infrastructure Support and Australian Government NHMRC Independent Research Institute Infrastructure Support Scheme (IRIISS) Grant 361646.

References

Begley, C. G. (2013). Six red flags for suspect work. Nature, 497(7450), 433-434. doi: 10.1038/497433a

Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531-533. doi: 10.1038/483531a

Couzin, J. (2006). Scientific publishing. Don't pretty up that picture just yet. Science, 314(5807), 1866-1868.

Curfman, G. D., Morrissey, S., & Drazen, J. M. (2005). Expression of concern: Bombardier et al., "Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis," N Engl J Med 2000;343:1520-8. N Engl J Med, 353(26), 2813-2814.

Doody, R. S., Gavrilova, S. I., Sano, M., Thomas, R. G., Aisen, P. S., Bachurin, S. O., . . . Hung, D. (2008). Effect of dimebon on cognition, activities of daily living, behaviour, and global function in patients with mild-to-moderate Alzheimer's disease: a randomised, double-blind, placebo-controlled study. Lancet, 372(9634), 207-215. doi: 10.1016/S0140-6736(08)61074-0

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One, 4(5), e5738. doi: 10.1371/journal.pone.0005738

Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci U S A, 109(42), 17028-17033. doi: 10.1073/pnas.1212247109

Ferguson, C., Marcus, A., & Oransky, I. (2014). Publishing: The peer-review scam. Nature, 515(7528), 480-482. doi: 10.1038/515480a

Horton, R. (2004). Vioxx, the implosion of Merck, and aftershocks at the FDA. Lancet, 364(9450), 1995-1996.

Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Med, 2(8), e124.

Kapoor, A., Yao, W., Ying, H., Hua, S., Liewen, A., Wang, Q., . . . DePinho, R. A. (2014). Yap1 activation enables bypass of oncogenic Kras addiction in pancreatic cancer. Cell, 158(1), 185-197.

Kennedy, D. (2006). Editorial retraction. Science, 311(5759), 335.

Knox, R. A. (1983). Deeper problems for Darsee: Emory probe. JAMA, 249(21), 2867.

Kornfeld, D. S. (2012). Perspective: research misconduct: the search for a remedy. Acad Med, 87(7), 877-882. doi: 10.1097/ACM.0b013e318257ee6a

Lawrence, P. A. (2002). Rank injustice. Nature, 415(6874), 835-836.

Marris, E., & Check, E. (2006). Disgraced cloner's ally is cleared of misconduct. Nature, 439(7078), 768-769.

Merton, R. K. (1968). The Matthew Effect in Science: The reward and communication systems of science are considered. Science, 159(3810), 56-63.

On being a scientist. Committee on the Conduct of Science, National Academy of Sciences of the United States of America. (1989). Proc Natl Acad Sci U S A, 86(23), 9053-9074.

Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov, 10(9), 712. doi: 10.1038/nrd3439-c1

Retractions' realities. (2003). Nature, 422(6927), 1.

Ross, J. S., Hill, K. P., Egilman, D. S., & Krumholz, H. M. (2008). Guest authorship and ghostwriting in publications related to rofecoxib: a case study of industry documents from rofecoxib litigation. JAMA, 299(15), 1800-1812. doi: 10.1001/jama.299.15.1800

Rossner, M. (2006). How to guard against image fraud. The Scientist, 20, 24.

Rossner, M., & Yamada, K. M. (2004). What's in a picture? The temptation of image manipulation. J Cell Biol, 166(1), 11-15.

Steen, R. G., Casadevall, A., & Fang, F. C. (2013). Why has the number of scientific retractions increased? PLoS One, 8(7), e68397. doi: 10.1371/journal.pone.0068397

Stern, A. M., Casadevall, A., Steen, R. G., & Fang, F. C. (2014). Financial costs and personal consequences of research misconduct resulting in retracted publications. eLife, 3, e02956. doi: 10.7554/eLife.02956

Strange, K. (2008). Authorship: why not just toss a coin? Am J Physiol Cell Physiol, 295(3), C567-C575. doi: 10.1152/ajpcell.00208.2008

Van Noorden, R. (2011). Science publishing: The trouble with retractions. Nature, 478(7367), 26-28. doi: 10.1038/478026a

Vaux, D. L. (2004). Error message. Nature, 428(6985), 799.

Vaux, D. L. (2008). Sorting the good from the bad and the ugly. The Biochemist, 30, 8-10.

Vaux, D. L. (2011). A biased comment on double-blind review. Br J Dermatol, 165(3), 454. doi: 10.1111/j.1365-2133.2011.10546.x

Wager, E., & Kleinert, S., on behalf of COPE Council. (2012). Cooperation between research institutions and journals on research integrity cases: guidance from the Committee on Publication Ethics (COPE). www.publicationethics.org