Countering research fraud.  III – Disinfection – cleansing the scientific record

Disinfection overview

This chapter covers the processes that should occur once a credible accusation of research fraud has been made, and it finishes with my thoughts on measures that might improve the situation in the future.

Post-publication scrutiny and attempted replication may show that the data has been fabricated, or may reveal fundamental errors that cannot be put right by a simple correction. Such findings effectively invalidate the data, so they should lead to voluntary or forced retraction of the paper. A retraction notice should be published in a current issue of the journal and the electronic online version of the paper should be marked as retracted; ideally the reasons for the retraction should also be stated. This should effectively remove the paper from the scientific record, although it should still be accessible to those who want to read it in the knowledge that it has been retracted. The fact that an author has had a paper retracted for fraud should reduce the chances of publication of more tainted data from this author, and should lead to examination and, where appropriate, decisive action to remove other papers where there is evidence of fraud. If effective measures are taken to highlight or retract suspect papers then this should minimise the chance of them being unknowingly cited by other researchers and reviewers of the literature. The increasing use of meta-analysis, in which data from different sources are weighted and aggregated to give a summation of the results of those studies, increases the imperative to make sure that fraudulent data is not used in such analyses (a short worked example follows this overview).

How effective is the cleansing of the literature of fabricated or falsified data from corrupt scientists? Are known fraudsters prevented from further polluting the scientific record? Are retracted papers identified unmistakably so that they are not cited unwittingly by others? Are clear reasons for retraction published in the retraction notice?
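The damage that a single fabricated trial can do to a meta-analysis is easy to demonstrate with a little arithmetic. The sketch below is purely illustrative (the numbers are invented and it is not taken from any of the analyses discussed in this post): it pools effect estimates by fixed-effect inverse-variance weighting and shows how one fabricated study reporting an implausibly precise result captures most of the weight.

```python
# Purely illustrative sketch (invented numbers, not from any study discussed
# in this post): fixed-effect, inverse-variance weighted meta-analysis.
import numpy as np

def pooled_estimate(effects, variances):
    """Fixed-effect pooled estimate; each study's weight is 1 / variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    estimate = np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w)
    standard_error = np.sqrt(1.0 / np.sum(w))
    return estimate, standard_error

# Three honest trials with modest effects and realistic variances
effects = [0.10, 0.05, 0.12]
variances = [0.04, 0.05, 0.03]
print(pooled_estimate(effects, variances))            # pooled effect ~0.10

# Add one fabricated trial reporting a large effect with an implausibly
# small variance: it receives most of the weight and dominates the result.
print(pooled_estimate(effects + [0.60], variances + [0.002]))  # pooled ~0.53
```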

One confirmed act of fraud should trigger a wider investigation

If the person accused of acts of research fraud admits the specific offences, or the papers that have triggered the accusation are clearly shown to contain fabricated or falsified data, then this should be followed by a critical appraisal of as much as is practicable of the accused's other publications. This should lead to retraction of all of the output of the accused where there is evidence of similar fabrication or falsification. I have suggested in other articles that once an author has been found to have committed research fraud then the threshold of evidence needed to retract other work should be lowered. In legal parlance, the level of proof should change from "beyond reasonable doubt" to the "balance of probabilities", especially if the offender is known to have committed multiple acts of research fraud. In some cases the only reliable publications that a serial fraudster has co-authored are those where others have been largely responsible for the collection and analysis of the data. The aims of this process are fourfold:

  • To remove the initial suspect paper(s) from the scientific record once the allegations are confirmed
  • To prevent the guilty author from continuing to conduct and supervise research and from continuing to pollute the literature with more fraudulent data
  • To prevent readers and reviewers of the literature from being misled by any of the author's output of fraudulent data
  • To warn others of the thoroughness of any investigation and of the potential damage that committing research fraud can do to their careers, livelihoods and even their liberty.

Who should investigate accusations of research fraud?

In July 2005, Richard Smith discussed this question in an article in the BMJ, focusing particularly upon who should investigate the previous output of a fraudulent author. This article was published shortly after the BMJ had published a meta-analysis by El-Kadiki and Sutton that included data from Ranjit Chandra. Chandra's results clearly distorted this meta-analysis, as already discussed in an earlier post. The BMJ published this meta-analysis despite having been involved in trying to expose Chandra as a fraudulent author a few years earlier. This is a clear illustration of the problems and confusion that un-retracted suspect papers from a fraudulent author can cause. Smith suggested nine possible answers to the question of who should investigate and, for eight of these, he noted limitations on their ability to investigate or circumstances in which they may not be a suitable group for conducting an investigation. Smith favoured his ninth option, the creation of an international body to investigate research fraud.

One obvious problem is the likely scale and cost of investigating the total output of someone who has been prolific over many years, perhaps at several different institutions. Smith argued that an international body with the authority and resources to investigate cases in different countries is necessary because of the international nature of much current scientific and medical research. Whilst such an international body might be a useful fail-safe where other bodies prove unable or unwilling to investigate, there are several examples from the case studies of very thorough investigations being completed without such a body. On several occasions these investigations have resulted in the publication of detailed, easily accessible reports and the removal of almost all the published data of a fraudulent author. The candidates that Smith explored as possibilities for conducting such an investigation included:

  • Employers
  • Research funders
  • Regulatory bodies that license practitioners
  • College and professional bodies like learned societies
  • The criminal justice system
  • National bodies like the Office of Research Integrity in the USA
  • The media
  • An international body.

Employers

Some employers may not be keen to taint their reputations and generate negative publicity by investigating and publicising the fraudulent activities of one of their (past) star employees. It is worth noting at this point that past experience in several fields, including politics, suggests that attempts to cover up an initial act of misconduct often cause more harm and scandal than the act itself. I believe that the very thorough and published investigations in cases like those of Diederick Stapel, Jatinder Ahluwalia and Jon Sudbo reflect well on the institutions concerned and offset much of the damage done by having had a research fraudster as an employee (see later for brief summaries).

Employers may have a legitimate right and obligation to investigate the activities of current employees, but they may be content to do nothing and hope that interest in the affair fades away once the person resigns or is dismissed. In the case of Ram B Singh, he seems to have been self-employed and so there was no obvious employer to carry out an investigation. Smith discussed Memorial University's failure to investigate properly, and to publish its findings on, the activities of Ranjit Chandra, who was employed there for three decades. Chandra was first accused of data fabrication in 1994 but was allowed to continue apparently unhindered despite strong evidence against him. The BMJ once again questioned the integrity of his research in 2000 but no action was taken. He left Memorial in 2002 and the university seems to have largely washed its hands of the affair at that point.

The case of Yoshitaka Fujii provides another example of apparent employer inertia over an extended period of time. Strong evidence suggesting data fabrication by Fujii was first published in 2000 and 2001, when he was at the University of Tsukuba, but no action was taken against him and his senior colleague Professor Hidenori Toyooka continued to work and publish with Fujii; an association that was later criticised in a Japanese Society of Anesthesiologists report. According to Martin Tramer in an editorial in the European Journal of Anaesthesiology in 2013, international experts ignored the work of Fujii when they met in 2002 and 2005 to set guidelines for the management of post-operative nausea and vomiting. Despite this shunning of his work by international experts, he was able to obtain a new position at Toho University in 2005 and continued to work and publish from there until 2012, when he was finally and unequivocally found guilty of multiple acts of research fraud by the Japanese Society of Anesthesiologists.

Detailed allegations and supporting evidence suggesting that Viswa Jit Gupta had made numerous false claims about unique fossil finds in the Himalayas were first published by John Talent in 1987. These accusations were summarised in Nature in 1989 and, in the same year, several of Gupta's past collaborators wrote pieces in Nature supporting Talent's allegations. It was not until 1995 that Punjab University produced a report into this affair, which did find that the charges against Gupta had been proven. He nonetheless retained a professorial position at the university until his normal retirement age in 2002.

It was not until 1988, five years after the chair of the ethics committee at Deakin University in Australia had first voiced his concerns about the activities of Michael Briggs, that a report into his activities was produced by the university. Briggs had resigned his position in 1985 and died in 1986, two years before the report was completed.

Several employers of those who have been the subjects of my case studies have conducted very thorough and wide-ranging investigations into allegations of research fraud against their employees such as the four examples listed below.

  • University College London published a detailed report into the activities of Jatinder Ahluwalia. This report amounted to a detailed forensic investigation into his alleged activities, but it covered only the one paper that he produced during his tenure at UCL.
  • Jon Sudbo's joint employers, the University of Oslo and the Norwegian Radium Hospital, commissioned a detailed report into his activities. This report was made freely available on the internet and resulted in the retraction of almost all his published original articles and the revoking of his PhD.
  • The University of Connecticut investigated publications by Dipak Das stretching back six years from the date when allegations were first made against him to the Office of Research Integrity. The resulting report concluded that at least 145 western blot images, spread over 25 publications and three grant applications, had been improperly manipulated.
  • The RIKEN institute in Japan quickly concluded that two papers published by Haruko Obokata in 2014 should be retracted because of acts of serious research misconduct. A final report into her activities later in the year concluded that her claim to be able to produce pluripotent stem cells simply by exposing ordinary cells to mild acid shock could not be replicated, and she resigned her position.

Perhaps the gold standard for an investigation into the activities of an alleged research fraud can be found in the Diederick Stapel case-study. The three Dutch universities where he had spent most of his academic life, Tilburg University, The University of Groningen and the University of Amsterdam, each appointed academically powerful committees which together contained 18 senior academics including five statisticians. They examined 138 research papers, 18 doctoral theses supervised by Stapel as well as book chapters, reviews and Stapel’s own PhD thesis. Their conclusions were amalgamated into a single report (The Levelt report) which was made freely accessible on the internet. In this report they found strong evidence of fraud in 55 research papers, 10 PhD theses supervised by Stapel, 4 book chapters and reviews and in Stapel’s own PhD thesis. This final report was over 100 pages long, took over a year to compile and must have cost hundreds of thousands of pounds to produce.

Funding agencies

Funding agencies can legitimately investigate suspect work that they have funded, but not past work funded by others, and they may well expect employers actually to conduct any investigation. Some funders may be charities with limited funds who would find it difficult to set up an inquiry on the scale of the ones conducted into the activities of Diederick Stapel or Jon Sudbo. If they believe that funds given to conduct a specific piece of research have been misappropriated and used for other purposes then they can report this to the employer or even the police. In the Stephen Breuning case-study, Robert Sprague sent his initial allegations and supporting evidence to his funding agency, the National Institute of Mental Health (NIMH), which asked the University of Pittsburgh to investigate the allegations. Despite this investigation revealing clear evidence of data fabrication at another institution, the dean of medicine assured the NIMH that there were no grounds to take any action relating to Breuning's activities at Pittsburgh. It was a further three years before the NIMH produced its own report, which concluded that Breuning had committed multiple acts of research misconduct.

Professional bodies that license practitioners, like the General Medical Council in the UK

The UK's General Medical Council normally deals with cases where doctors have been accused of unethical or unprofessional behaviour, or where they are said to have been negligent or incompetent. It has also investigated several doctors who have been accused of research misconduct and has struck offenders from the medical register. The main limitation of bodies like the GMC, which grant the authority to practise, is that they are confined to investigating their specific professional group.

Several British doctors have had their licence to practise revoked amid accusations of research fraud; the examples of Malcolm Pearce, Andrew Wakefield, Mark Williams, John Anderton and Anjan Banerjee were summarised in the Protection post.

In 2012, the Japanese Society of Anesthesiologists eventually investigated 249 research papers published by Yoshitaka Fujii and found that most of them were fabricated; the data could be verified and authenticated for only three of them. It took a formal request from 23 journal editors to trigger this investigation, after overwhelming evidence was published in 2011 that his clinical trials data could not have been generated honestly. Most of Fujii's fraudulent work has now been retracted, but suspicions about the authenticity of his data had first been made public a decade earlier, and the anaesthesiology community seems to have been ignoring his data for several years when drawing up guidelines for treating post-operative nausea and vomiting, despite his enormous volume of published work and clinical trials in this field.

College and professional bodies like learned societies

These generally have very limited authority to conduct an investigation and probably limited resources as well. Membership of these societies is usually optional and not normally a condition of a member’s employment. Their only sanction would be an expression of disapproval and expulsion of the member concerned. Some of these societies may have greater leverage with employers if they accredit the specialist degrees awarded by the institution. They may also publish the journals used by a fraudster or may be able to persuade other journals in the field to retract faked papers.

The criminal justice system

There are several examples in the case studies of fraudsters being successfully prosecuted in the criminal courts and, in two cases, actually being sent to prison for their offences. In the Protection post, I briefly summarise five examples of research frauds who have been the subject of criminal prosecutions: Eric Poehlman, Scott Reuben, Stephen Breuning and Dong-Pyou Han in the USA and Steven Eaton in the UK.

In general, criminal prosecution tends to occur at the end of an investigation by some other authority, such as an employer. The prosecution is usually for using false data to obtain research funding or for misuse of funds allocated specifically to conduct a piece of research. These activities breach existing laws against obtaining money by false pretences or misappropriating research donations. The issue of whether research fraud should be a specific criminal offence was also briefly discussed in that post. The activities of some research fraudsters have undoubtedly resulted in harm to patients and even significant numbers of excess patient deaths.

National bodies like the US Office of Research Integrity (ORI) and the UK Research Integrity Office (UKRIO)

One obvious limitation of such bodies is that they can usually only deal with research fraud that has been perpetrated within their national boundaries.

A further limitation is that the ORI was set up to deal only with research fraud perpetrated by holders of, or applicants for, federal research funding. The US ORI does have some powers to penalise offenders or to recommend penalties against them:

  • They can specify that the offender's research must be supervised and monitored
  • They can prevent them from serving on publicly funded research committees
  • They can prevent them from applying for federal research funding
  • They can request retraction of any offending article(s).

Offenders may agree voluntarily to accept penalties of this sort when their guilt becomes clear.

The ORI publishes case summaries on its web-site with the names and affiliations of offenders with brief details of their fraudulent activities and the penalties imposed. This public exposure may be a substantial penalty in its own right; it may well result in dismissal from the offender’s current employment and make it very difficult for them to find other similar employment.

Congressional hearings in the USA in 1981, chaired by then Congressman Al Gore, focused the attention of US legislators on the problem of research fraud. In 1985 the Health Research Extension Act was passed, which required institutions applying for or in receipt of federal funding to establish "an administrative process to review reports of scientific fraud" and to "report to the Secretary [of Health] any investigation of alleged scientific fraud that appears to be substantial". In 1989 two agencies were set up to deal with this problem and in 1992 these were amalgamated into the current ORI.

The ORI lists a number of its roles on its web-site and these include:

  • Developing ways of detecting, investigating and preventing research fraud, including producing and making available forensic tools such as programs for detecting images that have been "photo-shopped".
  • Monitoring investigations which are usually conducted by the employer; the ORI is not itself responsible for investigating accusations.
  • Recommending decisions and actions to the Assistant Secretary for Health.
  • Providing technical help and expertise to institutions conducting investigations.
  • The ORI also has a wider role in trying to improve policies and programmes aimed at preventing, detecting and investigating research fraud including educational programmes aimed at fostering ethical research conduct. They also try to prevent any retaliation against whistle-blowers.

The UKRIO was set up in 2006 as an independent advisory body without any investigative or regulatory powers. It is classified as a charitable organisation and now receives its funding from subscriber organisations, mainly universities and research-linked organisations like the Royal Society and the Institute of Cancer Research. In 2009 it produced a code of practice for research which was voluntarily adopted by many institutions. It produces other publications designed to help foster good practice in research, such as:

  • A step by step guide to investigating allegations of research fraud and misconduct
  • A Concordat to Support Research Integrity
  • Packs of illustrative case studies to help in education and training; these are anonymous scenarios based upon real-life situations
  • A five-page guide to the retraction of research articles.

The UKRIO offers advice on conducting misconduct investigations and has a register of experts willing to provide specialist assistance in conducting such investigations.

The UKRIO thus has no powers to investigate allegations of fraud or to demand that employers or other agencies conduct an investigation. It has no power to impose penalties on those who commit research fraud.

In 2015 Nicola Foeger from the Austrian Agency for Research Integrity produced a map indicating the level of regulation for investigating allegations of research misconduct across Europe. Only three countries (Norway, Poland and Denmark) had national commissions with legal mandates, and a further seven had national commissions without any legal mandate, namely Austria, Finland, Germany, the Netherlands, Sweden, Switzerland and the UK.

The Media

The media have played a role in exposing the extent of the fraudulent activity of a number of those discussed in the case studies:

  • Karl Sabbagh played a key role in making public the fraudulent activities of JW Heslop Harrison. Without his investigation and the publication in 1999 of his book A Rum Affair, the detailed 1948 report by John Raven of Heslop Harrison’s faking of his unique plant finds in the Hebrides would probably still be buried and largely unread in a Cambridge academic library. It was only after the publication of this book that the existence and location of this report became widely known and it was eventually published in full in a botany journal in 2004.
  • In articles in The Sunday Times in 1976, Oliver Gillie publicly accused Sir Cyril Burt (who had died in 1971) of faking his data on identical twins and inventing research collaborators. This partly repeated the compelling evidence that Burt had fabricated or falsified his twin data already published by Professor Leon Kamin in his 1974 book The Science and Politics of IQ. Gillie's articles nonetheless led to a public discussion of Burt's legacy and a widespread belief that little or none of the data that he published could be trusted.
  • The series of broadcasts by the Canadian Broadcasting Corporation in 2006 entitled The Secret Life of Dr Chandra made very public the accusation that Chandra had fabricated a study published in 2002 (retracted in 2005) which claimed that a dietary supplement could improve cognitive function in the elderly. It also suggested that Chandra had amassed a small fortune that was not commensurate with his salary, and presented compelling evidence that he had been fabricating data in other areas of research for more than a decade.
  • Sunday Times journalist Brian Deer uncovered evidence, published in the BMJ in 2011, which made a very strong case for believing that gastroenterologist Andrew Wakefield had not only acted unethically and unprofessionally in producing his report linking autism and inflammatory bowel disease with the MMR vaccine but had also falsified the data in that report.

In all of these cases the media or journalist has used evidence initially produced by scientific investigation and criticism of the accused author's work. Their role has been to expand upon, and make widely known, existing doubts about the integrity of an author's publications. This seems like a realistic and very useful role for the media, but it seems unlikely that the media can have a major role in generating the first accusations against an established scientist unless a whistle-blower chooses to make their first accusation to a journalist.

An international agency

Richard Smith suggested in his 2005 article in the BMJ that there should be some international body with the authority and resources to investigate cases of alleged research fraud, especially cases that involve international collaboration. This agency would be authorised to investigate allegations without being constrained by national jurisdiction. Presumably such an agency would have to operate under the auspices of the United Nations, and presumably national governments would have to sign up to membership and also be prepared to fund it. Would governments be prepared to cede to such an organisation their sovereign right to investigate activities by their own citizens within their own boundaries? Would they be prepared to contribute their share of the costs of running such a body? Would such an organisation have the power to penalise those found guilty of research fraud, like the International Criminal Court, or would it report its findings to the relevant national government for them to take action? Did Smith envisage teams of trained investigators sitting in offices in New York or Geneva, ready to fly off to investigate accusations from any corner of the globe, or would they just co-ordinate the formation of local committees to investigate?

The case studies show that there have been a number of very effective and thorough investigations of suspected fraudsters in recent years, set up by employers, funding agencies or professional bodies. If these organisations fulfil their obligations properly then most cases of research fraud can be investigated and dealt with without the need for another unwieldy international bureaucracy. Most of the examples within the case studies of investigations that were delayed or inadequate arose because one of these organisations was unwilling to accept fully its responsibility to investigate and deal with allegations of research fraud. The case of Ram B Singh may be an exception to this generalisation: he was a private practitioner who does not seem to have been a recipient of major research grants, so there was no obvious candidate to conduct an investigation into his alleged misconduct.

I would suggest that, rather than trying to set up some elaborate international agency, existing local agencies like employers and professional bodies should learn from the examples of best practice highlighted in these case studies: the investigations into Diederick Stapel, Jon Sudbo and Dipak Das by their employers, the investigations by the General Medical Council into the activities of Malcolm Pearce, Mark Williams and other British doctors, and the eventual investigation by the Japanese Society of Anesthesiologists into Yoshitaka Fujii.

Indicators of data fabrication

Once a credible accusation of research fraud has been made, the same methods described for detecting fraud in the Detection post can also be rigorously applied to the research output of the accused. In many cases, once a credible accusation has been made and it becomes clear that an investigation is about to take place, the accused will confess to at least some instances of fabrication or falsification because they realise that their guilt will become clear after even a cursory investigation into their research practices:

  • William Summerlin would have found it difficult to deny that he had “painted his mice”
  • Diederick Stapel realised that some of the scenarios that he had invented in his papers were not possible and so he confessed to many acts of data fabrication
  • Haruko Obokata admitted that she had made some “improvements” to figures in her papers
  • Scott Reuben admitted acts of data fabrication but claimed that these acts were committed during the manic phases of an undiagnosed bipolar condition.
  • Eric Poehlman even published a letter of apology in the American Journal of Physiology for his falsification of data.

Others have either died before public accusations were made, like Sir Cyril Burt, Charles Dawson and JW Heslop Harrison, or have continued to deny their guilt even after detailed evidence of their research misconduct has been made public, like Ranjit Chandra and Ram B Singh.

Discrepancies in the data

If a credible accusation has been made to an employer then they must try to acquire as much as possible of the raw data of the accused and their associates, e.g. laboratory notebooks and data files on computers, including stored images used in publications and grant applications. Failure to find any raw data, and failure of the accused to supply it, may well be taken as de facto evidence or admission of guilt, notwithstanding claims that the data has been "eaten by termites" or lost in some other way. If summary data has been fabricated then the raw data does not actually exist, so non-availability of raw data cannot be used as an excuse not to investigate or take action. The same irregularities spotted by vigilant readers of published research papers may be more obvious when the published output of the accused is scrutinised with the specific intention of looking for irregularities that might indicate fabrication or falsification; if raw data is found then this greatly increases the power of the investigators to uncover manipulation or fabrication.

When the University of Connecticut accessed the computer of Dipak Das, investigators were able to find evidence that images of western blots had been changed using Adobe Photoshop in ways that materially altered how these blots would have been interpreted.
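As an aside, the kind of image screening used in investigations like this can be illustrated with a very crude example. The sketch below is only a toy (it is not the forensic software used in the Das investigation; real tools cope with rescaling, rotation and recompression, and "blot.png" is a hypothetical file name): it simply flags blocks of pixels that are repeated exactly within an image, one elementary sign that part of a blot may have been copied and pasted.

```python
# Toy sketch only: flag exactly repeated, non-background pixel blocks within
# an image. Assumes Pillow and numpy are installed; "blot.png" is hypothetical.
from collections import defaultdict
import numpy as np
from PIL import Image

def duplicated_blocks(path, block=16):
    """Group positions of identical, non-background pixel blocks."""
    img = np.asarray(Image.open(path).convert("L"))   # greyscale pixel array
    groups = defaultdict(list)
    height, width = img.shape
    for y in range(0, height - block + 1, block):
        for x in range(0, width - block + 1, block):
            tile = img[y:y + block, x:x + block]
            if tile.std() < 1.0:                      # skip uniform background
                continue
            groups[tile.tobytes()].append((x, y))
    return [positions for positions in groups.values() if len(positions) > 1]

if __name__ == "__main__":
    for group in duplicated_blocks("blot.png"):
        print("identical non-background blocks at:", group)
```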

When Sanaa Al-Marzouki and colleagues were commissioned by the BMJ to analyse raw data supplied by Ram B Singh to support one of his submitted manuscripts, they were able to conclude that:

“Several statistical features of the data are so strongly suggestive of data fabrication that no other explanation is likely.”

They were able to show that, when comparing the control and experimental groups of a dietary trial in which subjects were said to have been randomly allocated:

  • For 10 of the 22 variables the means were significantly different at the outset, in some cases very highly significantly different with incredibly small p values, which effectively rules out random allocation
  • For 16 of the 22 variables the variances were significantly different at the outset
  • The distributions of the final recorded digit were significantly different between the two groups at the outset, suggesting that the numbers were not generated honestly (a sketch of this kind of terminal-digit test follows below).

In the Detection chapter it was noted that Helene Hill at Rutgers University has been trying for some years, with the statistician Joel Pitt, to publish an analysis of hand-recorded data from a cell counting machine which seems to show strong final digit preference, which would indicate that the data had not been generated by the machine used in the experiments. She suggests that the odds of the frequencies recorded in the data of the person she is accusing arising by chance are less than 1 in 100 billion. At 86 years of age, Hill seems determined not to retire until some journal agrees to publish this analysis.
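To make the idea of a terminal-digit test concrete, the sketch below shows the mechanics of the sort of check used in the Al-Marzouki and Hill analyses, though it is not their code and the data are invented: genuinely measured values usually have a roughly uniform final digit, so a marked preference for particular digits is a warning sign.

```python
# Sketch of a terminal-digit test (illustrative only; invented data).
from collections import Counter
import numpy as np
from scipy.stats import chisquare

def terminal_digit_test(values):
    """Chi-square goodness-of-fit test of final digits against uniformity."""
    digits = [int(str(int(v))[-1]) for v in values]
    counts = [Counter(digits).get(d, 0) for d in range(10)]
    return chisquare(counts)            # default expectation: uniform digits

rng = np.random.default_rng(1)
honest = rng.integers(100, 1000, size=500)          # final digits ~ uniform
biased = rng.integers(10, 100, size=500) * 10 + rng.choice([0, 5], size=500)

print(terminal_digit_test(honest))   # expect a large p-value: no preference
print(terminal_digit_test(biased))   # expect a tiny p-value: only 0s and 5s
```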

It may well be possible to produce computer programs that will reverse engineer raw data to fit summary data without creating obvious anomalies like those listed above. However, most of the fraudsters that I have discussed in my case-studies do not seem to have been statistically very astute, and were probably not capable of producing such programs themselves. Most of the fraudsters seem to have acted alone, without the active collusion of others, even though in one or two cases one suspects that co-authors and collaborators were happy not to see issues with their productive colleague's output. To try to recruit another statistically accomplished person to collude in the production of reverse-engineered raw data would be a major risk. If some unscrupulous people develop and make such software available then being covertly in possession of it without proper authorisation could be classified as research misconduct in its own right and certainly construed as very strong evidence of wrongdoing.

Is the “footprint” of the research compatible with what is reported in published work?

In the Detection chapter several examples are highlighted where researchers claimed in their papers to have used resources that they did not have access to, e.g. databases, tests, apparatus or drugs that were not available to them at the time. When an internal investigation is being conducted, this check can become much more quantitative, for example:

  • Does the claimed use of experimental animals tally with animal house records and the numbers submitted to licensing authorities?
  • Do the amounts of consumables like drugs, chemicals and radioactive isotopes tally with amounts available and with radio-chemical disposal records?
  • Do the numbers and types of patients used as subjects tally with patient records?
  • If very large numbers of biochemical analyses are said to have been performed is this consistent with the resources, space and manpower used?
  • Similarly if large numbers of clinical assessments or psychological assessments are said to have been conducted is this also consistent with the amount and type of manpower used?
  • Do ethics committee records confirm that studies using patients or other human subjects received ethical approval? Are consent forms available for the subjects who, it is claimed, participated in the study and supposedly gave their informed consent? If data is being fabricated then the fraudster may by-pass normal ethical approval and, as there are no actual patients, no consent forms will have been filled in.

A few examples from the case studies illustrate some of these points:

  • In his 1997 letter accusing Jatinder Ahluwalia of committing research fraud whilst a postgraduate student at Cambridge, Dr Martin Brand reported that Ahluwalia had claimed results obtained using rats and a radioactive chemical, yet animal house and radioactivity disposal records showed no usage recorded for the dates on which he claimed to have used them.
  • RK Chandra's two linked studies looking at the effects of taking supplements on immune function and cognitive function would have required thousands of laboratory analyses and many hundreds of interviews recording infections and assessing cognitive function using a battery of tests, yet the two papers were published by a single author with only limited acknowledgement of technical assistance. Similarly, his double-blind trials purportedly testing the effects of different infant formulae on allergic diseases would have required tens of thousands of cans of formula with special coded labels, but there is no indication of who supplied these; were any specimens of these cans available to the initial Memorial University inquiry in 1994?
  • In the study that triggered his exposure, German anaesthesiologist Joachim Boldt claimed to have used an albumin-based solution as a comparator to prime heart-lung machines in cardiopulmonary by-pass surgery. Yet it was later found that albumin solutions had not been used in the hospital for a decade and none had been supplied to operating theatres by the hospital pharmacy.
  • Those who, in early 2001, audited Werner Bezwoda's 1995 report of a randomised controlled trial of high-dose chemotherapy (HDC) against conventional-dose therapy could find no evidence of any of the patients being alive after 1995. Yet in a follow-up publication in 1999 Bezwoda had shown a graph suggesting that almost a third of the HDC group were still alive and effectively cured.
  • The first retraction notices for Joachim Boldt’s fabricated studies gave the reason for retraction as lack of ethical approval because there was no record of them having been given proper ethical approval.

Interviews with co-authors

In the section in the Protection post dealing with peer review by co-authors and gift authorship, a number of cases were highlighted where the safeguard of co-author scrutiny did not operate effectively as an aid to protecting the literature:

  • Sometimes co-authors allowed their names to be added to the author list despite their having played no significant part in the study. This adds prestige and authority to the author list without increasing the scrutiny of the work or confirming its authenticity. Professor Geoffrey Chamberlain added his name “as a courtesy” to Malcolm Pearce’s false report of successfully removing and re-implanting an ectopic embryo.
  • Yoshitaka Fujii added the names of co-authors without their consent to make studies appear to be multi-centred and even on occasion resorted to forging the signatures of supposed co-authors.
  • Viswa Jit Gupta tricked other experts into acting as co-authors and apparent guarantors for his false claims of unique fossil finds in the Himalayas.

One of the tasks of investigators into a research fraud allegation is to interview in depth the collaborators, co-authors and colleagues of the alleged fraudster to answer questions such as:

  • What actual role did co-authors play in the alleged fraudster's data generation and publications?
  • Were they even aware of their co-authorship?
  • Did they actually sign the form that accompanied the manuscript when it was submitted?
  • In retrospect, do they have any observations about the behaviour of the accused that seem suspicious or may point to data falsification or fabrication?

Some of those who collaborate and publish with people now known to have committed research fraud may have been happy to accept the reflected glory and other benefits generated by hanging on to the coattails of the department's star researcher. They may not have been too inquisitive about how this star researcher was managing to produce so much quality data with the resources at hand. They may have subconsciously suppressed any doubts about the fraudster's activities, or have been too in awe or too concerned about potential repercussions to challenge or question them. Likewise, senior researchers may be happy to accept at face value the exciting data produced by their research students and assistants without giving too much thought to how it has been produced. Once it is clear that an investigation is taking place and they are questioned by independent investigators, any suppressed anxieties and concerns may be vocalised.

What can be done to improve the situation?

In this short section I suggest some measures that might lessen the chances of fraudulent data reaching the journals and increase the chances that, if it is published, it will be detected and retracted. I would certainly not suggest that this list is definitive, or even offer it as a set of firm recommendations; it is offered as a starting point for discussion and elaboration. Such discussion may lead to a set of recommendations and measures that could help keep the literature as free of fraudulent data as possible without imposing a heavy bureaucratic burden on authors and journals that would hinder honest research or make it more costly.

Increased awareness of the problem is clearly a key to guarding against the publication of fraudulent data; no referee or collaborator should in future say that they never even considered the possibility that a serial fraudster was just making up the data. I do think that all science courses at university level should include a short section about research fraud in which real examples are discussed to highlight some of the warning signs that might indicate data fabrication or falsification. As I have said in other posts:

Today’s undergraduate and postgraduate students are tomorrow’s research supervisors, journal referees and editors.

Referees of papers are the main gatekeepers given the key task of preventing the publication of fraudulent and flawed data, and yet most referees get no training in how to perform this vital task and receive no reward or recognition for doing it. If referees are to be trained, should this be the responsibility of the journals that engage them, or should it be an integral part of a scientific education and thus included in undergraduate or postgraduate training? Should science courses have tutorial or seminar sessions within the now ubiquitous research skills modules with a title like "how to referee a paper"? This could be seen as an important skill that is part of a rounded science education and could be joined to the section on research fraud. Should journals periodically monitor the performance of their regular reviewers? It is probably unrealistic to expect most journals to pay their referees, but maybe other ways of acknowledging the time, effort and skill of referees could be found. Should employers of scientists allocate time for their employees to carry out the occasional peer review? Should all journals publish lists of those who have refereed papers in the previous year? Should acting as a referee be something that is given credit on an application form or curriculum vitae? The experience of Sara Schroter and her colleagues at the BMJ, discussed in the Prevention post, suggests that face-to-face training courses may have limited impact and are probably not cost effective.

It is clear that some journals that claim to publish only peer reviewed articles are by-passing this step and publishing papers that have not undergone even the most superficial scrutiny by an editor or referee. All papers should at least be read by a scientific editor with some expertise in the area covered by the journal. If a journal claims to be peer reviewed then its papers should be subject to the normal process of peer review, but there may be a market for journals that publish papers acknowledged as not peer reviewed and instead rely on post-publication peer review by interested readers; these reviews would be attributed and also made public. In another model, papers could be put up in their un-refereed state and then, after a set period or a set number of reviews, either confirmed or taken down. Such papers would not be classified as published until after confirmation.

It should be generally accepted from now on that anyone accepting co-authorship of a paper is acting as a guarantor for the data within it, and that to accept authorship without having played any significant part in producing the data or writing the paper is an act of research misconduct.

Journals should make use of any tools available for detecting falsified or fabricated data. There are, for example, software packages that aim to detect images that have been manipulated using other software like Adobe Photoshop. It may be possible to develop other software to detect problems with the numerical values used in a paper; the forensic statistical analysis of the data in Yoshitaka Fujii's trials by John Carlisle offers a potentially useful tool.
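A simplified illustration of the Carlisle-style approach is sketched below; it is not his actual method or code, and the summary statistics are invented (deliberately "too similar" between arms). The idea is to recompute baseline p-values from the means, standard deviations and group sizes reported in a trial, and then ask whether, across many variables or trials, those p-values are plausibly uniform, as genuine randomisation should make them.

```python
# Simplified sketch in the spirit of Carlisle's approach (invented numbers).
from scipy.stats import ttest_ind_from_stats, kstest

def baseline_p(mean1, sd1, n1, mean2, sd2, n2):
    """p-value for one baseline variable, computed from reported summaries."""
    return ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2).pvalue

# (mean, sd, n) per trial arm for several reported baseline variables
reported = [
    ((70.1, 8.2, 60), (70.2, 8.1, 60)),
    ((120.4, 11.0, 60), (120.3, 10.9, 60)),
    ((5.31, 0.42, 60), (5.30, 0.41, 60)),
    ((81.0, 9.5, 60), (81.1, 9.4, 60)),
]
pvals = [baseline_p(*a, *b) for a, b in reported]
print(pvals)                   # all close to 1: baselines suspiciously alike

# Test the collection of p-values against a uniform distribution; clusters
# near 1 (or near 0) across a body of work are a red flag worth investigating.
print(kstest(pvals, "uniform"))
```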

It is important that papers are seen by people with appropriate expertise. Where this is feasible, and certainly for the most influential and prestigious journals, almost all papers need to be looked at by someone with statistical expertise who should spot many of the less sophisticated attempts at fabricating data. If a submitted paper is on the periphery of the normal scope of the journal or contains elements that are clearly outside of the experience of its editors and its regular referees then editors need to make sure that it is seen by people who are experienced in these areas e.g. a paper submitted to a nutrition journal that includes psychological tests should be seen by a psychologist or a paper submitted to a surgical journal that is mainly focused on the mode of anaesthesia should be seen by an expert in anaesthesia.

Papers should never be sent to referees using contact details supplied by the authors without these being verified. The general system of using referees suggested by authors seems to be one that is very vulnerable to abuse and should be used with caution and some suspicion; it seems like something that is generally to be avoided wherever possible.

When papers are published in widely read journals then this provides an opportunity for a wide range of readers with different areas of expertise to scrutinise them. Readers who are seriously concerned about error or fraud in a paper already have the option of sending a letter to the editor in which they can give voice to their concerns. The editor may then decide to publish this letter and/or invite the original authors to comment upon these criticisms. Perhaps readers who have doubts about the authenticity of a paper should be encouraged to send their concerns in confidence to a designated person if they do not wish to follow the conventional route of an open, potentially publishable “letter to the editor”. Perhaps such claims could be sent to a designated site that is independent of any specific journal where the strength of the case could be assessed. One might envisage something like research fraud ombudsmen for different specialisms whom concerned readers could contact and state the rationale for their suspicions.

It is important that credible accusations of research fraud are thoroughly investigated and that fraudulent papers are removed from the literature. Once an author has been found to have committed research fraud, as much as possible of their past work should be scrutinised and, if necessary, retracted, with a lower standard of proof being applied where the author has a history of fraudulent behaviour. In most cases the employer seems to be in the best position to carry out such an investigation: they have access to local collaborators and to the local facilities used by the accused, and they should be in a position to require access to raw data, laboratory notebooks, computer files etc. If the accused has been acting fraudulently over a long period then previous employers may be asked to collaborate with the current employer and investigate the publications produced during the accused's period with them; this was done in the case of Diederick Stapel, where three of his employers investigated his activities and their individual findings were synthesised into a single report effectively spanning almost all of his research career. Everything possible should be done to remove suspect papers from the scientific record. In cases where the accused is a licensed (clinical) practitioner, a licensing agency like the General Medical Council in the UK may be an appropriate body to investigate accusations.

Consideration should be given to the proposal that different terms might be used to signify whether a paper has been withdrawn by the authors because of some irretrievable error or flaw, or withdrawn by the authors or editors because of research misconduct. The term retraction already has very negative connotations and, because most retractions are now due to acts of misconduct, this term could be retained for papers withdrawn because of research fraud, with the term withdrawn used to signify error. Editors should have the option of allowing the term withdrawn (or whatever alternative is chosen) to be used if they are satisfied that honest error is the reason for the paper's withdrawal. If a paper is withdrawn then the authors should state briefly what the error was and why it makes the paper's conclusions unsound.

Some employers have, in the past, been very reluctant to take effective action against an employee or past employee despite strong evidence that they have committed multiple acts of research fraud. All institutions should have a policy with regard to accusations of research fraud and a designated procedure for handling any credible accusations. Grant-awarding bodies should make it a condition of awards that institutions applying for or in receipt of research grants have policies and procedural guidelines in place to deal with allegations of research misconduct.

Wherever possible there should be openness about findings and actions relating to research fraud; certainly, once clear evidence of guilt has been established or a confession made, this should be made public. Employers should make the findings of any investigation public and journals should make clear why papers have been retracted. Obviously those making such findings public need to check that they are not making themselves liable to credible litigation. The libel laws should not be allowed to shield the guilty; guilty parties must not be allowed to resign or retire quietly with their public reputations and roll of suspect papers largely intact, perhaps leaving them free to carry on their dishonest activities at another unsuspecting institution, perhaps in another country.

Journals should make the fact that a paper has been retracted unmistakable to anyone accessing the online version. Some journals print the word retracted in large bold type across every page of the article. In general, retraction notices should make clear why the paper was retracted and who initiated the retraction. Once a paper has been retracted and unmistakably marked, it should become freely accessible, as should the retraction notice and any other editorials or letters relating to the retracted work. It seems particularly insensitive of journal publishers to charge readers to access this material.

Whistle-blowers have played a major role in unmasking many high-profile cases of research fraud, yet many have not been well treated and have suffered for their whistleblowing activity. Institutions must take care to create procedures that give those with concerns the confidence to make them known without fearing any consequences for their own careers. Major institutions like universities should have people who are publicly designated to listen to concerns in confidence and to initiate action if they consider that concerns about research misconduct are substantial. They should be people with experience and seniority but not members of the management structure. Whistle-blowers should have the option of taking their concerns to someone outside their own department or faculty; they may not be confident about the independence and objectivity of the designated person within their faculty, e.g. if that person has been a research collaborator of the accused at some time.

All professional bodies that regulate the licensing of practitioners should regard it as normal to remove or suspend the licence of any practitioner found to have fabricated or falsified research data. Research fraud can adversely affect the treatments that patients are given and may in some cases cause patients to die because they have been given a non-optimal treatment as a result of false data.

Journals should make it a condition of publication that authors either lodge their raw data in some secure facility or agree to supply it within a specified time if requested to do so by the editor. Failure to supply raw data upon request would then be considered an act of misconduct and lead to automatic retraction of the article, unless the failure is clearly beyond the control of the authors. The failure to supply raw data should be stated as the reason for retraction.

If fraudulent researchers lie in order to obtain research funding, or misuse funding allocated to them to conduct a specific piece of research, then they can be prosecuted under existing criminal laws. If fraudulent research leads to activities that endanger patient health or even cause extra mortality then some way should be found of prosecuting such callous individuals, either using existing laws or perhaps by making research fraud an offence in its own right. Imagine the outcry there would be if an aircraft engineer who falsified the servicing or repair record of an airliner, resulting in a fatal crash, were allowed to resign or retire without any penalty. If existing law cannot be used to prosecute major research fraudsters then perhaps data fabrication or falsification, especially in medical research, should be made a criminal offence. Should employers who negligently fail to act upon evidence of multiple acts of research fraud by an employee be liable in the civil courts for any harm resulting from this false data after the point at which they should have acted? Doctors who cause death or injury through carelessness or "negligence" sometimes seem to be more harshly treated than those who cause death or injury through wilful fabrication of clinical trial data.

The overall aims of these suggestions are:

  • To increase awareness amongst the scientific community of research fraud, the ploys used by fraudsters, their suspicious behaviors and some of the indications that data has been fabricated or falsified
  • To make the scientific community aware of the severe damage that fraudulent research can cause
  • To increase the belief of those tempted to commit research fraud that they will be found out and that when this happens the consequences for them will be severe and inevitable
  • To ensure that, as far as libel laws allow, acts of research fraud are made publicly known
  • To ensure that all co-authors accept responsibility for papers with their names on them
  • To make it more difficult for research frauds to publish fake summary data using fake patients and other resources unavailable to them
  • To make it more difficult or risky for fraudsters to use fake collaborators and publish studies which do not have proper ethical approval
  • To make sure that credible accusations of research fraud are always properly investigated
  • To ensure that readers of a peer reviewed article can be confident that it has actually been effectively peer reviewed
  • To ensure that those who have evidence of fraudulent activity feel an obligation to report their suspicions and have the confidence to do so without fear of reprisals.