Countering research fraud. I – Protection – preventing the publication of fraudulent data

Introduction to the series

The following three posts are very long compared to most of my posts because they began life (and hopefully will end their lives) as chapters in an as yet unpublished book about error and fraud in research.

The number and perceived impact of papers that a scientist publishes is the most important determinant of their chances of employment, promotion and obtaining funding for their research. In this series of three posts, I am going to explain the processes involved in getting research data published as a paper in a scientific or medical journal and the way in which the influence of these papers is quantified.

I will focus upon the ways in which a paper is assessed prior to acceptance by a journal, the post-publication checks on published work and the procedures for correcting the scientific record if published work is found to contain major structural errors or if misconduct is found to have been involved in its production. This series is thus concerned with the various pre- and post-publication processes that maintain the integrity of the scientific record and minimise the impact of mistaken or dishonest scientists. I am going to classify these measures under three headings:

  • Protection – what are the barriers that prevent the publication of fabricated data?
  • Detection – how are fraudulent papers identified once they have been published?
  • Disinfection – how are fraudulent papers removed from the scientific record and how are the activities of fraudulent scientists curtailed?

In these three posts, I will explain how these processes help maintain the integrity of the scientific record and also highlight some of the potential weaknesses in these processes. I will review how these mechanisms have succeeded or failed in some of the fraud case studies already presented on this blog. If factors that reduce the effectiveness of these measures can be identified, then it may indicate ways of making them more effective in the future. In the third post in this series, I will also briefly consider some practical ways in which the problem of research fraud might be minimised.

Protection overview

The flow of fraudulent data into the scientific and medical literature should be minimised because of:

  • The honesty and integrity of most research scientists
  • The fear of exposure and the shame and punishment that might result from being caught fabricating or falsifying data
  • Peer review by co-authors, journal editors and specialist referees.

Most of the general population never commit a serious crime and under most circumstances would never consider committing such a felony. Likewise most scientists do not fabricate data and would not contemplate doing so, especially if their false research might harm people or even result in people dying. However, many basically honest citizens do commit minor misdemeanors. Likewise, researchers who would not fabricate data and who consider themselves basically honest scientists may nonetheless resort to massaging their data to make it more publishable, e.g. to reach statistical significance. In a survey of major psychology journals, EJ Masicampo and Daniel Lalande found a significant bulge in probability values that just reached the arbitrary 5% threshold for statistical significance (this work was discussed more fully in an earlier post). This concentration of values just meeting the threshold suggests that some psychologists are selecting or massaging their data to make their findings easier to publish; it seems unlikely that this practice would be restricted to psychologists.
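The kind of pattern Masicampo and Lalande looked for can be caricatured in a few lines of code. This is only an illustrative sketch, not their actual method (the function and the sample p-values below are invented): it simply compares how many reported p-values fall in a narrow band just below the 0.05 threshold with how many fall just above it; a pile-up on the "significant" side is the suspicious bulge.

```python
# Illustrative sketch only - not Masicampo and Lalande's actual analysis.
# Given a collection of reported p-values, count how many fall in a narrow
# band just below the 0.05 significance threshold versus just above it.

def threshold_bulge(p_values, threshold=0.05, width=0.005):
    """Return counts of p-values just below and just above the threshold."""
    just_below = sum(1 for p in p_values if threshold - width <= p < threshold)
    just_above = sum(1 for p in p_values if threshold <= p < threshold + width)
    return just_below, just_above

# Invented example data: a marked excess just below 0.05 would hint that
# results are being selected or massaged to creep under the threshold.
reported = [0.012, 0.048, 0.049, 0.047, 0.051, 0.046, 0.23, 0.049, 0.044]
below, above = threshold_bulge(reported)
```

In an honest literature one would expect the counts either side of the line to be roughly similar; a large asymmetry is what made the bulge in their survey stand out.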

Some people who might be tempted to commit a seriously dishonest act, may be dissuaded from doing so if they think it likely that they will be caught and punished for their actions. Likewise scientists tempted to fabricate or falsify their data may be deterred from doing so if they believe that their fraud will be exposed and their careers, reputations and livelihoods destroyed by their exposure or that they could even risk criminal prosecution and a prison sentence.

The process of peer review by expert referees and editors is seen by many scientists as the most important way in which the scientific record is protected. Experts in the field scrutinize the paper and the hope is that such scrutiny will not only expose errors and inadvertent flaws in the work but also identify data that has been fabricated or falsified. Where a group of co-authors is involved, one would also expect all of them to have been actively involved in the research and thus for them all to have taken some responsibility for verifying the quality and veracity of the published data. Is this faith in peer review to prevent publication of fraudulent data justified or does it regularly allow through data that, in retrospect, shows quite obvious signs of fabrication or falsification?

Author integrity

Does pressure to publish tempt more scientists to cheat?

Just as there have always been people prepared to commit serious acts of dishonesty out of greed or jealousy, so there have always been scientists prepared to bend or ignore the rules in order to enhance their prestige or wealth, or simply to support their own pet theory. Several of those selected for inclusion in the case studies have been ardent supporters of particular pet theories and have resorted to fabricating or falsifying data to support these theories, for example:

  • Botanist John Heslop Harrison believed that the Hebridean islands escaped ice cover during the last ice age, which allowed species to survive there that did not survive elsewhere in Northern Britain
  • Both Paul Kammerer and Heslop Harrison were strong supporters of Lamarckism: the belief that an organism is able to pass on characteristics acquired through use or disuse during its lifetime, i.e. the inheritance of acquired characteristics
  • Sir Cyril Burt and many early psychologists were strong believers that intelligence was largely an inherited characteristic
  • William Summerlin believed that when skin, retina and other organs were held in organ culture they became transplantable without triggering rejection presumably because of loss of immunogenicity during this culturing process.

Many academics are convinced that the recent pressure to publish may have persuaded more scientists to act dishonestly and to distort or fabricate data in order to keep their jobs, gain promotion, or receive research funding. Even honest, law-abiding citizens may resort to dishonesty if they cannot afford to feed, clothe or house themselves or their families. There are now very great pressures on scientists and medical researchers to publish papers in high-prestige journals if they want to get research funding and advance their academic careers, or even just to keep their current jobs. Top journals want high quality, clean data that unequivocally supports a particular position. Researchers may find it very hard to generate such data honestly and regularly with the resources and time at their disposal, and so they may decide that it is easier to make up such data, or dishonestly manipulate real data, so that it meets these criteria. Diederik Stapel, in his candid interview after he had admitted fabricating most of his data, made exactly this point; he said that he found it difficult to generate data that met the exacting standards of top psychology journals, so he resorted to fabricating data that gave clear-cut results supporting interesting, headline-making hypotheses. Generating real research data can be a drawn-out and frustrating business, with inevitable failed experiments, equivocal results, results that fail to achieve statistical significance, or repeated experiments that do not reproduce the original findings. Stapel's faked, but apparently high quality, data supporting "sexy" theories and themes made him an academic success and one of science's media stars without his having to go through this frustrating process of generating real results.

Is research fraud becoming more common?

Many people strongly believe that research fraud is becoming more common but this is largely an intuitive belief and it is difficult to produce data that unequivocally supports it.

The research fraud case studies discussed on this blog show that research fraud is not a new phenomenon; I have chosen examples of known or strongly suspected research fraud spread over more than 150 years. It seems almost inevitable that a few scientists have been falsifying their findings for as long as people have been conducting research. In their classic 1982 book on research fraud, "Betrayers of the Truth", William Broad and Nicholas Wade list known or strongly suspected perpetrators of research fraud covering the past two thousand years. Among the famous historical figures in science whom they suggest probably published fraudulent results were:

  • Claudius Ptolemy – the Egyptian astronomer who in the second century claimed to have made astronomical measurements that he could not have made.
  • Galileo Galilei – the 17th century Italian physicist, regarded as a key founding influence in the development of the modern scientific method. They say he exaggerated the precision of some of his experimental results, and some have suggested that the experiments in question were never actually carried out.
  • Sir Isaac Newton – the early 18th century English physicist and mathematician, who introduced a so-called "fudge factor" into later editions of his seminal Principia so as to increase its apparent power of prediction.
  • John Dalton – the 19th century chemist, regarded by many as the father of modern atomic theory, who reported experimental results that cannot be repeated and were probably never carried out.

It is difficult to find evidence to definitively answer the question of whether research fraud is a growing problem and is more common now than it used to be. In the modern era, proof that data has been fabricated or falsified should lead to retraction of the paper or article although this does not always happen. One way of trying to gauge changes in the recent frequency of research fraud is to see how the number of retractions has changed in recent decades. Of course, such analysis would make the implicit assumption that the efficiency of detection and retraction has remained constant. In fact, there has been an increased awareness of research fraud in recent years and there seems to be an increasing willingness of employers and professional bodies to investigate the past work of known fraudsters and to recommend that journals retract many of their suspect papers.

Ferric Fang and two collaborators analysed all 2047 papers then listed (in 2012) as retracted on the PubMed database, which covers the biomedical and life sciences. They found that most of these papers (67%) were retracted because of research misconduct rather than because of error: 43% because fraud was known or suspected, and 24% because of plagiarism or the same data being published more than once. This proportion retracted for misconduct is high in comparison with some earlier surveys. They found not only that the rate of retraction has risen over the years but that the percentage of papers retracted because of fraud had increased ten-fold since 1975. A number of other surveys have found that the number of retractions has been rising steeply in recent years.

A growing number of retractions, coupled with an increased proportion of retractions being due to fraud, suggests either a substantial real increase in the prevalence of fraud or that journals, employers and professional bodies are making a greater effort to police the literature and to cleanse the published record of fraudulently derived data. It may be a combination of these two factors that is increasing the number of retractions for research fraud.

Fear of Exposure and Punishment

Impact factor

Published papers are of great importance to career advancement for academics and for others working in scientific and medical research. Ambitious researchers want to publish as many papers as they can in the most highly rated journals in their field. One of the ways in which journals are rated is by their impact factor: a calculated value based on the average number of times their recent papers are cited in other journals. A journal's impact factor is widely used as a crude indication of its influence and prestige amongst scientists. Of course, as with any published assessment system, people will try to manipulate their scores favourably; journals can, for example, affect their impact factor by publishing more papers likely to be highly cited (e.g. review articles, especially invited reviews by recognised experts) and by publishing fewer of the papers that are likely to be infrequently cited (e.g. medical case reports). Many of the most highly cited papers are those that report a method that is then used widely by others. Probably the most highly cited paper of all time is one published by Oliver Lowry and his colleagues (the "Lowry paper"), which reported an improved method of estimating protein content that became the standard method for protein estimation. Every paper where this method is used then cites this paper as a matter of course; it has probably been cited more than half a million times in the six decades since its publication.
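To make the arithmetic concrete: the standard two-year impact factor for a year Y is simply the number of citations received in Y by items the journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch, with invented journal figures:

```python
# Minimal sketch of the standard two-year impact factor calculation.
# All figures below are invented for illustration.

def impact_factor(citations, citable_items):
    """citations: citations received in year Y to papers from years Y-1 and Y-2.
    citable_items: papers the journal published in years Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical journal: 400 citable items over two years, cited 3200 times
# in the following year, gives an impact factor of 8.0.
jif = impact_factor(3200, 400)
```

This is also why the gaming strategies mentioned above work: invited reviews raise the numerator, while declining case reports shrinks the denominator.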

One can use citation counting methods to assess the impact of individual papers or of particular authors. Citation rates can also be used to assess the output of a department or institution; those generating the largest numbers of highly cited papers are likely to be perceived as the elite research institutes and also those most worthy of receiving additional research funding. An author’s citation record may, for example, be used to bolster their credentials for a research or academic post or for internal promotion or to give support to their funding applications. Once again individual authors can try to increase their citation rates by frequently citing their own papers or by having an understanding with others to cite each other’s work. Citation analysis has been used in some of the case studies on this blog, to try to assess the impact of individual fraudulent authors or the influence of important fraudulent papers.

Journal selection as a way to avoid exposure of fraudulent activity

Given this onus upon impact at individual, journal and institutional level, authors who are confident that their work is correct and who believe in its significance would want to publish in the highest impact journal to achieve the maximum number of readers and citations for their work. It is therefore surprising to come across papers reporting findings which, if substantiated, would have major clinical or scientific significance, published in obscure journals with a narrow circulation and extremely low impact factors. These journals may not even be included in the databases of some of the main academic search engines used by scientists to find papers on a specified topic. I have, for example, seen papers in very low impact journals that report clinical trial data suggesting that simple dietary supplements are more effective than statins at lowering blood cholesterol levels. If the authors had confidence in this data then surely they would try to publish it in one of the world's premier medical journals like the New England Journal of Medicine or the BMJ, or maybe in a highly rated nutrition journal. It is possible, of course, that this material was rejected by these top journals because of some bias or prejudice against "natural medicines" by orthodox scientific or medical referees and editors. It is also possible that the authors and sponsors just want this data published anywhere so that they can make claims about their product based upon these "published clinical trials". Even if the data is unsound through poor design and execution, or due to some deliberate misconduct, then if it is buried in an obscure journal it is unlikely to be challenged and discredited, and rapid attempts to repeat the work are unlikely. It is also probable that the process of peer review will be less rigorous, less critical and less well-informed than that of the most highly rated journals. The referees and editors are likely to be less experienced and qualified and will almost certainly be less eminent in their field. There will also be less competition for journal space from other high quality papers, which means that, in order to fill their journals, editors have to set the quality bar much lower when deciding upon the suitability of papers for inclusion.

Unsound work published in an obscure journal will probably remain in the literature, largely unchallenged and ignored by experts in the field, but still allowing the manufacturers or distributors of a product to use the published clinical trial data to promote it. Likewise, a serial fraudster may be happy to bury their work in an obscure journal, or in journals outside their main subject specialism, to reduce or delay the risk of detection. In this way they can pack their curriculum vitae with many supposedly peer-reviewed papers which, even though in low impact journals, may still impress job interview panels and some committees deliberating about where to allocate research funding. A mass of publications, even in low impact factor journals, will still boost the ego of the author and may enhance their esteem amongst colleagues and friends.

If you publish something of major significance in one of the world's top journals then it will certainly not go unnoticed by experts in the field and may even filter through to the general public via science and medical correspondents, who will summarise and publicise it in newspapers, on TV, on websites and in other outlets. This means that there is a high probability that a range of other experts will scrutinise the work closely and some may try to repeat and extend the findings. If the work is sound and accurate then this is good for the original authors; they get great kudos from the initial breaking-news effect and then, once the work has been confirmed and incorporated into the body of accepted scientific knowledge, they will be acknowledged as the discoverers of this new knowledge. If, however, the work is unsound because of error or fraud then this is likely to become apparent when it proves to be unrepeatable, or when scrutiny of the "important" paper yields reasons to doubt the quality or integrity of the data and perhaps leads to questioning of the authors' other work. This discussion explains my decision to include choice of journal in the protection against fraud section.

Of course, if fabricated data proves to give essentially the correct answer then any apparent flaws in the detail of the paper as written will probably be overlooked once the substance of the fabricated or manipulated findings is confirmed by others. The paper's authors will then be unjustly credited with the first demonstration of this new discovery. It may be particularly difficult to detect fabricated or falsified data which supports essentially correct conclusions. There is no doubt that Gregor Mendel's data on pea crosses was essentially correct, but more than 150 years after its first publication people are still arguing about whether he massaged his data to make it fit more closely with the ratios predicted by his new laws of genetics. If Jatinder Ahluwalia had been essentially correct in his belief that a particular ion channel played a key role in the killing of bacteria by neutrophils, it seems unlikely that anyone would ever have investigated his actions, and certainly not with the level of forensic scrutiny needed to detect his fairly sophisticated fraudulent practices. The theory itself was convincing enough to initially persuade many distinguished experts that it was credible, and so if it had proved to be essentially correct then, even if flaws or anomalies in the original paper were noted, they would probably have been overlooked and regarded as just part of the inherent variability of research findings.

Do better journals publish fewer papers with major errors or fraudulent results?

One would expect that the more highly rated journals should generally contain less fraudulent data than the lowest ranked journals because:

  • They have a higher standard of peer review and more rigorous selection criteria
  • Fraudulent authors may be less likely to choose these journals if they feel they are more likely to be found out
  • Wide and detailed scrutiny and attempts to repeat novel findings increases the chances of identifying flawed or fraudulent results which ultimately may make retraction more likely.

If true, these suggestions would mean that fraudulent but cautious authors would be more likely to send their papers to less highly ranked journals or perhaps journals not in the speciality which is the main focus of the paper. The better journals would be protected at the expense of the less highly rated journals. This is mostly personal opinion and conjecture but the two studies discussed below add objective data to this largely subjective discussion.

In October 2011, Ferric Fang and Arturo Casadevall wrote an editorial in the journal Infection and Immunity in which they looked at the relationship between a journal's retraction rate and its impact factor. They analysed 10 years' output from 17 journals with a very wide range of impact factors (from about 2 to 54) and correlated each journal's impact factor with a measure of its retraction frequency, which they termed its retraction index. The results of this correlation are summarised in figure 1 and show a clear and quite strong trend for more retractions from the highest rated journals, i.e. articles published in highly rated journals are more likely to be retracted than those in lower ranked journals. It is not possible to be sure why this happens, but the most obvious explanation is that greater scrutiny and repetition of work published in high impact journals makes the discovery of error and fraud more likely and thus increases the chances of retraction. It is also possible that more of the papers published in these highly ranked journals are fraudulent or flawed, perhaps because the prize of publishing in one of the world's premier journals encourages authors to take risks in the design and execution of their study, or to manipulate their data to meet these journals' requirements for clear-cut, unequivocal data. Murat Cokol and several colleagues from Columbia University published similar findings in 2007. They did a much wider analysis but also came to the conclusion that retractions occur more frequently in high impact journals. They used a quite sophisticated statistical analysis to try to establish whether this was because better journals publish more flawed papers or because they are subject to greater scrutiny leading to more retractions. They concluded that the higher retraction rate arises because papers published in more highly ranked journals are much more meticulously tested after publication.

Figure 1. A graph showing a positive correlation between retraction index and impact factor, i.e. more papers are retracted from journals with higher impact factors (Casadevall and Fang, 2011).

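As I understand it, Fang and Casadevall's retraction index scales a journal's retractions to the number of papers it published over the study period (retractions per 1000 articles), which can then be correlated with impact factor. The sketch below illustrates the arithmetic with invented journal figures; their actual analysis used real PubMed and journal data.

```python
# Sketch of the retraction index calculation and its correlation with
# impact factor. All journal figures below are invented for illustration.

def retraction_index(retractions, papers_published):
    """Retractions per 1000 papers published over the study period."""
    return 1000.0 * retractions / papers_published

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical journals: (impact factor, retractions, papers published).
journals = [(2.5, 1, 4000), (10.0, 4, 3000), (30.0, 9, 2500), (50.0, 15, 3500)]
impact = [j[0] for j in journals]
r_index = [retraction_index(j[1], j[2]) for j in journals]
r = pearson_r(impact, r_index)  # positive for this invented data
```

The scaling per 1000 papers matters: without it, large low-impact journals would dominate the comparison simply by publishing more articles.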

Are the consequences of exposure sufficient to deter potential fraudsters?

If potential offenders are to be deterred by the consequences of being exposed then they must first believe that the threat of exposure is real. Some fraudsters have escaped detection and exposure during their lifetimes and several others have been able to prosper and continue to publish fraudulent data for many years or even decades.

Charles Dawson was still being honored for his Piltdown discovery more than two decades after his death, e.g. in 1938 at an unveiling ceremony for a stone monolith marking the place where he claimed to have made this discovery. Sir Cyril Burt was still being honored as a respected and major figure in educational psychology right up to his death in 1971. In 1968 he was awarded the Thorndike award for outstanding contributions to educational psychology; he was the first non-American to receive this award from the American Psychological Association and his Thorndike lecture was published 5 months after his death. Botany professor and Fellow of the Royal Society JW Heslop Harrison was not publicly exposed as a fraudster until the publication of Karl Sabbagh's book A Rum Affair in 1999, i.e. over 30 years after his death in 1967. This was despite the existence of a damning report, essentially proving his dishonesty, compiled by John Raven in 1948 but then buried in a Cambridge University library for half a century. Others like Michael Briggs, Diederik Stapel, RK Chandra and Vishwa Jit Gupta were able to continue their fraudulent activities for decades and achieved great success and international recognition before being publicly exposed.

The worst-case scenario for most research fraudsters is that they could face criminal prosecution for activities relating to their fraud and end up serving time in prison. This is a rare occurrence: only three of the case-study subjects have been the subject of a criminal prosecution and only two have actually served prison sentences. Some of the research fraudsters who have been the subject of criminal prosecution are listed below.

  • In June 2006, obesity researcher Eric Poehlman was sentenced in a US federal court to a year and a day in prison for using false data to obtain federal funding for his research; the first person to be sent to prison for research fraud.
  • In May 2010, pain specialist Scott Reuben was sentenced to 6 months' imprisonment plus additional financial penalties after receiving money from pharmaceutical companies to conduct trials of their drugs; he never conducted these studies but published fabricated results.
  • The first person to be subjected to a criminal prosecution for fabricating scientific research data was US psychologist Stephen Breuning, in 1988. He was found guilty of two counts of fabricating his research results and one of obstructing the investigation into his activities. Prior to sentencing, prosecutors said that he could have faced a sentence of 10 years in prison and a large fine. In the event he was sentenced to serve 60 days in a halfway house and 5 years' probation, plus additional financial penalties.
  • In Britain in April 2013, Steven Eaton, an employee at the Edinburgh branch of a US pharmaceutical company, became the first person to be prosecuted under a 1999 law called the Good Laboratory Practice Regulations. He was sentenced to 3 months in prison for falsifying test results relating to an experimental anti-cancer drug. He was said by the Financial Times to have been fabricating results for ten years, including at several other drug companies.
  • In June 2014, Dong-Pyou Han, a biomedical scientist at Iowa State University, was indicted on four federal felony counts of making false statements to obtain federal research grant money. He faked results to make it seem that a vaccine he was working with had anti-HIV activity. At the time of his indictment it was suggested that he could have faced up to 20 years in prison plus a $1 million fine; around $15 million in federal grant money had been given to Han and Iowa State to fund his research. In January 2015 he pleaded guilty under an undisclosed plea bargain arrangement and on 1st July 2015 he was sentenced to 57 months in prison, 3 years of supervised release after leaving prison, and a large fine.

In a short piece in the BMJ in December 2013, Richard Smith, an ex-editor of the journal, suggested that research fraud should be made a specific criminal offence; he considered it almost inevitable that it eventually would be. He recalled arguments made in 2000 by Alexander McCall Smith, a professor of medical law and ethics, to justify this drastic step:

  • There is misuse of financial resources
  • The police may be best placed and most qualified to conduct an investigation including being able to demand access to papers and computers and to interrogate reluctant witnesses
  • Where research has any medical implications then there is the real possibility of patients being harmed and even fabricated non-medical research can cause serious harm to people.

Some of what fraudsters have done may infringe existing criminal law: if someone lies or submits false data to obtain funding then this funding has been obtained by false pretences, and if someone uses money given to fund research for their own purposes then this is also a clear breach of existing law. As discussed in a previous article, some published fraudulent research studies have led to inappropriate treatments being given to patients and in some cases to many extra deaths. These deaths would certainly seem to be the moral responsibility of those publishing the fraudulent data, and as a non-lawyer I wonder why no-one seems to be suggesting that such offenders should be charged with manslaughter. If a company or an engineer wilfully falsified safety data or maintenance records for machinery and this resulted in a fatal accident, then the offenders would probably be charged with manslaughter. If a doctor falsifies the result of a clinical trial and then patients die when others replicate the apparently successful treatment protocol, this seems like a very similar scenario. Of course, it may sometimes be difficult to pinpoint exactly which patients died or were harmed as a result of the piece of false research data, but if the falsely supported treatment increases risks for patients then there is no doubt that the fraudster has recklessly endangered their lives and wellbeing.

In the UK, any test facility which conducts regulatory studies must comply with good laboratory practice (GLP) regulations when carrying out safety tests on a range of chemicals including drugs, cosmetics, food additives, industrial chemicals and chemicals used in agriculture. The facility must belong to the GLP compliance monitoring programme. An initial inspection is needed to obtain GLP status and then regular notified inspections (every 12-30 months) are carried out. Additional inspections can be carried out if there is some cause for concern, e.g. accusations from a whistle-blower or suspicion of research fraud. The monitors have the power to demand improvements in any deficiencies detected and ultimately have the power to suspend or disqualify a facility from recognition as GLP compliant. It is an offence, punishable by a fine and/or imprisonment, to make any false good laboratory practice instrument. It is under this legislation that Steven Eaton was prosecuted and imprisoned in Edinburgh.

For those researchers who are also clinicians and who need a licence to continue to treat patients, there is a real risk that, if they are found to have committed research fraud, their licence to practise could be suspended or revoked. Two of the case studies involve British doctors who were accused of research fraud and were the subject of GMC disciplinary hearings resulting in their being struck off the medical register. In the case of Malcolm Pearce, the obstetrician who was accused of faking a report of a successful removal and re-implantation of an ectopic embryo, it is clear that the main reason for his removal from the register was his falsification of his research because, in the words of the chair of the disciplinary committee, his:

“Deceit has had incalculable consequences for public confidence in the integrity of research”.

Andrew Wakefield was found to have committed research fraud in relation to his paper suggesting an association between the MMR vaccine, inflammatory bowel disease and regressive autism. The main criticisms of him in the GMC report related to his professional and ethical conduct although he was found guilty of several charges of dishonesty. It was later convincingly argued by journalist Brian Deer in an article published in the BMJ that the data presented in the infamous Lancet paper bore no resemblance to the data from the patients’ notes and that he had committed research fraud.

Norwegian dentist and physician Jon Sudbo was restricted to acting as an assistant dentist within a defined geographical region after he admitted committing multiple acts of research fraud. Other cases where British doctors have been struck off for research fraud include:

  • Mark Williams, a senior lecturer in public health at Bristol University, was struck off in 1998 for falsifying statistics in a research paper. He was later found to have fabricated much of the patient data for an important piece of work on severely disabled adults that may have influenced the design of the UK’s Care in the Community initiative.
  • In July 1997, John Anderton, who had been a consultant for over 20 years and was a former registrar of the Royal College of Physicians in Edinburgh, was struck off when it was found that he had forged consent forms for patients supposedly taking part in a clinical trial of an anti-angina drug. He also admitted fabricating some of his data. The chairman of his disciplinary hearing said that:

“Dishonesty by doctors participating in such trials is not only thoroughly discreditable in itself, but is also a potential source of dangers for patients”

  • The surgeon Anjan Kumar Banerjee, who was awarded an MBE in 2014 for “services to patient safety”, had previously been struck off the medical register for gross professional misconduct. In 2000 he was found guilty of falsifying a scientific paper published in 1990 in the journal Gut, and this and the subsequent cover-up resulted in his suspension from the medical register. Two years later he was struck off for financial dishonesty but successfully applied for re-instatement in 2007.

Even where researchers do not have clinical positions, one would expect that verified accusations of research fraud would result in loss of the perpetrator’s current employment and perhaps effective exclusion from similar academic or research posts for some time afterwards, perhaps even permanently. Most of those found to have committed serious acts of research misconduct have either been dismissed from their positions, e.g. Dipak Das from the University of Connecticut, or have resigned before a decision to dismiss them was made, e.g. Diederik Stapel from Tilburg University.

A few people seem to have emerged relatively unscathed from public accusations of research fraud. Ram Bahadur Singh has been investigated by two top medical journals, and expressions of concern have been issued about papers published in both the Lancet and the BMJ. A commissioned analysis of raw data that he provided to support a paper submitted to the BMJ concluded that it had been fabricated or falsified and could not have been generated in the manner described in the manuscript. Despite this, Singh was still in 2018 described on the World Heart Journal web-site as Professor of Internal Medicine at the Halberg Hospital and Research Institute in Moradabad, India, although this institute seems to have a limited presence and profile on the internet. In March 2013, Singh was still listed as editor-in-chief of two open access journals, and he has continued to publish at an astonishing rate, largely in these two journals and a few other low impact journals.

When first investigated by Memorial University in 1994, RK Chandra was found to have committed serious research misconduct but was allowed by the university to continue his employment and fraudulent activities and build up a fortune of millions of dollars. In 2002, after a second round of allegations of data fabrication were made, he resigned and eventually returned to India. For some years afterwards, Chandra continued to publish and attend symposia as an invited speaker. Articles and interviews in Indian newspapers still presented him as a major contributor to science and medical research long after his flight from Canada; he seems to have been feted as a major scientific figure in India, with no mention of the Canadian allegations. Only after November 2015, when he lost a defamation case against the Canadian Broadcasting Corporation and was ordered to pay $1.6 million towards the CBC’s costs, was any serious effort made to cleanse the literature of some of his most notoriously fabricated data; in 2016 he was also stripped of his Order of Canada.

After his dismissal by the Royal Free Hospital in London, Andrew Wakefield moved to Texas where for several years he held a lucrative post as director of what was then called Thoughtful House in Austin. He only resigned from this institution when he was struck off the UK medical register in 2010 and his infamous Lancet paper was fully retracted. He has become a folk hero for the anti-vaccine movement in the USA, which regards him as a victim of powerful agencies and pharmaceutical companies trying to stifle and suppress criticism of the MMR vaccine. He has written a book defending his position entitled Callous Disregard. Outbreaks of measles in many countries, including the USA where measles had been declared eliminated in 2000, have been partly attributed to this anti-vaccine movement.

Jon Sudbo continued to work as a dentist in Norway and attracted the attention of the Norwegian press when, in 2014, he applied unsuccessfully for a senior dental post outside the area to which his practice had been restricted. William Summerlin, who falsely claimed to be able to transplant skin and other tissues without the need for anti-rejection drugs, went back to being a dermatologist after losing his position at the Sloan Kettering Institute and apparently still had a dermatology practice in Arkansas in 2015. Vishwa Jit Gupta seems to have held onto his professorial position at Punjab University until his scheduled retirement at 60 years old, despite several journal articles detailing his fraudulent activities and despite an inquiry set up by the university finding him guilty of research misconduct.

The effect on the personal lives, self-esteem and quality of life of those publicly exposed as research frauds can only be imagined. In his long and apparently candid interview with Yudhijit Bhattacharjee published in the New York Times in April 2013, Diederik Stapel describes the depression and self-loathing that followed his exposure. He says that he was receiving psychotherapy and had been prescribed medication by his psychiatrist to treat his depression. Bhattacharjee also interviewed Stapel’s wife, who described her initial anger at his betrayal, although she seemed to have forgiven him by the time the interview was conducted. Two months after the scandal broke, Stapel’s wife made him promise that he would not commit suicide, which gives some indication of his desperate mental state at that time. The sadness and disappointment of his very elderly parents also comes through in this article.

Two of my case-study subjects, Michael Briggs and Dipak Das, died within a short period of their exposure as research frauds. In letters to the University of Connecticut authorities, Das claimed that serious ill health, including a stroke, had been precipitated by the stress of trying to fight the accusations made against him. Michael Briggs died of liver failure in Spain at the age of 51, just a year after he resigned from Deakin University and left Australia. In September 1926, the Austrian biologist Paul Kammerer shot himself in the head just six weeks after an article published in Nature claimed that his most important specimen had been faked. He publicly accepted that the specimen had been faked, and this was undoubtedly the reason for his suicide, although he maintained that he was not personally responsible for the deception.

Peer review by co-authors, referees and editors

The process of peer review has traditionally been viewed as the major safeguard against the publication of poor quality or fabricated papers. Any paper submitted to a properly run journal should be read critically by at least one person selected by the editors for their particular expertise in the subject matter of the paper. For the better journals there will probably be at least two referees with special interest in the area(s) covered by the paper, and in many cases the paper will also be scrutinised by someone whose expertise is in statistics. The referees will try to ensure that the work is put into proper perspective by the way in which previous findings are acknowledged, summarised and related to the current work. The referees will consider whether the study has been soundly designed and whether suitable methods and statistical analyses have been appropriately used and satisfactorily described. They will also look critically at how the findings are interpreted and consider whether there are alternative ways of interpreting them. These referees should then submit independent reports of their deliberations and make recommendations as to whether the paper should be accepted, rejected or accepted subject to certain changes or conditions. I have written about the history and development of the peer review process in another article on this blog.

This peer review process will be undertaken with varying degrees of skill and diligence by different referees. Conscientious editors hope to receive a set of searching and unbiased opinions about the strengths and weaknesses of the paper and its suitability for publication in the journal. However, whatever their degree of skill and application, most referees will tend to trust what has been written by the authors. They will generally accept that the authors’ descriptions of the design, methods and statistical analyses are honest. They will assume that the data presented and discussed has really been generated by the authors using the methods described. They will probably not actively consider the possibility that the author(s) have deliberately tried to deceive them by falsely describing the procedures used or by submitting data that has been unacceptably manipulated or even completely fabricated, unless some glaring anomaly forces them to consider this possibility. This would suggest that a flaw in an honestly described study is more likely to be spotted by a referee than blatant fabrication of superficially plausible data. Many co-authors and referees, when later confronted by evidence of blatant fraud that in retrospect looks quite obvious, make comments along the lines of:

“I never even considered the possibility that s/he was making it all up”

The failure of the peer review system to identify many fraudulent papers is supported by data in the second article in this series about detection of research fraud. A 2012 analysis by Wolfgang Stroebe and two Dutch colleagues of the ways in which major known fraudsters were unmasked found that very few were first identified during the peer review process. Of course, this could be because, when referees and editors suspect research fraud, they simply reject the paper, leaving the fraudulent author’s reputation intact and allowing them to continue trying to pollute the scientific record with falsified or fabricated data and perhaps even to publish the rejected paper elsewhere. In the Ram B Singh case, Professor Paul McKeigue suggested that Singh was using the detailed criticism of his rejected manuscripts in editorial and referees’ reports as a sort of tutorial on how to make a paper more convincing and believable when it was re-submitted to another journal.

Vigilance of co-authors as part of the peer review process

One would hope and expect that, if a paper is submitted in the names of several authors, all of them have had a significant input into the data collection, analysis and/or the drafting of the paper, and that all of them accept responsibility for its accuracy and honesty. Of course, in large research teams the day-to-day monitoring of each other’s work may be necessarily quite loose. This may be a particular problem where senior researchers are responsible for supervising a number of junior assistants and postgraduate research students.

A junior researcher such as a graduate student or postdoctoral assistant will normally present to their supervisor the results of the studies performed under the supervisor’s direction and guidance. As with other aspects of peer review, this system relies heavily upon trust. A supervisor will be naturally inclined to believe the junior researcher’s account of exactly how the study was conducted and that the results were generated by this process. The supervisor will offer criticism and guidance based upon these assumptions, just as referees and editors make such assumptions when assessing the value of a paper submitted for publication in a journal. Supervisors will not expect junior colleagues, whom they may regard as friends or even protégés, to lie about what they did and present them with data that has been manipulated improperly or even completely fabricated. Some determined fraudsters may go to considerable lengths to make their accounts and results seem more plausible, and they may be prepared to tell blatant and repeated lies in order to deceive their colleagues.

Where the fraudulent scientist is a senior author, it is probably even less likely that their junior colleagues will consider the possibility of fraud. The more eminent and distinguished the fraudster, and the more dominant and confident (arrogant?) their manner, the less likely it is that the veracity of the data they produce will be challenged, even when it has unusual or suspicious features.

In discussing the role and responsibilities of co-authors, it has been assumed that all of the authors made a substantial contribution in one way or another to the final manuscript. This is not always the case. Decisions and guidelines about who should appear amongst the list of authors, and in what order they should be listed, could generate a small volume in its own right. All individuals listed as co-authors should have made a substantial contribution to the work, should have reviewed and approved the final manuscript, and should accept responsibility for the data it contains. Everyone who has made such a contribution should be given credit as a co-author. Anyone who has made some contribution to the work, but not a sufficient contribution to warrant co-authorship, should be acknowledged for their specific contribution, usually at the end of the paper. The International Committee of Medical Journal Editors (ICMJE) defined an author as someone who satisfies all of the following criteria:

  • Substantial contribution to the conception and design, acquisition of data, or analysis and interpretation of data
  • Substantial contribution to drafting the article or reviewing it critically for important intellectual content, and
  • Final approval of the version to be published.

The Council of Science Editors (CSE), in a 2012 White Paper on Promoting Integrity in Scientific Journal Publications, suggests a number of contributions that do not, by themselves, justify authorship: participating only in the drafting of the manuscript as a professional writer, providing research space, departmental oversight, obtaining financial support, assisting the research by providing advice or isolated analyses, or providing reagents, patients, animals or other study materials.

I will not even attempt to discuss the thorny issue of how the order of the authors’ names should be decided; an issue that has undoubtedly damaged many working relationships and probably been the trigger for several long-lasting feuds.

Gift authorship

Gift authorship is the inclusion of someone’s name on a paper even though they have not made a significant contribution to the work as defined by the ICMJE guidelines given earlier. For example, a student or technical assistant who might have helped in some of the routine data collection or perhaps played an administrative role in collecting or collating the data might be rewarded for their time by being included on a lengthy author list. Someone who provides a gift of a material used in a study might be thanked by inclusion in the list of authors; some might only provide the material in return for a guarantee of co-authorship. In some extreme cases, someone’s name has been included on the list of authors without their even being aware of the paper and the use of their name.

It used to be very common practice in many university departments or research units for the head of the department to be included as a co-author as a courtesy and gesture of appreciation for providing general support and facilities for the work; some heads no doubt insisted upon this courtesy to increase their research reputations and CVs. This was presumably justified because it allowed heads of department to maintain a presence in the literature despite focusing their efforts upon management and creating an environment and resources conducive to good research. This would now be regarded by many as a form of scientific misconduct.

When the motivation for gift authorship is solely to register appreciation, it could be regarded as a relatively low-level misdemeanour that should not affect the integrity of the scientific literature except insofar as it devalues scientific authorship. A more sinister motivation for gift authorship is where a researcher includes the name of an eminent colleague or acquaintance in order to add prestige and authority to the author list, in the belief that it will increase the chances of a substandard or fabricated study being accepted for publication. If an eminent author accepts gift authorship suspecting that the motivation for their inclusion on the author list is to help secure publication, then their level of culpability is ratcheted up several notches.

In the case-study of Yoshitaka Fujii, he added the names of colleagues from other institutions without their knowledge or consent to make the generation of data from a huge volume of patients seem more credible. It was made clear in the report by the Japanese Society of Anesthesiologists that some of his co-authors were listed without their consent or knowledge and that on occasion he even resorted to forging signatures of co-authors. Some co-authors claimed to have been unaware that their names had been used in this way over several years but it does seem that others used these gift publications to add to their CVs.

The Indian geologist and palaeontologist Vishwa Jit Gupta duped other unsuspecting experts into becoming co-authors by sending them fossils for identification and description which he falsely claimed to have found in the Himalayas. Some of these co-authors have explained how they were drawn into supporting his findings by describing and authenticating fossils that were themselves genuine; they now accept that they were fooled by him and were probably too gullible in accepting his improbable accounts of where these fossils were found.

British obstetrician Malcolm Pearce was struck off the medical register in 1995 for falsely claiming in a case report to have successfully re-implanted an ectopic embryo and for fabricating a clinical trial. All of his co-authors were sent warning letters by the General Medical Council reminding them of their duty as co-authors to ensure the accuracy of data published in their names. Pearce’s head of department, Professor Geoffrey Chamberlain, accepted gift authorship of the case report as a matter of routine, without any active participation in the paper, and resigned all his academic positions when Pearce’s fraud was made public.

Another case of controversial gift authorship centred on a clinical trial of a stem cell treatment for stress incontinence in women. This emanated from the Medical University of Innsbruck in Austria, was published in the Lancet and was retracted in 2008. The study was found to lack proper ethical approval, patients had not been properly informed about the experimental nature of the treatments, and there were serious irregularities in the published data.

The person deemed responsible for the misconduct was Dr Hannes Strasser, who was subsequently banned by the university from treating patients. The head of the urology department, Georg Bartsch, and five others were listed as co-authors of the paper. Professor Bartsch said that he had not asked to be credited as a co-author and subsequently asked the Lancet to withdraw his name and that of two other co-authors from the credits for this paper. Professor Bartsch was exonerated by the official Austrian report, and the investigating team accepted that he had not taken part in the study. Nevertheless, a Lancet editorial published at the time warned about the responsibilities of co-authors:

“Co-authors abrogating responsibility is a recurrent theme in research misconduct cases”

“Honorary or gift authorship is unacceptable. Using gift authorship as an excuse for not taking responsibility for research when serious flaws are uncovered goes a step further, and should not be tolerated.”

Has proper peer review actually taken place?

Much of this material has already been discussed in a previous article specifically about peer review but is summarised again here because it is critical to the protection theme of this piece. This earlier piece contains an account of the history and development of the peer review system.

It is clear from several of the case studies that even apparently rigorous peer review by the best journals has failed to find fabricated data which in hindsight looks to be seriously and obviously flawed. There are also disturbing indications that, on some occasions, little or no peer review has actually taken place even though the journal claims to publish peer reviewed articles.

One of the big innovations in scientific publishing in recent years has been the birth and growth of so-called open-access journals. This model, in which the author pays a fee to cover publication costs, means that the author, rather than the reader, becomes the customer. Clearly this has the potential for abuse by publishers or editors who may be tempted to accept many poor quality papers just so that they can collect the publication fees. In my article on peer review, I discussed at length a report in Science by John Bohannon. He sent 304 versions of a spoof paper with serious and obvious flaws to open access journals; more than half (157) were accepted and only 98 rejected. In many cases there was no evidence of peer review, and in a few cases the paper was accepted by editors despite the flaws being highlighted by peer reviewers.

In 2009, The Open Information Science Journal accepted a spoof paper by Philip Davis and Kent Anderson which was just strings of meaningless jargon generated by a computer program. The authors gave the institutional address of the mythical authors as the Center for Research in Applied Phrenology, or CRAP! The paper was entitled Deconstructing Access Points and contains the following sentences, which indicate the quality of what was submitted:

“We describe a novel heuristic for the extensive unification of web browsers and rasterization, which we call Trifiling Thamyn.”

“To accomplish this ambition for unstable models, we constructed new metamorphic algorithms”

Four months after the article was submitted, the authors received a statement saying that the paper had been peer reviewed and accepted for publication and requesting a publication fee of $800, at which point the authors withdrew the paper. Shortly after this hoax became public, the editor-in-chief of the journal resigned. In 2014, more than 120 papers published in supposedly peer reviewed conference proceedings were identified as having also been generated by this computer program. The last example is almost too outrageous to believe. Peter Vamplew was so annoyed by receiving unsolicited e-mails from predatory open access journals that he sent one of them a spoof paper comprised entirely of the seven words “get me off your f***ing mailing list”. A diagram from this spoof paper is reproduced below. The journal accepted the paper subject to payment of a processing fee, which Vamplew declined to pay.

mailing list diagram

This paper was originally constructed by David Mazieres and Eddie Kohler of New York University and UCLA and the full version is available online.

During the 1990s, I was a member of the editorial board of the British Journal of Nutrition. This was before the days of online submission and reviewing of papers, so every few weeks I would receive a large package containing several hard copies of a paper that had been submitted to the journal. The paper had to be refereed by two subject specialists (one of whom could be me) and usually sent for statistical review to a statistical editor. My task was to find experts willing to referee the paper, seek guidance from the statistical editor and then produce an editorial report that synthesised the findings of all of the referees. I would then recommend that the paper be accepted, rejected or accepted subject to certain conditions. It was often a time-consuming task to identify suitable referees who were willing to undertake the task of reviewing the paper. It also frequently required some cajoling and several polite reminders to try to get reports submitted within a reasonable time frame. For some obscure research areas it could be quite difficult to identify a referee with suitable background and experience who was willing to undertake this unpaid task.

Nowadays most papers are submitted electronically and the refereeing process is usually conducted via e-mail or through an online portal. This still leaves editors with the sometimes difficult task of identifying suitable referees for the paper. To facilitate this process, some journals now ask authors to suggest the names of people who would be suitable referees for the paper and to provide contact details for these possible referees. This process can be abused by unscrupulous authors if journal editors simply select a couple of names from the list provided and send the manuscript for review to the e-mail addresses supplied by the author. There have been several cases where authors have created fake e-mail addresses for real or phantom referees using free e-mail providers like Gmail or Yahoo. These e-mail addresses actually belong to the author, and so the paper ends up being sent back to its author for review and criticism.

Khalid Zaman was an economics professor at a public research university in Pakistan (the COMSATS Institute of Information Technology). In 2014, sixteen papers that he had co-authored were retracted from journals published by Elsevier because of just such a peer review scam. By using fake e-mail addresses for real or imaginary academics, Zaman was sent his own papers for review. An editor of one of the journals became suspicious because all of the referees with non-institutional e-mail addresses gave very positive reviews of Zaman’s work, and did so very quickly. When challenged by one of Elsevier’s editors, Zaman admitted that he had written the reviews of his own papers, accepted full responsibility for the deception and said that his co-authors had not been involved in the dishonesty.

In July 2014, the publishing company SAGE announced that it was retracting 60 articles from its Journal of Vibration and Control because of the existence of a large peer review and citation ring whose members were refereeing and citing each other’s papers at a strikingly high rate. The central character in the operation of this ring was Peter Chen, an academic formerly based at the National Pingtung University of Education in Taiwan, who was a co-author on almost all of these papers. During an investigation by a team of SAGE employees, 130 suspicious e-mail accounts were identified. Chen resigned from his academic position and is said to have taken sole responsibility for the operation of this peer review and citation ring. He said that his co-authors were not involved in his deception, and he also admitted to adding the name of Taiwan’s education minister to several of these retracted papers without his permission. The minister denied playing any active role in this process but nevertheless resigned his ministerial position:

“To uphold his own reputation and avoid unnecessary disturbance of the work of the education ministry”.

In November 2014, Cat Ferguson, Adam Marcus and Ivan Oransky discussed this Peter Chen example and several similar peer review scams, including that of the South Korean medicinal plant researcher Hyun-In Moon. Moon admitted to the editor of the Journal of Enzyme Inhibition and Medicinal Chemistry in 2012 that he had written many of the reviews of his papers himself. The reviews themselves were unremarkable: generally favourable comments, often with suggestions about how to improve the paper. The editor’s suspicions were raised by the speed with which reviews were submitted, often within 24 hours of the paper being sent. In their Nature article, Ferguson and colleagues suggest a few signs that should raise suspicions that authors might be trying to exploit loopholes in the peer review system and perhaps end up writing the reviews of their own papers:

  • Authors ask to exclude some potential reviewers and then list almost everyone in their specialist field in this category
  • Authors recommend referees who cannot be found in an online search
  • Authors provide e-mail addresses from free e-mail providers rather than institutional addresses
  • Very rapid return of very positive reviews
  • Reviewers are unanimous in their highly positive opinions.
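For illustration only, warning signs like these could in principle be codified as a simple automated screening step in an editorial system. The sketch below is a hypothetical heuristic, not any real manuscript-handling software: the field names, the 48-hour threshold and the list of free e-mail domains are my own assumptions.

```python
# A toy screening function codifying some of the warning signs listed above.
# All data structures and thresholds are illustrative assumptions.

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "outlook.com"}

def review_red_flags(suggested_reviewers, reviews):
    """Return a list of warning strings for a single submission.

    suggested_reviewers: list of dicts with 'email' and 'found_online' keys.
    reviews: list of dicts with 'hours_to_return' and 'score' (1-5) keys.
    """
    flags = []

    # Suggested referees using free (non-institutional) e-mail providers
    free = [r for r in suggested_reviewers
            if r["email"].rsplit("@", 1)[-1].lower() in FREE_EMAIL_DOMAINS]
    if free:
        flags.append(f"{len(free)} suggested reviewer(s) use free e-mail providers")

    # Suggested referees who cannot be traced in an online search
    untraceable = [r for r in suggested_reviewers if not r["found_online"]]
    if untraceable:
        flags.append(f"{len(untraceable)} suggested reviewer(s) not found online")

    # Very rapid return of uniformly positive reviews
    if reviews and all(rv["hours_to_return"] < 48 and rv["score"] >= 4
                       for rv in reviews):
        flags.append("all reviews returned within 48 h and uniformly positive")

    return flags
```

A real editorial system would of course combine such signals with human judgement; any one of these signs alone is weak evidence, which is presumably why the scams described above went unnoticed for so long.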

Several of the case study fraudsters have been long term editors of the journal where they publish many of their papers. This means that an unscrupulous and autonomous editor can control the peer review process of their own papers and those submitted by supporters or rivals.

Sir Cyril Burt was the founding editor of what has now become the Journal of Mathematical and Statistical Psychology. Over a 16-year period, Burt published 66 papers in this journal under his own name, well over half of his published output of scientific papers during this time. It is also widely believed that he published articles, book reviews and other contributions under a variety of assumed names. It seems that he could effectively control what was published in this journal throughout this period.

For over 20 years (1981-2003), RK Chandra was founding editor-in-chief of the Elsevier journal Nutrition Research. According to the SCOPUS database, Chandra has 65 publications listed for this journal, including 27 research articles, 26 editorials and 4 reviews. Two of these papers, published in 2002, have come under particular scrutiny since he was openly accused of research fraud: one published under his own name and the other under the name of an untraceable author, AL Jain, presumed to be a Chandra pseudonym. Both papers support his earlier findings, published in the Lancet, that his patented supplement improves immune function in older people. They are thus essentially repetition studies that bolstered his earlier findings at a time when the integrity and veracity of his research was being publicly questioned. Both papers were accepted for publication within 24 hours of submission, which suggests either that no independent peer review of either paper took place or that Chandra had found some amazingly efficient and diligent referees.

In 2015, Ram B Singh was editor-in-chief of two open access journals, the World Heart Journal and the Open Nutraceuticals Journal. In the period 2008-2014, no fewer than 118 of his 147 publications listed on the SCOPUS database were published in these two journals. The Open Nutraceuticals Journal was founded in 2008 and was published by Bentham Open until it ceased publication in 2015. The World Heart Journal started in 2004 and is published by Nova Science Publishers, whose 2015 Wikipedia entry contained the following quotation:

Nova has been criticized for not always evaluating authors through the academic peer review process and for republishing old public domain book chapters and freely-accessible government reports at high prices. These criticisms prompted librarian Jeffrey Beall to write that in his opinion Nova Science Publishers was in the “bottom-tier” of publishers.

Both of the journals with Singh as editor-in-chief or co-editor have or had very low impact factors.

Peer review issues – summing up

Peer review is not designed to detect deliberate attempts to publish fabricated or falsified data. A number of factors like those listed below undermine the effectiveness of these review processes.

  • If co-authors abdicate their responsibilities by accepting gift authorship of a paper but playing no active role in ensuring the accuracy and quality of the work. By signing their acceptance of authorship they are acting as guarantors for the paper and could thus add unwarranted weight and prestige to the submission. Their names on the author list tend to reassure editors and peer reviewers that several experts are attesting to the quality and veracity of the work.
  • If the peer-review process is effectively by-passed. This may be because editors and publishers are keen to fill the pages of a lower prestige journal or because those managing some open access journals are primarily concerned with collecting the publication fees for the article. Some of those accused of research fraud have become editors of journals which could allow them to by-pass proper unbiased peer review for their own papers and for those of friends and collaborators.
  • If a paper is submitted to a journal that does not directly specialise in the area that is the main or a major focus of the paper. After the initial allegations were made against Yoshitaka Fujii in 2000, he published a number of papers in journals that were not specifically focused on anaesthesiology. The editors and referees for these journals may not have enough subject-specific expertise in the topic dealt with by the paper. Likewise, if a paper crosses several subject areas, then it may not be seen by someone who has expertise in all of these areas. Ranjit Chandra’s retracted paper on the effects of a dietary supplement on cognitive function required expertise in nutrition, psychiatric testing and statistics to assess it properly. As a reviewer with nutrition expertise, I would probably not have spotted serious errors in the cognitive function testing that might have been obvious to a psychiatrist or psychologist.
  • If the paper is not seen by a statistical specialist then flaws in the statistical analysis may not be spotted. Some papers have been published where the values presented in the statistical summary are clearly impossible and would suggest to a statistician that the data has been fabricated or manipulated. Huge statistical flaws in the Chandra paper on the effect of a supplement on cognitive function have been discussed at length in the case study. Similar but perhaps less spectacular flaws have been reported in several of his other papers. Analysis of the raw data eventually provided by Ram B Singh to support a paper submitted to the BMJ showed that it could not have been generated in the manner described in the manuscript.
  • Referees and co-authors may not even consider the possibility of research fraud. The peer review process was never intended to detect fraudulent data but to check that the study as described by the authors is sound and of sufficient interest to warrant publication. Likewise, co-authors and research supervisors generally make critical assessments of the work of their colleagues and juniors at the pre-submission stage on the assumption that they have been told the truth. If peer reviewers, and editors in particular, were more aware of the possibility of fraud and gave it active consideration during this process, then this might increase detection rates.

Can peer review be improved?

Most people who referee papers receive no formal training to help them fulfil this vital role and are not paid for their efforts. Even membership of an editorial board of a major journal is usually an honorary unpaid role. A study conducted by Sara Schroter and several other BMJ staff tested the effect of training upon the performance of referees. They split a group of potential referees into three groups:

  • A control group who received no training
  • A first intervention group who received a full day’s face-to-face training
  • A second intervention group who were sent a self-teaching package on a CD-ROM.

The participants were made fully aware of the purpose of the study and knew that their performance as reviewers was being monitored.

They used three previously published papers, which they anonymised and into which they introduced 9 major errors and 5 minor errors. All three groups were sent the first paper prior to any training: 68% of the respondents recommended rejection of this seriously flawed paper, with an average of 2.6 major faults reported by each reviewer. This is less than a third of the total major faults, although reviewers who found enough major faults to justify recommending rejection might not have looked too hard for others.

Two to three months after the training for the intervention groups, the second flawed paper was sent to those who had completed the first review. Both intervention groups spotted more flaws than the control group; this difference was statistically significant but not editorially very important. The intervention groups were also more likely than the controls to recommend rejection.  About 6 months after the training, all groups were then sent the third paper and this time only the self-taught group spotted significantly more errors than the control group and they were also significantly more likely to recommend rejection.

Schroter and her colleagues concluded that short training packages have only a slight impact on the quality of peer review. The self-taught package generally had more effect, and certainly a more prolonged effect, than the face-to-face training, so short face-to-face training courses were judged not cost-effective and were not recommended. Of course, this training was not specifically about research fraud but was aimed at generally improving the ability of reviewers to spot and report errors in the papers they review.

A recurring theme in cases of fraud that have resulted in publication of papers with fabricated or manipulated data is the inherent trust of reviewers and co-authors/supervisors in the authenticity of the data they are reviewing, and their failure to even consider the possibility of deliberate deception. Increased awareness of the problem is obviously a key factor in trying to rectify this. There needs to be open discussion of fraud, and suspected cases need to be addressed openly rather than dealt with in secret for fear of harming the reputation of institutions or journals; attempts to cover up major scandals of all sorts often cause more harm and scandal than the original problem. There may be a case for making research misconduct and ethical research behaviour a curriculum topic for all university level science courses. This should not be confined to theoretical and abstract discussion of appropriate and inappropriate behaviours but should include in-depth discussion and analysis of some real fraud and misconduct cases.

Today’s undergraduates and postgraduates are tomorrow’s referees and editors and an awareness of misconduct and some of its characteristics and ploys should be embedded into their psyche early in their science careers.
