Earlier this year, the Journal of Biological Chemistry retracted four papers by the same researcher, who studies immune-system chemicals that have links to cancer and autoimmune diseases. In each case, the journal offered an identical explanation: “This article has been withdrawn by the authors.” As Ivan Oransky wrote in Retraction Watch, a blog that started in August 2010, “That certainly clears things right up.”
Oransky and Adam Marcus, who works with him to highlight errors in scientific publications, are leading a growing group of critics who say that acknowledging these types of mistakes and explaining them matters greatly, especially given the scientific tradition of building arguments by citing the work of others. They contend that for biomedical research in particular, recognizing mistakes can truly be a matter of life and death.
Marcus, a science writer and managing editor of Anesthesiology News, and Oransky, a doctor who is also executive editor of Reuters Health, take pleasure in skewering scientific behavior that ranges from clumsy and preposterous to devious and downright criminal. In their first post, they explained that retractions cover a wide range: they may reflect an honest mistake in the research, or they can address full-blown fraud. But the bulk of retractions involve errors that are somewhere between the extremes, and they argue, convincingly, that most “live in obscurity in Medline and other databases.” Many journals have no retraction policies, and the ones that do often publish these critical notices of error long after the original paper appeared.
The current record holder, a German anesthesiologist named Joachim Boldt, had more than 80 papers retracted by editors of 18 different journals because he had failed to receive approval from an institutional review board before conducting human studies. Boldt was relieved of his duties as chief physician at Ludwigshafen Hospital in Rhineland after separate allegations that he published a study based on a drug trial that never took place. Former Duke University oncologist Anil Potti, who falsely claimed on grant applications to have won a Rhodes scholarship, had seven papers retracted because his group could not reproduce its own work.
Retractions in the Medical Literature: How Many Patients Are Put at Risk by Flawed Research?
R. Grant Steen
Journal of Medical Ethics online, May 17, 2011
Retractions in the Scientific Literature: Is the Incidence of Research Fraud Increasing?
R. Grant Steen
Journal of Medical Ethics 37(4): 249–253, 2011
Retraction Policies of High-Impact Biomedical Journals
Michel C. Atlas
Journal of the Medical Library Association 92(2): 242–250, 2004
Editorial Expression of Concern
Science 333(6038): 35, 2011
Analyses show that retractions have skyrocketed over the past decade. Neil Saunders, a statistical bioinformatician at CSIRO, Australia's national science agency, has created a Web application that tracks retractions in PubMed, the main database of biomedical publications. Since 1977, the number of publications has jumped almost fourfold, but the number of retractions has increased by a factor of about 30. This could reflect an increase in self-policing or fraud, a growing public consciousness, the effects of social networking, the use of plagiarism-detecting software, or a combination of these factors. And even reading the full retraction notice, as Retraction Watch makes abundantly clear, often does not yield any meaningful understanding of why the retraction occurred.
The opacity of the Journal of Biological Chemistry has had some serious competition. In 2009 the Journal of the American Chemical Society retracted a paper with this ridiculous note: “This manuscript has been withdrawn for scientific reasons.” When Adam Marcus phoned the Annals of Thoracic Surgery for more information about a vaguely worded retraction, the editor, L. Henry Edmunds Jr., responded: “That’s none of your damn business.”
Actually, it is our business: the public funds a great deal of scientific work. What’s more, errors can cause real harm. An article published online May 17 in the Journal of Medical Ethics examined 180 retracted papers published between 2000 and 2010. Author R. Grant Steen, a biologist who runs a medical communications consulting business in Chapel Hill, North Carolina, found that these erroneous papers had involved 9,189 patients who received a treatment. The retracted studies were cited more than 5,000 times, more than one-third of them after the retractions—and those subsequent studies involved 70,501 treated patients. The paper detailed specific harm that might have occurred as a result, including undermedication of patients who had postsurgical pain, unnecessary surgery for cancer patients, and a treatment for kidney disease that may have made patients’ outcomes worse.
In a separate survey that Steen published in the same journal, he analyzed the reasons behind the retractions of 742 papers during the preceding decade. Nearly 75 percent resulted from errors including scientific mistakes, ethical issues, and plagiarism, while the rest were due to either data fabrication or falsification—that is, fraud. Steen’s analysis revealed that the incidence of retraction for both error and fraud has increased steeply: the number of papers retracted as fraudulent jumped from two in 2000 to 51 in 2009, and the number withdrawn annually for scientific mistakes rose from one to 35 over the same period. Steen suggests that the increases may reflect a more aggressive attempt by journals to police themselves.
Journals certainly are not aggressive about stating their retraction policies. A study published in 2004 by the Journal of the Medical Library Association evaluated whether 122 “major” biomedical journals even had such policies. Michel C. Atlas, a reference librarian at the University of Louisville in Kentucky, first looked to see whether a journal had a retraction policy in its online instructions to authors. Only four did. Atlas e-mailed the editors of the remaining journals and asked for a copy of their policy, if any. In the final tally, 76 of the editors—78 percent of those who responded—reported that their journals had no formal policy.
Not having a policy does not, of course, equal a reckless disregard for the truth. Some said they adhered to standard guidelines. And some journals defend not having a hard-and-fast policy because retractions are often complicated.
Take Science, where I am a contributing correspondent. I recently interviewed Bruce Alberts, the editor in chief, about a paper the journal published in 2009. The study linked a mouse retrovirus to chronic fatigue syndrome. At the time I spoke with Alberts, a dozen independent groups had reported that they could not replicate the work, and suspicions were mounting that the samples had been contaminated. Although no evidence then existed of contamination or error in any of the contributing authors’ labs, Alberts had asked the authors whether they wanted to retract. Several were incensed, and all felt a retraction was premature. Alberts settled for publishing an “editorial expression of concern.”
I asked Alberts where he drew the line between requesting a retraction and letting the study languish in disgrace. “It’s always tough,” he said. In the absence of outright fraud, plagiarism, or blatant error, editors have to make difficult judgment calls about a decision that can bring both promising lines of research and ascending careers crashing down. Science, in a further layer of nuance and complexity, later ran an unusual “partial retraction” after one of the labs that contributed to the paper found contamination.
Given that the evidence suggests most retractions do not stem from the manipulation of data, the reluctance to come clean seems self-defeating. The culture of science affords a great deal of respect for those who show flexibility in response to facts, even if it means spotlighting their own shortcomings and previous errors. Mistakes are particularly easy to make in the type of high-risk science that breaks new ground, and scientists should be encouraged to take such risks. But scientific journals need to be far more forthcoming in addressing those errors with full explanations, and in acknowledging that promptly retracting results is also part of the communication process.
Jon Cohen, a correspondent with Science, has written for the New Yorker, the Atlantic Monthly, and the New York Times Magazine. His latest book, Almost Chimpanzee, was published last year.