The problem with peer review

Disgraced researcher
Disgraced South Korean cloning expert Hwang Woo-Suk walks into a disciplinary committee meeting at Seoul National University in March 2006. The South Korean government took punitive measures against Hwang, banning him and his co-researchers from conducting stem cell research.
Photo by JUNG YEON-JE/AFP/Getty Images

Peer reviewers are independent scientists who examine the research of other scientists to determine whether it's worthy of publication. Reviewers look at things like the quality of the study, the strength of the findings and whether the research is important and meaningful.

But peer review has its limits. Dale Hammerschmidt knows this from personal experience.

"A publication or application can be completely fabricated and can slip through the peer review process pretty well," says Hammerschmidt. "That's not something it really detects."

Peer review not perfect
Dr. Dale Hammerschmidt says the peer review process is good at determining whether researchers are using sound scientific methods in their studies. But he says it's not a good way to detect scientific fraud.
MPR Photo/Lorna Benson

Dr. Hammerschmidt is the former editor in chief of The Journal of Laboratory and Clinical Medicine, based at the University of Minnesota. A little over a decade ago, his journal published two articles by a scientist who was later suspected of fabricating his findings on the use of antioxidants in fighting ulcer and bowel diseases.

Hammerschmidt says peer reviewers didn't detect researcher Aws Salim's deception, because it was subtle.

"What he was reporting was similar enough to what was already being reported by other people that it didn't raise a lot of eyebrows, until the question was raised about whether he really could be doing as much research as he was reporting," says Hammerschmidt.

A journal reader raised that question. So did another peer reviewer who happened to be examining a third article that Salim had submitted for publication. When the journal investigated the matter, it discovered that the time frame on Salim's study didn't match up with the availability of an important drug used in the study.

Ultimately, the journal's investigation concluded that Salim had fabricated 45 of his 49 papers in the published literature. The journal withdrew its aegis, or endorsement, of the two papers it had published. That action triggered a notice in indexing databases informing other researchers that the journal no longer stood behind its publication of Salim's articles.

But Hammerschmidt says the notice system isn't a perfect fix, because not all researchers bother tracing papers back to the indexing databases when doing their literature searches.

"When we've looked at the papers on which we've withdrawn aegis and we've looked at other papers that have actually been retracted, we find that they're still being cited as much as eight and 10 years later," says Hammerschmidt. "So it isn't removing them as effectively from the universe of science as you'd like."

The best solution, according to Hammerschmidt, is to catch these problems before they appear in print. But he's not sure that peer reviewers should have that burden, because he says they work for free and rarely have the time or resources to look for fraud.

Instead, Hammerschmidt says research institutions should become much more involved in the research they sponsor, by creating an environment where scientists are expected to talk openly about projects with their colleagues.

He also says journals should adopt tougher standards. His former journal now requires researchers to make their data available for up to five years in case their findings are questioned.

Gary Schwitzer agrees with those ideas, but he still thinks peer review could do more. Schwitzer runs a Web site called HealthNewsReview.org. He's also a journalism and mass media professor at the University of Minnesota.

"I'm not ready to give up on peer review," says Schwitzer. "I think that there are conversations that are just beginning that are very healthy, that ought to involve all of us."

Schwitzer says that despite its flaws, peer review is the best system science has to detect fraud, because it asks people with deep expertise in an area to evaluate the work of others in that same area. He thinks peer reviewers should do a lot more investigating, and he thinks it makes sense for journals to pay them to do it.

Schwitzer also says journals need to spend more time scrutinizing their own conflicts of interest.

"Journals tend to publish positive findings," says Schwitzer. "Well, don't be surprised that many of those positive findings come from studies that have been funded by drug or device manufacturers who are paying the bills. And lo and behold, their stuff seems to look good when they're paying for the research."

Schwitzer says a recent article in BMJ, formerly the British Medical Journal, showed that only one study has ever examined journal editors' policies on conflict of interest. It found that only nine of the 30 journal editors surveyed had an explicit policy for dealing with editors' financial conflicts of interest.

Journalists are not off the hook either. Schwitzer says medical reporters need to get better at disclosing those conflicts of interest in their stories when they find them. And he says journalists need to do a better job of explaining weaknesses in research.