Peer-Reviewed Scientific Journals Don't Really Do Their Job

The rapid sharing of pandemic research shows there is a better way to filter good science from bad.

The rush for scientific cures and treatments for Covid-19 has opened the floodgates of direct communication between scientists and the public. Instead of waiting for their work to go through the slow process of peer review at scientific journals, scientists are now often going straight to print themselves, posting write-ups of their work to public servers as soon as they’re complete. This disregard for the traditional gatekeepers has led to grave concerns among both scientists and commentators: Might not shoddy science—and dangerous scientific errors—make its way into the media, and spread before an author’s fellow experts can correct it? As two journalism professors suggested in an op-ed last month for The New York Times, it’s possible the recent spread of so-called preprints has only “sown confusion and discord with a general public not accustomed to the high level of uncertainty inherent in science.”

There’s another way to think about this development, however. Instead of showing (once again) that formal peer review is vital for good science, the last few months could just as well suggest the opposite. To me, at least—someone who’s served as an editor at seven different journals, and editor in chief at two—the recent spate of decisions to bypass traditional peer review gives the lie to a pair of myths that researchers have encouraged the public to believe for years: First, that peer-reviewed journals publish only trustworthy science; and second, that trustworthy science is published only in peer-reviewed journals.

Scientists allowed these myths to spread because it was convenient for us. Peer-reviewed journals came into existence largely to keep government regulators off our backs. Scientists believe that we are the best judges of the validity of each other's work. That's very likely true, but it's a huge leap from that to "peer-reviewed journals publish only good science." The most selective journals still allow flawed studies—even really terribly flawed ones—to be published all the time. Earlier this month, for instance, the journal Proceedings of the National Academy of Sciences put out a paper claiming that mandated face coverings are “the determinant in shaping the trends of the pandemic.” PNAS is a very prestigious journal, and its website claims that it is an “authoritative source” that works “to publish only the highest quality scientific research.” However, this paper was quickly and thoroughly criticized on social media; by last Thursday, 45 researchers had signed a letter formally calling for its retraction.

Now the jig is up. Scientists are writing papers that they want to share as quickly as possible, without waiting the months or sometimes years it takes to go through journal peer review. So they're ditching the pretense that journals are a sure-fire quality control filter, and sharing their papers as self-published PDFs. This might be just the shakeup that peer review needs.

The idea that journals have a special way to tell what’s good science and what’s bad has always been an illusion. In fact, the peer review process at journals leaves much to be desired. When a paper goes through, only those reviewers invited by the editor can weigh in on its quality, and their comments almost never get shared with readers. Journal peer review typically means that authors get a small dose of vetting—a few drops of criticism—on the way to publication. In contrast, when a paper is posted as a preprint, the authors’ peers still review it, but their vetting isn’t forced through the tip of a pipette. Instead, a firehose of criticism gets turned on. Because a preprint is public, any scientist can review the paper, and their comments may be posted to it using annotation software such as hypothes.is, or shared on social media for all readers to consider. That tends to make for better science, in the end.

In reality, it’s still quite rare for a preprint to get a lot of reviews. The firehose may be open for criticism to flow, but often no one bothers to turn it on. The use of preprints is still a new development in most fields, and it’s important to keep in mind that many such papers have been read by literally no one in the world besides their authors. But controversial findings about important issues, such as Covid-19, are a clear exception, especially when they get picked up by (or peddled to) the media. Those papers will almost certainly get more thorough vetting as preprints than they would by going through journal peer review. That may be why one of the authors of the PNAS paper told BuzzFeed last week that he and his colleagues would “prefer not to engage in scientific debates via social media platform.”

One of the advantages of preprints is that they make the process of peer review more flexible. Indeed, it never really ends: A paper can be subjected to another round of scrutiny, for example, if it’s being picked up by policymakers, or if we later learn that the methods are flawed. At journals, peer review is almost always limited to just three or four reviewers whose work is over once the paper is accepted for publication.

That’s not to say that moving to preprints and public peer review will solve all our problems. There are many problems this shift won’t solve, and new ones it will create. But it’s clear that the old system doesn’t live up to the credibility we’ve bestowed on it. Journal peer review is full of holes, and the idea that scientific journals—and they alone—can tell us what’s trustworthy and what's not is a fantasy.

In many ways, journals don't even pretend to ensure the validity of scientific findings. If that were their primary goal, journal policies would require authors to share their data and analysis code with peer reviewers, and would ask reviewers to double-check results. In practice, reviewers can only judge the science based on what’s reported in the writeup, and they usually can’t see the details of the process that led to the findings. (This is kind of like asking a mechanic to evaluate a car without looking under the hood.) And for really important discoveries, you might expect journals to recruit an independent team of scientists to try to replicate a study from scratch. This basically never happens.

Journals do ask reviewers to weigh in on a study’s quality, but also its novelty and drama. Most peer-reviewed journals aren't simply trying to filter out inaccurate findings, they're also trying to select the stuff that will boost their "impact factor"—a public ranking based on how many times a journal's articles get cited in the few years after they've been published. Accuracy matters, but so do other aspects of a study: whether the authors are eminent scientists, for example, whether they're from prestigious universities, or whether the discovery is likely to get media attention. (Journal peer review also makes no attempt to ferret out deliberate fraud.)

Scientists know all of this, in principle. I knew all of it myself. But I didn't know the full extent until I became editor in chief of a peer-reviewed journal, Social Psychological and Personality Science, in 2015. I should never have gotten the job: I was young, barely tenured, and a bit rebellious. But the gatekeepers took a chance on me, and, as obstreperous as I was, I knew this job was a big responsibility and I had to fulfill my duties according to professional norms and ethics. I took this to mean that I should evaluate the scientific merits of each manuscript submitted to the journal, and decide whether to publish it based only on considerations of quality. In fact, I chose to hide the authors' names from myself as much as possible (sometimes called “triple-blind” review), so that I wouldn't be swayed or intimidated by how famous they were.

A few months later, this got me into trouble. Apparently I had upset some Very Important People by “desk-rejecting” their papers, which means I turned them down on the basis of serious methodological flaws before sending out the work to other reviewers. (This practice historically accounted for about 30 percent of the rejections at this journal.) My bosses—the committee that hires the editor in chief and sets journal policy—sent me a warning via email. After expressing concern about “toes being stepped on,” especially the toes of "visible ... scholars whose disdain will have a greater impact on the journal's reputation," they forwarded a message from someone whom they called "a senior, highly respected, award-winning social psychologist." That psychologist had written them to say that my decision to reject a certain manuscript was "distasteful." I asked for a discussion of the scientific merits of that editorial decision and others, but got nowhere.

In the end, no one backed down. I kept doing what I was doing, and they stood by their concerns about how I was damaging the journal’s reputation. It’s not hard to imagine how things might have gone differently, though. Without the persistent support of the associate editors and the colleagues I called on for advice during this episode, I very likely would have caved and just agreed to keep the famous people happy.

This is the seedy underbelly of peer-reviewed journals. Award-winning scientists are so used to getting their way that they can email the editor's boss and complain that they find rejection "distasteful." Then the editor is pressured to be nicer to the award-winning scientists.

I heard later that the person who had hired me as editor in chief described the decision as "an experiment gone terribly, terribly wrong." Fair enough: That's basically what I think about the whole system of peer-reviewed science journals. It was once a good idea—even a necessary one—but it isn’t working anymore.

It's not that peer review can't work; indeed, as the old saying goes, it's the worst form of quality control, except for all the other ones that have been tried. But there are new ways of doing peer review that we haven't yet tried, and that's where preprints come into play.

Many of the problems with peer-reviewed journals are problems with the journals, rather than problems with peer review, per se. Preprints allow peer review to be taken out of the journals’ hands, which opens up dramatic, new opportunities to improve it. There’s no guarantee that the freewheeling, open-ended peer review of preprints will be rigorous and just, but everyone can see the process: Was it thorough? Do the reviews seem detailed, fair? We get to judge the judges. Journals don't let us do that. We just have to take their word that their peer review process is rigorous and just.

For now, most preprints will get very few, if any, reviews. That needs to change, but even just knowing that a paper has not been thoroughly reviewed is a huge improvement over the black box of journal-based peer review. As these public reviews become more commonplace, there is reason to hope that preprints will elicit more piercing criticism than typically happens at journals, particularly for sensationalistic papers by famous people. Journal editors and reviewers may be blinded by the flashiness of a paper’s claims, or the prominence of its authors; or else they may notice a study’s flaws but choose to publish it anyway for the “impact.” Either way, they can be confident that they will not be held accountable for the stringency of the peer review process. In a preprint, though, a famous scientist’s exaggerated or unwarranted claims may be more likely to be called out, instead of less so.

Preprints also introduce new challenges, such as how to guarantee that unknown authors can get attention, or prevent friends from writing glowing reviews of one another’s work. But the most frequent concern I’ve heard—that preprints allow bad science to get into the hands of policymakers and practitioners—rings hollow. Peer-reviewed journals have been disastrously ineffective at preventing that very outcome. Indeed, some of the papers we published under my editorship at Social Psychological and Personality Science have been convincingly, and quite devastatingly, criticized. Editors and reviewers are fallible, and the journal peer review process is far too flimsy to live up to its reputation. It’s time we stop putting so much faith in journals, and look for more transparent and effective ways to peer review scientific claims.

Updated, 6/26/2020, 1:00 pm EST: This story has been updated to correct how long it took for the author to run into trouble for "desk-rejecting papers."
