COMMENTARY

Spin Cycle: Do Preprints Overhype Their Findings?

F. Perry Wilson, MD, MSCE

April 20, 2023

This transcript has been edited for clarity.

Cards on the table: I've got mixed feelings about preprint servers like medRxiv. On the surface, the ability to publish research findings prior to peer review, especially when the results may have a major impact on public health, is laudable. But I always wonder whether the quality of the research suffers from that lack of oversight.

On the other hand, peer review is painfully slow.

True story: I submitted the results of a randomized trial to a journal, which shall remain nameless, on July 8, 2022. It was sent for peer review and I was asked to respond to reviewer comments on November 30, 2022 — 145 days later. I responded to reviewers and resubmitted on January 9, 2023. The resubmission was sent to reviewers and I was asked for further revisions on March 16, 2023 — 66 days later. I responded to the reviewers and resubmitted again on March 23, 2023. That was 19 days ago at the time of this recording.

I've heard no news since then. From initial submission it has been 283 days at the same journal. That is more than 9 months. I could have had a baby by now.
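(A quick aside for the numerically inclined: those intervals are just calendar arithmetic. Here is a minimal check using Python's standard datetime module; the dates come straight from the timeline above, and the variable names are my own, purely for illustration.)

from datetime import date

submission      = date(2022, 7, 8)    # initial submission
first_decision  = date(2022, 11, 30)  # first reviewer comments arrive
resubmission    = date(2023, 1, 9)    # revised manuscript resubmitted
second_decision = date(2023, 3, 16)   # second round of revisions requested

print((first_decision - submission).days)     # 145
print((second_decision - resubmission).days)  # 66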

So, yes, the appeal of preprint servers is huge.

But peer review is not just about mindlessly treading water waiting for reviewer comments, although sometimes it feels like that. In fact, the comments the reviewers provided on my manuscript substantially strengthened it; they suggested new analyses that complemented the primary findings and, equally importantly, they forced me to describe my findings more impartially. In other words, they forced me to reduce my own "spin." Without that check, are preprint server manuscripts overhyped?

In the COVID era, preprint server usage — particularly medRxiv, which tends to publish clinical research studies — exploded. The public health need was clear: We needed data fast, not a year after initial submission. But what did we get for the bargain? Was the research of reasonable quality? Could it have been overspun?

We're talking about a nice research letter from David Schriger and colleagues at UCLA appearing in JAMA. The goal of the analysis was to assess the level of "spin" on COVID-related randomized trials first published in medRxiv compared with the final published versions of those articles.

They reviewed 236 preprints from January 2020 to December 2021, and it won't surprise you to hear that by November 2022, 54 preprints had not yet been published. Of those that were published, the median time from preprint submission to peer-reviewed publication was 134 days… but it's okay; I'm not jealous.

The authors basically compared three categories of abstracts: preprints that never got published in a peer-reviewed journal, preprints that did eventually get published, and the final peer-reviewed versions of those published abstracts.

The authors first scored the categories on abstract completeness. Did they provide the results for their primary outcome, for instance? Across the board, preprints that never got published were, in general, less complete than those that did get published. For example, just 30% of the never-published preprints reported their primary outcome results in the abstract, compared with 53.4% of preprint abstracts that eventually got published and 57.8% of the final published articles. In other words, the preprints that went on to be published looked pretty similar before and after peer review. But seriously? More than 40% of peer-reviewed published articles aren't reporting their primary outcomes in the abstract? This feels more like an indictment of peer review than of preprint servers.

But the real point here is to talk about spin. The JAMA study authors devised their own system to look at spin, including things like highlighting positive secondary outcomes when the primary outcome was negative, and extending the claims beyond the target population of the study. Spin was much higher in the preprint articles that never got published. Among those that did get published, though, spin was a little better after peer review — but not dramatically so.

When individual preprint abstracts were compared with their peer-reviewed counterparts, the peer-reviewed abstracts were, in general, judged to be more complete and less spun overall. But the majority of pairs of preprint and published abstracts (the big beige areas in the study's figure) showed no difference in spin at all. Again, to me this reads as more damning of peer review than of preprint servers.

Some caveats here. Randomized trials are, in my opinion, probably less subject to spin than observational research, given that, with the latter, authors have a lot more flexibility to pick and choose which outcomes and analyses to highlight. And COVID-19 is not necessarily a proxy for all clinical research. Also, we know that the preprints that never got published were more highly spun to begin with. That may seem reassuring, but remember that when a manuscript appears on a preprint server, we have no way of knowing whether it will be published in the future. We are flying blind.

It's for that reason that the authors caution that "adoption of COVID-19 treatment protocols based on erroneous preprints suggests potential problems associated with less complete, more highly spun abstracts."

Indeed. But to me it seems that the difference between the good preprints and the bad preprints doesn't lie in the peer review. If anything, maybe it lies with those poor associate editors at the medical journals — you know, the ones who reject the paper without even sending it out to peer reviewers, often within a day or two. Could such a model apply to preprint servers? A simple, quick, perhaps unilateral "no" to separate the wheat from the chaff? Or would that defeat the whole purpose?

Maybe I'll write a manuscript about it. You can find it in a well-respected, peer-reviewed journal sometime in the next 12-18 months.

For Medscape, I'm Perry Wilson.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale's Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn't, is available now.
