Can Emerging Peer Review Models Improve Scientific Quality and Integrity?

Friday, February 12, 2016: 10:00 AM-11:30 AM
Marshall Ballroom South (Marriott Wardman Park)
Carole J. Lee, University of Washington, Seattle, WA
In recent years, a number of failures in scientific communication and peer review have been publicized, including fraud, weak statistical review, selective reporting, lack of transparency in study design and results, and the low reproducibility of landmark cancer studies.

How can we improve peer review to sustain trust in science?  Scientific communities are now leveraging a strategy once proposed by sociologists of science Harriet Zuckerman and Robert Merton.  In their account of the origins of the normative structure of science, Zuckerman and Merton argued that, so long as authors seek to certify their work through publication in prestigious peer-reviewed journals, those journals hold the power to motivate scientists to raise their standards of performance by articulating and enforcing stricter peer review standards.

In this talk, I explore the introduction of this template for initiating change in biomedicine and its recent expansion to the sciences more broadly, including the social and behavioral sciences.  In this template, scientists and stakeholders coordinate to articulate stricter peer review standards in the hope that requiring conformance will motivate improved behavior.

I suggest that recent implementations of this template innovate on the form by motivating authors not just through sanctions or requirements, but through crowd effects in a new economy of credibility: it becomes increasingly costly for individual authors not to signal their credibility (by conforming to new standards) when competitors accrue credit for doing so.  I also explore whether, and under what conditions, journals themselves may become subject to crowd effects in this new economy of credibility.

Finally, I propose adding another category to the next generation of guidelines aimed at improving transparency and openness in scientific communication and evaluation: a category indicating a journal’s willingness to undertake meta-research on the effectiveness of its own practices.  Conformance to this ideal would help disrupt the remaining collective action problems (this time among journals rather than authors) that hinder transparency and openness in scientific communication and evaluation.