A Pottery Barn rule for scientific journals

Proposed: Once a journal has published a study, it becomes responsible for publishing direct replications of that study. Publication is subject to editorial review of technical merit but is not dependent on outcome. Replications shall be published as brief reports in an online supplement, linked from the electronic version of the original.

*****

I wrote about this idea a year ago when JPSP refused to publish a paper that failed to replicate one of Daryl Bem’s notorious ESP studies. I discovered, immediately after writing up the blog post, that other people were thinking along similar lines. Since then I have heard versions of the idea come up here and there. And strands of it came up again in David Funder’s post on replication (“[replication] studies should, ideally, be published in the same journal that promulgated the original, misleading conclusion”) and the comments to it. When a lot of people are coming up with similar solutions to a problem, that’s probably a sign of something.

Like a lot of people, I believe that the key to improving our science lies in incentives. You can finger-wag about the importance of replication all you want, but if there is nowhere to publish replications and no reward for trying, you are not going to change behavior. To a large extent, the incentives for individual researchers are controlled by institutions — established journal publishers, professional societies, granting agencies, etc. So if you want to change researchers’ behavior, target those institutions.

Hence a Pottery Barn rule for journals: once you publish a study, you own its replicability (or at least a significant piece of it).

This would change the incentive structure for researchers and for journals in a few different ways. For researchers, there are currently insufficient incentives to run replications. This proposal would give them a virtually guaranteed outlet for publishing a replication attempt. Such publications should be clearly marked on people’s CVs as brief replication reports (probably by giving the online supplement its own journal name, e.g., Journal of Personality and Social Psychology: Replication Reports). That would make it easier for the academic marketplace (hiring committees, promotion committees, etc.) to reach its own valuation of such work.

I would expect that grad students would be big users of this opportunity. Others have proposed that running replications should be a standard part of graduate training (e.g., see Matt Lieberman’s idea). This would make it worth students’ while, but without the organizational overhead of Matt’s proposal. The best 1-2 combo, for grad students and PIs alike, would be to embed a direct replication in a replicate-and-extend study. Then if the extension does not work out, the replication report is a fallback (hopefully with a footnote about the failed extension). And if the extension does work out, the new paper is a more cumulative contribution than the shot-in-the-dark papers we often see now.

A system like this would change the incentive structure for original studies too. Researchers would know that whatever they publish is eventually going to be linked to a list of replication attempts and their outcomes. As David pointed out, knowing that others will try to replicate your work — and in this proposal, knowing that reports of those attempts would be linked from your own paper! — would undermine the incentives to use questionable research practices far better than any heavy-handed regulatory response. (And if that list of replication attempts is empty 5 years down the road because nobody thinks it’s worth their while to replicate your stuff? That might say something too.)

What about the changed incentives for journals? One benefit is that the increased accountability for individual researchers should lead to better-quality submissions to journals that adopt this policy. That should be a big plus.

A Pottery Barn policy would also increase accountability for journals. It would become much easier to document a journal’s track record of replicability, which could become a counterweight to the relentless pursuit of impact factors. Such accountability would mean a greater emphasis on evaluating replicability during the review process — e.g., considering statistical power, letting reviewers see the raw data, materials, and stimuli, etc.
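To make the statistical power point concrete, here is a minimal sketch of the kind of check a reviewer or would-be replicator might run. It uses Python and statsmodels purely as an illustration (the proposal does not specify any tool), and the effect size is a made-up number standing in for whatever the original study reported: given that effect, how many participants would a direct replication need in order to have an 80% chance of detecting it?

```python
# Minimal power check for planning or reviewing a direct replication.
# The effect size (Cohen's d = 0.5) is hypothetical, standing in for the
# value reported in the original study; everything else is conventional.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=0.5,          # Cohen's d from the original study (assumed)
    alpha=0.05,               # significance threshold
    power=0.80,               # desired probability of detecting a true effect
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```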

But sequestering replication reports into an online supplement means that the journal’s main mission can stay intact. So if a journal wants to continue to focus on groundbreaking first reports in its main section, it can continue to do so without fearing that its brand will be diluted (though I predict that it would have to accept a lower replication rate in exchange for its focus on novelty).

Replication reports would generate some editorial overhead, but not nearly as much as original reports. They could be published based directly on an editorial decision, or perhaps with a single peer reviewer. A structured reporting format like the one used at Psych File Drawer would make it easier to evaluate the replication study relative to the original. (I would add a field to describe the researchers’ technical expertise and experience with the methods, since that is a potential factor in explaining differences in results.)
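For illustration only (the field names below are my own invention, not Psych File Drawer’s actual template), a structured replication report might reduce to a handful of fields like these, which would also make the reports easy to harvest for later meta-analyses:

```python
# Hypothetical structured replication report. The fields are illustrative
# guesses at what such a template might include; they are not Psych File
# Drawer's actual format.
replication_report = {
    "original_article_doi": "10.0000/placeholder",   # link back to the original study
    "replication_type": "direct",                     # direct vs. replicate-and-extend
    "sample_size": 120,
    "original_effect_size": 0.50,                     # as reported in the original
    "replication_effect_size": 0.12,                  # observed in this attempt
    "deviations_from_original": "online sample rather than in-lab sample",
    "researcher_expertise": "5 years of experience with this paradigm",  # the field I would add
    "raw_data_url": "https://example.org/replication-data",
}
```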

Of course, journals would need an incentive to adopt the Pottery Barn rule in the first place. Competition from outlets like PLoS One (which does not consider importance/novelty in its review criteria) or Psych File Drawer (which only publishes replications) might push the traditional journals in this direction. But ultimately it is up to us scientists. If we cite replication studies, if we demand and use outlets that publish them, and if we speak loudly enough — individually or through our professional organizations — I think the publishers will listen.

6 thoughts on “A Pottery Barn rule for scientific journals”

  1. Sanjay,

    I’m a fan of your blog and love this idea.

    It is worth noting that JPSP eventually did publish a series of failures to replicate Bem when they were coupled with a meta-analysis (my paper: http://psycnet.apa.org/index.cfm?fa=search.displayRecord&id=6D52F254-D22D-161B-E60F-60DB41B4C1E4&resultID=1&page=1&dbTab=pa). That said, I agree that they should have been okay with publishing a series of failures to replicate WITHOUT the meta-analysis, as the replications were informative on their own.

    I’d like to point you to the International Journal of Research in Marketing (IJRM). They recently put out an editorial statement (not sure if it’s public yet…I received it via email) suggesting that they will add exactly what you are writing about and are encouraging methods classes to run replications and submit them. They will then publish them in a special replication section called “Replication Corner”.

    -Jeff

  2. It is hard to think of anything bad about this proposal! I am trying to anticipate objections and coming up short.

    I think the incentive structure for journals could be worked out so long as the replication reports did not detract from their impact factors. Moreover, if there were a clear template, the additional editorial time should be minimal. I would just add that having the raw data on the website would be great for future meta-analyses.

    I also like the public relations aspect of this approach. Journalists and the lay public can see that psychological scientists value replication because it is part of our journals. A journalist might even wait to report on a particular finding until there are some positive hits. It might slow down the coverage of “hot” results, but the upside is that the findings that do get coverage could rest on more solid empirical footing. The journalist can watch the replication section to see what is “hot” in the scientific community instead of waiting for university press releases. Science articles in the popular press can then say that the effect has recently been duplicated by team X.

    Having space for replications might even provide an incentive for “primary authors” to be more open and transparent about their methods, procedures, and measures. If you give “replicators” a clear recipe, it should be easier for them to conduct the study. If journals want to have content for their replication sections, they should push for thorough reporting. The incentive is now to provide more information to readers, not less. (This additional “technical” information can be contained in an online supplement as well). So again, this seems like a great idea with tremendous upside.

  3. Thanks everyone!

    The link to Jeff’s paper doesn’t work for me — for interested readers, this one might work better: http://psycnet.apa.org/psycinfo/2012-23134-001

    I should also note that the non-replication of Bem that was rejected by JPSP (and that led to my post a year ago) was eventually published at PLoS ONE: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0033423

    Brent, thanks for your comments and all good points. I think it would be reasonable to say that replication reports in an online supplement shouldn’t count toward the journal’s impact factor, which is another current barrier to journals publishing replications (though maybe if replications started getting cited a lot, journals would want them to!). I also like the idea of promoting transparency of materials. In the age of online supplements, it would be great if reviewers and readers routinely had access to the stimuli, questionnaires, etc.

  4. I like this idea, with one reservation: I would hope that replication studies would not be restricted to on-line archives in all cases. If you believe that editors still matter (I know some proposed models eliminate them entirely), then the editor’s most important role is to direct attention to articles that are important enough to merit it. We all have more stuff coming at us than we can possibly read. But I do scan the table of contents of JPSP, JRP, and a few other journals as soon as they arrive. My hope and expectation is that this is where I will see the stuff I need to know about. So if an article can convince me that an important and widely-accepted study is false, I would hope to have my attention drawn to it. And, in the case of an important, groundbreaking and counterintuitive finding turning out to have robust support (if that ever happened), I’d like to know about that too.
    This of course requires editors to make subjective judgments about importance, credibility, and so forth — but that is the job description, isn’t it?

  5. David, I think if someone is interested in the research topic he or she will include replication sections in the periodic search/alert trawls that the Internet makes easy. But ultimately replication sections would be for the record – to put the confidence of peer review behind the data that would eventually go into a meta-analysis, which should be seen as the goal of our science.

    Commenting recently on the Nosek & Bar-Anan Psych Inquiry article, I also expressed high hopes for the feasibility of a cascading multi-section journal, where sound studies judged too “incremental” or whatever would still see publication without a multi-year runaround. What you propose, I think, would be a subset of that.
