Occam's corner

A question of trust: fixing the replication crisis

The crisis of non-replications in experimental social psychology is a crisis of trust. What's the solution?

Failure to replicate: witnesses wanted. Photograph: RTimages/Alamy

Human beings are born to communicate with each other. Communication involves both trust and vigilance. We constantly monitor both the reliability of the information we receive and the trustworthiness of the person who provides it.

So what about information we get from scientists? Psychology has recently provided material that could figure in a crime scene investigation story. This is not a story about scientific fraud, but about the failure to replicate experiments. This is serious because replication is the gold standard by which we know if we can trust a result.

For example, there was a failure to replicate a much-cited study which reported that people walked more slowly after being primed with words relating to old age. And what about other priming studies? An avalanche of doubt had started.

This month there’s a special issue of Social Psychology on this topic. The current issue of Perspectives on Psychological Science contains a series of papers discussing how methods and practice could be improved. The story started years ago, notably with an influential paper ("Why most published research findings are false") by John Ioannidis, but the crisis goes on and on, and there is now a new flurry of papers.

On our blog we recently highlighted some of the non-explicit aspects of experimental procedures used in baby labs, with their subtle influences on the results. For instance, data have to be omitted if the baby did not attend sufficiently to the stimuli. It turns out that the way mothers are prepared before the start of the experiment, and instructed on how to hold their babies, is critical. Gratifyingly, there have been a number of comments.

Dorothy Bishop points out that the issues raised are cause for worry, and that it is up to those with the expertise to describe their methods in enough detail that anyone should be able to replicate them:

The worrying bit is that if experimenters decide to omit data post hoc then it opens the floodgates to the kind of problems with ‘False positive psychology’ talked about by Simmons et al. It’s very clear that a lot of this goes on, and in many cases it is not a case of deliberate data manipulation or fraud, but rather of experimenters deceiving themselves. As psychologists we should be all too well aware of how easily we can be biased in what we observe, and if we are to have a replicable science we must defend against that with scrupulous methods.

We agree, but there is also another side to the story. Decisions about when to omit data, and about how to prepare participants for experiments, require skills that can only be acquired through practice. These skills are not transmitted by writing methods sections more scrupulously. For example, you need perceptual skill to recognise what you are seeing under a microscope. Often, even very detailed and explicit procedures can only be "read" by another highly skilled experimenter. This matters when evaluating replications, since inexperience alone can sometimes account for non-replications.

Bishop suggests several steps that could be taken to help improve matters: for example, sharing expertise and data more openly, and using pre-registration, where experimenters specify in advance the criteria they will use for excluding data and the methods they will use in analysis. This is similar to the way that clinical trials are registered at ClinicalTrials.gov before recruitment starts, and jibes with a recent post by Chris Chambers.

This is one way forward, but we are worried about unintended side effects. We fear the creeping in of ever more rigid rules and regulations. Do we want the kind of regulations that are in place for ethics applications for psychological experiments? Regulations can seriously delay scientific projects, yet cannot prevent every case of bad practice. Do we want science to be monitored Stasi-style, so that careers can be advanced by denouncing one’s rivals?

So, how can trust be reinstated? Here we want to mention two proposals that balance responsibility between the original and the replicating experimenters.

Preemptive methods

We might learn to embrace "slow science" and not be pressured into publishing too much and too quickly. We could all be trained to be more aware of confirmatory bias, that is, the over-eager acceptance of results that we want to see. This would lead us to be more vigilant about our own data and results, both retrospectively and prospectively. We apparently cannot inhibit this bias via some automatic procedure, and can only counteract it with effort.

One way to do this is to check constantly for consistency between previous and subsequent results. For instance, you could add replications of tests used in earlier experiments to the procedure of later experiments, provided the same hypothesis is being tested. This would provide data on a new sample of participants and give valuable information in its own right. Here, at least, a lack of replication cannot be blamed on inexperienced or incompetent experimenters.

Collaborative methods

The new experimenter informs the original author about the attempt to replicate and gives them a chance to check in detail what the replicating lab is actually proposing to do. This somewhat reverses the now quite common procedure whereby a replicating lab demands to see the original data and assumes that the procedure is specified in such a way that it can be flawlessly replicated. In theory this sounds perfectly fair and in line with basic scientific practice, but in practice it does not work. Even a complete video recording is not enough if only one camera angle is used and if the settings in which the experiments are done cannot be standardised. Matters are much worse in the case of a hostile non-replication, as seen in a worrying account by Simone Schnall.

Daniel Kahneman made a similar suggestion in a recent commentary on the multi-lab project “Investigating variation in replicability”, in the current issue of Social Psychology. Rather than putting the onus on original experimenters to make their procedures fully explicit, he believes that current norms allow replicators too much freedom. Among other things, Kahneman says:

“… the original author should have detailed advance knowledge of what the replicator plans to do. The hypothesis that guides this proposal is that authors will generally be more sensitive than replicators to the possible effects of small discrepancies of procedure.”

Perhaps there is no clear solution. Rather there is a dilemma. We have to cope with the fact that sometimes an experiment works and sometimes it doesn’t, and we have no idea why. There are just too many different factors involved, and there are too many ways to allow self-deception – not just when we omit data points with a post hoc justification, but also when we fool ourselves into believing that we have made sure that every detail has been put into the methods section.

It is perhaps instructive to compare doing psychological experiments to doing alchemy in a medieval workshop, rather than to working in a high-tech lab. Ours is still a science in the early stages of development and, of course, we hope to gain much better control in the future. Meanwhile, like the alchemists, we don’t know that it is critical to have clean containers for liquids; instead, we fool ourselves that if they look clean, they are clean.

The crisis of non-replications in experimental social psychology is a crisis of trust. But there are rays of hope. Reputation is a good regulator, and if we agree on this, then we may not have to suffer from the burden of increased regulations. Let’s trust in one of the sharpest knives in the toolbox of human social cognition: our craving for a good reputation.

@utafrith and @cdfrith are cognitive neuroscientists and emeritus professors at UCL. They are currently trying to write a book, What Makes Us Social?


