
Open Science, Open Source

"Study reveals that a lot of psychology research really is just 'psycho-babble'" (The Independent)

Psychology changed forever on 27 August 2015. For the previous four years, the 270 psychologists of the Open Science Collaboration had been quietly re-running one hundred published psychology experiments. Now, finally, they were ready to share their findings. The results were shocking: fewer than half of the re-run experiments had worked.

When someone tries to re-run an experiment and it doesn't work, we call this a failure to replicate. Scientists had known about failures to replicate for a while, but it was only quite recently that the extent of the problem became apparent. Now, an almost existential crisis loomed. That crisis gained a name: the Replication Crisis. Soon, people started asking the same questions about other areas of science. Often, they got similar answers. Only half of results in economics replicated. In pre-clinical cancer studies, it was worse — only 11% replicated.

Open Science

Clearly, something had to be done. One option would have been to conclude that psychology, economics, and parts of medicine could not be studied scientifically. Perhaps these parts of the universe were not lawful in any meaningful way? If so, you shouldn't be surprised when two researchers did the same thing and got different results.

Alternatively, perhaps different researchers got different results because they were doing different things. In most cases, it was not possible to tell whether you'd run the experiment exactly the same way as the original authors. This was because all you had to go on was the journal article - a short summary of the methods used and results obtained. If you wanted more detail you could, in theory, request it from the authors. But we'd already known for a decade that this approach was seriously broken - in around 70% of cases, data requests ended in failure.

Even when the authors sent you their data, this often didn't help much. One of the most common problems was that, when you re-analysed their data, you ended up with different answers from theirs! This happened because most descriptions of data analyses in journal articles are incomplete and ambiguous. What you really needed was the original authors' source code - an unambiguous and complete record of every data processing step they took, from the raw data files to the graphs and statistics in the final report. In psychology in 2015, you could almost never get this.
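To make the idea concrete, here is a minimal sketch of what such an analysis script might look like. The file name, column names, and statistical test are all hypothetical; the point is simply that every step, from the raw data file to the reported statistic, is written down and can be re-run by anyone.

# analyse.py - a hypothetical, fully reproducible analysis script.
# Every step from the raw data to the reported statistic is recorded here,
# so anyone with the raw data can re-run it and get the same numbers.
import pandas as pd
from scipy import stats

# Load the raw data exactly as it came off the testing software
# (file and column names are made up for this example).
data = pd.read_csv("raw_data.csv")  # assumed columns: participant, group, score

# State the exclusion rule in code rather than burying it in prose.
data = data[data["score"].notna()]

# The exact test reported in the paper: an independent-samples t-test
# comparing the two groups.
treatment = data.loc[data["group"] == "treatment", "score"]
control = data.loc[data["group"] == "control", "score"]
t_value, p_value = stats.ttest_ind(treatment, control)

print(f"t = {t_value:.2f}, p = {p_value:.3f}")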

In other words, psychology research at the beginning of the Replication Crisis was like closed-source software. You had to take the author's conclusions entirely on trust, in the same way you have to trust that closed-source software performs as described. There was essentially no way to properly audit research, because you could not access the source code on which the experiment was based: the testing software, the raw data, and the analysis scripts.

