"Trying to reproduce previous results is not glamorous or creative, so it is rarely done. But being able to get the same result over and over is part of the definition of what makes knowledge scientific," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech and lead author on the paper.
The study was based on a method previously used to assess the replicability of psychology experiments. In that earlier effort, called the Reproducibility Project: Psychology (RPP), researchers replicated 100 original studies published in three of the top journals in psychology and found that although 97 percent of the original studies reported so-called "positive findings" (meaning a statistically significant change compared to control conditions), only 36 percent of the replications reproduced those positive findings.
Using this same approach, Camerer and his colleagues replicated 18 laboratory experiments published in two top-tier economics journals between 2011 and 2014. Eleven of the 18 (roughly 61 percent) showed a "significant effect in the same direction as in the original study." The researchers also found that the sample sizes and p-values (a standard measure of statistical confidence) of the original studies were good predictors of replication success, meaning they could serve as indicators of the reliability of results in future experiments.
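The replication rates quoted above follow directly from the counts reported in the two studies. A minimal sketch of that arithmetic (figures taken from the text; the function name is illustrative only):

```python
def replication_rate(replicated: int, total: int) -> float:
    """Fraction of studies whose replication found a significant
    effect in the same direction as the original study."""
    return replicated / total

# Reproducibility Project: Psychology (RPP): 36 of 100 studies replicated.
rpp_rate = replication_rate(36, 100)

# Economics replication study: 11 of 18 experiments replicated.
econ_rate = replication_rate(11, 18)

print(f"RPP: {rpp_rate:.0%}, economics: {econ_rate:.0%}")
# prints "RPP: 36%, economics: 61%"
```

The 61 percent figure for economics is simply 11/18 rounded to the nearest percent.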
"Replicability has become a major issue in many sciences over the past few years, with often low replication rates," says paper coauthor Juergen Huber of the University of Innsbruck. "The rate we report for experimental economics is the highest we are aware of for any field."
The authors suggest that certain methodological practices in laboratory experimental economics contribute to this high replication rate. "It seems that the culture established in experimental economics—incentivizing subjects, publication of the experimental procedure and instructions, no deception—ensures reliable results. This is very encouraging given that it is a very young discipline," says Michael Kirchler, another coauthor from the University of Innsbruck.
"As a journal editor myself, I am always curious whether experimental results will replicate across populations and cultures, and these results from multiple countries are really reassuring," says coauthor Teck-Hua Ho from the National University of Singapore.
Coauthor Magnus Johannesson from the Stockholm School of Economics adds, "It is extremely important to investigate to what extent we can trust published scientific findings and to implement institutions that promote scientific reproducibility."
"For the past half century, Caltech has been a leader in the development of social science experimental methods. It is no surprise that Caltech scholars are part of a group that use replication studies to demonstrate the validity of these methods," says Jean-Laurent Rosenthal, the Rea A. and Lela G. Axline Professor of Business Economics and chair of the Division of the Humanities and Social Sciences at Caltech.
The work was published in a paper titled, "Evaluating Replicability of Laboratory Experiments in Economics." Other coauthors are: Taisuke Imai and Gideon Nave from Caltech; Johan Almenberg from Sveriges Riksbank in Stockholm; Anna Dreber, Eskil Forsell, Adam Altmejd, Emma Heikensten, and Siri Isaksson from the Stockholm School of Economics; Taizan Chan and Hang Wu from the National University of Singapore; Felix Holzmeister and Michael Razen from the University of Innsbruck; and Thomas Pfeiffer from the New Zealand Institute for Advanced Study.
The study was funded by the Austrian Science Fund, the Austrian National Bank, the Behavioral and Neuroeconomics Discovery Fund, the Jan Wallander and Tom Hedelius Foundation, the Knut and Alice Wallenberg Foundation, the Swedish Foundation For Humanities and Social Sciences, and the Sloan Foundation.