The markers of scientific knowledge are predictability and replicability. If you possess scientific knowledge about a given process, knowing the inputs should allow you to foretell the outputs—like a called shot in pocket billiards. Similarly, if you can assemble the inputs to achieve a predicted output, I should be able to follow your recipe and replicate your output—using the same ingredients, tools, and methods, my chocolate chip cookies should be the same as your chocolate chip cookies. The study published in Science and commented upon in this Economist piece is interesting in light of the relative intellectual standing psychology and economics enjoy among scholars in the mainstream business ethics field. Although psychology is generally regarded as the more scientific of the two, findings in experimental economics appear to be more replicable. However, two caveats: (1) The sample size for the Science study is small; the investigators attempted to replicate only 18 experimental economics studies. (2) The finding is limited to the predicted outcomes of experimental studies in economics, and so should not be taken as indicative of the scientific rigor of claims advanced within economics on the basis of other methodological approaches.
LINK: A far from dismal outcome (by The Economist)
In recent years medicine, psychology and genetics have all been put under the microscope and found wanting. One analysis of 100 psychology papers, published last year, for instance, was able to replicate only 36% of their findings. And a study conducted in 2012 by Amgen, an American pharmaceutical company, could replicate only 11% of the 53 papers it reviewed.
…
But as economics adopts the experimental procedures of the natural sciences, it might also suffer from their drawbacks. In a paper just published in Science, Colin Camerer of the California Institute of Technology and a group of colleagues from universities around the world decided to check. They repeated 18 laboratory experiments in economics whose results had been published in the American Economic Review and the Quarterly Journal of Economics between 2011 and 2014. For 11 of the 18 papers (ie, 61% of them) Dr Camerer and his colleagues found a broadly similar effect to whatever the original authors had reported. That is below the 92% replication rate they would have expected had all the original studies been as statistically robust as the authors claimed—but by the standards of medicine, psychology and genetics it is still impressive.
NOTE: The Camerer et al. paper’s abstract is here.
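For readers who want to see the arithmetic behind the quoted figures, here is a minimal back-of-the-envelope sketch in Python. It is not the paper's own analysis; the binomial check and the scipy dependency are illustrative assumptions, used only to show how far 11 successes out of 18 falls from the roughly 92% rate implied by the original studies' claimed statistical power.

```python
# Illustrative sketch (not the paper's analysis) of the quoted replication figures.
from scipy.stats import binom

n_replicated, n_total = 11, 18
observed_rate = n_replicated / n_total   # ~0.61, i.e. the quoted 61%
expected_rate = 0.92                     # rate implied by the studies' claimed statistical power

# Probability of seeing 11 or fewer successful replications out of 18
# if each original result really had a 92% chance of replicating:
p_shortfall = binom.cdf(n_replicated, n_total, expected_rate)

print(f"Observed replication rate: {observed_rate:.0%}")
print(f"Expected rate under claimed power: {expected_rate:.0%}")
print(f"P(<= {n_replicated} successes out of {n_total} at p = {expected_rate}): {p_shortfall:.4f}")
```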
What do you think?
I would be curious to know more about the psychology study. The Economist piece gives no specific information about how the journals were selected in a highly differentiated field. Perhaps the psychology study should have been based, like the economics one, on a more limited examination of two premier journals with substantial methodological overlap.