Not every study is robust
Why are many scientific findings in various fields so hard to replicate? LMU researchers will coordinate a new DFG-funded Priority Program that will try to answer this question for the social, behavioral and cognitive sciences.
Many scientific studies that have appeared in leading journals, are featured in textbooks, and have been cited again and again do not appear to be as replicable as one would hope. This has plunged the social, behavioral and cognitive sciences in particular into a roiling debate over why this is the case and what it means, not least for the methodology of science as a whole. In many disciplines, researchers now speak of a “reproducibility crisis” or a “crisis of confidence”.
Against this background, the Deutsche Forschungsgemeinschaft (DFG) has recently decided to fund a new Priority Program to shed new light on the problem. The Program (acronym: META-REP) will be coordinated by Mario Gollwitzer (Professor of Social Psychology at LMU) together with two of his LMU colleagues, and with researchers at the Max Planck Institute for Research on Collective Goods in Bonn and the University Hospital in Hamburg-Eppendorf. Its full title, “A Metascientific Program for the Analysis and Optimization of Reproducibility in the Behavioral, Social and Cognitive Sciences”, defines its major aims. As with all other DFG-funded Priority Programs, research teams throughout Germany are invited to submit proposals for research projects that fall within the ambit defined by the title. Funding is available for up to 30 individual projects over 3 years in the first instance, although the Program itself is scheduled to run for a total of 6 years.
What requirements must an attempt to replicate an original study meet in order for its outcome to count as confirming or refuting the original results? Are low replication rates attributable to weaknesses in the methodology of the replication efforts themselves, or do they instead point to previously overlooked contextual factors whose significance has been underestimated? These are some of the questions that the Program will seek to answer. The individual project proposals should therefore define, in a precise and practicable manner, what the terms “reproducibility” and “successful replication” mean in their respective disciplines, explain why replication rates are low, and show how they can ultimately be improved. The coordinators are confident that the results obtained here will be relevant to all the empirical sciences in which replicability has become a contentious issue. Furthermore, they will ensure that the Program enriches the wider debate on the credibility, value and utility of the experimental sciences.