A recent meta-scientific study shows that presenting a single analysis often fails to reflect the true extent of empirical uncertainty. The authors advocate for multiple analyses. 

Meta-Science: Multi-analyses Strengthen Scientific Validity

Metascientists analyze how scientific knowledge is generated and how reliable it is. A large-scale study involving the University of Innsbruck shows that in the behavioural and social sciences, independent research teams can arrive at markedly divergent conclusions even when analysing identical data. Scientific objectivity is not achieved through a single "true" analytical method, but rather by making the range of justifiable alternatives transparent, the study authors emphasize.

Over the past decade, the social and behavioural sciences have undergone far-reaching reforms to make research practices more transparent, rigorous and reliable. Pre-registration, registered reports, replication studies and checks on analytical reproducibility aim to reduce the frequency of chance findings and biased results. However, one key question has received only limited attention to date: to what extent do research findings depend on the way data are analysed?

A recent study published in the journal Nature examines the analytical robustness of the social and behavioural sciences. The large-scale international research project was led by Balázs Aczél (Eötvös Loránd University) and Barnabás Szászi (Eötvös Loránd University and Corvinus University); Felix Holzmeister, a behavioural scientist at the University of Innsbruck, is one of the lead authors of the publication.

A team of 457 international researchers carried out 504 re-analyses of data from 100 previously published studies in the social and behavioural sciences. All analysts were provided with the dataset from the respective original study and the same research question. They were free to choose a suitable analytical method. The project was carried out as part of the Systematizing Confidence in Open Research and Evidence (SCORE) programme.

A multitude of justifiable statistical decisions

In standard academic practice, a dataset is analysed by individual researchers or a single research team. The resulting publication thus presents the outcome of a specific analytical approach. It generally remains unclear what results might have been obtained had alternative, yet equally valid, statistical decisions been made.

However, empirical research involves numerous decision points, such as data cleaning, the definition of variables, the use of statistical models and the interpretation of results. The flexibility involved in these decisions leads to analytical variability – a form of uncertainty that can fundamentally influence the final conclusions.
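To illustrate how such decision points can translate into diverging results, the following minimal sketch computes one effect estimate per combination of two justifiable analytic choices. The toy dataset, variable names, and decision rules are invented for illustration and are not taken from the study:

```python
# A minimal multiverse-analysis sketch (illustrative only; the data are
# invented). Each combination of justifiable analytic choices -- here, an
# outlier rule and a transformation -- yields its own effect estimate; the
# spread across combinations is the analytical variability the study describes.
import math
from itertools import product
from statistics import mean

# Toy data: (outcome, treatment_flag) pairs; 9.0 is a possible outlier.
data = [(2.1, 0), (1.9, 0), (2.4, 0), (9.0, 0),
        (3.0, 1), (3.4, 1), (2.8, 1), (3.1, 1)]

def effect(rows):
    """Mean difference between treated and control outcomes."""
    treated = [y for y, t in rows if t == 1]
    control = [y for y, t in rows if t == 0]
    return mean(treated) - mean(control)

# Two decision points, each with defensible alternatives.
outlier_rules = {
    "keep_all":  lambda rows: rows,
    "drop_gt_5": lambda rows: [(y, t) for y, t in rows if y <= 5],
}
transforms = {
    "raw": lambda y: y,
    "log": lambda y: math.log(y),
}

# One estimate per path through the space of analytic choices.
multiverse = {}
for (o_name, rule), (t_name, fn) in product(outlier_rules.items(),
                                            transforms.items()):
    rows = [(fn(y), t) for y, t in rule(data)]
    multiverse[(o_name, t_name)] = effect(rows)

for spec, est in sorted(multiverse.items()):
    print(spec, round(est, 3))
```

In this toy example, keeping the outlier makes the raw effect estimate negative, while dropping it flips the sign: a single defensible choice changes the conclusion, which is precisely the kind of variability the study quantifies at scale.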

The method of analysis influences the conclusions

Across the 100 studies, the meta-scientific study reveals considerable analytical variability in the results of independent analyses of the same research question based on the same data. Although most re-analyses broadly supported the main findings of the respective original studies, the estimated effect sizes and levels of uncertainty often differed substantially. Only in about one third of the cases did all analysts reach the same conclusions as the original authors.

The discrepancies observed cannot be attributed to a lack of expertise or flawed analyses. Established experts with a sound statistical background arrived at differing results just as frequently as early-career researchers. Observational studies proved to be less robust than experimental studies, suggesting that more complex data structures allow for greater analytical flexibility – and thus also entail greater uncertainty.

Multi-analyses strengthen scientific validity

Felix Holzmeister, associate professor at the University of Innsbruck, emphasises: “For the first time, the study provides a systematic insight into analytical variability across a broad cross-section of empirical studies. The findings suggest that analytical variability – and the associated uncertainty regarding conclusions – is not an isolated problem of individual studies or methods, but a fundamental feature of empirical research in the social and behavioural sciences.”

Balázs Aczél, a professor at Eötvös Loránd University, concludes: “These findings do not call into question the credibility of previous research. Rather, they highlight that the presentation of a single analysis often fails to reflect the actual extent of empirical uncertainty, and that ignoring analytical variability can lead to unwarranted confidence in scientific conclusions.”

Barnabás Szászi, Assistant Professor at Eötvös Loránd University and Corvinus University, adds: “We advocate for a wider use of multi-analyst and multiverse approaches, particularly when dealing with issues of great scientific or societal significance. Rather than seeking a single ‘true’ answer, these approaches reveal just how stable – or fragile – scientific conclusions actually are.”

 

Publication: Aczél, B., Szászi, B., Clelland, H. T., Kovacs, M., Holzmeister, F., et al. (2026). Investigating the analytical robustness of the social and behavioural sciences. Nature. DOI: 10.1038/s41586-025-09844-9

 
