
Reproducibility in neuroimaging analysis: challenges and solutions

Published: December 19, 2022
DOI: https://doi.org/10.1016/j.bpsc.2022.12.006
Recent years have marked a renaissance in efforts to increase research reproducibility in psychology, neuroscience, and related fields. Reproducibility is the cornerstone of a solid foundation for fundamental research, one that will support new theories built on valid findings and technological innovation that works. The increased focus on reproducibility has made its barriers increasingly apparent and has spurred the development of new tools and practices to overcome them. Here, we review challenges, solutions, and emerging best practices, with a particular emphasis on neuroimaging studies. We distinguish three main types of reproducibility and discuss each in turn. “Analytical reproducibility” is the ability to reproduce findings using the same data and methods. “Replicability” is the ability to find an effect in new datasets, using the same or similar methods. “Robustness to analytical variability” refers to the ability to identify a finding consistently across variation in methods. Incorporating these tools and practices will result in more reproducible, replicable, and robust psychological and brain research, and a stronger scientific foundation across fields of inquiry.
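The distinction among the three reproducibility types can be made concrete with a toy example. The sketch below is purely illustrative and is not drawn from the article: the `analysis` function and its `smoothing` parameter are hypothetical stand-ins for a real neuroimaging pipeline, and the synthetic data stand in for acquired scans.

```python
# Illustrative sketch (not from the article): contrasting the three
# reproducibility types on a toy mean-effect "analysis" of synthetic data.
import numpy as np

def analysis(data, smoothing=1.0):
    # Hypothetical stand-in for a full pipeline: "smooth", then estimate an effect.
    smoothed = data * smoothing  # placeholder for spatial smoothing
    return smoothed.mean()

rng = np.random.default_rng(seed=0)            # fixed seed: identical data each run
original_data = rng.normal(loc=0.5, size=100)  # stand-in for the original dataset

# 1. Analytical reproducibility: same data, same code -> identical result.
assert analysis(original_data) == analysis(original_data)

# 2. Replicability: the same method applied to a newly collected dataset.
new_data = np.random.default_rng(seed=1).normal(loc=0.5, size=100)
print("original effect:   ", analysis(original_data))
print("replication effect:", analysis(new_data))

# 3. Robustness to analytical variability: same data, varied pipeline choices.
for smoothing in (0.5, 1.0, 2.0):
    print(f"smoothing={smoothing}: effect={analysis(original_data, smoothing):.3f}")
```

In a real study, step 1 would also pin the software environment (e.g., via containers), step 2 would involve an independent sample, and step 3 would vary genuine pipeline decisions such as preprocessing software or statistical model, as in multiverse analyses.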

