File drawer effect meta-analysis


The 'file drawer problem' refers to the fact that in science, many results remain unpublished - especially negative ones. This is a problem because it produces publication bias. Now, a group of Belgian psychology researchers have decided to make a stand. In a bold move against publication bias, they've thrown open their own file drawer. In the new paper, Anthony Lane and colleagues from the Université catholique de Louvain say that they've realized that over the years, "our publication portfolio has become less and less representative of our actual findings". Therefore, they "decided to get these [unpublished] studies out of our drawer and encourage other laboratories to do the same." Lane et al.'s research focus is oxytocin, the much-discussed "love hormone". Their lab has published a number of papers reporting that an intranasal spray of oxytocin alters human behaviour. But they now reveal that they also tried to publish numerous negative findings, yet these null results remain in the file drawer because they weren't accepted for publication.

Is there a file drawer problem in intranasal oxytocin research? If this is the case, it may also be the case in our laboratory. This paper aims to answer that question, document the extent of the problem, and discuss its implications for intranasal oxytocin research. We present eight studies (including 13 dependent variables overall, assessed through 25 different paradigms) that were performed in our lab from 2009 until 2014 on a total of 453 subjects... As we will demonstrate below, the results were too often not those expected. Only four studies (most often a part of them) of the eight were submitted for publication, yielding five articles (2, 8, 27, 34, 35). Of these five articles, only one (27) reports a null-finding. We submitted several studies yielding null-findings to different journals (from general interest in psychology to specialized in biological psychology and in psychoendocrinology) but they were rejected time and time again.

Neuroskeptic readers may remember Lane et al.'s sole published negative study (27), as I blogged about it last year. The authors go on to present the results of all eight oxytocin studies. A meta-analysis of all of the studies finds that oxytocin has no detectable effect: "The aggregated effect size was not reliably different from zero, Cohen's d = 0.003 (95% CI: −0.10 to 0.10)". They conclude, in an understated but powerful paragraph:

This large proportion of "unexpected" null-findings raises concerns about the validity of what we know about the influence of intranasal oxytocin on human behaviors and cognition... Our initial enthusiasm on intranasal oxytocin findings has slowly faded away over the years and the studies have turned us from 'believers' into 'skeptics'.
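
For readers unfamiliar with how such an aggregate is reached: a meta-analytic mean is typically an inverse-variance weighted average of the per-study effects, with more precise studies counting for more. Here is a minimal sketch in Python, with hypothetical per-study Cohen's d values and standard errors standing in for Lane et al.'s actual data:

```python
# Sketch of a fixed-effect meta-analytic aggregate: an inverse-variance
# weighted mean of per-study Cohen's d values, with a 95% CI.
# The d values and standard errors below are hypothetical placeholders,
# not Lane et al.'s data.
import numpy as np

d = np.array([0.10, -0.05, 0.02, -0.12, 0.08, 0.00, -0.03, 0.04])
se = np.array([0.15, 0.12, 0.20, 0.14, 0.18, 0.10, 0.16, 0.13])

w = 1 / se**2                            # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)     # weighted mean effect
se_pooled = np.sqrt(1 / np.sum(w))       # standard error of the mean
ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)
print(f"d = {d_pooled:.3f}, 95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")
```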

So what? In my view this is a very important paper, and a brave move by the authors. This kind of revelation of what goes on "behind closed drawers" could be an effective remedy for publication bias. I suspect, though, that prevention is better than cure, and that the best way to keep the file drawers from filling up in the first place will be to reform the scientific process itself.

That said, a 'skeptic' might say that Lane et al. are doing too little, too late. After all, their papers reporting positive effects of oxytocin are still out there - and some of them have been highly cited. If Lane et al. no longer have confidence in those papers, should they retract them? I don't think so. If we started expecting scientists to retract papers whenever they changed their minds, I think it would have two effects: slightly more papers would be retracted, and scientists would change their minds a lot less. By publishing these results, Lane et al. have ensured that future meta-analysts will be able to include the full dataset in their calculations. In the long run, this will erase any damage caused by the publication bias.

Hat tip: thanks to Bernard Carroll.

Anthony Lane, Olivier Luminet, Gideon Nave and Moïra Mikolajczak (2016). "Is there a publication bias in behavioral intranasal oxytocin research on humans? Opening the file drawer of one lab". Journal of Neuroendocrinology.

ABSTRACT

The replication crisis in the social and psychological sciences is said to be due in part to publication bias and the resulting file drawer problem. Meta-analysis is often advocated as a means of resolving this crisis but is prone to the same publication bias and file drawer effects. A study of 23 meta-analyses examined the consequences of correcting for these effects through the inclusion of unpublished research. Results indicated that the inclusion of unpublished data led to smaller meta-analytic means in some instances, consistent with the primary theorized effects of the file drawer problem and publication bias, but led to larger meta-analytic means in nearly as many instances, suggesting the presence and influence of other, unknown artifacts in unpublished research. These findings suggest that meta-analysis is limited in its ability to resolve the replication crisis and instead may introduce unexplained effects that render cumulative science problematic.

In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results.[1] The study of publication bias is an important topic in metascience.

Despite similar quality of execution and design,[2] papers with statistically significant results are three times more likely to be published than those with null results.[3] This unduly motivates researchers to manipulate their practices to ensure statistically significant results, such as by data dredging.[4]

Many factors contribute to publication bias.[5] For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis.[6] Most commonly, investigators simply decline to submit results, leading to non-response bias. Investigators may also assume they made a mistake, find that the null result fails to support a known finding, lose interest in the topic, or anticipate that others will be uninterested in the null results.[2] These issues, and the problems that flow from them, have been described as the five diseases that threaten science: "significosis, an inordinate focus on statistically significant results; neophilia, an excessive appreciation for novelty; theorrhea, a mania for new theory; arigorium, a deficiency of rigor in theoretical and empirical work; and finally, disjunctivitis, a proclivity to produce many redundant, trivial, and incoherent works."[7]

Attempts to find unpublished studies often prove difficult or are unsatisfactory.[5] In an effort to combat this problem, some journals require that studies submitted for publication be pre-registered (before data collection and analysis) with organizations like the Center for Open Science.

Other proposed strategies to detect and control for publication bias[5] include p-curve analysis[8] and disfavoring small and non-randomized studies due to high susceptibility to error and bias.[2]

Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested, and on the significance and direction of the effects detected.[9] Statistician Theodore Sterling first discussed the subject in 1959, describing fields in which "successful" research is more likely to be published. As a result, "the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind in statistical tests of significance".[10] In the worst case, false conclusions could become canonized as true if the publication rate of negative results is too low.[11]

Publication bias is sometimes called the file-drawer effect, or file-drawer problem. This term suggests that results not supporting the hypotheses of researchers often go no further than the researchers' file drawers, leading to a bias in published research.[12] The term "file drawer problem" was coined by psychologist Robert Rosenthal in 1979.[13]

Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors are more likely to accept, positive results than negative or inconclusive results.[14] Outcome reporting bias occurs when multiple outcomes are measured and analyzed, but the reporting of these outcomes depends on the strength and direction of their results. A generic term coined to describe such post-hoc choices is HARKing ("Hypothesizing After the Results are Known").[15]

[Figure: meta-analysis of stereotype threat on girls' math scores, showing the asymmetry typical of publication bias. From Flore, P. C., & Wicherts, J. M. (2015)[16]]

There is extensive meta-research on publication bias in the biomedical field. Investigators following clinical trials from the submission of their protocols to ethics committees (or regulatory authorities) until the publication of their results observed that those with positive results are more likely to be published.[17][18][19] In addition, studies often fail to report negative results when published, as demonstrated by research comparing study protocols with published articles.[20][21]

The presence of publication bias has also been investigated in meta-analyses. The largest such analysis examined systematic reviews of medical treatments from the Cochrane Library.[22] It showed that statistically significant positive findings are 27% more likely to be included in meta-analyses of efficacy than other findings. Results showing no evidence of adverse effects have a 78% greater probability of inclusion in safety studies than statistically significant results showing adverse effects. Evidence of publication bias has also been found in meta-analyses published in prominent medical journals.[23]

Where publication bias is present, published studies are no longer a representative sample of the available evidence. This bias distorts the results of meta-analyses and systematic reviews, which is particularly concerning because evidence-based medicine increasingly relies on meta-analysis to assess evidence.

Meta-analyses and systematic reviews can account for publication bias by including evidence from unpublished studies and the grey literature. The presence of publication bias can also be explored by constructing a funnel plot, in which the estimate of the reported effect size is plotted against a measure of precision or sample size. The premise is that the scatter of points should reflect a funnel shape, indicating that the reporting of effect sizes is not related to their statistical significance.[24] However, when small studies fall predominantly in one direction (usually the direction of larger effect sizes), asymmetry will ensue, and this may be indicative of publication bias.[25]
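
As an illustration, here is a minimal sketch of how such a funnel plot might be constructed, assuming per-study effect sizes and standard errors are already in hand (the arrays below are hypothetical placeholders, not data from any cited study):

```python
# Funnel plot sketch: per-study effect sizes plotted against their
# standard errors (a common precision measure), with the pooled
# inverse-variance mean marked. Hypothetical data.
import numpy as np
import matplotlib.pyplot as plt

effects = np.array([0.42, 0.31, 0.55, 0.12, 0.28, 0.60, 0.05, 0.35])
ses = np.array([0.20, 0.15, 0.25, 0.08, 0.12, 0.30, 0.06, 0.18])

pooled = np.average(effects, weights=1 / ses**2)  # inverse-variance mean

plt.scatter(effects, ses)
plt.axvline(pooled, linestyle="--", label=f"pooled effect = {pooled:.2f}")
plt.gca().invert_yaxis()               # most precise studies at the top
plt.xlabel("Effect size")
plt.ylabel("Standard error")
plt.legend()
plt.title("Funnel plot")
plt.show()
```

In an unbiased literature, the points scatter symmetrically around the pooled estimate; a missing lower corner of small, null-leaning studies is the asymmetry the text describes.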

Because an inevitable degree of subjectivity exists in the interpretation of funnel plots, several tests have been proposed for detecting funnel plot asymmetry.[24][26][27] These are often based on linear regression, including the popular Egger's regression test,[28] and may adopt a multiplicative or additive dispersion parameter to adjust for the presence of between-study heterogeneity. Some approaches go further and attempt to compensate for the (potential) presence of publication bias,[22][29][30] which is particularly useful for exploring its potential impact on meta-analysis results.[31][32][33]
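
A minimal sketch of Egger's test follows, reusing the same kind of hypothetical per-study inputs: the standardized effect is regressed on precision, and an intercept reliably different from zero signals funnel plot asymmetry.

```python
# Egger's regression test (sketch): regress the standardized effect
# (effect / SE) on precision (1 / SE). An intercept significantly
# different from zero suggests funnel plot asymmetry.
import numpy as np
import statsmodels.api as sm

def eggers_test(effects, ses):
    effects, ses = np.asarray(effects), np.asarray(ses)
    z = effects / ses                      # standardized effects
    precision = 1.0 / ses
    fit = sm.OLS(z, sm.add_constant(precision)).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value

intercept, p = eggers_test(
    [0.42, 0.31, 0.55, 0.12, 0.28, 0.60, 0.05, 0.35],   # hypothetical
    [0.20, 0.15, 0.25, 0.08, 0.12, 0.30, 0.06, 0.18])
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```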

Two meta-analyses of the efficacy of reboxetine as an antidepressant demonstrated attempts to detect publication bias in clinical trials. Based on positive trial data, reboxetine was originally approved as a treatment for depression in many countries in Europe and the UK in 2001 (though in practice it is rarely used for this indication). A 2010 meta-analysis concluded that reboxetine was ineffective and that the preponderance of positive-outcome trials reflected publication bias, mostly due to trials published by the drug manufacturer Pfizer. A subsequent meta-analysis published in 2011, based on the original data, found flaws in the 2010 analyses and suggested that the data indicated reboxetine was effective in severe depression (see Reboxetine § Efficacy). Further examples of publication bias are given by Ben Goldacre[34] and Peter Wilmshurst.[35]

In the social sciences, a study of published papers exploring the relationship between corporate social and financial performance found that "in economics, finance, and accounting journals, the average correlations were only about half the magnitude of the findings published in Social Issues Management, Business Ethics, or Business and Society journals".[36]

One example cited as an instance of publication bias is the refusal of the Journal of Personality and Social Psychology (the original publisher of Bem's article) to publish attempted replications of Bem's work claiming evidence for precognition.[37]

An analysis[38] comparing studies of gene-disease associations originating in China to those originating outside China found that those conducted within the country reported a stronger association and a more statistically significant result.[39]

John Ioannidis argues that "claimed research findings may often be simply accurate measures of the prevailing bias."[40] He lists the following factors as those that make a paper with a positive result more likely to enter the literature and suppress negative-result papers:

  • The studies conducted in a field have small sample sizes.
  • The effect sizes in a field tend to be smaller.
  • There is both a greater number and lesser preselection of tested relationships.
  • There is greater flexibility in designs, definitions, outcomes, and analytical modes.
  • There are prejudices (financial interest, political, or otherwise).
  • The scientific field is hot and there are more scientific teams pursuing publication.

Other factors include experimenter bias and white hat bias.

Publication bias can be mitigated through better-powered studies, enhanced research standards, and careful consideration of whether tested relationships are likely to be true.[40] Better-powered studies refer to large studies that deliver definitive results or test major concepts and lead to low-bias meta-analysis. Enhanced research standards include the pre-registration of protocols, the registration of data collections, and adherence to established protocols. To avoid false-positive results, the experimenter must consider the probability that they are testing a true relationship rather than a false one. This can be done by properly assessing the false positive report probability based on the statistical power of the test,[41] and by reconfirming (whenever ethically acceptable) established findings of prior studies known to have minimal bias.
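
To make the last point concrete, here is a minimal sketch of the false positive report probability calculation in the spirit of Wacholder et al.;[41] the numbers are illustrative assumptions, not values taken from that paper.

```python
def fprp(alpha, power, prior):
    """False positive report probability: the chance that a
    statistically significant finding is a false positive, given the
    test's alpha level, its power, and the prior probability that the
    tested relationship is real (after Wacholder et al., 2004)."""
    return (alpha * (1 - prior)) / (alpha * (1 - prior) + power * prior)

# A well-powered test (80%) of a long-shot hypothesis (1% prior) at
# alpha = 0.05 still leaves roughly an 86% chance that a "significant"
# result is false:
print(fprp(alpha=0.05, power=0.80, prior=0.01))  # ~0.861
```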

Study registration

In September 2004, editors of prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of drug research sponsored by pharmaceutical companies unless that research was registered in a public clinical trials registry database from the start.[42] Furthermore, some journals (e.g. Trials) encourage the publication of study protocols.[43]

The World Health Organization (WHO) agreed that basic information about all clinical trials should be registered at the study's inception, and that this information should be publicly accessible through the WHO International Clinical Trials Registry Platform. Additionally, public availability of complete study protocols, alongside reports of trials, is becoming more common for studies.[44]


See also

  • Academic bias
  • Bad Pharma – Polemical book (2012) by Ben Goldacre
  • Adversarial collaboration
  • AllTrials
  • Confirmation bias – Bias confirming existing attitudes
  • Conflicts of interest in academic publishing – Overview of conflicts of interest in academic publishing
  • Counternull
  • Observer bias – Cognitive bias
  • Funding bias – Tendency of a scientific study to support the interests of its funder
  • FUTON bias
  • List of cognitive biases – Systematic patterns of deviation from norm or rationality in judgment
  • Parapsychology – Study of paranormal and psychic phenomena
  • Peer review – Evaluation of work by one or more people of similar competence to the producers of the work
  • Proteus phenomenon
  • Replication crisis – Ongoing methodological crisis in science stemming from failure to replicate many studies
  • Selection bias – Bias in a statistical analysis due to non-random selection
  • Scientific journals for null results
  • White hat bias – Type of bias in public health research
  • Woozle effect – False credibility due to quantity of citations

References

  1. ^ Song, F.; Parekh, S.; Hooper, L.; Loke, Y. K.; Ryder, J.; Sutton, A. J.; Hing, C.; Kwok, C. S.; Pang, C.; Harvey, I. (2010). "Dissemination and publication of research findings: An updated review of related biases". Health Technology Assessment. 14 (8): iii, ix–xi, 1–193. doi:10.3310/hta14080. PMID 20181324.
  2. ^ a b c Easterbrook, P. J.; Berlin, J. A.; Gopalan, R.; Matthews, D. R. (1991). "Publication bias in clinical research". Lancet. 337 (8746): 867–872. doi:10.1016/0140-6736(91)90201-Y. PMID 1672966. S2CID 36570135.
  3. ^ Dickersin, K.; Chan, S.; Chalmers, T. C.; et al. (1987). "Publication bias and clinical trials". Controlled Clinical Trials. 8 (4): 343–353. doi:10.1016/0197-2456(87)90155-3. PMID 3442991.
  4. ^ Pearce, J; Derrick, B (2019). "Preliminary testing: The devil of statistics?". Reinvention: An International Journal of Undergraduate Research. 12 (2). doi:10.31273/reinvention.v12i2.339.
  5. ^ a b c Rothstein, H.; Sutton, A. J.; Borenstein, M. (2005). Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Chichester, England; Hoboken, NJ: Wiley.
  6. ^ Luijendijk, HJ; Koolman, X (May 2012). "The incentive to publish negative studies: how beta-blockers and depression got stuck in the publication cycle". J Clin Epidemiol. 65 (5): 488–92. doi:10.1016/j.jclinepi.2011.06.022. PMID 22342262.
  7. ^ Antonakis, John (February 2017). "On doing better science: From thrill of discovery to policy implications" (PDF). The Leadership Quarterly. 28 (1): 5–21. doi:10.1016/j.leaqua.2017.01.006.
  8. ^ Simonsohn, Uri; Nelson, Leif D.; Simmons, Joseph P. (2014). "P-curve: A key to the file-drawer". Journal of Experimental Psychology: General. 143 (2): 534–547. doi:10.1037/a0033242. PMID 23855496. S2CID 8505270.
  9. ^ K. Dickersin (March 1990). "The existence of publication bias and risk factors for its occurrence". JAMA. 263 (10): 1385–9. doi:10.1001/jama.263.10.1385. PMID 2406472.
  10. ^ Sterling, Theodore D. (March 1959). "Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa". Journal of the American Statistical Association. 54 (285): 30–34. doi:10.2307/2282137. JSTOR 2282137.
  11. ^ Nissen, Silas Boye; Magidson, Tali; Gross, Kevin; Bergstrom, Carl (20 December 2016). "Research: Publication bias and the canonization of false facts". eLife. 5: e21451. arXiv:1609.00494. doi:10.7554/eLife.21451. PMC 5173326. PMID 27995896.
  12. ^ Jeffrey D. Scargle (2000). "Publication bias: the "file-drawer problem" in scientific inference" (PDF). Journal of Scientific Exploration. 14 (1): 91–106. arXiv:physics/9909033. Bibcode:1999physics...9033S.
  13. ^ Rosenthal R (1979). "File drawer problem and tolerance for null results". Psychol Bull. 86 (3): 638–41. doi:10.1037/0033-2909.86.3.638.
  14. ^ D.L. Sackett (1979). "Bias in analytic research". J Chronic Dis. 32 (1–2): 51–63. doi:10.1016/0021-9681(79)90012-2. PMID 447779.
  15. ^ N.L. Kerr (1998). "HARKing: Hypothesizing After the Results are Known" (PDF). Personality and Social Psychology Review. 2 (3): 196–217. doi:10.1207/s15327957pspr0203_4. PMID 15647155.
  16. ^ Flore P. C.; Wicherts J. M. (2015). "Does stereotype threat influence performance of girls in stereotyped domains? A meta-analysis". J Sch Psychol. 53 (1): 25–44. doi:10.1016/j.jsp.2014.10.002. PMID 25636259.
  17. ^ Dickersin, K.; Min, Y.I. (1993). "NIH clinical trials and publication bias". Online J Curr Clin Trials. Doc No 50: [4967 words, 53 paragraphs]. ISSN 1059-2725. PMID 8306005.
  18. ^ Decullier E, Lheritier V, Chapuis F (2005). "Fate of biomedical research protocols and publication bias in France: retrospective cohort study". BMJ. 331 (7507): 19–22. doi:10.1136/bmj.38488.385995.8f. PMC 558532. PMID 15967761.
  19. ^ Song F, Parekh-Bhurke S, Hooper L, Loke Y, Ryder J, Sutton A, et al. (2009). "Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies". BMC Med Res Methodol. 9: 79. doi:10.1186/1471-2288-9-79. PMC 2789098. PMID 19941636.
  20. ^ Chan AW, Altman DG (2005). "Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors". BMJ. 330 (7494): 753. doi:10.1136/bmj.38356.424606.8f. PMC 555875. PMID 15681569.
  21. ^ Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P (2013). "Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals". PLOS Med. 10 (12): e1001566. doi:10.1371/journal.pmed.1001566. PMC 3849189. PMID 24311990.
  22. ^ a b Kicinski, M; Springate, D. A.; Kontopantelis, E (2015). "Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews". Statistics in Medicine. 34 (20): 2781–93. doi:10.1002/sim.6525. PMID 25988604. S2CID 25560005.
  23. ^ Kicinski M (2013). "Publication bias in recent meta-analyses". PLOS ONE. 8 (11): e81823. Bibcode:2013PLoSO...881823K. doi:10.1371/journal.pone.0081823. PMC 3868709. PMID 24363797.
  24. ^ a b Debray, Thomas P.A.; Moons, Karel G.M.; Riley, Richard D. (2018). "Detecting small-study effects and funnel plot asymmetry in meta-analysis of survival data: a comparison of new and existing tests". Research Synthesis Methods. 9 (1): 41–50. doi:10.1002/jrsm.1266. ISSN 1759-2887. PMC 5873397. PMID 28975717.
  25. ^ Light, Richard J.; Pillemer, David B. (1984). Summing Up: The Science of Reviewing Research. Cambridge, Mass.: Harvard University Press. pp. 65ff. doi:10.2307/j.ctvk12px9. ISBN 9780674854307. OCLC 1036880624.
  26. ^ Jin, Zhi-Chao; Zhou, Xiao-Hua; He, Jia (30 January 2015). "Statistical methods for dealing with publication bias in meta-analysis". Statistics in Medicine. 34 (2): 343–360. doi:10.1002/sim.6342. ISSN 1097-0258. PMID 25363575. S2CID 12341436.
  27. ^ Rücker, Gerta; Carpenter, James R.; Schwarzer, Guido (1 March 2011). "Detecting and adjusting for small-study effects in meta-analysis". Biometrical Journal. 53 (2): 351–368. doi:10.1002/bimj.201000151. ISSN 1521-4036. PMID 21374698. S2CID 24560718.
  28. ^ Egger, M.; Smith, G. D.; Schneider, M.; Minder, C. (13 September 1997). "Bias in meta-analysis detected by a simple, graphical test". BMJ. 315 (7109): 629–634. doi:10.1136/bmj.315.7109.629. ISSN 0959-8138. PMC 2127453. PMID 9310563.
  29. ^ Silliman N (1997). "Hierarchical selection models with applications in meta-analysis". Journal of the American Statistical Association. 92 (439): 926–936. doi:10.1080/01621459.1997.10474047.
  30. ^ Hedges L, Vevea J (1996). "Estimating effect size under publication bias: small sample properties and robustness of a random effects selection model". Journal of Educational and Behavioral Statistics. 21 (4): 299–332. doi:10.3102/10769986021004299. S2CID 123680599.
  31. ^ McShane, Blakeley B.; Böckenholt, Ulf; Hansen, Karsten T. (29 September 2016). "Adjusting for Publication Bias in Meta-Analysis". Perspectives on Psychological Science. 11 (5): 730–749. doi:10.1177/1745691616662243. PMID 27694467.
  32. ^ Sutton AJ, Song F, Gilbody SM, Abrams KR (2000). "Modelling publication bias in meta-analysis: a review". Stat Methods Med Res. 9 (5): 421–445. doi:10.1191/096228000701555244.
  33. ^ Kicinski, M (2014). "How does under-reporting of negative and inconclusive results affect the false-positive rate in meta-analysis? A simulation study". BMJ Open. 4 (8): e004831. doi:10.1136/bmjopen-2014-004831. PMC 4156818. PMID 25168036.
  34. ^ Goldacre, Ben (June 2012). What doctors don't know about the drugs they prescribe (Speech). TEDMED 2012. Retrieved 3 February 2020.
  35. ^ Wilmshurst, Peter (2007). "Dishonesty in Medical Research" (PDF). Medico-Legal Journal. 75 (1): 3–12. doi:10.1258/rsmmlj.75.1.3. PMID 17506338. S2CID 26915448. Archived from the original on 21 May 2013.
  36. ^ Orlitzky, Marc (2011). "Institutional Logics in the Study of Organizations: The Social Construction of the Relationship between Corporate Social and Financial Performance" (PDF). Business Ethics Quarterly. 21 (3): 409–444. doi:10.5840/beq201121325. S2CID 147466849. Archived from the original (PDF) on 25 January 2018.
  37. ^ Goldacre, Ben (23 April 2011). "Backwards step on looking into the future". The Guardian. Retrieved 11 April 2017.
  38. ^ Pan, Zhenglun; Trikalinos, Thomas A.; Kavvoura, Fotini K.; Lau, Joseph; Ioannidis, John P.A. (2005). "Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature". PLOS Medicine. 2 (12): e334. doi:10.1371/journal.pmed.0020334. PMC 1285066. PMID 16285839.
  39. ^ Ling Tang Jin (2005). "Selection Bias in Meta-Analyses of Gene-Disease Associations". PLOS Medicine. 2 (12): e409. doi:10.1371/journal.pmed.0020409. PMC 1285067. PMID 16363911.
  40. ^ a b Ioannidis J (2005). "Why most published research findings are false". PLOS Med. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
  41. ^ Wacholder, S.; Chanock, S; Garcia-Closas, M; El Ghormli, L; Rothman, N (March 2004). "Assessing the Probability That a Positive Report is False: An Approach for Molecular Epidemiology Studies". JNCI. 96 (6): 434–42. doi:10.1093/jnci/djh075. PMC 7713993. PMID 15026468.
  42. ^ Vedantam, Shankar (9 September 2004). "Journals Insist Drug Manufacturers Register All Trials". Washington Post. Retrieved 3 February 2020.
  43. ^ "Instructions for Trials authors — Study protocol". 15 February 2009. Archived from the original on 2 August 2007.
  44. ^ Dickersin, K.; Chalmers, I. (2011). "Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO". J R Soc Med. 104 (12): 532–538. doi:10.1258/jrsm.2011.11k042. PMC 3241511. PMID 22179297.

Further reading

  • Lehrer, Jonah (13 December 2010). "The Truth Wears Off". The New Yorker. Retrieved 30 January 2020.
  • Register of clinical trials conducted in the US and around the world, maintained by the National Library of Medicine, Bethesda
  • Skeptic's Dictionary: positive outcome bias.
  • Skeptic's Dictionary: file-drawer effect.
  • Journal of Negative Results in Biomedicine
  • The All Results Journals
  • Journal of Articles in Support of the Null Hypothesis
  • Psychfiledrawer.org: Archive for replication attempts in experimental psychology
