Brief Reports and the Sampling Distribution

Following on the heels of a year that included Bem’s ESP paper and the False Positive Psychology upheaval, and that culminated in our area-wide discussion of research practice in social and personality psychology, Perspectives on Psychological Science has just released a new issue with a special section dedicated to evaluating publication trends and strategies in our field. The five articles in the section range from discussions of meta-analysis to proposals for new ways of measuring impact, and curious readers should certainly check out the issue for themselves. I particularly enjoyed the two articles that targeted the recent proliferation of brief reports in our field. These articles critiqued brief reports on a number of grounds, including their relative lack of integration with prior literature and their propensity to showcase flashy effects at the expense of theoretically important psychological processes.

Another problem endemic to brief reports, which also received attention in the special section, is the inflation of the Type I error rate, both within a specific brief report and across any area of psychology that bases its knowledge largely on brief reports. Consider a basic statistical principle: any psychological phenomenon has a true effect size (whether quantified with a t-test, ANOVA, or correlation coefficient), and the sampling distribution of estimates of that effect size will be approximately normal, centered on the true effect, with a standard error that shrinks as the number of data points (e.g., subjects within studies; studies within papers) used to estimate the effect increases. In other words, any test of a psychological phenomenon will more closely approximate its true effect size to the extent that it involves a larger number of data points and is therefore less susceptible to variation due to sampling error (for further discussion, see LeBel & Peters, 2011, Review of General Psychology, pp. 375-376).
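To make this concrete, here is a minimal simulation sketch in Python; the true effect size, per-group sample sizes, and seed are all hypothetical values chosen purely for illustration. It shows the point above directly: estimates of a fixed true effect cluster more tightly around it as the number of subjects per study grows.

```python
# Sketch: the sampling distribution of an effect size estimate tightens
# as sample size grows. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
true_d = 0.4          # assumed true standardized mean difference
n_sims = 10_000       # number of simulated "studies" per sample size

for n in (20, 80, 320):  # subjects per group (hypothetical)
    # Draw control and treatment samples, then estimate Cohen's d per study.
    control = rng.normal(0.0, 1.0, size=(n_sims, n))
    treated = rng.normal(true_d, 1.0, size=(n_sims, n))
    pooled_sd = np.sqrt((control.var(axis=1, ddof=1) +
                         treated.var(axis=1, ddof=1)) / 2)
    d_hat = (treated.mean(axis=1) - control.mean(axis=1)) / pooled_sd
    print(f"n per group = {n:3d}: mean d = {d_hat.mean():.3f}, "
          f"SD of estimates = {d_hat.std():.3f}")
```

The standard deviation of the estimates (the empirical standard error) falls as n rises, which is the whole argument in one number.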

Now, consider brief reports. First, brief reports include fewer data points (e.g., fewer subjects overall; fewer studies) than longer articles and thus, by definition, contain more sampling error. The effect sizes disseminated in brief reports are therefore likely to stray further from the true effect size of the psychological phenomenon in question than the effects reported in longer articles. An immediate consequence of these error-laden effect sizes is that they are more likely to reach the large magnitudes needed to attain statistical significance even when the true effect size of the phenomenon is near zero. This consequence parallels the critique of underpowered studies raised by the authors of the False Positive Psychology paper.
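A short sketch of this inflation, in the same simulation framework (the true effect of zero, the group sizes, and the |d| > 0.5 threshold are all assumptions for illustration): with a near-zero true effect, small samples produce large observed effects far more often, and the significant results they do yield are the most exaggerated.

```python
# Sketch: under a near-zero true effect, small-n studies more often
# produce impressively large (and significant-looking) effect sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_d = 0.0          # assumed near-zero true effect
n_sims = 20_000

for n in (15, 60, 240):  # subjects per group (hypothetical)
    control = rng.normal(0.0, 1.0, size=(n_sims, n))
    treated = rng.normal(true_d, 1.0, size=(n_sims, n))
    t, p = stats.ttest_ind(treated, control, axis=1)
    pooled_sd = np.sqrt((control.var(axis=1, ddof=1) +
                         treated.var(axis=1, ddof=1)) / 2)
    d_hat = (treated.mean(axis=1) - control.mean(axis=1)) / pooled_sd
    sig = p < .05
    print(f"n = {n:3d}: P(|d| > 0.5) = {np.mean(np.abs(d_hat) > 0.5):.3f}, "
          f"mean |d| among significant results = {np.abs(d_hat[sig]).mean():.3f}")
```

Note that the Type I error rate itself stays near .05 at every n; what changes is how large an estimate must be to cross the threshold, so the significant results that small studies produce are the most distorted ones.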

A second, and more often overlooked, consequence is that scientific knowledge built by aggregating brief reports is likely to contain more variation than knowledge built by aggregating longer articles. Again, because brief reports contain fewer data points than longer articles, their estimates of true psychological effect sizes are likely to contain more sampling error. In other words, any given brief report is more likely to misestimate a true psychological effect than a corresponding longer paper. An area of psychology flooded with brief reports is therefore likely to contain widely divergent estimates of true population effect sizes, sowing confusion and misdirection for researchers attempting to advance the field systematically. In addition, because publication is biased toward non-null results, some of these widely divergent effects (i.e., those that are large and in the direction that previous theory would dictate) are more likely to be published. Such publication bias will further shift the field’s perception of psychological phenomena away from their true effect sizes (for an illustration, see Bertamini & Munafò, 2012, Perspectives on Psychological Science, pp. 68-69).
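The same simulation framework can sketch this second consequence. The selection rule below ("publish only significant results in the predicted direction") is a deliberately crude assumption, not a model of any actual journal’s behavior, but it shows the mechanism: filtering on significance pushes the published average well above the true effect, and the distortion is worst at the small sample sizes typical of brief reports.

```python
# Sketch: publication bias toward significant, predicted-direction
# results inflates the published effect, most severely at small n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_d = 0.2          # assumed modest true effect
n_sims = 20_000

for n in (15, 60, 240):  # subjects per group (hypothetical)
    control = rng.normal(0.0, 1.0, size=(n_sims, n))
    treated = rng.normal(true_d, 1.0, size=(n_sims, n))
    t, p = stats.ttest_ind(treated, control, axis=1)
    pooled_sd = np.sqrt((control.var(axis=1, ddof=1) +
                         treated.var(axis=1, ddof=1)) / 2)
    d_hat = (treated.mean(axis=1) - control.mean(axis=1)) / pooled_sd
    # Assumed selection rule: only significant, positive effects appear.
    published = (p < .05) & (d_hat > 0)
    print(f"n = {n:3d}: true d = {true_d}, "
          f"mean published d = {d_hat[published].mean():.3f} "
          f"({published.mean():.1%} of studies published)")
```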

In conclusion, the points raised in the current issue of Perspectives deserve careful consideration: they are a cautionary tale about publication trends in our field.

Discussion (1 Comment)

Jess

Great points, Aaron! I hope that these kinds of papers, and the field’s growing awareness of these issues, help temper the excitement that has surrounded Psych Science and its hot, flashy short reports in recent years. The good news is that JPSP (widely bemoaned for its stereotypical ‘8+ studies testing all possible mediators, moderators, and boundary conditions’ requirement) still has a higher impact factor than Psych Science. In fact, the PS impact factor is relatively low for all the hype it gets: in 2010 it was just above 4.0, quite good for any random psychology journal but not particularly impressive for a flagship psych journal.
