What happens in the lab…sometimes stays in the lab
At the risk of inundating loyal readers with navel-gazing fodder, I recommend Gregory Mitchell's new article in Perspectives on Psychological Science (2012) examining effect sizes within and outside of the lab. Mitchell surveyed 82 meta-analyses that directly compared effect sizes for a similar conceptual phenomenon from in-lab and field experiments. On the surface, the results sound more encouraging than the bevy of self-criticism the field has conducted recently. Across all of psychology, the zero-order correlation between effect sizes in the lab and in the field was .71. Great; looks like our lab studies are holding up in the real world!
Things look somewhat less pristine when considering only social psychological phenomena. The same correlation between lab and field effect sizes was .53 for social psychological research, and Mitchell found that 21 out of 80 lab effect sizes actually changed sign (e.g., went from positive in the lab to negative in the field). These findings, while certainly refuting the idea that all social psychological lab studies are fabricated, contrived, and not applicable to the real world, suggest that effects seen in the lab can deviate considerably from those found in the field.
Why does social psychological research show less external validity than psychology as a whole? The culprit may be small effect sizes. The fact that 21 effect sizes changed sign from lab to field doesn't necessarily indicate problems with the validity of our research; rather, it could merely reflect an abundance of small effect sizes that, regardless of sign, do not differ from zero more than would be expected by chance. For example, an effect size of r = .09 in the lab might conceivably switch to r = -.04 in the field if the population effect size is actually ρ = 0, since both values sit comfortably within ordinary sampling error of zero. Indeed, the correlation between lab and field effect sizes was only .30 for lab studies with small effect sizes (which comprised 66.3% of the social psychology lab effects surveyed), compared with a correlation of .57 for lab studies with medium effect sizes. A preliminary conclusion from this report might be to adopt a skeptical view of lab studies with small effect sizes, given the relatively high likelihood that these effects will not translate to the real world.
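To make the sampling-error point concrete, here is a quick simulation sketch (my own illustration, not an analysis from Mitchell's paper; the sample size of n = 50 and the |r| = .09 cutoff are arbitrary choices). When the true population correlation is zero, sample correlations as large as ±.09 turn up routinely, so a lab r of .09 flipping to a field r of -.04 is unremarkable:

```python
import math
import random

# Illustrative simulation: with a true population correlation of rho = 0,
# how much does an observed sample correlation r bounce around?
# n = 50 is an arbitrary sample size chosen for illustration.

def sample_r(n, rng):
    """Pearson r between two independent standard-normal samples (so rho = 0)."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [rng.gauss(0, 1) for _ in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

rng = random.Random(1)
n, trials = 50, 20000
rs = [sample_r(n, rng) for _ in range(trials)]

# Fisher's approximation: near rho = 0 the standard error of r is ~1/sqrt(n - 3).
se = 1 / math.sqrt(n - 3)
frac_at_least_09 = sum(abs(r) >= 0.09 for r in rs) / trials

print(f"approximate SE of r at n={n}: {se:.3f}")
print(f"fraction of null samples with |r| >= .09: {frac_at_least_09:.2f}")
```

With n = 50 the standard error of r is roughly .15, so observing r = .09 in one study and r = -.04 in another is exactly what chance alone would produce; larger samples (or larger true effects) are needed before a sign flip says anything about validity.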
As a side note, young researchers enamored with the virtues of personality psychology (not naming names) might be tempted to boast about the excellent view of personality research provided by Mitchell's report: effect sizes in lab and field studies correlated .83, and only 1 of 22 in-lab effect sizes changed sign when examined in the field. Is personality psychology really that much more reliable and externally valid than social psychology? No way! Lab studies falling under the umbrella of personality research do not generally attempt to create complex situations like those classified as social psychological research; in-lab personality research is generally pretty simple and often involves procedures similar to those one would use in the field. For example, a number of studies examining attributional styles of depressed individuals found nearly identical effect sizes in the lab and in the field. Few researchers would be surprised that self-report assessments of attributional style are completed in much the same way regardless of the setting in which the study was conducted. In contrast, studies examining aggressive behavior (of which Mitchell examined many) may be more hard-pressed to replicate the heat of aggression that we see in the real world when confined to the sterile lab environment. That's not a fault of the researchers, but rather a necessary evil of scientifically measuring social interaction.