
What happens in the lab…sometimes stays in the lab

At the risk of inundating loyal readers with navel-gazing fodder, I recommend Gregory Mitchell's new article in Perspectives on Psychological Science (2012) examining effect sizes inside and outside the lab. Mitchell surveyed 82 meta-analyses that directly compared effect sizes for the same conceptual phenomenon across lab and field studies. On the surface, the results sound more encouraging than the bevy of self-criticism the field has engaged in recently: across all of psychology, the zero-order correlation between lab and field effect sizes was .71. Great; looks like our lab studies are holding up in the real world!

Things look somewhat less pristine when considering only social psychological phenomena. The same lab–field correlation was .53 for social psychological research, and Mitchell found that 21 of 80 lab effect sizes actually changed sign (i.e., a positive effect in the lab turned negative in the field, or vice versa). These findings certainly refute the idea that all social psychological lab studies are fabricated, contrived, and inapplicable to the real world, but they suggest that effects observed in the lab can deviate considerably from those found in the field.

Why does social psychological research show weaker external validity than psychology as a whole? The culprit may be small effect sizes. The fact that 21 effect sizes changed sign from lab to field doesn't necessarily indicate problems with the validity of our research; it could merely reflect an abundance of small effects that, regardless of sign, do not differ from zero more than would be expected by chance. For example, an effect of r = .09 in the lab might conceivably come out as r = -.04 in the field if the population effect size is actually ρ = 0. Indeed, the correlation between lab and field effect sizes was only .30 for lab studies with small effects (which made up 66.3% of the social psychology lab effects surveyed), compared with .57 for lab studies with medium effects. A preliminary conclusion from this report might be to view lab studies with small effect sizes skeptically, given the relatively high likelihood that these effects will not translate to the real world.
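For readers who like to see this intuition with numbers: here is a quick simulation sketch in Python showing how often two independent studies of the same true effect land on opposite sides of zero. The per-study sample size (n = 100) and the candidate values of ρ are my own illustrative assumptions, not figures from Mitchell's report.

import numpy as np

rng = np.random.default_rng(42)

def sample_r(rho, n):
    # Draw n pairs from a bivariate normal with true correlation rho
    # and return the sample Pearson correlation.
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return np.corrcoef(x, y)[0, 1]

def sign_flip_rate(rho, n=100, reps=5000):
    # Estimate how often a "lab" study and a "field" study of the
    # same true effect yield sample correlations of opposite sign.
    lab = np.array([sample_r(rho, n) for _ in range(reps)])
    field = np.array([sample_r(rho, n) for _ in range(reps)])
    return np.mean(np.sign(lab) != np.sign(field))

for rho in (0.0, 0.1, 0.3, 0.5):
    print(f"rho = {rho:.1f}: sign-flip rate ~ {sign_flip_rate(rho):.2f}")

When ρ = 0, sign agreement between two studies is literally a coin flip, and even a true effect of ρ = .1 flips sign between two n = 100 studies roughly a quarter of the time, while medium and large effects almost never do. That is exactly the pattern one would expect if many of the sign reversals Mitchell observed reflect sampling noise around near-zero effects rather than invalid lab findings.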

As a side note, young researchers enamored with the virtues of personality psychology (not naming names) might be tempted to boast about the flattering view of personality research in Mitchell's report: effect sizes in lab and field studies correlated .83, and only 1 of 22 lab effect sizes changed sign in the field. Is personality psychology really that much more reliable and externally valid than social psychology? No way! Lab studies falling under the umbrella of personality research do not generally attempt to create the complex situations that social psychological research does; lab personality research is usually fairly simple and often involves procedures much like those one would use in the field. For example, a number of studies examining the attributional styles of depressed individuals found nearly identical effect sizes in the lab and in the field. Few researchers would be surprised that self-report assessments of attributional style are completed in much the same way regardless of where the study is conducted. In contrast, studies of aggressive behavior, of which Mitchell examined many, may be hard-pressed to reproduce in a sterile lab environment the heat of the aggression we see in the real world. That's not a fault of the researchers, but a necessary evil of scientifically measuring social interaction.


Discussion (1 Comment)

Jess

Thanks for this review, Aaron! In my mind, this (and all the other navel-gazing articles) really underlines the importance of replication, both internal and external. It really bothers me when papers are published on the basis of one study that reports one (typically small) effect. The norm for all of us, upon getting a cool significant effect, should not be "let's run to publish it", but rather, "let's see if it replicates".

Re: personality psychology showing greater external validity: not surprising. Off the top of my head, here are two explanations (which are not mutually exclusive). First, correlational studies are almost NEVER published on the basis of single studies reporting ONE correlation. Personality researchers measure and correlate a bunch of stuff, and interpret results by closely examining correlation matrices, not by searching for one particular correlation. And if matrices reveal inconsistencies, few researchers (I hope) go on to report the one significant correlation that emerged. This is quite a contrast to the typical social psych study that includes one DV and gets written up and published on that basis. It's much easier to get a false positive when this is the norm than when you're looking for, and reporting, several different correlations/effects that are all consistent with a general theme. Second, the emphasis on external validity is one of the (many) philosophical bents that differentiate personality from social researchers, as we found in our 2008 paper (Tracy, Robins, & Sherman, JPSP) on this topic. As we argued there, this difference may be due in part to a difference in what researchers think they're doing. Personality researchers believe they are measuring constructs that predict real-world behaviors and outcomes. Social researchers do too, but many of them also think that the mental processes they study are important in themselves (reaction times tell us something about how the mind works, regardless of whether they predict anything else in the real world).
