Social Psychologist Admits Faking Results

Diederik Stapel, a prominent Dutch social psychologist, has admitted to fabricating data for dozens of published studies, as reported by New Scientist and Nature.

The full report on the extent of Stapel's fraud is in Dutch, so I can't tell exactly which of his findings were tainted; nevertheless, according to New Scientist, at least one of the affected studies is a widely reported one finding that disorder in a person's environment exacerbates racial stereotypes. I first read about this study when it was picked up by the Situationist ("The Disorderly Situation of Stereotyping"); others may have read about it at io9 ("Urban Decay Causes Ethnic Prejudice").

Given the usual state of the desks of most public interest lawyers – including mine – I guess I’m pretty thankful that these results were fabricated. I’m also thankful that the damage to the field of social psychology from this one person’s fraud is probably not too severe (according to the Nature article linked above, Stapel wasn’t yet sufficiently prominent that his work appeared in major social psychology textbooks, although he was widely cited and worked with a lot of people in his field).

Still, I'm concerned that this was not an isolated incident. To me, the fact that the extent of this fraud (in terms of the number of papers affected) exceeded that of similar incidents in other fields (New Scientist mentions cases in electronics and cancer research) just means that social psychology took longer to catch on than the fields of cancer and electronics research did. If your fraud detection system is not very robust, then for every fraud you do detect there are probably numerous frauds you haven't yet noticed.

This is especially problematic to me because, if you're interested in legal systems design, social psychology is the most pervasively relevant field of scientific inquiry. Judges and policymakers almost always base their decisions about how to structure legal systems at least in part on how they think people will behave in response to that structure. However, people's intuitions about how they, or others, will act in any given situation are often dead wrong (see, for example, my recent post about institutional abuse). When practiced responsibly, social psychology can give policymakers a better understanding of the likely effects their policies will have on people's actual behavior.

And on a more personal note, as an Autistic person, I've used cognitive and social psychology research to get a better understanding of how people work – frequently a much better understanding than you can get from someone trying to explain their own feelings and behavior through introspection. Luckily, "people get more bigoted when the room is messy" was never a big part of my model of human behavior, and the parts of my model that matter most (such as an understanding of social signaling, and of people's tendency to understand themselves in terms of their intentions while understanding others in terms of their actions) are well established and widely replicated.

None of this can work if a significant portion of social psychology data are downright fabricated. It's hard enough to deal with the pervasive over- and misinterpretation of results that actually exist (I'll save that for a later post; in the meantime, you might want to check out the critiques of autism research over at the Autism and Empathy Blog for an example of what I'm talking about). But people can critique studies for over- or misinterpretation just by reading them and observing that the experimental design and results lack conceptual validity. Since most studies don't include raw data, and it's hard to recognize fabricated data just by looking at a scatter plot, people just have to take it on faith that the experimenters aren't downright lying about what they did during the experiment and what happened as a result.
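
That's the real argument for sharing raw data: it makes even simple forensic checks possible. Here's a minimal Python sketch – entirely hypothetical, using simulated numbers that have nothing to do with the actual Stapel datasets or the methods his investigators used – of the kind of thing a reviewer could do with raw values but never with a published scatter plot. Hand-entered numbers tend to overuse "round" trailing digits, and a basic chi-square test on the last digit can flag that.

```python
# Toy illustration only: simulated data, not anything from the Stapel case.
# With raw data in hand, a reviewer can test whether trailing digits look
# like natural measurement noise (roughly uniform) or like numbers someone
# typed in by hand (which tend to overuse "round" endings such as 0 and 5).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "Honest" data: noisy measurements recorded to two decimal places.
honest = np.round(rng.normal(loc=5.0, scale=1.5, size=300), 2)

# "Fabricated" data: same distribution, but snapped to multiples of 0.05,
# mimicking a fabricator's habit of inventing round-looking numbers.
fabricated = np.round(rng.normal(loc=5.0, scale=1.5, size=300) * 20) / 20

def last_digit_counts(values):
    """Count how often each digit 0-9 appears in the hundredths place."""
    digits = np.round(values * 100).astype(int) % 10
    return np.bincount(digits, minlength=10)

for label, data in [("honest", honest), ("fabricated", fabricated)]:
    chi2, p = stats.chisquare(last_digit_counts(data))  # uniform null hypothesis
    print(f"{label:>10}: last-digit chi-square p = {p:.3g}")
```

A check like this only catches sloppy fabrication, of course; the point is simply that it can't be run at all unless the raw data are available to someone other than the person who claims to have collected them.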

I hope I'm overreacting, but it seems to me that the field is going to have to fundamentally change its peer review process to prevent this type of fraud from happening. Reviewers are going to have to insist on seeing not just a thorough description of how experimenters collected and analyzed their data, but also the raw data themselves, right down to any forms or computer programs used to collect them. The field has got to put more emphasis on replicating results in different labs, with different researchers. It might even need random visits by the Institutional Review Board to the sites where experiments are purportedly being conducted, to make sure they're actually taking place. That's going to add a lot of paperwork, and it's going to be a huge pain, but I can't really see another option.
