Category Archives: Experimental Psychology

Dehumanization and the Brain

Here’s a great article, via the Situationist:

A Brain’s Failure to Appreciate Others May Perpetuate Atrocities.

No, it’s not about how Autistic people are responsible for genocide. Although the actual published journal article is behind a paywall, from what I can tell the researchers (including Susan Fiske, on whom I have a huge intellectual crush) are not really talking about autism at all but rather about the general population.

The study showed that Princeton undergraduates (that is, not a representative sample of the whole population, but also not a sample of only people with a particular mental disability) systematically shut off the social processing centers of their brains (the centers generally associated with empathy and social reasoning) when looking at or thinking about people whom they considered disgusting and less-than-human: people thought to be homeless, addicted to drugs, immigrants, or poor. The social areas lit up normally when participants looked at pictures of other individuals.

This suggests that bias isn’t just about thinking some people are bad, but in at least some situations it’s about thinking of people as not human. This, of course, is something that activists have been pointing out for a long time, but it’s cool to see that their phenomenological, introspective description of what’s happening matches up pretty closely with the neuropsychological data (see also the same researchers’ findings that men who scored high on “hostile sexism” turned off the social centers of their brain while looking at scantily clad women).

What’s more, when participants looked at pictures of dehumanized individuals, their brains showed activity in the areas known to govern disgust, attention, and cognitive control. The researchers suggested that disgust may play a role in the shutdown of the “social” areas of the participants’ brains.

The fact that disgust is part of the equation here is particularly interesting, because it suggests (to my mind) that the underlying cognitive process evolved as a response to contagious disease or vermin. Although most people at least intellectually understand that people with contagious diseases are still people, societies have historically shunned individuals who appeared to have a life-threatening contagious disease (most notably people with leprosy) in order to keep the disease from spreading through the population. Of course, shunning the sick ran somewhat against human nature – we naturally want to be kind to others who are suffering – so some mechanism had to develop by which disgust could trump empathy.

The advent of sanitation has made this tactic unnecessary (although people perceived as having a life-threatening contagious illness are still frequently subjected to serious discrimination), but the same mechanism is still applied to people who are considered unworthy of empathy for some other reason, particularly outsiders (the immigrants), people who are perceived as “diseased” (the substance abusers and, to some extent, homeless people, who are widely presumed to have some sort of mental illness), and people dealing with unjust situations that are perceived as intractable and not worth trying to fix (the poor).

The possibility that people are more likely to dehumanize a person when they associate the person with the idea of disease has important disability and human rights implications. For example, it may explain why promoting “medical” explanations of mental disability can paradoxically increase stigma while decreasing blame. It explains why, whenever people try to distinguish “high-functioning” Autistic advocates from the “really autistic” people who are not-quite-human and need to be cured, they invariably end up talking about gross things the person does, like playing with poop.

At the risk of over-interpreting this study, I’d say it supports the arguments of activists who object to medicalizing terminology that characterizes a long-term disability as a “disease,” “disorder,” “illness,” or “epidemic,” especially when the “disease” in question is considered severe and “incurable.” While medicalizing terms may discourage society from blaming the disabled individual, the individual may also be considered less than human and thus less deserving of human rights such as self-determination, bodily autonomy, and full participation in the community.

It would also be interesting to see further research on exactly when this dehumanization response occurs and when it doesn’t, so that we can think of ways to prevent it. I suspect that people will show less of a dehumanizing response when the “disease” is perceived as either mild (such as the flu), presently curable (such as malaria), or the result of an injury and not a disease. This coincides with findings that people are more likely to empathize with individuals with mental illness if they are told that the person’s distress is caused by adverse life situations (that is, an “injury” model rather than a “disease” model). Moreover, one of the pictures used in the present study was of a “disabled” woman (they don’t say what the woman’s disability was), and this was apparently not a picture that elicited a dehumanizing response. My guess, without seeing the article, is that this woman’s disability appeared to have been caused by an injury (for example, a person who looks “normal” except that she is using a wheelchair) rather than a disease.

By analogy, would people be more likely to, say, view homeless individuals as fully human if they were told that homelessness is often a temporary life situation and that many people who experience homelessness ultimately find housing and have stable, fulfilling lives? Would people be less likely to dehumanize poor people if they were told that poverty arises from external social forces that can be changed? Or if they saw poverty as an injury caused by some sort of injustice?

I really like this sort of research. Understanding how people think about marginalized groups is a great step toward getting people to think about those groups better. And I’m not self-deluded enough to believe that anyone can truly understand how people think without doing some genuinely good research on the topic.



Filed under Being Weird, Disabilities, Experimental Psychology, Health Care

Social Psychologist Admits Faking Results

Diederik Stapel, a prominent Dutch social psychologist, has admitted to fabricating data for dozens of published studies, as reported by New Scientist and Nature.

The full report on the extent of Stapel’s fraud is in Dutch, so I can’t tell exactly which of his findings were tainted; nevertheless, according to New Scientist, the affected studies include a widely reported one finding that disorder in a person’s environment exacerbates racial stereotyping. I first read about this study when it was picked up by the Situationist (“The Disorderly Situation of Stereotyping”); others may have read about it at io9 (“Urban Decay Causes Ethnic Prejudice”).

Given the usual state of the desks of most public interest lawyers – including mine – I guess I’m pretty thankful that these results were fabricated. I’m also thankful that the damage to the field of social psychology from this one person’s fraud is probably not too severe (according to the Nature article linked above, Stapel wasn’t yet sufficiently prominent that his work appeared in major social psychology textbooks, although he was widely cited and worked with a lot of people in his field).

Still, I’m concerned that this was not an isolated incident. To me, the fact that the extent of this fraud (in terms of the number of papers affected) exceeded that of similar incidents in other fields (New Scientist mentions cases in electronics and cancer research) just means that social psychology took longer to catch on than those fields did. If your fraud detection system is not very robust, then for every fraud you do detect, there are probably numerous frauds you haven’t yet noticed.

This is especially problematic to me because, if you’re interested in legal systems design, social psychology is the most pervasively relevant field of scientific inquiry. Judges and policymakers almost always base their decisions on how to structure legal systems at least in part on how they think people will behave in response to that structure. However, people’s intuitions about how they, or others, will act in response to any given situation are often dead wrong (see, for example, my recent post about institutional abuse). When practiced responsibly, social psychology can give policymakers a better understanding of the likely effects that their policies will have on people’s actual behavior.

And on a more personal note, as an Autistic person, I’ve used cognitive and social psychology research to get a better understanding of how people work – frequently a much better understanding than you can get from someone trying to explain their own feelings and behavior through introspection. Luckily, “people get more bigoted when the room is messy” was never a big part of my model of human behavior, and the parts of my model that are most significant (such as an understanding of social signaling, and of people’s tendency to understand themselves in terms of their intentions while understanding others in terms of their actions) are pretty well-established and widely replicated.

None of this can work if a significant portion of social psychology data are downright fabricated. It’s hard enough to deal with the pervasive over- and misinterpretation of results that actually exist (I’ll save that for a later post; in the meantime, you might want to check out the critiques of autism research over at the Autism and Empathy blog for an example of what I’m talking about). But people can critique studies for over- or misinterpretation just by reading them and observing that the experimental design and results lack conceptual validity. Since most studies don’t include raw data reports, and it’s hard to recognize fabricated data just by looking at a scatter plot, people just have to take it on faith that the experimenters aren’t downright lying about what they did during the course of the experiment and what happened as a result.
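To make the raw-data point concrete: one reason access to raw data matters is that even simple statistical screens for fabrication need the actual numbers. Below is a minimal sketch of one generic screen, a terminal-digit uniformity check – my own illustration, not anything used in the Stapel investigation, and the data are made up:

```python
# Minimal sketch of a digit-frequency screen -- my own illustration of a
# generic technique, not something used in the Stapel investigation.
# Genuine measurement data usually have roughly uniform trailing digits;
# human-fabricated numbers often don't.
import numpy as np
from scipy.stats import chisquare

def terminal_digit_screen(values):
    """Chi-square test of whether trailing digits are uniform over 0-9."""
    last_digits = [abs(int(v)) % 10 for v in values]
    observed = np.bincount(last_digits, minlength=10)
    return chisquare(observed)  # expected frequencies default to uniform

# Hypothetical raw reaction times in milliseconds (toy-sized here; a real
# screen would need many more observations for the test to be valid)
raw_data = [512, 498, 655, 432, 515, 505, 495, 535, 525, 545, 615, 505]
stat, pvalue = terminal_digit_screen(raw_data)
print(f"chi-square = {stat:.2f}, p = {pvalue:.3f}")
```

A screen like this is obviously no substitute for replication, but it can’t even be run unless reviewers see the data themselves rather than a summary table.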

I hope I’m overreacting, but it seems to me that the field is going to have to fundamentally change its peer review process to prevent this type of fraud from happening. Reviewers are going to have to insist on seeing not just a thorough description of how experimenters collected and analyzed their data, but also the raw data themselves, right down to any forms or computer programs used to collect them. The field has got to put more emphasis on replicating results in different labs, with different researchers. It might even need random visits by the Institutional Review Board to the sites where experiments are purportedly being conducted, to make sure they are actually taking place. It’s going to add a lot of paperwork, and it’s going to be a huge pain, but I can’t really see another option.


Filed under Being Weird, Experimental Psychology, Practicing Law While Weird, Regulation, The Law as Applied to Weird People & Situations

The Psychology of Cover-Ups

Time Magazine has a great article on the psychology of cover-ups in the context of the recent events at Penn State (trigger warning for discussions of sexual abuse). Here is a choice snippet:

When the actions of a group are public and visible, insiders who behave in an unacceptable way — doing things that “contravene the norms of the group,” Levine says — may actually be punished by the group more harshly than an outsider would be for the same behavior. “It’s seen as a threat to the reputation of the group,” says Levine.

In contrast, when the workings of a group are secretive and hidden — like those of a major college football team, for instance, or a political party or the Catholic priesthood — the tendency is toward protecting the group’s reputation by covering up. Levine suggests that greater transparency in organizations promotes better behavior in these situations.

The article also makes some other important observations: that people are more likely to intervene if they think their intervention will be supported by the community around them rather than met with hostility for “butting in” to issues that aren’t their business, and that people are less likely to intervene when the bad actor is a respected authority figure and the victim is a member of a marginalized group (for example, a “troubled teen”).

All of these observations are incredibly important not only to the recent Penn State case but also to the law of institutions in general. There’s an institutional bias in our society that is particularly evident in our disability services systems (see, e.g., Bruce Darling’s testimony for ADAPT (accessible PDF)), criminal justice systems, and child services systems. Although abuse and other human rights violations in these institutions are rampant (see any of the links above), many defenders of institutional services delivery will explain abuse as the work of a few “bad apples” and not a problem with the institutions themselves. These explanations have a lot of intuitive appeal to those who have never actually experienced institutionalization or tried to be a whistleblower themselves. People would like to think that they’d report abuse all the way up the institutional hierarchy and also to the police and the media, and that anyone who fails to do so must simply be a bad person who is not like them in any way.

However, as this post by Amanda Forest Vivian illustrates, it’s incredibly difficult even for highly moral individuals to report abuse in many institutional and “community” programs. Like the football staff at Penn State, staff at institutional programs (and at many “community” programs) tend to form cohesive groups and are invested in protecting their reputation. Because these programs operate more or less out of sight of the rest of the community, they tend to respond to misbehavior by covering it up rather than publicly punishing their own members, as Levine noted in the Time article. Moreover, lower-level staff members often justifiably fear that whistleblowing will not actually end the abuse but instead lead to retaliation by other staff members and supervisors (especially when the perpetrator is higher-ranking). And even when a low-ranking staff member is disturbed enough to report abuse to a supervisor, as McQueary did at Penn State, they frequently do not feel empowered to follow up and report to outside authorities if the supervisor fails to take action; to do so would likely be perceived as insubordination.

This is why social science research on the environmental influences on social policing is so important. Unless community members and policymakers understand that certain environmental factors perpetuate and enable institutional abuse, they won’t be able to commit to eliminating those factors from our service delivery systems.

(h/t to the Situationist for linking to the Time article).


Filed under Children's Rights, Crime and Punishment, Disabilities, Experimental Psychology, Health Care, Regulation, The Law as Applied to Weird People & Situations

New Milgram research

The Situationist Blog recently posted about an interesting new study on the human ability to inflict pain on others.

Dominic J. Packer, of Ohio State University, performed a statistical meta-analysis of several of the original Milgram experiments, in which participants were asked to administer progressively severe electric shocks to another individual (in reality an actor who was not receiving any shocks). Despite the victim’s expressions of severe pain, pleas to be released, and, eventually, silence, about two-thirds of participants continued “shocking” the victim all the way up to 450 volts. These participants were not sadistic or callous – in fact, they usually showed signs of extreme distress – but they were unable to resist the researcher’s persistent directions that the experiment “must” continue.

Ethical concerns prevent psychologists from conducting this type of study again, at least in its exact original form. However, Packer was able to statistically analyze eight studies that Milgram performed decades ago.

The meta-analysis indicated that, of the participants who disobeyed, about 37% did so at 150 volts – the point at which the “victim” first asked to end the study. Considering that there were 28 other potential moments at which participants could have stopped, a cluster of that size around 150 volts is highly significant.

The other most common points of disobedience were at 315 volts, 300 volts, and 180 volts. However, although the overall level of disobedience varied across the eight studies, most of this variation happened at 150 volts, while the rate of disobedience at other points stayed largely the same. Thus, a variation in the experiment that made people more likely to disobey did so by making them more likely to disobey at the moment the “victim” first asked to leave, not at some other point.
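To get a feel for why that cluster matters, here’s a back-of-the-envelope calculation – my own illustration, not Packer’s actual analysis, and the sample size is hypothetical. If disobedient participants had stopped uniformly at random across the 29 possible stopping points (150 volts plus the 28 others), the odds of about 37% of them stopping at the same point would be vanishingly small:

```python
# Back-of-the-envelope only -- not Packer's actual analysis; the sample
# size below is hypothetical. Null hypothesis: a disobedient participant
# is equally likely to stop at any of the 29 possible stopping points.
from scipy.stats import binomtest

n_disobedient = 100    # hypothetical number of disobedient participants
n_stopped_150v = 37    # ~37% of them stopped at the 150-volt mark
p_uniform = 1 / 29     # chance of any single stopping point under the null

result = binomtest(n_stopped_150v, n_disobedient, p_uniform,
                   alternative="greater")
print(f"P(37+ of 100 stop at 150 V by chance) = {result.pvalue:.1e}")
```

Even with these made-up numbers, the p-value is astronomically small, which is presumably why the 150-volt cluster is treated as the study’s key finding.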

But wait, there’s more: psychologist Jerry Burger, of Santa Clara University, has recently replicated Milgram’s experiment. As I pointed out above, ethical rules prohibit psychologists from performing experiments identical to Milgram’s, so Burger’s experiment ended after the 150-volt mark. As in the original experiments, a great majority of the subjects administered the 150-volt shock – despite the victim’s request to leave – and would have been willing to continue had the experiment not been stopped.

Packer calls attention in his study to its potential implications for situations where potential victims have no recognized right to leave, such as the treatment of prisoners. Since participants did not seem to respond to escalating expressions of pain, it is not reasonable to expect interrogators to stop an interrogation practice simply because it appears too painful. But the study may be even more relevant to the treatment of people (especially children) with disabilities, whose protests against abusive treatment are frequently ignored and dismissed.

It could, for example, shed light on an incident in which a prank phone call led caretakers of children with disabilities to shock them dozens of times within a few hours. In that particular group home, electric shock was used as an “aversive therapy” for those children, authorized through a “substituted judgment” proceeding in which a judge decides that the child “would have consented” to the treatment were they competent to make such a choice. This is even worse than an interrogation situation, where victims’ requests to end the interrogation are simply not respected; in the case of these children, at no point are the child’s protests and attempts to avoid the shock even considered the child’s own choice.

Alternatively, we can imagine (rather optimistically) that in situations where people aren’t paying attention to requests to stop, they may compensate by paying attention to other factors. For example, the people who ended the experiment at 150 volts may have reasoned up to that point that their victim was implicitly consenting to the shocks by not asking to be let free; such people might be more attentive to other signals that it’s “time to stop” if they knew the victim was unable to make such a request, or if they had been told to disregard such requests as illegitimate or inauthentic. It may seem hard to imagine such a result given the widespread abuse of people with cognitive disabilities, but remember that even in the Milgram experiments, the majority of participants ignored the requests of an apparently competent adult to end the experiment. Thus, even if people do begin focusing on other factors when their victims are unable (or have no right) to ask them to stop, we wouldn’t necessarily expect most people to actually stop. That said, I don’t know of any studies that would support or refute this theory.

Overall, these two studies emphasize the vulnerability of people whose choices, even choices to avoid pain, are disregarded or seen as not really their own. Although the choices of even perceived “competent” choice-makers are often disregarded in the face of authoritarian pressure, it is respect for those choices that seems most important in enabling people to resist such pressure. Take away that respect, and the hope of humane treatment could grow increasingly dim.


Filed under Disabilities, Experimental Psychology, The Law as Applied to Weird People & Situations