We perceive our moral judgments as grounded in principled reason. Cognitive science shows our reasons are less principled argument than public relations. (More)
The Righteous Mind, Part I: How We Judge
This week Morning Feature explores Jonathan Haidt’s new book The Righteous Mind: Why Good People Are Divided by Politics and Religion. Today we examine how we make moral judgments, and the respective roles of conscious reasoning and unconscious intuition. Tomorrow we’ll consider why we make moral judgments, and why we evolved to seek affirmation over truth. Saturday we’ll see the different moral models of progressives, conservatives, and libertarians, and how we can better discuss our differing moral judgments.
Jonathan Haidt earned his Ph.D. in psychology from the University of Pennsylvania. He is a professor of psychology at the University of Virginia, where his research includes applied social psychology, culture, ethics, and social cognition. Dr. Haidt was the principal developer of moral foundations theory, and is presently a visiting professor teaching business ethics at New York University. He also authored The Happiness Hypothesis, published in 2006.
A Man and His Chicken
Consider this hypothetical story from the opening chapter:
A man goes to the supermarket once a week and buys a chicken. But before cooking the chicken, he has sexual intercourse with it. Then he cooks and eats it. He lives alone and no one else knows he does this. Is this morally wrong?
Dr. Haidt suggests that progressive or libertarian Westerners usually give nuanced answers. They admit the man’s act is disgusting, yet recognize that the chicken is already dead and no one else knows about the act. The man is strange, but he is free to do as he wishes in the privacy of his home, as long as he doesn’t harm anyone. Yet Dr. Haidt also notes that conservative Westerners – and most non-Westerners – will disagree. They will argue that the man is morally wrong, even if he hasn’t harmed anyone, even if no one else knows what he does, because the act itself is inherently immoral.
Why do progressive or libertarian Westerners usually give different answers for this example than conservatives and non-Westerners? The answer lies in culture, and in how and why we make moral judgments.
The Elephant and the Rider
Recall how you felt as you read that hypothetical story. As you reached the end of the second sentence, you probably felt a flash of disgust. The third sentence may have pushed disgust into physical revulsion. You may have swallowed hard, as if to stop the vomit reflex. Perhaps you even felt that reflex. You had probably decided you did not like the man.
Then came the question: “Is he morally wrong?” Upon reading that, you may have begun a conversation in your mind. You began conscious reasoning, weighing your disgust against other factors – the chicken was already dead, he lives alone, no one else knows – based on moral principles such as harm and personal liberty. Having completed that process, you made a moral judgment. Or at least that’s how you experience it.
Dr. Haidt offers readers the metaphor of an elephant and a rider. The elephant is your unconscious mind, what Daniel Kahneman calls System 1. The rider is your conscious reasoning, what Kahneman calls System 2. The elephant makes immediate decisions, often without our awareness. The rider forms feelings and fragments of ideas into logical thoughts.
We imagine the rider as the pilot, looking at the path ahead, evaluating conditions and consequences, and steering the elephant based on analysis and reason. Most ethicists and philosophers back to the time of Plato have said that’s how our minds should work. And because we like to believe we think well, we think that’s how our minds do work.
The Press Secretary
Yet cognitive science now shows the rider is the elephant’s press secretary, not its pilot. That conscious reasoning you used to weigh your disgust against other factors and moral principles was not your rider deciding whether the man in the story was morally wrong. It was your rider seeking a story to justify a decision your elephant had already made.
Evidence for this includes neuroscientist Antonio Damasio’s work with people who had suffered damage to the ventromedial prefrontal cortex (VMPFC), a part of the brain behind and slightly above the bridge of the nose. People who suffer such damage will have no emotional reaction to a photo of kittens cuddling with a ball of yarn, or to a photo of a mutilated body. They can’t hear the elephant of their unconscious minds, but the rider of conscious reasoning still works. Given a situation and a set of options, they can forecast possible outcomes and list the benefits and risks. Yet even having done that conscious reasoning, most people who suffer damage to the VMPFC find it almost impossible to make even simple choices. They lose their jobs and often their families. Dr. Damasio’s research strongly suggests that emotions are essential to making decisions. Conscious reasoning can list the pros and cons of alternative options, but unless we feel something about those pros and cons … we can’t decide.
Based on this and several other studies – see my comment below – Dr. Haidt concludes that while the rider can prepare an issue brief, only the elephant can make a decision. The elephant uses parts of the brain and cognitive processes that evolved over hundreds of millions of years. The rider is comparatively recent, having evolved as our hominid ancestors began to develop speech. The elephant interacts with the world around us. The rider tells stories about the elephant and the world.
In Dr. Haidt’s model, the first task of conscious reasoning is not to seek objective truth, but to justify your intuitive decisions to yourself. Its next task is to justify those decisions to others. If they agree, the elephant is happy. If they disagree – and if their agreement matters to the elephant – the rider looks for another story. If the rider can’t find any justifying story that others will accept, the elephant may make a different decision … and the rider will tell a story to explain that change of mind.
And because your elephant knows our cultural ideal is of the rider-as-pilot, your elephant lets your rider be the star of its own stories. You tell yourself a story of making decisions by conscious reasoning – and you believe that story – because you know other people are more likely to accept those stories.
Tomorrow we’ll discuss why your rider evolved to be a press secretary rather than a scientist, and why we evolved to make moral decisions.
+++++
Happy Thursday!
So I know one thing, I will not be having chicken for dinner today! I find that story haunting and weird.
The more phone calls I make in search of volunteers, the easier it gets to spot what kind of elephant the person on the other end of the phone is riding. It makes a connection more likely if I can say in effect, “Nice elephant, I have one just like it.” or in values speak, “Obamacare has sure made a difference for you. I love hearing stories like yours and we need to make sure that we elect Obama and a Congress that will protect Obamacare and make progress on the rest of our agenda.”
We’ll discuss that more tomorrow and especially Saturday, addisnana. It’s easier for your rider to listen to someone else’s rider if your elephant likes walking alongside their elephant, and vice versa.
If you feel uncomfortable with someone, your elephant “leans away” (Dr. Haidt’s term) from them and your rider immediately starts telling a story to justify that avoidance. That avoid-story will highlight anything in the other person’s story that doesn’t fit your rider’s stories of the world.
If you feel comfortable with someone, your elephant “leans toward” them and your rider starts telling a story to justify that approach. That approach-story will highlight anything in the other person’s story that fits your rider’s stories of the world.
Why does your elephant choose to avoid or to approach? Lots of reasons. The other person may seem threatening (mocking, or aggressive). It may be a response to some behavior or characteristic that you don’t consciously recognize.
For example, one of Dr. Haidt’s colleagues did interviews on a street, near a trash can. The interviewer put a new bag in the can before each interview. Before half of the interviews – before the subjects could see him – the interviewer spritzed the bag with fart spray. Those subjects judged the characters in the stories much more harshly than subjects who were not smelling that odor … even if the subjects were not consciously aware of the odor. (This is an example of affect bias.)
In another study, a researcher did some interviews next to a hand sanitizer dispenser, and others far away from the dispenser. The subjects who were interviewed next to the dispenser, again, judged characters in the stories more harshly. (Cleanliness is next to godliness.)
In another study, one of Dr. Haidt’s colleagues hypnotized subjects and implanted some with a disgust-aversion to the word take. He doesn’t detail the process, but imagine if the hypnotic suggestion involved this story:
Before bringing them out of hypnosis, the researcher instructed them to forget what she had told them. She then read them several stories like the example in the article above, including one of these two stories:
Subjects primed with a disgust-aversion to the word take were much more likely to say Dan was morally wrong if they read the second story. If asked why, they offered explanations like “Dan is a popularity-seeking snob” or “I don’t know, it just seems like he’s up to something.”
In other words, their elephants were primed to lean away from the word take, and their riders told stories to justify that lean, although the riders weren’t aware of the actual reason for the aversion. Their reasons were rider-as-press-secretary tales to explain the elephant’s lean.
So yes: “Nice elephant, I have one just like it” matters … a lot.
Good morning! ::hugggggs::
My rider is looking forward to the next 2 installments, Crissie. Thanks for the discussion.
Thanks, Mike. Tomorrow will be a bit less theoretical, as we look at why we evolved to make moral judgments this way.
Good morning! ::hugggggs::
Oh yeah, I’m with Addisnana on the chicken dinner option.
Tenderloin for dinner tonight. Rotisserie chicken is also on the menu but it is too salty.
Tee hee….
Rhetorical question: would you have offered that reason if you hadn’t read the hypothetical story about the man and his chicken?
I say it’s a rhetorical question because your rider may tell a story that makes your elephant comfortable … but your rider doesn’t always know why your elephant leans one way or another. (See the examples in my reply to addisnana above.)
Good morning! ::hugggggs::
This is interesting, I’m looking forward to more, and I’m not entirely sure I agree with it. Yes, we do a lot of things automatically, then justify them. I’ve been aware of that for a long time. But about moral reasoning I’m not so sure, at least not in every case.
We all feel similar disgust with that story above. That leaves the question of why we can judge it so differently. This is an interesting take on cognitive dissonance: where I go my mind will follow.
Certainly that’s why it’s so important to get people to volunteer and contribute to political campaigns. Once they’ve taken an action, however small, they have made a commitment they’re likely to defend.
We’ll discuss tomorrow why we all feel disgust with that story, yet we don’t reach the same judgment. A preview: our elephants internalize stories told by other riders, if our elephants like walking with their elephants. Our moral judgments are more cultural than individual.
And this …
… is an important insight. Once you get someone to act on your behalf, they’re more prone to defend their actions … and thus defend you.
Good morning! ::hugggggs::
I know exactly when I formed my moral reasoning, and I did it myself. I was sixteen, steeped in all the varieties of “sin” I had been taught by the Catholic Church and my mother (who had plenty of her own). I still remember the moment, on a spring afternoon when, pondering what makes a good person, I cut through all the dross I was loaded with and came up with “If it doesn’t harm anyone, it’s not a sin.” Corollaries spun off from that, but I do believe it was at that instant I became a progressive, while living in a conservative town with a conservative family.
So I know I consciously reached my own moral metric. I won’t argue that I haven’t been using it automatically ever since, since I’m aware that I have.
But this is where I disagree with Haidt. Maybe most of the time the press secretary justifies the elephant, but there are times when at least some of us point our elephants in a different direction quite consciously (and sometimes without other elephants to walk with), and demand the press secretary change his/her tune.
Or maybe I’m all mixed up on this. I guess I’ll be in a better position to understand and form an opinion after the next couple of days. Thanks!
We’ll discuss this more tomorrow, but Dr. Haidt proposes six moral ‘tastebuds’ – Harm, Fairness, Authority, Loyalty, Purity, and Liberty – and theorizes that we might inherit different balances of each, just as we seem to inherit different preferences for food tastes (salt, sour, bitter, sweet, umami). But he also proposes that which moral ‘tastebuds’ we emphasize is strongly influenced by our cultures, just as are our tastes for food. The progressive moral matrix that emphasizes Harm and Fairness as first principles – with less weight given to Authority, Loyalty, Purity, and Liberty – was percolating through the U.S. by the late 1950s. It was the implicit basis of studies like the Milgram Experiment, and of developmental psychology theorists like Piaget and Kohlberg.
It’s impossible to say how much of your moral transformation was your predisposition, how much was reasoning, and how much was picking up on the emerging Zeitgeist. Again, most ethicists and philosophers since Plato have lauded the ideal of Reason, and Western culture lauds the Individual … so your elephant is more comfortable with stories that credit your own rider.
Good morning! ::hugggggs::
Oh, some of it was definitely predisposition, an inherent sense of justice, right and wrong. I grew up in a time when a lot of that was sorely tested. At a very early age I was exposed to photographs from concentration camps, surrounded by the threat of a nuclear war that often kept me awake at night, and very strongly endowed with a sense of “right and wrong.”
So I won’t take mental credit for all of this. I can certainly look back and see what seemed to be inherent. How many six-year-olds argue with a nun that Judas couldn’t possibly have gone to hell because Jesus needed someone to betray him to carry out his mission?
It was there, all along, often drawing gasps. 😉 But I know the moment I put the pieces together, and I’m quite certain I was swimming upstream in my home, my church and my high school.
So while I generally agree with what Haidt is saying, and it’s very important, I still want to argue that not every elephant keeps going with elephants it likes. But I guess that isn’t the point, is it? The point is that most of the time, in fact almost all of the time, we justify ourselves after the fact. And that’s something we need to be aware of.