Most of our judgments about fairness are fast and sure. Alas, a sense of confidence does not guarantee those judgments are well-reasoned.

Fair or Unfair? Part II: Thinking Fast and Sure

This week Morning Feature looks at the deceptively complex concept of fairness. Yesterday we reviewed Jonathan Haidt’s research in The Righteous Mind on how we form moral judgments and what progressives and conservatives mean by fairness. Today we see how Daniel Kahneman’s Thinking, Fast and Slow exposes the limits of intuition and reason. Tomorrow we’ll conclude with how to discuss fairness with Fred, our archetypal median voter.

“Pay attention”

You’ve probably been asked to “pay attention.” You may have asked a child, friend, or coworker to “pay attention.” We think of it as a metaphor, but Daniel Kahneman began Thinking, Fast and Slow by explaining why we should take the phrase “pay attention” literally. Each of us has what Dr. Kahneman calls an “attention budget,” and it’s a lot smaller than we think.

For example, reading does not usually require all of our attention. Most of us can read and still sip coffee, listen to music, and even count the number of times the letter “r” appears in the next sentence. But reading requires more attention than we have left when we are multiplying 17×24 or adding 3 to each digit in the number 642531. (Yes, there were seven r’s.) We can read, or we can solve those problems, but we can’t do both at the same time. Together, those two tasks exceed our attention budget, and that budget helps us understand the limits of what Dr. Kahneman calls “System 2,” the conscious, analytical mind.

Because System 2 is so labor-intensive, we rely primarily on what Dr. Kahneman calls “System 1,” the subconscious, intuitive mind. We don’t imagine ourselves “thinking” about how to walk, although our brains send and receive a flurry of complex signals to coordinate our muscles, vision, inner ear, and other cues to keep us on balance, on course and – if we’re walking with a friend – on pace. The “thinking” we use to walk happens with System 1 and we rarely even notice it.

“But why?”

Yet as we saw yesterday, we make decisions with System 1 (what Jonathan Haidt calls the Elephant), and that includes moral judgments about fairness. When we involve System 2 (Dr. Haidt’s Rider), it’s usually as a press secretary, to explain and justify those quick, effortless, subconscious, intuitive System 1 decisions.

Alas, we’re much better at explaining and justifying our decisions than we are at testing and weighing them. Dr. Kahneman summarizes the empirical evidence for a long list of common cognitive biases, from the ambiguity and anchoring effects to the zero-risk and zero-sum biases. Many of those human foibles enable the motivated reasoning at which System 2 excels: convincing ourselves and like-minded friends that we’re correct, regardless of evidence to the contrary.

Indeed we’re so wired to answer the question “but why?” that we’ll accept a statement with a cause – even an unlikely cause – more readily than a statement with no cause at all. Consider which of the following seems more likely:

  1. A flood in California kills 1000 people.
  2. An earthquake in California causes a flood that kills 1000 people.

There are many possible causes for floods, and answer #1 (tacitly) includes all of them. Every earthquake-caused flood is also a flood, so statistically the broad statement #1 must be at least as likely as statement #2, which includes only one of many possible causes. But #2 specifies a cause, appealing to our System 1 Elephant’s “but why?” craving, so it feels more likely than #1, which offers no (specific) cause.
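
For readers who like to see the arithmetic, here is a minimal sketch of that probability rule, written in Python with made-up placeholder numbers (the real probabilities are unknown and beside the point):

  # A minimal sketch of the conjunction rule behind the flood example.
  # The numbers below are made-up placeholders, not real estimates.
  p_flood = 0.01              # chance of a 1000-death flood in California, from any cause
  p_quake_given_flood = 0.3   # assumed share of such floods caused by earthquakes

  # The narrow event "an earthquake causes a flood" can never be more likely
  # than the broad event "a flood happens," because every earthquake-caused
  # flood is also counted as a flood.
  p_quake_and_flood = p_flood * p_quake_given_flood

  print(p_flood)            # 0.01
  print(p_quake_and_flood)  # 0.003 -- always less than or equal to p_flood

Whatever numbers you plug in, the combined statement comes out no more likely than the simple one. Our Elephant just doesn’t do the multiplication.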

“Above average”

But you’re smarter than that … just like the college students who say they are “99% certain” of their answers on quizzes, despite getting 40% of those answers wrong. We’re all subject to this overconfidence effect, including the 90% of college professors who say they’re “above average” instructors.

Indeed we’re prone to overconfidence even when we know we should be wary. Brokers will confidently tell you which stocks will rise or fall. Guidance counselors will confidently tell you which students will succeed or fail in college. And a military psychologist once confidently told the Israeli army which officer candidates would perform well in action. That psychologist was Dr. Kahneman, and he writes that he remained very confident of his officer candidate assessments … even after a statistical review found that his predictions were little better than random guesses.

And as Dr. Kahneman explains, the intuitive System 1 isn’t equipped to parse complex events where multiple causes interact with random factors. Unless we have a lot of practice under conditions that provide clear and immediate feedback, our Elephant’s best guess at a regression analysis is more likely to fit our beliefs than the evidence.

“Is it possible?”

That’s a problem when we look at public policy issues of fairness, because those almost always involve “complex events where multiple causes interact with random factors.” Consider the gender pay gap. Although there’s some minor disagreement on exact percentages, depending on how pay is calculated, progressives and conservatives generally agree that the median income for women is less than the median income for men.

The disagreement arises when we ask “is that fair or unfair?” As we saw yesterday, progressives assume that people’s contributions are basically equal (unless evidence proves otherwise) and that unequal outcomes suggest an unfair system. By contrast, conservatives assume that people’s contributions are unequal (unless evidence proves otherwise) and that unequal outcomes suggest a proportional-thus-fair system.

In theory, that parenthetical (unless evidence proves otherwise) should lead us to the same conclusion. If evidence disproves the progressive assumption, we should recognize that and agree with conservatives. If evidence disproves the conservative assumption, they should recognize that and agree with us.

But what if the evidence shows a complex interplay of multiple causes and (in any individual example) random factors? Our intuitive System 1 Elephants don’t parse that kind of problem well … so our analytical System 2 Riders are free to highlight evidence that supports our implicit (System 1 Elephant) assumptions and ignore evidence that challenges our implicit assumptions.

And once our System 2 Rider can tell a plausible, coherent story – one that answers our “but why?” craving – we’re likely to be sure we’re right …

… especially if most of the people we care about agree with us.

Tomorrow we’ll see why “the people we care about,” not “the facts,” is the key to talking about fairness.

+++++

Happy Valentine’s Day!