Thinking, Fast and Slow, Part II – Just Because!
For the next two weeks, Morning Feature will look at Daniel Kahneman’s book Thinking, Fast and Slow. Yesterday we saw the two ways we think, fast and slow. Today we examine our mental shortcuts and biases. Tomorrow we’ll explore overconfidence. Next Thursday we’ll consider how we make choices. Next Friday we’ll meet our two selves, experience and memory. Next Saturday we’ll conclude with how our improving model of human thinking should shape public policy.
Daniel Kahneman is a Professor Emeritus of psychology and public affairs at Princeton University’s Woodrow Wilson School of Public and International Affairs. In collaboration with Amos Tversky, Dr. Kahneman was a pioneer in the psychology of judgment, decision theory, and behavioral economics. He was awarded the 2002 Nobel Prize in Economics for his work in prospect theory, and has taught at universities in Israel and across the U.S.
But … why?
We all asked that question as children, and most parents among us have tried to explain, and tried to explain the explanation, and tried to explain that, until finally giving up with “Just because!” We didn’t like hearing that as children, and it turns out we still don’t like hearing it as adults. We want events to have causes, and for good reason. Causes make events more predictable, as well as more controllable. If we create the cause, we can elicit the event. If we prevent the cause, we can avoid the event. Eliciting good events, and avoiding bad events, begins with understanding causes.
That drive to understand causes has brought huge benefits. No longer accepting “just because god willed it” as a reason spurred the Enlightenment and the birth of modern science. From Isaac Newton to James Clerk Maxwell to Albert Einstein to Niels Bohr and continuing today, our better understanding of physical causes allowed us to make predictions enabling machines and devices that were once only dreams, all birthed in the childlike question: “But … why?”
Our brains are wired to ask that question, and to find answers for it. In fact, our brains ask and find answers for that question so well … that we find answers even when they don’t exist.
In the late 1990s, Bill Gates tumbled onto a statistic: small high schools are far more likely to make their states’ “Top 25” lists than larger high schools. Causes leaped off the page: smaller high schools could give more individual attention, were less intimidating, had fewer cliques and fewer fights, and so on. From 2000 to 2010, the Gates Foundation and other groups poured over $2 billion into a project to shrink the size of American high schools, breaking up giant “factory schools” into “small learning communities” of 400 students or fewer. Over the past two years, Gates has come to admit the smaller-schools experiment failed, and it failed for a good reason.
Or rather, it failed for an absence of reasons.
While we can imagine many reasons that smaller high schools outperform larger high schools, the statistic on which Gates based his $2 billion project was a predictable sampling error. Random events in a small sample set are more likely to yield extreme results than random events in a large sample set. For example, if Jack draws four random marbles from a jar with an equal number of white and red marbles, there is a 1-in-16 chance he’ll get all white marbles. If Jill draws seven random marbles from the same jar, there’s only a 1-in-128 chance she’ll get all white marbles. Four Marbles Jack will get the extreme result of all white marbles eight times more often than Seven Marbles Jill … just because he’s drawing fewer marbles.
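The marble arithmetic can be checked directly. A minimal sketch, treating each draw as an independent 50/50 chance (a fair approximation for a well-stocked jar):

```python
from fractions import Fraction

# Probability that every marble drawn is white, assuming each draw is an
# independent 50/50 chance between white and red.
def p_all_white(draws):
    return Fraction(1, 2) ** draws

jack = p_all_white(4)    # Jack draws four marbles
jill = p_all_white(7)    # Jill draws seven marbles
print(jack, jill)        # 1/16 1/128
print(jack / jill)       # 8 -- Jack hits the extreme eight times as often
```

The only difference between Jack and Jill is sample size, yet the extreme result is eight times more likely for the smaller sample.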
Similarly, a small school needs only a handful of random overachievers to land on the “Top 25” list … and only a handful of random underachievers to land on the “Bottom 25” list. Indeed, had Gates or others involved in the project looked at the “Bottom 25” lists, they might have noticed that those were also dominated by smaller schools. Relying only on the “Top 25” lists was a classic example of System 1’s What You See Is All There Is (WYSIATI). And we’re very good at finding causes with questionable evidence.
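A toy simulation makes the same point. In the sketch below (the school counts, sizes, and score distribution are invented for illustration, not the actual Gates data), every student’s score is pure chance, so no school is “really” better than any other, yet the small schools still crowd both ends of the rankings:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Every student's score is pure luck -- no school is "really" better.
def school_average(size):
    return sum(random.gauss(500, 100) for _ in range(size)) / size

# Hypothetical district: 200 small schools and 200 large ones.
schools = [("small", 100)] * 200 + [("large", 2000)] * 200
ranked = sorted(schools, key=lambda s: school_average(s[1]))

top_25 = [kind for kind, _ in ranked[-25:]]
bottom_25 = [kind for kind, _ in ranked[:25]]
print("small schools in Top 25:   ", top_25.count("small"))
print("small schools in Bottom 25:", bottom_25.count("small"))
# Small schools dominate both lists: their averages swing more widely,
# just because they average fewer students.
```

Averaging 100 random scores produces far more spread than averaging 2,000, so by luck alone the small schools take nearly every slot on both the best and worst lists.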
Meet Tom W
Our brains are wired to tell and understand stories. Even preschool children can invent stories, and the events in their stories will have causes. We also remember stories, especially if the events in the stories are vivid and dramatic. We are intuitive storytellers. But we are not intuitive statisticians.
For example, Center University offers four graduate programs: 40% of the graduate students are in humanities, 30% are in law, 20% are in business school, and 10% are in computer science. Tom W is a graduate student at Center University. Here’s a personality sketch of Tom W written during his senior year of high school by a psychologist, based on psychological tests of uncertain validity:
Tom W is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people, and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.
Take a moment and list Tom’s likely graduate programs, from 1 (most likely) to 4 (least likely) before reading on.
Welcome back. This is a System 2 task, balancing lots of details and facts, but Dr. Kahneman wrote Tom’s bio to include details – “corny puns,” “sci-fi,” “neat and tidy systems,” “little feel and little sympathy for other people” – that books and movies and maybe a few actual encounters have led us to associate with geeks.
If you rated computer science #1 as Tom’s most likely program, you’re not alone. Most people who answer this problem do. Yet statistically, computer science is the least likely answer (only 10% of CU’s graduate students are in that program) and the problem admits that Tom’s bio is both out of date (“written during his senior year of high school”) and not very reliable (“based on psychological tests of uncertain validity”).
Your estimate was based on representativeness: how well Tom’s bio matched your System 1 archetypes for students of humanities, law, business, or computer science. Your System 2 then accepted those reasons that Tom might choose one program and not choose another, despite the statistical probabilities … and despite the admission that your evidence (Tom’s bio) was out of date and not very reliable.
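Base rates and representativeness don’t have to fight; Bayes’ rule combines them. A sketch with hypothetical “fit” numbers (the priors come from the problem, but the likelihoods below are invented for illustration; suppose the bio is five times as likely to describe a computer-science student as any other):

```python
# Base rates (the priors) come straight from the problem statement.
priors = {"humanities": 0.40, "law": 0.30,
          "business": 0.20, "computer science": 0.10}

# Hypothetical likelihoods: how strongly the sketch "fits" each field.
fit = {"humanities": 0.10, "law": 0.10,
       "business": 0.10, "computer science": 0.50}

weighted = {field: priors[field] * fit[field] for field in priors}
total = sum(weighted.values())
posterior = {field: round(w / total, 3) for field, w in weighted.items()}
print(posterior)
# Even a sketch that fits computer science five times better leaves it
# only narrowly ahead of humanities -- the 40% base rate does real work.
```

Computer science ends up around 36% and humanities around 29%: far from the near-certainty System 1 reports when it pattern-matches “corny puns” and “sci-fi” to a geek archetype.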
Which of the following natural disasters do you think is more likely?
1. A flood in California kills 1000 people.
2. An earthquake in California causes a flood that kills 1000 people.
If you answered #2, you yielded to your mind’s craving for causes.
In fact #1 is far more likely, because a severe flood has several potential causes: a severe rainstorm, fast snowmelt, a burst dam, and yes, an earthquake. It is logically impossible for one specific cause (an earthquake) to be more likely than all potential causes (including an earthquake) combined. Still, #2 is plausible (earthquakes happen in California) and it is more coherent (it includes a cause) … so most people decide that disaster is more likely.
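The underlying rule, the conjunction rule, is simple arithmetic: the probability of “earthquake AND flood” can never exceed the probability of “flood” alone. A sketch with invented numbers chosen only for illustration:

```python
# The conjunction rule: P(quake AND flood) = P(quake) * P(flood | quake),
# which can never exceed P(flood from any cause), since the quake scenario
# is just one slice of it. All numbers below are invented.
p_flood_any_cause = 0.010     # annual chance of a deadly California flood
p_quake = 0.050               # annual chance of a major California quake
p_flood_given_quake = 0.10    # chance such a quake triggers such a flood

p_quake_flood = p_quake * p_flood_given_quake
print(p_quake_flood, "<=", p_flood_any_cause)
# The more detailed, more "coherent" story is the less probable one.
```

Adding detail to a story always narrows it, and narrowing a story can only lower its probability, even as it raises its plausibility.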
That’s a big problem if we’re trying to assess risks and formulate policies. Given a choice between statistical probability (the likelihood of obesity-related death) and a vivid story (Al Qaeda sleeper cells setting off dirty bombs in a subway) … we go with the vivid story. We may even support laws allowing indefinite military detention of terrorist suspects, even U.S. citizens captured in the U.S., “because the U.S. is part of the battlefield” – and oppose policies to reduce obesity “because we don’t want a nanny state telling us what to eat” – and insist that both of those positions are about “defending our freedom.”
“But … why?” has led to everything we associate with modern life. Yet finding causes that didn’t exist wasted $2 billion and broke up dozens of schools across the U.S., and favoring vivid stories over statistics leads us to grossly miscalculate risks, misallocate resources, and mistakenly waive basic constitutional liberties.
Fortunately, you’re smarter than that. And tomorrow we’ll see why …
… when we discuss overconfidence.