Politics is full of stories. Most are iffy or even false, but many people still believe them. How do we talk about stories that sound true but…?

Better Gossip, Part I – That Sounds True, But….

This week Morning Feature will distill the topics we’ve covered this fall into talking points we can use in Fred Whispering: conversations with coworkers, friends, family, and neighbors. Today we examine how to critique political stories that sound true but are not reliable. Tomorrow we’ll consider how to discuss opportunity and risk. Saturday we’ll conclude with how to avoid Pompous Expert Disorder.

Note: Many of this week’s talking points come from the “Speaking About” sections at the end of each chapter in Daniel Kahneman’s Thinking, Fast and Slow. We discussed that book in greater detail over a six-part Morning Feature series ending last Saturday. I have rewritten some of his examples to more directly apply in political conversations.

Fred Whispering as Gossip?

It may seem odd to describe talking with Fred, our archetypal median voter, as “gossip.” Gossip has a bad reputation. It calls to mind talking about people we know with other people we know, in ways that can tear apart relationships and cause serious emotional harm. That is bad gossip, the kind we should avoid. But we also discuss legitimate concerns at work, in civic organizations, and in our government. Those discussions often concern the plans or actions of people who are not present, and in that respect the conversations are gossip. And as we rarely talk directly with candidates or elected officials, almost all of our political discussions are that kind of gossip.

Dr. Kahneman makes a convincing argument that how we gossip can change how we think: the issues we discuss and how we discuss them. As we grow more fluent in recognizing and discussing common human mistakes in others’ decisions, and if we believe our decisions will be gossiped about in the same way, we become more alert to those mistakes in our own thinking. For that reason, he ends each chapter of Thinking, Fast and Slow with a “Speaking About” section with examples of how we might apply those ideas in “water cooler conversations” … what we at BPI call Fred Whispering.

Not so fast….

He had an impression, but some of his impressions are illusions.

This is your System 1 talking. Slow down and let your System 2 take control.

We are a storytelling species but, as websites like snopes.com and shows like Mythbusters attest, we often believe stories that are iffy at best and sometimes provably false. Of the elements that make a story memorable – what Chip and Dan Heath call stickiness – only one (credibility) even relates to truth, and only indirectly. A story can seem credible yet be false, or seem incredible yet be true.

This isn’t a problem you should solve while driving. It requires too much mental effort.

When I’m tired after a long day at work, I may watch the news without really thinking through what they’re telling me.

Alas, separating fact from fiction is hard work. It may require research, or at least asking ourselves “Where did I hear that?” Even if the source seems reliable, our conclusions may not be.

The world makes less sense than we think. The coherence comes mostly from the way our minds work.

His System 1 constructed a story and his System 2 believed it. It happens to all of us.

We’re inclined to believe that because it’s been repeated so often, but let’s think it through. Familiarity breeds liking. It’s called the exposure effect.

She can’t accept that she was just unlucky, so she ends up looking for causes that don’t exist.

We like stories where familiar pieces fit nicely together, especially if the story includes a cause and effect that make sense. Those kinds of stories feel true. But if a story matters – if we might decide to act on it – “feels true” can get us in trouble.

Jumping to conclusions

They based that decision on one report from one consultant. What You See Is All There Is. They should have looked for more information.

That was a mental shotgun. He was asked whether he agreed with the policy, but his answer was really about his dislike of the president.

The important question is which candidate will be a better leader. The easier question is which candidate we liked in the debate. But that’s just one piece of information.

Even when we think we’re thinking through a problem, we can make mistakes. We take one piece of information and run with it, or answer an easier question instead of the important question. Worse, we often do this without realizing it.

Yes, his decision worked out there. But was it a good decision, or did he just get lucky?

This sample is too small to justify any reliable conclusion. Let’s not follow the law of small numbers.

We also tend to focus on results, and presume a good result came from a good decision or vice versa. Good decisions make good results more likely and bad decisions make bad results more likely, but we can still get lucky or unlucky. When we can, we should look for a larger sample to see how much luck was involved.

Don’t count on it

Because he mentioned a number, we tend to start from that number and adjust our estimate from it. It’s called the anchoring effect. But his number may be completely wrong.

We’re very good with stories, and fairly good with simple numbers, but our brains aren’t wired to make accurate guesses about complex math problems. When a story involves numbers, we need to slow down and be careful.

He gives the same three examples every time, and they’re easy to remember, so it sounds like a common problem. I know three stories of people who got hit by lightning, but getting hit by lightning is still extremely rare.

That’s a non-event inflated by the media until everyone’s talking about it. Psychologists call it an availability cascade. I call it Bright Shiny Object Syndrome.

How often we hear about something is not a good guide to how often it happens. We hear more about bad things than good things, and we remember bad things better as well. And when everyone else seems to be talking about something, it seems like it should be important. But it may not be.

That happens very rarely, but they’re talking as if it’s a sure thing. Why should we think this case is the exception and not the rule?

They’ve constructed a very complicated scenario and they insist it’s highly probable. But it’s not. It’s just a plausible story.

We don’t think in statistics, but we do remember representative examples.

When someone says “But it will be different this time,” we should be cautious. And while vividly imagined details make a story more plausible, they don’t make an improbable outcome more likely. When someone gives an example, it’s worth checking to see if that example is representative. On the flip side, if you want Fred to remember a point, give a representative example rather than a statistic.

He did horribly last time and better this time. Is that real improvement, or was last time just a fluke?

That got off to a great start, but it probably won’t continue that well. On average, things are … average.

A great poll this week usually means a not-so-great poll next week. A lousy poll this week usually means a not-as-bad poll next week. That swing is mostly statistical noise, not a real change – what statisticians call regression to the mean. The same is true of human performance.

But the experts said….

The question is not whether the pundits are well-informed. The question is whether the world is predictable.

He’s giving a complex explanation, but it’s based on a lot of irrelevant information.

She’s very confident about her decision, but confidence does not prove good judgment.

How much could he really know about this? Does he get lots of practice with immediate and clear results?

Knowing the details of a topic and being able to discuss it confidently does not prove someone has made or even could make a reliable prediction. When people talk about systems and events that are inherently uncertain, we shouldn’t give their predictions much weight. A checklist you could write on a Post-It note may be more consistently reliable than an expert’s detail-driven intuition.

He’s planning for the best case, but how many ways could this go wrong? How many people have tried this plan, and how did it work out for them?

Things have always worked out for him, so he has an illusion of control. But he’s underestimating the obstacles here.

Before they start that, they should ask themselves: How will they explain it if it fails?

He wants us to invest even more so we don’t waste everything we’ve put into this so far. But can this plan work, or are we just digging a deeper hole?

Plans are important, but plans alone don’t guarantee results. We need to be wary of plans that are too optimistic, especially when similar plans have taken longer and produced less than predicted. A “pre-mortem” – asking ourselves how we’ll explain failure – can force us to look for problems we may not otherwise have considered. And when a plan clearly won’t work, pouring more resources into it only increases the cost of failure.

What specific examples would you use for any of these points?

+++++

Happy Thursday!