We get a lot of news each day, both public and personal. How should that news change our views – our predictions – about the world and our lives? (More)

The Signal and the Noise, Part II: How Should The News Change Your Views?

This week Morning Feature considers Nate Silver’s new book The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t. Yesterday we looked at four common reasons for weak predictions. Today we see the two most common methods for scientific predictions and why scientists – and the rest of us – should adopt the better method. Tomorrow we’ll conclude with why we need to make better predictions, and how to better sift through the predictions we see and hear.

Nate Silver is a statistician who gained his reputation as a baseball statistical analyst before shifting to politics in 2008, when he correctly predicted the presidential winner in 49 states and all 35 U.S. Senate races. His FiveThirtyEight blog at the New York Times is widely cited by campaigns and media sources, and in 2009 Time Magazine included him in its 100 Most Influential People list. He has a B.A. in economics from the University of Chicago and has written for Sports Illustrated, Newsweek, Slate, Vanity Fair, and many other publications.

Once Upon a Mammogram

Most women over age 50 have had at least one mammogram. Many of us had our first in our 40s and, for many, it was not only painful but frightening … and unnecessary.

Roughly one-in-eight women will develop breast cancer at some point in their lives, making it a serious health risk. As with most cancers, early detection greatly increases the survival rate, and a mammogram will detect about 80% of breast cancers. Thus, women were told to get mammograms every year or two, starting at age 40.

Yet that medical advice changed in 2009, as the U.S. Preventive Services Task Force recommended that women under age 50 should not get a mammogram unless suggested by a doctor based on the woman’s personal and family medical history. Why did the Preventive Services Task Force change their recommendation?

Meet Thomas Bayes

The reason has to do with someone you’ve probably never heard of, unless you’re a mathematician: Thomas Bayes. A minister and mathematician in London during the 18th century, Bayes developed a theory to predict the likelihood of an event, based on a prior estimate and new information. His work is called Bayes Theorem, and it’s a simple equation. To solve it you need three values:

  • Prior – How likely was this event before you found any new information?
  • Signal – How often would the new information predict the event, if the event is happening (a “true positive”)?
  • Noise – How often would the new information predict the event, if the event is not happening (a “false positive”)?

Each of those numbers is expressed as a probability, where 1.00 = 100% (certain) and 0.00 = 0% (impossible). For example, the probability of heads on a fair coin flip is 0.50. Then plug the numbers into this equation:

Posterior = (Prior × Signal) / [(Prior × Signal) + (Noise × (1 − Prior))]
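If you would rather read code than an equation, here is a minimal sketch of that same formula in Python (the function and variable names are my own labels, not Silver's):

```python
def bayes_posterior(prior, signal, noise):
    """Bayes Theorem, using the three values defined above."""
    true_positives = prior * signal          # event happening AND flagged by the new information
    false_positives = (1 - prior) * noise    # event not happening, but flagged anyway
    return true_positives / (true_positives + false_positives)
```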

Okay, I know, your eyes just glazed over. But trust me and walk through an example with me. Like … mammograms for women under 50.

Twice Upon a Mammogram

We know from the link above that a mammogram will detect 80% of breast cancers, so our Signal value is 0.80. But what is our Prior? How likely are women ages 40-50 to have breast cancer? A review by the American Cancer Society found that only about 1% of women under 50 have breast cancer. And what about the Noise: women who do not have breast cancer but would still get a ‘positive’ mammogram? A University of California San Francisco paper found that ‘false positive’ mammograms are not rare; Silver estimates their probability at about 7%. So we get these values:

  • Prior = 0.01
  • Signal = 0.80
  • Noise = 0.07

Plug those values into the Bayes Theorem equation and you get (0.01 × 0.80) / (0.01 × 0.80 + 0.07 × 0.99) = 0.008 / 0.0773 ≈ 0.103 … or about 10%. In other words, if a woman under 50 gets a ‘positive’ mammogram, there is only about a 10% chance that she really has breast cancer.
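If you run the little Python sketch from above with the same values, you get the same answer:

```python
p = bayes_posterior(prior=0.01, signal=0.80, noise=0.07)
print(round(p, 3))   # 0.103 -- only about a 10% chance
```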

But how can that be, if mammograms will detect 80% of cancers? The answer feels wrong, but think of it this way. Assume 1000 typical women under age 50 go in for mammograms. On average, only 10 (1% Prior estimate) of those women actually have breast cancer, and the mammogram will detect it in 8 cases (80% Signal of true positives). Of the other 990 women – who do not have breast cancer – mammograms will falsely ‘detect’ cancer in 69 cases (7% Noise of false positives). Thus, only 8 of the 77 ‘positive’ mammograms (about 10%) are cases where women actually have breast cancer. The rest are women who are frightened, and probably sent for more tests and perhaps even biopsies … unnecessarily.
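Here is that same head-count arithmetic in Python, in case you want to check it (again, the variable names are just my labels):

```python
women = 1000
with_cancer = round(women * 0.01)                 # Prior: 10 women actually have cancer
true_pos = round(with_cancer * 0.80)              # Signal: 8 of them get a 'positive' mammogram
false_pos = round((women - with_cancer) * 0.07)   # Noise: 69 of the 990 healthy women do too
print(round(true_pos / (true_pos + false_pos), 3))  # 0.104 -- about 10% again
```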

That’s why the Preventive Services Task Force said a woman under 50 should not get a mammogram unless her doctor suggests it based on her personal and family medical history. That personal or family history may make her much more likely to have breast cancer. In Bayesian terms: her own history may raise the Prior enough that the Signal of a ‘positive’ mammogram, for her, would outweigh the Noise.
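To see how a higher Prior changes the answer, suppose (purely as a hypothetical number, not one from Silver or the Task Force) that her history raises her Prior from 1% to 10%:

```python
print(round(bayes_posterior(prior=0.10, signal=0.80, noise=0.07), 2))  # 0.56
```

With that higher Prior, a ‘positive’ mammogram means a better-than-even chance of cancer, and the test is well worth doing.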

Once Upon a Debate

Yes, new information should change our views of the world and our lives. But we often weigh new information poorly in light of what we already had good reason to believe. Take the presidential debate Wednesday night, which left most Republicans aglow with delight and many Democrats downcast with fear. Yes, most mainstream pundits and instant polls agree that Mitt Romney won the debate. But how much should that event change our views on President Obama’s electoral chances?

Not much, actually.

Before the debate, the New York Times’ Nate Silver had President Obama an 85% favorite to win on November 6th, so that’s a reasonable Prior. That article also includes instant polling data on prior presidential debates. Because this was the first presidential debate of 2012, let’s look only at first-debate outcomes in each election since 1984 (when instant polling began):

  • Winning Election – The winners of the first debates won only 3 of those 7 elections. On the other hand….
  • Gaining in Polls – The winners of the first debates gained in the polls in 4 of those 7 elections. In the other 3, the first-debate winner dropped in the polls.

We can use that data to estimate the Signal and Noise values for winning a first debate. In terms of winning the election, the Signal is 0.43 (3-of-7) and the Noise is 0.57 (4-of-7). In terms of gaining at least something in the polls, those values are reversed. Of course, seven elections is a tiny sample, so we might boost the Signal a bit because debates seem like they should matter and there is too little data to prove they don’t. On the other hand, the polls show there are few undecided voters this year and early voting has already begun in several states, so we might lower the Signal a bit because the debates can’t change as many votes. Let’s be generous and say winning the debate was 51% Signal and 49% Noise.

We then plug those numbers into Bayes Theorem and find that President Obama should still be about an 84% favorite to win the election, even after having lost the first debate. In other words, that debate outcome shouldn’t change our prediction much at all. In fact, Silver now projects President Obama as an 87% favorite, because of other new polls.
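Here is that arithmetic spelled out with the sketch from earlier, framed from Governor Romney’s side: his Prior chance of winning the election was 0.15, and we just estimated the Signal at 0.51 and the Noise at 0.49:

```python
romney = bayes_posterior(prior=0.15, signal=0.51, noise=0.49)
print(round(1 - romney, 2))   # 0.84 -- President Obama remains about an 84% favorite
```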

Does this mean you have to grind through Bayes Theorem every time you ponder a medical test or news story? No. Tomorrow we’ll discuss a quick-and-dirty substitute that is usually ‘close enough,’ especially when you recognize that your values for Prior, Signal, and Noise will often be broad estimates.

Instead, Bayes Theorem says that you should weigh news cautiously, that you should not completely disregard previous data, and that you must be aware of the personal biases that shape your estimates of the Prior, the Signal, and the Noise.

+++++

Happy Friday!