Nate Silver writes that predictions are how we test our ideas against Realworldia. When we make better predictions – and reject bad predictions – we get less and less wrong.

The Signal and the Noise, Part III: Less and Less Wrong (Non-Cynical Saturday)

This week Morning Feature considers Nate Silver’s new book The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t. Thursday we looked at four common reasons for weak predictions. Yesterday we saw the two most common methods for scientific predictions and why scientists – and the rest of us – should adopt the better method. Today we conclude with why we need to make better predictions, and how to better sift through the predictions we see and hear.

Nate Silver is a statistician who gained his reputation analyzing baseball statistics before shifting to politics in 2008, when he correctly predicted the presidential winner in 49 states and all 35 U.S. Senate races. His FiveThirtyEight blog at the New York Times is widely cited by campaigns and media sources, and in 2009 Time magazine included him in its 100 Most Influential People list. He has a B.A. in economics from the University of Chicago and has written for Sports Illustrated, Newsweek, Slate, Vanity Fair, and many other publications.

Reality Check on Aisle 4

Most of us don’t think of ourselves as forecasters, or people who make predictions. Yet we all do, every day. When you leave home and take your usual route to the grocery or to work, you predict that the route you’re taking is probably the best way to get there. When you see the traffic jam and the “Road Work Ahead” signs, you might say “I should have gone another way.”

Your prediction was wrong, but that doesn’t always mean you did anything wrong. If you found the best information that was reasonably available, and weighed that information well, your “as good as you could” prediction just didn’t work out. But if you didn’t look for better information that was readily available, ignored information you didn’t like, or didn’t weigh it well, you could have made a better prediction. Better predictions might save us a few minutes in traffic … or they might save lives.

If we never think about the predictions we make, if we never specify them in ways we can test against Realworldia and adjust them as new information comes along, we can go merrily on thinking our ideas are never wrong. But Realworldia doesn’t care how right we think we are, and Realworldia isn’t shy about telling us we’re wrong.

Being Wrong Less

How can we make better predictions? Yesterday we saw how Bayes Theorem can help us specify and refine our predictions. But maybe you’re math-phobic and don’t want to do precise calculations. Or maybe you realize that your ballpark estimates of probability aren’t precise enough to justify those calculations. If it’s the latter, congratulations: you’re probably right. If it’s the former, also congratulations: there’s a simple way to approximate Bayes Theorem that will be ‘close enough’ for most of our predictions.

As we saw yesterday, for Bayes Theorem you need three values:

  • Prior – How likely would you estimate this event with no new information?
  • Signal – How often will this new information be true when this event does happen (“true positive”)?
  • Noise – How often will this new information be true when this event does not happen (“false positive”)?

For this rough approximation, rate each of those on this seven-point scale, based on how often you’d expect to see that event if you checked exactly once every day:

  • Extremely Unlikely – about 1-in-128
  • Very Unlikely – about 1-in-16
  • Unlikely – about 1-in-4
  • Tossup – about 1-in-2
  • Likely – about 3-in-4
  • Very Likely – about 15-in-16
  • Extremely Likely – about 127-in-128

Note – If you have reliable, precise data for one or two values but not for the third, round them to: Extremely Unlikely (1-in-128), Very Unlikely (1-in-16), Unlikely (1-in-4), Tossup (1-in-2), Likely (3-in-4), Very Likely (15-in-16), Extremely Likely (127-in-128). If you have reliable, precise data for all three, use the Bayes Theorem equation from yesterday.

Assuming you don’t have reliable, precise data for all three values, you can still approximate pretty well:

  • If your Signal has the same value as your Noise (e.g.: if both are “Tossup”) your prediction will not change (new prediction same as your Prior).
  • If your Signal is more likely than your Noise, adjust your prediction up one step for each step that your Signal is greater than your Noise. (E.g.: If your Prior is “Unlikely,” your Signal is “Likely,” and your Noise is “Unlikely,” adjust your prediction up two steps to “Likely.”)
  • If your Signal is less likely than your Noise, adjust your prediction down one step for each step that your Signal is less than your Noise. (E.g.: If your Prior is “Unlikely,” your Signal is “Tossup,” and your Noise is “Likely,” adjust your prediction down one step to “Very Unlikely.”)
  • If the Signal:Noise changed your prediction, adjust your prediction down two steps if your Prior was “Extremely Unlikely” or down one step if your Prior was “Very Unlikely.” Adjust upward if your Prior was “Extremely Likely” or “Very Likely.” (E.g.: If your Prior was “Extremely Unlikely,” adjust your new prediction down two steps. If your Prior was “Very Likely,” adjust your new prediction up one step.)
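
If you’d rather see those rules in code, here is a minimal sketch in Python. The seven-point scale and the step-counting rules come straight from this post; the function and variable names are my own, and clamping at the ends of the scale is my assumption about what to do when an adjustment would run off the scale.

```python
# A minimal sketch of the seven-point approximation described above.
# The scale and step-counting rules come from this post; the names and
# the end-of-scale clamping are my own assumptions.

SCALE = [
    "Extremely Unlikely",  # about 1-in-128
    "Very Unlikely",       # about 1-in-16
    "Unlikely",            # about 1-in-4
    "Tossup",              # about 1-in-2
    "Likely",              # about 3-in-4
    "Very Likely",         # about 15-in-16
    "Extremely Likely",    # about 127-in-128
]

def approx_bayes(prior: str, signal: str, noise: str) -> str:
    """Approximate a Bayesian update by counting steps on the scale."""
    # Adjust up or down one step for each step Signal differs from Noise.
    steps = SCALE.index(signal) - SCALE.index(noise)
    new = SCALE.index(prior) + steps
    # If the Signal:Noise changed the prediction, correct for extreme Priors.
    if steps != 0:
        new += {"Extremely Unlikely": -2, "Very Unlikely": -1,
                "Very Likely": +1, "Extremely Likely": +2}.get(prior, 0)
    # Clamp to the ends of the scale (an assumption, not a rule from the post).
    return SCALE[max(0, min(len(SCALE) - 1, new))]
```

Calling approx_bayes("Extremely Unlikely", "Likely", "Very Unlikely") walks the Prior up three steps to “Tossup,” then back down two steps, and returns “Very Unlikely” – the same answer as the mammogram example below.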

Example: Let’s revisit the results of mammograms for women under 50. The Prior is “Extremely Unlikely” because only about 1% of women under 50 have breast cancer. The Signal is “Likely” because a mammogram will detect 80% of breast cancers. The Noise is “Very Unlikely” because mammograms return about 7% false positives. Adjust your Prior up three steps to “Tossup” for the Signal:Noise … and down two steps because the Prior was “Extremely Unlikely” … and it is “Very Unlikely” (about 10%) that a typical woman under 50 has breast cancer based on a single positive mammogram result.
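
For comparison, here is the same example run through the exact Bayes Theorem equation from yesterday. The 1% prior, 80% detection rate, and 7% false positive rate come straight from the example above; only the variable names are mine.

```python
# Exact Bayesian update for the mammogram example:
# posterior = (prior * signal) / (prior * signal + (1 - prior) * noise)

prior = 0.01    # about 1% of women under 50 have breast cancer
signal = 0.80   # mammograms detect about 80% of breast cancers (true positives)
noise = 0.07    # mammograms return about 7% false positives

posterior = (prior * signal) / (prior * signal + (1 - prior) * noise)
print(f"{posterior:.1%}")  # 10.3% - close to the approximation's "Very Unlikely"
```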

You may find this method more useful than the full Bayes Theorem calculation, for three reasons. First, we rarely have exact probabilities for the events we’re predicting, and these broad categories are intuitive. Second, you don’t need any math; it’s just counting up or down the scale. Finally, this method forces us to admit when we don’t have exact numbers … and that even our most careful predictions can only be “in the ballpark.”

Knowing What We Can’t Predict

That last point is vital, because Silver’s book isn’t solely about making better predictions. It’s also about knowing when even the best predictions are not very reliable. A geologist with a Ph.D. and a very impressive title may tell you Las Vegas will be hit by a magnitude 7.0 earthquake on Christmas Eve, but should you cancel that Christmas trip to Sin City?

Eh, no. Earthquakes just aren’t that predictable. Fault lines are very complex systems, and geologists don’t fully understand how they work. Worse, the data geologists would need for predictions that precise lies about 10 miles beneath the earth’s surface, where there’s no way to measure it. A geologist can tell you there’s a 0.169% chance of a magnitude 7.0 quake near Las Vegas over the next 50 years, but not whether one will hit tomorrow, at Christmas, or in the next millennium. When it comes to earthquake prediction, there’s too little signal and too much noise.

That does not mean Las Vegas city officials should not plan for a major earthquake, or enact reasonable building codes. When you balance the cost of such planning against the human cost when a major earthquake strikes, Las Vegas should prepare for the foreseeable risks. But city officials in Fukushima had also prepared for foreseeable risks … and the tsunami that struck last March was worse than anyone expected.

Less and Less (and Less) Wrong….

Finally, Silver’s most important point is that Bayes Theorem never gives you a “final answer.” It gives you an adjusted prediction, based on your Prior and the Signal and Noise in some new information … but when you get newer information, you have to update your prediction again.

Your previously-adjusted prediction becomes your new Prior, and you go through the same process – again and again – each time you get new relevant information.

And here’s the real ‘magic’ of Bayes Theorem: even if you and I disagree wildly on our first predictions, we will gradually converge toward the same estimate, if we both update our predictions each time we get new information, and if we estimate the Signal and Noise in the same ways each time.
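
A short simulation makes that convergence concrete. Suppose you and I start with wildly different Priors – say, 5% and 90% – but we agree on the Signal and Noise of each new piece of evidence. (The specific numbers here are illustrative, not from Silver’s book.)

```python
def update(prior, signal, noise):
    """One Bayesian update: the old posterior becomes the new Prior."""
    return (prior * signal) / (prior * signal + (1 - prior) * noise)

signal, noise = 0.80, 0.07   # agreed-upon Signal and Noise (illustrative)
yours, mine = 0.05, 0.90     # our wildly different starting Priors

# Five rounds of the same confirming evidence, estimated the same way:
for step in range(1, 6):
    yours = update(yours, signal, noise)
    mine = update(mine, signal, noise)
    print(f"after update {step}: you = {yours:.3f}, me = {mine:.3f}")
# Both estimates climb toward the same answer, despite the different starts.
```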

We don’t have to stay stuck on our polar opposite views. Maybe you were right. Maybe I was right. Maybe we were both wrong. But if we apply Bayes Theorem together, we’ll both sneak up on the best new prediction we can find … until we get new information.

And that’s a good reason to hope.

+++++

Happy Saturday