Nate Silver writes that predictions are how we test our ideas against Realworldia. When we make better predictions – and reject bad predictions – we get less and less wrong.

**The Signal and the Noise, Part III: Less and Less Wrong (Non-Cynical Saturday)**

This week *Morning Feature* considers Nate Silver’s new book *The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t*. Thursday we looked at four common reasons for weak predictions. Yesterday we saw the two most common methods for scientific predictions and why scientists – and the rest of us – should adopt the better method. Today we conclude with why we need to make better predictions, and how to better sift through the predictions we see and hear.

Nate Silver is a statistician who gained his reputation as a baseball statistical analyst before shifting to politics in 2008, when he correctly predicted the presidential winner in 49 states and all 35 U.S. Senate races. His FiveThirtyEight blog at the *New York Times* is widely cited by campaigns and media sources, and in 2009 *Time* magazine included him in its 100 Most Influential People list. He has a B.A. in economics from the University of Chicago and has written for *Sports Illustrated*, *Newsweek*, *Slate*, *Vanity Fair*, and many other publications.

**Reality Check on Aisle 4**

Most of us don’t think of ourselves as forecasters, or people who make predictions. Yet we all do, every day. When you leave home and take your usual route to the grocery or to work, you predict that the route you’re taking is probably the best way to get there. When you see the traffic jam and the “Road Work Ahead” signs, you might say “I should have gone another way.”

Your prediction was wrong, but that doesn’t always mean you did anything wrong. If you found the best information that was reasonably available, and weighed that information well, your “as good as you could” prediction just didn’t work out. But if you didn’t look for better information that was readily available, ignored information you didn’t like, or didn’t weigh it well, you could have made a better prediction. Better predictions might save us a few minutes in traffic … or they might save lives.

If we never think about the predictions we make, if we never specify them in ways we can test against Realworldia and adjust them as new information comes along, we can go merrily on thinking our ideas are never wrong. But Realworldia doesn’t care how right we think we are, and Realworldia isn’t shy about telling us we’re wrong.

**Being Wrong Less**

How can we make better predictions? Yesterday we saw how Bayes Theorem can help us specify and refine our predictions. But maybe you’re mathphobic and don’t want to do precise calculations. Or maybe you realize that your ballpark estimates of probability aren’t precise enough to justify doing those calculations. If the latter, congratulations: you’re probably right. If the former, also congratulations: there’s a simple way to approximate Bayes Theorem that will be ‘close enough’ for most of our predictions.

As we saw yesterday, for Bayes Theorem you need three values:

- *Prior* – How likely would you estimate this event with no new information?
- *Signal* – How often will this new information be true when this event does happen? (“true positive”)
- *Noise* – How often will this new information be true when this event does not happen? (“false positive”)

For this rough approximation, rate each of those values on this seven-point scale, based on how often you’d expect to see that event if you checked exactly once every day:

- *Extremely Unlikely* (about 1-in-128)
- *Very Unlikely* (about 1-in-16)
- *Unlikely* (about 1-in-4)
- *Tossup* (about 1-in-2)
- *Likely* (about 3-in-4)
- *Very Likely* (about 15-in-16)
- *Extremely Likely* (about 127-in-128)

*Note* – If you have reliable, precise data for one or two values but not for the third, round them to: *Extremely Unlikely* (1-in-128), *Very Unlikely* (1-in-16), *Unlikely* (1-in-4), *Tossup* (1-in-2), *Likely* (3-in-4), *Very Likely* (15-in-16), *Extremely Likely* (127-in-128). If you have reliable, precise data for all three, use the Bayes Theorem equation from yesterday.

Assuming you don’t have reliable, precise data for all three values, you can still approximate pretty well:

- If your *Signal* has the same value as your *Noise* (e.g.: if both are “Tossup”), your prediction will not change (your new prediction is the same as your *Prior*).
- If your *Signal* is *more* likely than your *Noise*, adjust your prediction *up* one step for each step that your *Signal* is greater than your *Noise*. (E.g.: If your *Prior* is “Unlikely,” your *Signal* is “Likely,” and your *Noise* is “Unlikely,” adjust your prediction *up two steps* to “Likely.”)
- If your *Signal* is *less* likely than your *Noise*, adjust your prediction *down* one step for each step that your *Signal* is less than your *Noise*. (E.g.: If your *Prior* is “Unlikely,” your *Signal* is “Tossup,” and your *Noise* is “Likely,” adjust your prediction *down one step* to “Very Unlikely.”)
- If the *Signal:Noise* changed your prediction, adjust your prediction down two steps if your *Prior* was “Extremely Unlikely” or down one step if your *Prior* was “Very Unlikely.” Adjust upward if your *Prior* was “Extremely Likely” or “Very Likely.” (E.g.: If your *Prior* was “Extremely Unlikely,” adjust your new prediction down two steps. If your *Prior* was “Very Likely,” adjust your new prediction up one step.)
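Those counting rules can be sketched in code. Here is a minimal Python version – my own illustration, not from Silver’s book – with the scale’s labels as plain strings:

```python
# The seven-point scale, from least to most likely.
SCALE = [
    "Extremely Unlikely",  # about 1-in-128
    "Very Unlikely",       # about 1-in-16
    "Unlikely",            # about 1-in-4
    "Tossup",              # about 1-in-2
    "Likely",              # about 3-in-4
    "Very Likely",         # about 15-in-16
    "Extremely Likely",    # about 127-in-128
]

def approximate_update(prior, signal, noise):
    """Shift the Prior one step per step of Signal over Noise, then
    correct for extreme Priors, clamping to the ends of the scale."""
    p, s, n = SCALE.index(prior), SCALE.index(signal), SCALE.index(noise)
    new = p + (s - n)  # one step up or down per step of Signal:Noise
    if s != n:  # the Signal:Noise changed the prediction, so correct:
        new += {"Extremely Unlikely": -2, "Very Unlikely": -1,
                "Very Likely": +1, "Extremely Likely": +2}.get(prior, 0)
    return SCALE[max(0, min(len(SCALE) - 1, new))]
```

For example, `approximate_update("Unlikely", "Likely", "Unlikely")` returns `"Likely"`, matching the two-steps-up example above.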

Example: Let’s revisit the results of mammograms for women under 50. The *Prior* is “Extremely Unlikely” because only about 1% of women under 50 have breast cancer. The *Signal* is “Likely” because a mammogram will detect 80% of breast cancers. The *Noise* is “Very Unlikely” because mammograms return about 7% false positives. Adjust your *Prior* up three steps to “Tossup” for the *Signal:Noise* … and down two steps because the *Prior* was “Extremely Unlikely” … and it is “Very Unlikely” (about 10%) that a typical woman under 50 has breast cancer based on a single positive mammogram result.
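For comparison, here is the exact Bayes Theorem calculation behind that example, using the figures quoted above (a sketch of mine; the function name is not from the book):

```python
def bayes(prior, signal, noise):
    """P(event | positive) = prior*signal / (prior*signal + (1-prior)*noise)."""
    return prior * signal / (prior * signal + (1 - prior) * noise)

# 1% prior, 80% true-positive rate, 7% false-positive rate:
posterior = bayes(prior=0.01, signal=0.80, noise=0.07)
print(round(posterior, 3))  # about 0.103, i.e. roughly 10%
```

The exact answer (about 10%) lands squarely inside the “Very Unlikely” band the counting method gave.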

You may find this method more useful than the Bayes Theorem for three reasons. First, we rarely have exact probabilities for the events we’re predicting, and these broad categories are intuitive. Second, you don’t need any math; it’s just counting up or down the scale. Finally, this method forces us to admit when we don’t have exact numbers … such that even our most careful predictions can only be “in the ballpark.”

**Knowing What We Can’t Predict**

That last point is vital, because Silver’s book isn’t solely about making better predictions. It’s also about knowing when even the best predictions are not very reliable. A geologist with a Ph.D. and a very impressive title may tell you Las Vegas will be hit by a magnitude 7.0 earthquake on Christmas Eve, but should you cancel that Christmas trip to Sin City?

Eh, no. Earthquakes just aren’t that predictable. Fault lines are very complex systems, and geologists don’t fully understand how they work. Worse, the data geologists would need for predictions that accurate is about 10 miles under the earth’s surface, so there’s no way to measure it. A geologist can tell you there’s a 0.169% chance of a magnitude 7.0 quake near Las Vegas over the next 50 years, but they can’t tell you if one will hit tomorrow, at Christmas, or in the next millennium. When it comes to earthquake prediction, there’s too little signal and too much noise.

That does not mean Las Vegas city officials should not plan for a major earthquake, or enact reasonable building codes. When you balance the cost of such planning against the human cost when a major earthquake strikes, Las Vegas should prepare for the foreseeable risks. But city officials in Fukushima had also prepared for foreseeable risks … and the tsunami that struck last March was worse than anyone expected.

**Less and Less (and Less) Wrong….**

Finally, Silver’s most important point is that *Bayes Theorem never gives you a “final answer.”* It gives you an adjusted prediction, based on your *Prior* and the *Signal* and *Noise* in some new information … but when you get newer information, you have to update your prediction again.

Your previously-adjusted prediction becomes your new *Prior*, and you go through the same process – again and again – each time you get new relevant information.

And here’s the real ‘magic’ of Bayes Theorem: even if you and I disagree wildly on our first predictions, we will gradually converge toward the same estimate, if we both update our predictions each time we get new information, and if we estimate the *Signal* and *Noise* in the same ways each time.

We don’t have to stay stuck on our polar opposite views. Maybe you were right. Maybe I was right. Maybe we were both wrong. But if we apply Bayes Theorem together, we’ll both sneak up on the best new prediction we can find … until we get new information.
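That convergence can be illustrated with a toy simulation – my own, with made-up *Signal* and *Noise* values – in which two observers start far apart but read each new piece of evidence the same way:

```python
def bayes(prior, signal, noise):
    """P(event | evidence) = prior*signal / (prior*signal + (1-prior)*noise)."""
    return prior * signal / (prior * signal + (1 - prior) * noise)

# Two observers start with wildly different priors...
you, me = 0.9, 0.1
# ...but agree on the Signal (0.8) and Noise (0.2) of each new observation.
for _ in range(10):
    you = bayes(you, 0.8, 0.2)
    me = bayes(me, 0.8, 0.2)

# After ten consistent observations, the two estimates nearly coincide.
print(you, me)
```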

And that’s a good reason to hope.

+++++

Happy Saturday

I was thinking yesterday how little accurate data I have to apply to many things in my life. Then I realized I had one stellar instance in which to apply Bayes, or at least my non-mathematical version: What was the likelihood that I would get a new contract after writing for this publisher for 22 years?

That’s a big data set, when I think about it. And as much as I chafed the last two months about their dallying, I realized something: I’ve always gotten a contract from them in the past. So what was the likelihood that they’d come through, despite the contracting market, with a 5-book contract? Well, the signal is 100%. The noise … almost non-existent. Hence the conclusion: just be patient.

So after I applied this thinking yesterday morning, I calmed down. Yesterday afternoon, my “prediction” proved out. I received news that the new contract was being processed. I wish I’d known about this method a month or so ago. It would have saved a lot of grief.

Now I’m applying it to my eye. One went bad on me through what is a normal process. When I asked my eye doctor whether I was now more apt to have this happen in my other eye, his response was “Your chances for the other eye are the same as if nothing had ever happened, the same as for everyone else.”

Thus my prediction is safe with current stats: one in six or seven. Not high odds that nothing will happen but not higher that they will.

This has been an interesting approach to things that I normally think of as “cloudy.” I’m going to see how many other ways I can apply this.

Thank you!

Your prediction about getting a new contract is a good example, although in Realworldia the *Signal* and *Noise* are almost never 100% or 0%, respectively.

But let’s say your *Prior* prediction of getting a new contract was “Very Likely” (about 93%) based on past experience. Let’s next say they asked you for a new book proposal (the new information), and in the past you have submitted proposals about 30 times and you’ve only had a proposal turned down twice. Your *Signal* from their proposal request would be “Very Likely” (about 15-in-16) and your *Noise* would be “Very Unlikely” (about 1-in-16).

So you count up four steps because your *Signal* is four steps higher than your *Noise*, and up one more step because your *Prior* was “Very Likely” … and for practical purposes we top out at “Extremely Likely” … more than 99% likely you’ll get a new contract.

You could add a higher level: call it “Almost Certain” at 515-in-516. But in Realworldia our information is rarely that precise, and a prediction cannot be stronger than its weakest link. (The mathematical concept is *significant digits*.)

In other words … the right answer was “Be patient” … and congratulations! 😀
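Running the exact equation with the scale’s rounded values confirms the counted answer (my own sketch; the helper function is not from the book):

```python
def bayes(prior, signal, noise):
    """P(event | evidence) = prior*signal / (prior*signal + (1-prior)*noise)."""
    return prior * signal / (prior * signal + (1 - prior) * noise)

# Prior "Very Likely" = 15/16, Signal "Very Likely" = 15/16,
# Noise "Very Unlikely" = 1/16 -- the scale's rounded values.
p = bayes(15/16, 15/16, 1/16)
print(round(p, 4))  # about 0.9956 -- "more than 99% likely"
```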

Good morning! ::hugggggs::

Well, yes, the signal wasn’t quite 100%. It was more like 97%. And there was some noise. You make a very good point. But given the noise level, relaxing and waiting was the best choice.

It’s a great approach. I really am going to find other instances in which to apply it.

For example: parking lots. Without exception, every auto accident I’ve had has occurred in parking lots, yet there have been only three in my entire life, all occurring while I was backing up.

So I judge the increased likelihood of having an accident while backing out of a parking place to be high enough to try to park so I can pull out without backing up, even if it means extra walking. 🙂 I realize that’s not a high signal ratio, but it’s a good warning nonetheless. I think part of what needs to be included here is the negativity of the result. Under Bayes I shouldn’t be overly concerned given the thousands of times I’ve backed out of a parking place, but given the consequences I’m heavily weighting the few bad experiences.

Is that wrong?

It’s both right and wrong. 😉 Bayes Theorem only calculates probabilities, and the probability of an event is independent of its benefits or consequences.

To get all the way to a decision, you use *game theory* to weigh probabilities and risks or consequences together.

Let’s say you use Bayes Theorem and predict that it’s “Extremely Unlikely” (about 1%) a given event will happen, but it will cost you $50,000 if it does. Conversely, it’s “Extremely Likely” (about 99%) the event will not happen, but you save only a penny (each time) by taking that risk.

So each time you risk that event, there’s a 1% risk of a $50,000 loss and a 99% chance you save one cent. You could crunch the numbers, but you probably don’t need to. Simply, in our hypothetical example you risk too much to save too little – even if the bad event is “Extremely Unlikely” – so you shouldn’t risk that event.
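The arithmetic in that hypothetical can be checked directly (a quick sketch of mine):

```python
# Expected value per trial: a 1% chance of losing $50,000
# against a 99% chance of saving one cent.
expected = 0.01 * (-50_000) + 0.99 * 0.01
print(round(expected, 2))  # about -499.99: a large expected loss per trial
```

You lose nearly $500 in expectation every time you take that bet, which is why no number-crunching is needed to see the risk isn’t worth a penny.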

Good morning! ::hugggggs::

Aha! Got it. So in some instances Bayes will tell you one thing, but when you weigh the risk game theory might give a different result.

In the parking lot, I’m avoiding some very expensive grief, even though prediction would make it extremely unlikely to happen.

Thanks for your patience. I really am trying to get all this and how to best use it. It’s a change in perspective.

Bayes Theorem helps you estimate the probability of possible outcomes. Then, if you’re going to make a mathematically “rational” decision, you have to estimate the benefit or cost of each outcome and multiply that by the outcome’s probability. That gives you the mathematical expectation for each outcome … and you try to steer toward the one with the highest expectation.

But in Realworldia, it’s often difficult to precisely estimate benefits or costs. E.g.: How much does it cost you to get hit by a car while crossing without a crosswalk or against a light? Maybe it’s just a minor injury. Maybe you die. Hrmm….

The good news is that you can often approximate your benefit and cost estimates in the same way you do probabilities – “Extremely Bad” (about -128x), “Very Bad” (-16x), “Bad” (-4x), “Negligible” (0), “Good” (+4x), “Very Good” (+16x), “Extremely Good” (+128x) – and count one step up or down the scale above (with no change for “Negligible”). If you land nearer the good end of the scale, that option is safer than landing nearer the bad end of the scale.

(For math geeks, you can count up or down that scale because it’s based on logarithms. For non-math-geeks who are old enough, think “slide rule.”)
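One way to picture the combined counting (an illustration of mine, using the rounded scale values from above – the dictionary names are my own):

```python
# Rounded values for the probability scale and the benefit/cost scale.
PROB = {"Extremely Unlikely": 1/128, "Very Unlikely": 1/16, "Unlikely": 1/4,
        "Tossup": 1/2, "Likely": 3/4, "Very Likely": 15/16,
        "Extremely Likely": 127/128}
COST = {"Extremely Bad": -128, "Very Bad": -16, "Bad": -4, "Negligible": 0,
        "Good": 4, "Very Good": 16, "Extremely Good": 128}

def rough_expectation(probability, outcome):
    """Probability of the outcome times its (rough) benefit or cost."""
    return PROB[probability] * COST[outcome]

# A "Tossup" chance of a "Very Bad" outcome:
print(rough_expectation("Tossup", "Very Bad"))  # -8.0
```

Because each step on either scale is roughly a fixed multiple, counting steps approximates multiplying values – the slide-rule trick of adding logarithms.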

Good morning! ::hugggggs::

Plenty to think about now. And I will think about it. Thanks so much for your patience with me as I try to grasp this.

Thank you for the math geeks note. It saved a lot of time figuring out what was happening. The time scale is relative. Four-year-olds have a different time scale than seventy-four-year-olds.

You’re welcome, Jim. These approximations are very vague, especially nearer the middle of the scale. It treats 60% likely and 40% likely as the same – both are “Tossup” – and of course they’re not. It is not close enough for actuaries or poker players.

But it’s close enough for many predictions we make … especially when you consider that our *Prior*, *Signal*, and *Noise* values are often estimates based on so little data – and/or so much bias – that most of what we’re multiplying in the Bayes Theorem equation is our margins of error.

Good afternoon! ::hugggggs::