Quantum physicist Niels Bohr famously said “Prediction is very difficult, especially if it’s about the future.”

That’s true, and it’s not limited to quantum physics.

The Signal and the Noise, Part I: Confirmation Bias, Chaos, and Complexity

This week Morning Feature considers Nate Silver’s new book The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t. Today we look at four common reasons for weak predictions. Tomorrow we’ll see the two most common methods for scientific predictions and why scientists – and the rest of us – should adopt the better method. Saturday we’ll conclude with why we need to make better predictions, and how to better sift through the predictions we see and hear.

Nate Silver is a statistician who gained his reputation as a baseball statistical analyst before shifting to politics in 2008, when he correctly predicted the presidential winner in 49 states and all 35 U.S. Senate races. His FiveThirtyEight blog at the New York Times is widely cited by campaigns and media sources, and in 2009 Time Magazine included him in its list of the 100 Most Influential People. He has a B.A. in economics from the University of Chicago and has written for Sports Illustrated, Newsweek, Slate, Vanity Fair, and many other publications.

Cockiness

The McLaughlin Group is a syndicated weekly news-talk show hosted by John McLaughlin. Each show features four panelists discussing a series of topics, and ends with a segment where McLaughlin asks each panelist for a prediction. As the panelists are prominent journalists, you might expect their predictions to be fairly reliable.

If so, you would be wrong. Silver cites a study of predictions made on the show, which found that you could predict future news events as reliably as McLaughlin’s panelists … by flipping a coin.

That’s right. The McLaughlin Group‘s panelists each scored about 50% on their predictions. But that show isn’t alone. A 2011 study at Hamilton College reviewed predictions made by 26 pundits in major print media or on Sunday morning news-talk shows over the period September 2007 to December 2008. They found that only one-third of the pundits scored any better than flipping a coin. And that was a fairly small sample set. Over a longer period, many if not most of the “good” pundits in their sample would likely also slip to around coin-flip status.
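
A quick way to see why that last caveat matters – this is my own back-of-the-envelope sketch with made-up numbers, not anything from the studies themselves: with only a handful of predictions per pundit, even an impressive-sounding score can be plain luck. The little p_at_least function below just sums binomial probabilities.

```python
from math import comb

# A hypothetical check (my illustration, not data from either study): how
# likely is it that pure coin-flipping produces a given score on yes/no calls?
def p_at_least(correct, total, p=0.5):
    """Probability of getting `correct` or more right out of `total` by chance."""
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(correct, total + 1))

# A pundit who goes 14-for-20 sounds sharp (70%)...
print(f"Chance of 14/20 or better by luck alone: {p_at_least(14, 20):.1%}")
```

A pundit who nails 14 of 20 predictions – a healthy-sounding 70% – would still get there by pure chance roughly one time in seventeen, which is why a short winning streak doesn’t prove much.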

Why do pundits find it so hard to make good predictions? Part of the problem, as Silver explains, is which pundits get chosen to make their predictions in major media. The bolder and more confident the pundit’s predictions, the more likely they will attract attention. That translates into ratings, newspaper and magazine sales, page views, and ad revenues. Alas, the bolder and more confident the prediction … the more likely it will be wrong.

Confirmation Bias

Iffy media punditry is a minor problem at worst, but bad predictions in other fields can be catastrophic. As Silver explains, the 2008 economic collapse happened – in large part – because analysts at investment banks and ratings agencies made two very bad predictions. The analysts did not expect one family defaulting on a mortgage to signal any higher risk of default by other families, and they thought real estate prices would keep rising at roughly the same pace. Putting those two predictions together, the analysts thought that very few families would default, and that most families near default could easily refinance using their home’s rising equity as collateral.

In fact stagnant median wages and rising family debt were pushing the economy into recession, especially in some parts of the country, and in those areas housing prices were shaky. If people began losing jobs in a weakening economy, many more families would face default. If real estate prices stopped rising, those families would be unable to refinance. Both happened, and fancy mortgage-backed derivatives once rated as investment-grade AAA bonds turned into worthless paper.

This was not a case of “no one could have foreseen.” As Silver details, Google searches for “housing bubble” spiked in 2004 and 2005, when the bankers and ratings agency analysts were still insisting housing price increases reflected real rising value and not a speculative bubble. More people were also searching for jobs and debt relief agencies. And there was public data: several economists and journalists recognized and talked about the risks.

Why did the bankers and ratings agency analysts miss the signals? Silver attributes their failure to confirmation bias, our human tendency to sift through messy data for bits that confirm what we already believe. The more data we can access, and the more we are personally invested in a given conclusion, the more likely we can find nuggets that tell us we’re right.
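
Here’s a rough sketch of that tendency in action, using entirely made-up data rather than anything from Silver’s book: generate a pile of meaningless indicators, then go hunting for whichever one best “confirms” the outcome we already believe in.

```python
import random

# A sketch of "finding nuggets" in noise: 200 meaningless indicators, one
# outcome we already believe in, and a search for whichever indicator happens
# to line up with it best. All data here is randomly generated.
random.seed(7)

outcome = [random.choice([0, 1]) for _ in range(30)]
indicators = {f"indicator_{i}": [random.choice([0, 1]) for _ in range(30)]
              for i in range(200)}

def agreement(series, target):
    """Fraction of periods in which the indicator matches the outcome."""
    return sum(s == t for s, t in zip(series, target)) / len(target)

best_name, best_series = max(indicators.items(),
                             key=lambda item: agreement(item[1], outcome))
print(f"{best_name} 'confirms' our belief {agreement(best_series, outcome):.0%} of the time")
```

The winning indicator will typically agree with the outcome far more often than the 50% you’d expect from noise – not because it means anything, but because we went looking for it.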

Chaos Theory

Meteorologists also have mounds of data and, as Silver notes, weather forecasts have grown far more accurate over the past fifty years. Even so, forecasts are often wrong – or at least it feels as if a “30% chance of rain” means we get wet more often than three days in ten – especially if they predict more than a few days ahead.

Here the enemy is not confirmation bias, although Silver notes that local forecasters slightly exaggerate the chance of rain. (Most viewers are more upset if they expect sunshine and it rains than if they expect rain and it’s sunny.) Instead the enemy is chaos theory: even small changes in initial conditions can have huge effects.
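
For the curious, here’s a minimal sketch of how you might check a forecaster’s calibration. The records below are invented for illustration – real verification uses years of forecasts – but the idea is just to group days by the stated chance of rain and count how often it actually rained.

```python
from collections import defaultdict

# A toy calibration check. The (stated forecast %, did it rain?) records below
# are made up for illustration; a real check would use years of forecasts.
records = [
    (30, False), (30, False), (30, True), (30, False), (30, False),
    (30, False), (30, True), (30, False), (30, False), (30, False),
    (70, True), (70, False), (70, True), (70, True), (70, False),
]

buckets = defaultdict(lambda: [0, 0])        # stated % -> [rainy days, total days]
for stated, rained in records:
    buckets[stated][0] += rained             # True counts as 1
    buckets[stated][1] += 1

# A well-calibrated forecaster's "30%" days should be wet about 3 times in 10.
for stated in sorted(buckets):
    wet, total = buckets[stated]
    print(f"Forecast {stated}%: rained on {wet} of {total} days ({wet / total:.0%})")
```

If the “30%” days only come up wet two times in ten, that’s the wet bias Silver describes: the stated chance of rain runs a bit higher than what the weather delivers.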

You may have heard of this as the “butterfly effect,” and it’s a built-in problem for weather forecasters. Although technology and computers have improved, there are still gaps in measurements of temperature, relative humidity, barometric pressure, wind speeds, and most other variables that forecasters rely on, especially at the micro-level. Forecasters may have very accurate measurements at the nearest airport but, as George Carlin famously quipped, no one lives at the airport. Even a degree or two of temperature difference where you live, or a percentage point or two more or less humidity because of that nearby pond, can mean you get rain when the forecast says all clear.
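
You don’t need a supercomputer to watch this happen. A standard classroom stand-in – not a weather model, and not an example from the book – is the logistic map, a one-line update rule where two starting values that differ by a millionth soon stop resembling each other.

```python
# A textbook illustration of sensitivity to initial conditions (the logistic
# map): not a weather model, just a one-variable system with the same problem.
def logistic_map(x0, steps, r=4.0):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    x, trajectory = x0, [x0]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

a = logistic_map(0.400000, 40)
b = logistic_map(0.400001, 40)   # starting value off by one part in a million

for step in (0, 10, 20, 30, 40):
    gap = abs(a[step] - b[step])
    print(f"step {step:2d}: {a[step]:.4f} vs {b[step]:.4f}  (gap {gap:.4f})")
```

The two trajectories track each other for a while, then diverge completely – and that’s a system with a single variable, not the millions a weather model has to juggle.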

Complexity

Finally, there are inherently complex systems that not even experts fully understand. Silver uses the nation’s economy as an example, citing predictions by economists about changes in Gross Domestic Product (GDP), the total value of goods and services produced in a given year. Silver looked at economists’ predictions over the last 40 years and found the 90% confidence interval on their forecasts was ±3.5%.

That is, if a typical economist predicts GDP will grow by 2.5%, you can be 90% confident that GDP change will be in the range of -1% (recession) to +6% (economic boom). Gee, thanks.
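
To get a feel for just how little that interval pins down, here’s a back-of-the-envelope sketch. It assumes forecast errors are roughly bell-curve shaped – my simplifying assumption for illustration, not a claim from the book.

```python
from math import erf, sqrt

# Back-of-the-envelope: treat the +/-3.5 point, 90% interval as coming from a
# normal (bell-curve) error distribution. That normality is an assumption made
# here for illustration, not something the book asserts.
point_forecast = 2.5      # predicted GDP growth, in percent
half_width_90 = 3.5       # half-width of the 90% interval, in points
z_90 = 1.645              # z-score bracketing the middle 90% of a normal curve

sigma = half_width_90 / z_90   # implied standard deviation of the forecast error

def normal_cdf(x, mu, sd):
    """P(value <= x) for a normal distribution with mean mu and std dev sd."""
    return 0.5 * (1 + erf((x - mu) / (sd * sqrt(2))))

# Probability the economy actually shrinks despite a +2.5% point forecast.
p_shrink = normal_cdf(0.0, point_forecast, sigma)
print(f"Implied sigma: {sigma:.2f} points; chance GDP actually falls: {p_shrink:.0%}")
```

Under that assumption, a cheerful 2.5% growth forecast still leaves something like a one-in-eight chance that the economy actually shrinks.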

While some economists have better track records than others, and while the aggregate of many economists’ predictions is somewhat more reliable than almost any individual economist’s prediction, the basic problem is that a national economy is a very complex system and economists don’t yet fully understand the underlying causes, effects, and feedback loops. In fact, they’re not even certain that equations that worked well from 1950 to 1970 will work well from 2010 to 2030. Conditions have changed, and data that was very predictive back then may be less relevant today.

Worse, real-world complex systems are often impossible to test under controlled conditions, and the results of today’s predictions may not be testable for months, years, or decades. Other events will happen in the meantime, many of them difficult to predict, so it’s hard to know whether a prediction was right or wrong because of sound or unsound theory … or just good or bad luck.

If all that sounds gloomy – and it is – we’ll see tomorrow that it’s possible to make better predictions … if we use better methods.

+++++

Happy Thursday!