Odds Are, You’re Reading The Odds Wrong

The Weather Channel forecasts a 30% chance of rain tomorrow – and it absolutely pours. Was the Weather Channel’s prediction wrong?

To quote the prophet Andre 3000, “you can plan a pretty picnic – but you can’t predict the weather.” Of course they got it wrong, weathermen are morons!

Based on his career batting average of .305, a fan predicts that there is a ~70% chance Mike Trout will fail to get a hit in an at-bat. Of course, Trout goes 9-for-13 the next weekend. Was the fan’s prediction wrong?

Let me explain something to you, poindexter. Baseball isn’t a math problem. Hitters have hot stretches every once in a while, but that .305 batting average is about right.

Nate Silver’s FiveThirtyEight model predicts there is, approximately, a 30% chance that Donald Trump will win the Presidency. Well… you know how it went down. Was Silver’s prediction wrong?

THE POLLS! BURN THE POLLS! YOU CAN’T TRUST THE NUMBERS!

Congratulations, hypothetical strawman – you suck at probability! Unfortunately, our imaginary friend is not alone. Too many of us fail to understand that any good prediction is tied to a probability, and that a single outcome doesn’t necessarily make a prediction right or wrong.

A probability-based prediction takes the relevant information available – things like atmospheric pressure readings, batting averages, or polling results – and uses that information to estimate the chance an event happens. That chance is a probability, usually expressed as a decimal or percentage.[1]

It may seem obvious, but people often fail to grasp that a high probability is not a guarantee. It’s easy to read 90% as a sure thing without realizing that, in the long run, 90% means an event will happen 9 out of 10 times. That 1 out of 10 not only still happens, but should be expected to happen – even if the prediction is perfectly calibrated. We don’t know that a prediction is wrong unless we see that, over many occurrences, the unlikely outcome is happening far more often than the projected odds can plausibly explain.[2]

If we predict Mike Trout should get a hit 30% of the time, a 9-for-13 weekend doesn’t mean the prediction is wrong – but a 900-for-1300 stretch means we swung and missed. Both stretches work out to the same hit rate; the difference is that luck can explain 13 at-bats, but not 1,300.
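
For the numerically curious, here’s a minimal sketch in Python (standard library only) of that intuition. It assumes every at-bat is an independent trial with a 30.5% hit probability (a simplification, since real at-bats aren’t that tidy) and asks how surprising each hot streak would be under that model:

```python
from math import exp, lgamma, log

def binomial_tail(hits, at_bats, p):
    """P(at least `hits` successes in `at_bats` independent trials,
    each succeeding with probability p). Computed in log space so the
    900-for-1300 case doesn't underflow."""
    total = 0.0
    for k in range(hits, at_bats + 1):
        log_pmf = (lgamma(at_bats + 1) - lgamma(k + 1) - lgamma(at_bats - k + 1)
                   + k * log(p) + (at_bats - k) * log(1 - p))
        total += exp(log_pmf)
    return total

P_HIT = 0.305  # Trout's career average, read as a per-at-bat hit probability

# A 9-for-13 weekend: rare for any one player (roughly half a percent), but
# across a whole league of players and weekends, streaks like this happen.
print(binomial_tail(9, 13, P_HIT))

# A 900-for-1300 stretch: so improbable that the .305 model itself must be
# wrong. Effectively zero.
print(binomial_tail(900, 1300, P_HIT))
```

Same hit rate, wildly different verdicts: sample size is the whole ballgame.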

You might be more guilty of this mistake than you think. Here’s a forecast pulled from the Weather Channel website for a three-day stretch in Minnesota:

[Forecast graphic: a 60% chance of rain Saturday, snow Sunday, and dry skies Monday, all at frigid temperatures]

What does that forecast tell you (besides not to live in Minnesota)? If you read that “it’s going to rain on Saturday, snow on Sunday, and be dry on Monday,” you’re missing the critical element of uncertainty. A 60% chance of rain on Saturday means there is a 40% chance of dry skies to go with the frigid temperature. Minnesotans know they can live somewhere else, right?

Let’s drive that point home – a 60% chance of rain means that rain isn’t a sure thing, and one dry day doesn’t make the prediction wrong. In fact, we should expect the forecast to be “wrong” four out of every ten days – almost half the time! To know whether the prediction is accurate, we need to see whether, over the long run, it rains on about 6 of every 10 days where the forecaster called for a 60% chance of rain.
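
If you wanted to run that check yourself, the bookkeeping is simple. Here’s a hypothetical sketch in Python: the twenty-day forecast log is invented for illustration, but the method is the long-run comparison described above, grouping days by the stated probability and comparing it to how often rain actually fell:

```python
from collections import defaultdict

# Hypothetical forecast log: (stated chance of rain, whether it rained).
forecast_log = [
    (0.6, True), (0.6, False), (0.6, True), (0.6, True), (0.6, False),
    (0.6, True), (0.6, True), (0.6, False), (0.6, False), (0.6, True),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False), (0.9, True),
]

# Group outcomes by the probability the forecaster stated.
buckets = defaultdict(list)
for stated, rained in forecast_log:
    buckets[stated].append(rained)

# A well-calibrated forecaster's stated chances match observed frequencies.
for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> rained {observed:.0%} of {len(outcomes)} days")
```

A real audit would need years of forecasts, not twenty days; with only a handful of days per bucket, even a perfect forecaster will look a little off.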

Who’s nerdy enough to do that work? Meet Nate Silver! He invented PECOTA (a complex system which crunches baseball stats to project future player performance) and founded FiveThirtyEight (a data journalism website which analyzes polls and statistics across politics, sports, economics, and popular culture). Silver is to stats nerds what Keith Richards is to rock stars or Michael Jordan is to basketball players – he’s dork royalty.

In his 2012 book The Signal and the Noise, Silver compared the predicted chance of rain from three forecasters[3] with the actual occurrence of rain. He found that when reputable sources like the Weather Channel or the National Weather Service say there is a 60% chance of rain… wait for it… it rains about 6 out of every 10 times.[4]

There’s a reason we think weathermen are morons, but it’s not that they can’t predict the weather – it’s that people don’t understand predictions.

If misunderstanding of probability is common around weather forecasts, it’s downright rampant in the land of political punditry. A Politico article offering a postmortem of the 2016 election opened by stating – bluntly – that “Everybody was wrong.” The authors explain that “headed into Election Day, polling evangelist Nate Silver’s 538 website put Clinton’s odds at winning the White House at about 72 percent.” Seventy-two percent! They blew it!

Just like everyday people reading uncertain weather forecasts as guarantees, pundits shouting that polls and forecasters “were wrong” often misunderstand how probability-based predictions work. Silver’s model gave Clinton a 72% chance of winning, but it also gave Donald Trump a 28% chance of victory. That’s better than one in four! If we go off batting average, FiveThirtyEight gave Trump a better chance of winning than an average MLB hitter had of getting a hit in a randomly picked at-bat in 2017.[5]
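
If 28% still feels like “basically zero,” a quick Monte Carlo sketch in Python makes the point. The 28% figure is the FiveThirtyEight number quoted above; everything else here is illustrative:

```python
import random

random.seed(538)  # fixed seed so the run is reproducible

TRIALS = 100_000
TRUMP_CHANCE = 0.28  # FiveThirtyEight's rough pre-election probability

# Each trial "runs" the election once; the underdog wins whenever the
# random draw lands below his probability of victory.
upsets = sum(random.random() < TRUMP_CHANCE for _ in range(TRIALS))
print(f"Underdog won {upsets / TRIALS:.1%} of {TRIALS:,} simulated elections")
```

Run it and the upset shows up in roughly 28% of the simulated worlds, better than one in every four. Not exactly a thunderbolt from the blue.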

It’s completely fair to criticize forecasters who portrayed Clinton as a lock, and it’s undeniably true that many so-called experts misinterpreted data to give misleading predictions. At the same time, Trump’s election doesn’t inherently mean that FiveThirtyEight, Nate Silver, or “the polls” were wrong. An underdog coming out on top doesn’t necessarily mean that the odds were off. Unlikely doesn’t mean impossible.

No one carries the weight of our misunderstanding more heavily than Silver himself – and Nate is losing it. As chaos swirled around the 2016 election, Silver responded to a critical article that failed to grasp his methods by delicately stating that “This article is so fucking idiotic and irresponsible.” Subtle!

Things have only gone downhill from there. After the recent special election in Pennsylvania, Silver ranted “If you think the polls were off in this race, you’re a fucking idiot, full stop… I’m just really, really tired of people substituting saying ‘the polls’ when they really mean idiotic media narratives based on cherry-picked misinterpretations of the polls.” Not the measured response one would expect from the poster boy for data-driven analysis, large sample sizes, and statistics!

Can you blame the poor guy? Wouldn’t you be infuriated if reporters, talking heads, and Twitter trolls – people who don’t understand the basics of how prediction and probability work – misrepresented, maligned, or distorted your life’s work? Nate Silver is one bad day away from stalking around Gotham like Two-Face, a revolver in one hand and the latest polling numbers in the other.

It doesn’t have to be like this, guys. We can save Nate Silver before it’s too late.

There’s a real chance that we can do better – it’s just not a lock.

[1] If you want to dig a little deeper, this site created by an undergrad at Brown University does a great job of walking through some of the basic ideas behind probability. It also has many pretty colors and fun graphs you can play with. Enjoy!
[2] Statisticians have a host of techniques for evaluating whether, given a predicted probability, an event is happening more often than random chance can plausibly explain. Answering that question is the heart of most statistical analysis.
[3] Local meteorologists, the Weather Channel, and the National Weather Service.
[4] Local meteorologists were far less accurate – but we probably should have seen that coming.
[5] MLB’s league-wide batting average was .255 in 2017, meaning the average hitter got a hit in 25.5% of at-bats.
