By James Janega and Tom Skilling
8:10 AM AKST, November 9, 2012
Four years before the 2012 election, just after the bruising primary battle between Hillary Clinton and Barack Obama, I met Nate Silver, who explained how he had been reasonably certain that Obama would pass Clinton on Super Tuesday -- and by how much.
The reason for the Chicago Tribune to interview Silver in 2008 was that he -- a University of Chicago economics student then still living in Chicago -- had predicted Obama would run away with that election over John McCain.
His qualifications at the time rested on having predicted the Chicago White Sox's 2005 World Series win essentially before the team took the field that season. His other interest was online gambling. But predicting political outcomes was why he was becoming most widely known.
He was right during the primaries, right in the 2008 general election, and right again this week. Now he's a social media meme as well as a news celebrity. Even so, this week's coverage of Silver -- as stats celebrity or wise man in the wilderness -- doesn't do justice to what he does. It's not even an "average of polls," as a few call it, but rather a weighted average of forecasts from those polls -- which makes it both harder and easier to do than most people think.
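In miniature, a weighted poll average looks something like this. This is a hypothetical sketch: the pollsters, numbers, and weights below are invented, and Silver's actual weighting formula is far more involved and not described in this article.

```python
# Hypothetical sketch only -- Silver's real model weights polls by factors
# like sample size, recency, and a pollster's historical accuracy.
# Every number here is made up for illustration.
polls = [
    {"pollster": "A", "obama_pct": 50.2, "weight": 1.4},
    {"pollster": "B", "obama_pct": 48.9, "weight": 0.8},
    {"pollster": "C", "obama_pct": 51.0, "weight": 1.1},
]

# A weighted average: each poll contributes in proportion to its weight,
# rather than every poll counting equally.
weighted_avg = (
    sum(p["obama_pct"] * p["weight"] for p in polls)
    / sum(p["weight"] for p in polls)
)
```

The point of the weights is that a plain average would treat a stale, small-sample poll the same as a fresh, large one; the weighted version lets the more trustworthy inputs pull harder on the result.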
Easier in concept, anyhow. In fact, the methodology is so common you encounter it every time you check a weather report. And as it happens, we in Chicago are pretty familiar with another guy who does that really well:
WGN Chief Meteorologist Tom Skilling.
"What a timely subject, in light of the just completed election," Skilling said in an email.
Each day, Tom looks at as many as 40 different model forecasts from 12 different weather forecast models to prepare his predictions. (This is what other meteorological forecasters do, too.) Several of those models are themselves "ensembles," or averages of multiple forecasts, he says. Those, in turn, are the products of averaging as many as 50 different runs of the same model. Forecasters vary the assumptions on each run to slightly alter weather features, their intensities, and when they appear in the forecast cycle.
Substitute "polls of likely voters" for "weather models," and you begin to get some idea of how Silver does what he does. (And if you were impressed by Silver's results, you'll also have a deeper appreciation for the work involved producing the daily weather report.)
"This ensembling or forecast-averaging technique has led to stunning forecast improvements, like halving the error on predicted hurricane tracks made five days ahead of time and more accurately pinpointing the move of huge weather systems, like the Groundhog Day blizzard of 2011 that stranded motorists on Lake Shore Drive," Skilling said. "We've had $16 billion in U.S. weather disasters over the past year-and-a-half, and advance predictions for many details in each of those calamitous weather events came a week in advance. That’s not happened before in operational meteorology."
"Our ability to look at and average across a range of supercomputer forecast simulations has made that kind of performance possible," Skilling said.
As the Romney campaign can now attest, ensemble forecasting is pretty reliable. Skilling points to studies that back up the methodology, too.
"There have been studies which definitively indicate that weather forecasts which average -- or, as we say in the weather profession, 'ensemble' -- a range of model forecasts actually produce more accurate predictions. There are a couple of reasons this is true, and I'm sure it's true in political polling as well," Tom said.
UNLIKELY OUTLIERS STICK OUT
"For one, by looking at a range of model forecasts, it becomes apparent which of these predictions are 'outliers' -- runs of the model that generate forecasts quite different from the others, despite employing precisely the same data. You have better odds of generating an accurate forecast if you know that a general scenario for how the atmosphere is likely to evolve is emerging from a whole series of models. That makes you more comfortable rejecting the predictions which may be heading off on 'forecast tangents' that are unlikely to verify.
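The outlier screening Skilling describes can be sketched in a few lines. This is a toy illustration with invented numbers and an invented threshold, not the procedure any forecasting operation actually uses:

```python
import statistics

# Toy sketch: six hypothetical model runs predicting the same quantity.
# One run (41.0) has wandered off on a "forecast tangent."
runs = [28.0, 29.5, 27.8, 28.6, 41.0, 28.2]

median = statistics.median(runs)
spread = statistics.stdev(runs)

# Keep only runs within two standard deviations of the ensemble median;
# the cutoff of 2 is an arbitrary choice for this illustration.
kept = [r for r in runs if abs(r - median) <= 2 * spread]

# Average the surviving runs into a consensus forecast.
consensus = statistics.mean(kept)
```

Here the lone stray run gets dropped before averaging, so one wild simulation can't drag the whole consensus with it -- the same logic that lets an analyst discount a single poll that disagrees sharply with every other poll of the same race.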
IMPERFECT INPUTS CREATE UNCERTAINTY
"Computer models of the atmosphere are incredibly complex. In fact, they’re considered the most complicated mathematical simulations of natural systems performed these days on supercomputers.
"We measure the atmosphere imperfectly, which is one reason why predictions of developing weather systems can vary," Tom said. To generate forecasts of how weather events are likely to proceed, Skilling says, assumptions are made from the two million observations that go into each atmospheric forecast model run these days. Those two million observations describe the starting line for the projected atmospheric race -- i.e., where weather systems are located and how intense they are as the forecast begins. From there, fluid dynamics equations describe how these systems are likely to move and evolve. But bad inputs? How can you expect the models to take you to the right destination if you start in the wrong spot?
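That sensitivity to imperfect starting measurements is easy to demonstrate with a toy model. The logistic map below is a stand-in chosen purely for illustration -- it is not a real atmospheric model -- but it shows why forecasters run the same model many times from slightly perturbed initial states and study the spread:

```python
import random

random.seed(42)  # make this illustration reproducible

def toy_model(x0, steps=20):
    """A chaotic stand-in for atmospheric dynamics (not a real weather model)."""
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1 - x)  # logistic map: highly sensitive to initial state
    return x

# The "true" starting state is measured imperfectly, so build an ensemble of
# 50 runs, each starting from a slightly perturbed initial condition.
base = 0.5123
ensemble = [toy_model(base + random.uniform(-0.001, 0.001)) for _ in range(50)]

spread = max(ensemble) - min(ensemble)       # how much the runs disagree
mean = sum(ensemble) / len(ensemble)         # the ensemble-average forecast
```

Even though every run starts within a thousandth of the same point, the outcomes fan out widely after a few steps -- which is exactly why the ensemble average, and the size of the spread around it, carry more information than any single run.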
Tom's email goes into even more weather-specific detail, which I'll save until the next big winter storm. But the comparison between averaging complex atmospheric models and doing something similar with public opinion polls is instructive -- and will probably be part of our lives in lots of ways as we all march into the future.
As Tom says, this "is very timely and has relevance far beyond the world of weather forecasting. Ensembles of poll results are such terrific examples of another way in which the ensembling of individual forecasts -- in the case of polling, forecasts of voter behavior -- has led to more accurate outcomes."
-- James Janega
Join Trib Nation on Facebook for more of the how and why of Tribune journalism.