Nate Silver Is Not A Political Scientist

I’ve made this point before, most recently during the 2008 presidential campaign, when Silver’s forecast model, with its rapidly changing “win” probabilities, made it appear as if voters were altering their preferences on a weekly basis. This was nonsense, of course, as evidenced by the fact that the political science forecast models issued around Labor Day proved generally accurate.

But in light of Silver’s column yesterday, it bears repeating: he’s not a political scientist. He’s an economist by training, but he’s really a weathercaster when it comes to predicting political outcomes. That is, he’s very adept at doing the equivalent of climbing to the top of Mt. Worth (a local ski area, for those not familiar with God’s Green Mountains), looking west toward Lake Champlain to see what the prevailing winds are carrying toward us, and issuing a weather bulletin for tomorrow. Mind you, this isn’t necessarily a knock on Silver’s work – he’s a damn good weathercaster. In 2008, his day-before-election estimate came pretty close to nailing the Electoral College vote. More generally, at his best, he digs up intriguing data or uncovers interesting political patterns. At the same time, however, when it comes to his forecast models, he’s susceptible to the “Look Ma! No Hands!” approach, in which he suggests that the more numerous the variables in his model, the more effective it must be. In truth, as Sam Wang demonstrated in 2008, when his much simpler forecast model proved more accurate than Silver’s, parsimony can be a virtue when it comes to predictions.

Why do I bring this up now? Because, in the face of conflicting data, weathercasters can become unstrung if they are used to simply reporting the weather without possessing much of a grasp of basic meteorology. In yesterday’s column, which the more cynical among us (who, moi?) might interpret as a classic CYA move, Silver raises a number of reasons why current forecasts (read: his!) might prove hopelessly wrong. Now, I applaud all efforts to specify the confidence interval surrounding a forecast. But the lack of logic underlying Silver’s presentation reveals just how little theory goes into his predictions. For instance, he suggests the incumbent rule – which he has spent two years debunking – might actually come into play tomorrow. (The incumbent rule says, in effect, that in close races almost all undecideds break for the challenger.) Silver has provided data suggesting this rule didn’t apply in 2006 or 2008. You would think, therefore, that he doesn’t believe in the incumbent rule. Not so! He writes, “So, to cite the incumbent rule as a point of fact is wrong. As a theory, however — particularly one that applies to this election and not necessarily to others — perhaps it will turn out to have some legs.” Excuse me? Why, if there’s no factual basis for the incumbent rule, will it turn out to apply in this election?

The rest of the column rests on equally sketchy reasoning.  Silver concludes by writing, “What we know, however, is that polls can sometimes miss pretty badly in either direction. Often, this is attributed to voters having made up (or changed) their minds at the last minute — but it’s more likely that the polls were wrong all along. These are some reasons they could be wrong in a way that underestimates how well Republicans will do. There are also, of course, a lot of reasons they could be underestimating Democrats; we’ll cover these in a separate piece.”

Let me get this straight: it’s possible the polls are underestimating Republican support. Or they might be underestimating Democratic support. I think this means that if his forecast model proves incorrect, it’s because the polls “were wrong all along”. Really? Might it instead have something to do with his model? Come on, Silver – man up! As it is, you already take the easy way out by issuing a forecast the day before the election, in contrast to the political scientists who put their reputations on the line by Labor Day. Do you believe in your model or not?

The bottom line: if you want to know tomorrow’s weather, a weathercaster is good enough.  If you want to know what causes the weather, you might want to look elsewhere.

Comments

  1. Silver’s post today about the generic ballot question proves your point as well. He says, “Political scientists have formulas to translate the results of the generic ballot to an estimate of the number of seats that the Republicans will gain. I’m not much of a fan of these formulas…” He notes that these silly models predict a 53-seat pickup by Republicans. What a bogus result! His much more sophisticated model comes up with the much more logical and academic answer of…53. Silver does, however, have very pretty graphics, and I am intrigued to see just how right he will be when he literally breaks down every single congressional race. Then again, perhaps too much info may be a bad thing. It is one thing to predict a presidential election, but midterms seem to be a far trickier beast to tame.

  2. Mr. Dickinson goes his entire piece without acknowledging that Mr. Silver’s post is a “devil’s advocate” item which builds on a long string of previous items Mr. Silver has written showing an unusually wide range of potential outcomes in the current race. This essentially undermines the complaint that Mr. Silver shouldn’t invoke the incumbent rule. Honestly, couldn’t Mr. Dickinson analyze Mr. Silver’s linkage of variables or his high-probability results in individual races?

    I fear that Mr. Dickinson’s heading tells the real story here: Nate Silver Is Not A Political Scientist!! (Horrors!) Sometimes theorists feel emotional about the success of empiricists.

  3. Think of a spectrum in which political scientists’ formulas are at one end (true predictions), and a simple polling snapshot like mine is at the other end. Something like Silver’s model is some mixture of the two. It is my view that such a complex model might be splitting the difference – in a bad way. Let’s put it this way: empirically, what is the evidence that his model is any good at true prediction? If it were a real prediction, wouldn’t it stay the same all the time? At least it should fluctuate around the final outcome. I would love to see an empirical test of the approach.

    One thing I liked about my own Presidential EV snapshot in 2008 (available at my website) is that to all appearances, it looked as if it “liked” to sit near the final outcome, one that political science models predicted that year. There were events that knocked it away from that trajectory, notably the entrance of Sarah Palin onto the national political scene. But like a Weeble, it wobbled and then came back.

    Based on this storyline, it starts to emerge why one would want to keep predictions and snapshots separate. Doing so provides a way to tell the story. In this case the story is that in 2008, conditions heavily favored a Democratic president. In some sense the McCain campaign performed quite well by pushing the hurricane off course for a few moments in August.

    Sam Wang, Princeton University

  4. Craig, the problem is not that Silver is an empiricist. It’s that he isn’t a very good one. Basically he applied heavy numerical modeling at the right time, when the data density got high enough to make it interesting. I find his work to be most interesting for interpolating missing values, for instance in one of the Dakotas (North, I think) in the 2008 Presidential race. But his methods are a mess and not well supported.

    The tests one might want to apply (incumbent rule and so on) are interesting, but the difficulty is in designing the study well. It’s important to identify the boundary between what the data can and can’t tell you. For this reason the “prediction” that he makes is somewhat maddening – it isn’t one, but neither is it a snapshot.

    On the positive side, he has an extremely strong relationship with the data and writes entertainingly about it. But he’s not exactly making the most of the information. Many of the kludges in 2008 made no sense. Believe it or not, I think his math needs to be somewhat stronger – and be more carefully applied.

  5. As someone who has a little bit of robot-sensing training, I find the criticism of his basic approach a little silly. You don’t program robots to sense the world by just modeling it from first principles – you start with such a model and adjust your expectations as sensor data comes in, in a broadly Bayesian way. I find that Silver is doing something very similar. Perhaps his execution could be improved by some refinements, and some better predictors from first political science principles would be nice. But he doesn’t need to produce either a hard “prediction” or a “snapshot” – it’s an expectation of the election given the data available, and that seems just fine.
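
    A minimal sketch of the kind of Bayesian updating this comment describes, assuming a normal prior over a candidate’s vote share and normal poll errors; the numbers and the `update` helper are hypothetical and are not drawn from Silver’s actual model:

    ```python
    # Hypothetical illustration of Bayesian updating: start with a prior
    # "expectation" of a candidate's vote share (the first-principles model),
    # then revise it as each poll (the "sensor reading") arrives.

    def update(prior_mean, prior_var, poll_mean, poll_var):
        """Conjugate normal-normal update: combine the prior belief with one poll."""
        precision = 1.0 / prior_var + 1.0 / poll_var
        post_var = 1.0 / precision
        post_mean = post_var * (prior_mean / prior_var + poll_mean / poll_var)
        return post_mean, post_var

    # Prior from a made-up "fundamentals" forecast: 52% with a 3-point sd.
    mean, var = 0.52, 0.03 ** 2

    # Made-up polls: (reported share, sample size).
    for share, n in [(0.49, 800), (0.505, 1200), (0.51, 600)]:
        poll_var = share * (1 - share) / n   # simple binomial sampling variance
        mean, var = update(mean, var, share, poll_var)

    print(f"posterior estimate: {mean:.3f} +/- {var ** 0.5:.3f}")
    ```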

  6. Has anybody bothered to look at the impact of error when utilizing a meta-analysis of polls? I was always taught (and agree with) the concept that the more one combines studies (read here: polls) to get a summative statement, the more that error must be considered. By some notions, could not Silver’s fluctuations be explained by his inability to control (or even consider) error? I would like to hear/see discussion on the role of error in Silver’s work!
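
    One way to make the error question concrete, with entirely made-up numbers: an inverse-variance pooled average of several polls, comparing the sampling-only standard error with what happens if the polls also share a common systematic error. This is a generic meta-analysis illustration, not a description of Silver’s actual procedure:

    ```python
    # Pooling polls shrinks *sampling* error, but any error shared across polls
    # (e.g. a common likely-voter screen bias) does not average away, so the
    # naive pooled margin of error can overstate precision. All numbers invented.
    import math

    polls = [(0.48, 900), (0.50, 1100), (0.47, 750), (0.49, 1000)]  # (share, sample size)

    # Inverse-variance weighted average and its sampling-only standard error.
    weights = [n / (p * (1 - p)) for p, n in polls]
    pooled = sum(w * p for w, (p, _) in zip(weights, polls)) / sum(weights)
    se_sampling = math.sqrt(1.0 / sum(weights))

    # Now suppose every poll also carries a shared systematic error with sd = 2 points.
    shared_sd = 0.02
    se_total = math.sqrt(se_sampling ** 2 + shared_sd ** 2)

    print(f"pooled estimate      : {pooled:.3f}")
    print(f"sampling-only SE     : {se_sampling:.4f}")  # keeps shrinking as polls are added
    print(f"SE with shared error : {se_total:.4f}")     # floor set by the shared term
    ```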

  7. Nate Silver is not a political scientist; he is a statistical analyst specializing in time series analysis. The models he uses are not particularly complicated from a statistics/machine learning perspective, and he is transparent about his methodology. You absolutely cannot judge the validity of a statistical model based on a single sample (one election), except to reject those models for which that sample falls way outside their confidence intervals. What Nate has been trying to do is to underscore the degree of uncertainty that is a direct consequence of his models. Particularly good has been his discussion of how correlated errors in polling produce much greater uncertainty in results; in other words, the error distribution has “fat tails”, and this should be part of any predictive model (a small simulation of this point appears below).

    Sam asks, “If it were a real prediction, wouldn’t it stay the same all the time?” No, a real prediction should respond to the best information currently available, and this information can change rapidly and unpredictably. There’s no “oracle” that knows the right answer that you can converge to if you just had a powerful enough model.
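
    As a rough sketch of the correlated-errors point above, with invented numbers: when polls share a common error component, averaging many polls does little to shrink the error of that average, so the resulting uncertainty is much larger than independent-error arithmetic implies. This is a generic simulation, not Silver’s methodology:

    ```python
    # Compare the spread of the poll-average error when poll errors are
    # independent versus when they share a common component. All numbers invented.
    import random
    import statistics

    def poll_average_error_sd(n_polls=10, shared_sd=0.02, indep_sd=0.015,
                              trials=20_000, correlated=True):
        """Standard deviation of the average poll error across simulated elections."""
        avg_errors = []
        for _ in range(trials):
            shared = random.gauss(0, shared_sd) if correlated else 0.0
            errors = [shared + random.gauss(0, indep_sd) for _ in range(n_polls)]
            avg_errors.append(sum(errors) / n_polls)
        return statistics.stdev(avg_errors)

    print(f"independent errors only : {poll_average_error_sd(correlated=False):.4f}")
    print(f"with a shared component : {poll_average_error_sd(correlated=True):.4f}")
    ```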

  8. Speaking of extrapolating from a single data point, Mr. Silver went into the election with a prediction of high uncertainty and came out looking pretty good on that count. OTOH, his numerical prediction itself landed in much the same place as most everyone else’s. (“The Forecasts and the Outcome”, John Sides, Nov. 3, 2010, themonkeycage.org)

    Of course, the emotional difference between accepting a mixed approach and wanting a pure approach depends on whether you are an empiricist or a theorist in the first place. That is, an empiricist would say “Sure, let’s try a mixed approach and see if it works.” A theorist would not feel so free and easy.

  9. So, he ended up calling 54 of 63 House seats and 36 of 37 Governor’s races in 2010. That gives him some degree of credibility, no?

  10. This article is like Sports Illustrated complaining about Peyton Manning’s throwing style. Silver is successful, more successful than almost all other forecasters, and he has been consistently proven right in 2008, 2010, and 2012.

    So he’s not a political scientist? Great!

  11. Sailor – No one is quibbling with Nate’s predictions. For political scientists, this isn’t about style points at all. It’s about contributing to a body of knowledge to increase our collective understanding of what drives elections. That’s what political scientists do – but, in Nate’s defense, it’s not what he does. No matter – the political scientists’ forecast models by Linzer, Jackman, and Putnam all hit the final Electoral College vote squarely on the head. So they are both accurate and transparent!

  12. Matthew, if your assessment of Silver depends on whether or not he is a political scientist, I’d re-assess the stock you put in political scientists. Simply put, Silver has been more accurate, with more frequency, than any political scientist.

  13. Kevin – Simply put, you are wrong. With no disrespect to Nate, who does wonderful work, there were a number of political scientists who nailed the 2012 election and had it called long before Nate did. See Drew Linzer’s work at Votamatic, for instance. And when it came to congressional election forecasting, any number of political scientists have proved more accurate – again, with no intent to disparage Nate’s very fine work. Of course, the biggest difference, and the point of my post, is that political scientists pride themselves on transparency, so we can see how they construct their forecast models. That’s why I focus on their work – I can learn from them in a way that I cannot from Nate.
