I Can Read Faces! My Wager On The Election Results

On the eve of Election Day, I am a happy man.  Why is this, you ask?  Because the fundamentals-based forecasts issued by almost a dozen political scientists before Labor Day are – in the aggregate – looking remarkably prescient. The average prediction of those eleven models has Obama winning 50.3% of the two-party vote, while the median gives him 50.6%.  So far, these forecasts seem to be holding up quite well, with both the RealClearPolitics and Pollster.com national polling averages showing this race, as measured by the popular vote, as essentially a dead heat one day before the election.  Score one for political science!

Of course, that doesn’t tell us who is going to win, which is what most of you want to know.  Fear not!  We need only consult the state-based forecasts issued by Drew Linzer, Sam Wang, Simon Jackman, and Tom Holbrook and Jay DeSart.  (There are others out there, but these are the ones whose methods are most transparent, and with which I am most familiar.  If you want a bit of background on their methods, see this article on “the rise of the quants”.)  Although these forecast models differ in some of the particulars (whether to compensate for a pollster’s “house effect”, how to weight the state polls, the relative weight placed on polls of likely vs. registered voters, etc.), they all operate on the same assumption: that state-based polls, taken in the aggregate, provide a very accurate indicator of who is going to win that state, particularly this late in the game.  That, in turn, makes it relatively easy to put together an Electoral College forecast.  All of them have done so, and as I’ve discussed in an earlier post, they all see it as more likely that Obama wins the Electoral College vote.  This doesn’t mean they believe Romney can’t win – they just see it as less probable than an Obama victory.
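
For readers curious about the mechanics, here is a minimal sketch of how an aggregation-based Electoral College forecast can work. The state polling averages, electoral vote counts, "safe state" totals and error assumption below are illustrative placeholders of my own, not the inputs any of the forecasters named above actually uses.

```python
import random

# Hypothetical Obama share of the two-party vote in a few swing states,
# taken here to be the average of each state's recent polls (made-up numbers).
state_poll_avg = {"OH": 0.515, "FL": 0.498, "VA": 0.505, "CO": 0.507, "NH": 0.520}
state_ev = {"OH": 18, "FL": 29, "VA": 13, "CO": 9, "NH": 4}
safe_ev = {"Obama": 237, "Romney": 228}   # placeholder totals for the uncontested states

POLL_ERROR_SD = 0.02      # assumed uncertainty around each state's polling average
SIMULATIONS = 10_000

obama_wins = 0
for _ in range(SIMULATIONS):
    obama_ev = safe_ev["Obama"]
    for state, share in state_poll_avg.items():
        # Draw a plausible Election Day result around the state's polling average.
        if random.gauss(share, POLL_ERROR_SD) > 0.5:
            obama_ev += state_ev[state]
    if obama_ev >= 270:
        obama_wins += 1

print(f"Obama wins the Electoral College in {obama_wins / SIMULATIONS:.0%} of simulations")
```

Each of the forecasters above does something far more careful with pollster house effects, sample sizes and correlated state errors, but the basic logic of converting state polling averages into state win probabilities, and those into an Electoral College distribution, is the same.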

The process by which these political scientists (Wang is actually a neuroscientist, but he gets honorary membership) put together their predictions is in stark contrast to the methods used by the traditional pundits.  Consider this projection by Jay Cost, a very smart analyst who writes for the Weekly Standard.  Cost believes Romney will win this election, and in explaining why, he took a shot at political science forecast models:  “Both political science and the political polls too often imply a scientific precision that I no longer think actually exists in American politics. I have slowly learned that politics is a lot more art than science than I once believed. Accordingly, what follows is a prediction based on my interpretation of the lay of the land. I know others see it differently–and they could very well be right, and I could be wrong. I think Mitt Romney is likely to win next Tuesday.” As evidence for his prediction, Cost cites two points: Romney is leading among independents, and most voters think he will do a better job handling the economy.

Cost is not alone in thinking that Romney is going to win – there are some very smart people who have vast experience in electoral politics who agree with him.  Here is a list of the most prominent political pundits, and their predictions.   However, as I scan the list, I can’t help but notice that the bulk of people who agree with Cost in predicting a Romney victory are conservatives, including Karl Rove, Glenn Beck, Ari Fleischer, Jay Cost, Peggy Noonan and Dick Morris. On the other hand, many of the best-known liberal pundits – Markos Moulitsas, Jamelle Bouie, Jennifer Granholm, Donna Brazile and Cokie Roberts – think Obama will win.  Now, all of them claim to be looking at the same data – the same polls, the same candidate strategies, the same advertisements, etc.   How, then, can we explain why they end up with dramatically different predictions?  More generally, why do liberals think Obama will win, and conservatives think Romney will?

The answer, I think, is that people – liberals, conservatives and everyone else – are very good at seeing patterns in data that suggest outcomes that conform to their preferences.  Mind you – these aren’t implausible patterns – indeed, what makes them so seductive is that they are very plausible.  Cost, for instance, is correct that most polls indicate that Romney is viewed as better able to handle the economy.  But notice what he writes: “Poll after poll, I generally see the same thing. Romney has an edge on the economy. That includes most of the state polls.”  At the same time, however, he evidently is discounting those same state polls that, looked at in the aggregate by political scientists, indicate that Obama is more likely to win the Electoral College.  So, the question becomes: why value what the state polls say in one area – Romney’s handling of the economy – while discounting their overall projections that say Romney is more likely to lose?

The worry I have when analysts “interpret” data is that it leaves room for personal preferences to sneak in.  Taken to an extreme, it leads to far-fetched inferences like this one tweeted earlier today by Peggy Noonan: “I suspect both Romney and Obama have a sense of what’s coming, and it’s part of why Romney looks so peaceful and Obama so roiled.”  Really?  She can see the election outcome by “reading” their faces?  This presumes that both Obama and Romney know “what is coming” – highly unlikely in a 50/50 race – and that she has some method – a sixth sense? – for inferring when facial expressions reveal a person’s inner thoughts.  Maybe she can see dead people too.

Ok. That was a cheap shot. Let me be clear. I think Noonan is a very smart person.  Her memoir of her years as a Reagan speechwriter is one of the best accounts of life in the White House that I’ve ever read.  But I don’t believe she can read faces.

And that leads me to my broader point.  When I consider this latest election cycle, the most important development in how it has been covered, in my view, is the growing prominence of analysts whose methods are both more rigorous and more data-driven than what we are used to seeing from traditional “pundits”.  I think we are witnessing a sea change in political analysis, one that will leave an indelible mark on future coverage of presidential elections.  Increasingly, the traditional seat-of-the-pants, intuition-based method of analyzing elections is giving way to a less impressionistic mode of analysis. To be sure, these new methods are not infallible by any means. But they are a step forward. And political scientists are leading that movement.

To be fair to Cost, and Noonan, and all the rest of the “traditional” pundits, and the new ones too – they at least had the courage to put their professional reputations on the line and make a prediction.  So I am going to do the same – tomorrow morning.  I can tell you now – my prediction will be entirely atheoretical, and will be based on the latest state-based polling averages.  But to make it interesting, I will make a wager:  if my prediction regarding the winner tomorrow is incorrect, I will pay the bar bill (alcohol only) for everyone who attends the Election Night at the Grille, which Bert Johnson and I will be hosting.  So keep your receipts!  The festivities start at 7 p.m. and, as always, I’ll be live blogging the election returns while keeping the crowd at the Karl Rove Crossroads cafe – er, the Grille – entertained as well.  For those in the area, I hope to see you tomorrow night.  For the rest of you, please join me at this site.  We are hoping to break our all-time record for participation.

I’ll see you tomorrow.

Dickinson and Silver, Take Two

Whether he did so out of frustration or some other emotion, I want to thank Nate Silver for taking time from his busy schedule to respond (twice!) to my critique of poll-based forecasting models similar to his.  This type of exchange is common for academics, and I always find it helpful in clarifying my understanding of others’ work.  Based on the email and twitter feedback, I think that’s been the case here – my readers (and I hope Nate’s too!) have benefitted by Nate’s willingness to peel back the cover – at least a little! – on the box containing his forecast model, and I urge him to take up the recommendations from others to unveil the full model.  That would go a long way to answering some of the criticisms raised here and elsewhere.

Because of the interest in this topic, I want to take an additional minute here to respond to a few of the specific points Nate made in his comments to my previous post, as well as try to answer others’ comments. As I hope you’ll see, I think we are not actually too far apart in our understanding of what makes for a useful forecast model, at least in principle.  The differences have more to do with the purpose for, and the transparency with which, these forecast models are constructed.  As you will see, much of what passes for disagreement here is because political scientists are used to examining the details of others’ work, and putting it to the test.  That’s how the discipline advances.

To begin, Nate observes, “As a discipline, political science has done fairly poorly at prediction (see Tetlock for more, or the poor out-of-sample forecasting performance of the ‘fundamentals based’ presidential forecasting models.)”  There is a degree of truth here, but as several of my professional colleagues have pointed out, Nate’s blanket indictment ignores the fact that some forecast models perform better than others.  A few perform quite well, in fact. More importantly, however, the way to improve an underperforming forecast model is by making the theory better – not by jettisoning theory altogether.

And this brings me to Nate’s initial point in his last comment: “For instance, I find the whole distinction between theory/explanation and forecasting/prediction to be extremely problematic.”  I’m not quite sure what he means by “problematic”, but this gets to the heart of what political scientists do: we are all about theory and explanation.  Anyone can fit a regression based on a few variables to a series of past election results and call it a forecast model. (Indeed, this is the very critique Nate makes of some political science forecast models!) But for most political scientists, this is a very unsatisfying exercise, and not the purpose for constructing these forecast models in the first place.  Yes, we want to predict the outcome of the election correctly (and most of the best political scientists’ models do that quite consistently, contrary to what Silver’s comment implies), but prediction is best seen as a means for testing how well we understand what caused a particular election outcome.  And we often learn more when it turns out that our forecast model misses the mark, as they did for most scholars in the 2000 presidential  election, and again in the 2010 congressional midterms, when almost every political science forecast model of which I’m aware underestimated the Republican House seat gain (as did Nate’s model).  Those misses make us go back to examine the assumptions built into our forecast models and ask, “What went wrong? What did we miss? Is this an idiosyncratic event, or does it suggest deeper flaws in the underlying model?”

The key point here is you have to have a theory with which to start.  Now, if I’m following Nate correctly, he does start, at least implicitly, with a baseline structural forecast very similar to what political scientists use, so presumably he constructed that according to some notion of how elections work.  However, so far as I know, Nate has never specified the parameters associated with that baseline, nor the basis on which it was constructed. (For instance, on what prior elections, if any, did he test the model?) It is one thing to acknowledge that the fundamentals matter.  It is another to show how you think they matter, and to what degree. This lack of transparency (political scientists are big on transparency!) is problematic for a couple of reasons.  First, it makes it difficult to assess the uncertainty associated with his weekly forecast updates.  Let me be clear (since a couple of commenters raised this issue), I have no principled objection to updating forecast projections based on new information. (Drew Linzer does something similar in this paper, but in a more transparent and theoretically grounded manner.) But I’d like to be confident that these updates are meaningful, given a model’s level of precision.  As of now, it’s hard to determine that looking at Nate’s model.

Second, and more problematic for me, is the point I raised in my previous post.  If I understand Nate correctly, he updates his model by increasingly relying on polling data, until by Election Day his projection is based almost entirely on polls.  If your goal is simply to call the election correctly, there’s nothing wrong with this.  But I’m not sure how abandoning the initial structural model advances our theoretical understanding of election dynamics.  One could, of course, go back and adjust the baseline structural model according to the latest election results, but if it is not grounded in some understanding of election dynamics, this seems rather ad hoc.  Again, it may be that I’m not being fair to Nate with this critique – but it’s hard to tell without seeing his model in full.
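
To make concrete what such an updating rule might look like, here is a minimal sketch of a forecast that blends a fundamentals-based baseline with a polling average, shifting weight toward the polls as Election Day approaches. To be clear, this is my own illustration of the general idea, with a made-up weighting schedule and made-up numbers; it is not Silver's actual model, which has never been published in full.

```python
def blended_forecast(structural_share: float, poll_share: float,
                     days_until_election: int, campaign_length: int = 150) -> float:
    """Weighted average of a structural (fundamentals) prediction and a poll average.

    Early in the campaign the structural baseline dominates; by Election Day
    (days_until_election == 0) the forecast rests entirely on the polls.
    """
    poll_weight = 1 - min(days_until_election, campaign_length) / campaign_length
    return poll_weight * poll_share + (1 - poll_weight) * structural_share

# Example: a fundamentals model predicting 50.3% for the incumbent, polls at 48.5%.
for days in (150, 90, 30, 0):
    print(days, round(blended_forecast(0.503, 0.485, days), 4))
```

The point of the sketch is simply that, under a schedule like this, the structural component contributes essentially nothing by the end, which is exactly the property I am questioning from a theory-testing standpoint.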

Lest I sound too critical of Nate’s approach, let me point out that his concluding statement in his last comment points, at least in principle, in the direction of common ground: “Essentially, the model uses these propositions as Bayesian priors. It starts out ‘believing’ in them, but concedes that the theory is probably wrong if, by the time we’ve gotten to Election Day, the polls and the theory are out of line.”   In practice, however, it seems to me that by Election Day Nate has pretty much conceded that the theory is wrong, or at least not very useful.  That’s fine for forecasting purposes, but not as good for what we as political scientists are trying to do, which is to understand why elections in general turn out as they do.  Even Linzer’s Bayesian forecast model, which is updated based on the latest polling, retains its structural component up through election day, at least in those states with minimal polling data (if I’m reading Drew’s paper correctly).  And, as I noted in my previous post, most one-shot structural models assume that as we approach Election Day, opinion polls will move closer to our model’s prediction. There will always be some error, of course, but that’s how we test the model.

(Drew’s work reminds me that one advantage scholars have today, and a reason why Bayesian-based forecast models can be so much more accurate than more traditional one-shot structural models, is the proliferation of state-based polling. Two decades ago I doubt political scientists could engage in the type of Bayesian updating typical of more recent models simply because there wasn’t a lot of polling data available.  I’ll spend a lot of time during the next few months dissecting the various flaws in the polls, but, used properly, they are really very useful for predicting election outcomes.)

I have other quibbles. For example, I wish he would address Hibbs’ argument that adding attitudinal data to forecast models isn’t theoretically justified. And if you do add them, how are they incorporated into the initial baseline model – what are the underlying assumptions? And I could also make a defense of parsimony when it comes to constructing models. But, rather than repeat myself, I’ll leave it here for now and let others weigh in.

Pay No Attention To Those Polls (Or To Forecast Models Based On Them)

Brace yourselves. With the two main party nominees established, we are now entering the poll-driven media phase of presidential electoral politics.  For the next several months, media coverage will be dominated by analyses of trial heat polls pitting the two candidates head-to-head.  Journalists will use this “hard news” to paint a picture of the campaign on an almost daily basis  – who is up, who is not, which campaign frame is working, how the latest gaffe has hurt (or has not hurt) a particular candidate.  Pundits specializing in election forecasting, like the New York Times’ Nate Silver, meanwhile, will use these polls to create weekly forecasts that purport to tell us the probability that Obama or Romney will emerge victorious in November.

Be forewarned: because polls are so volatile this early in the campaign, it may appear that, consistent with journalists’ coverage, candidates’ fortunes are changing on a weekly, if not daily, basis in response to gaffes, debates, candidate messaging or other highly idiosyncratic factors.  And that, in turn, will affect the win probabilities estimated by pundits like Silver.  Every week, Silver and others will issue a report suggesting that Obama’s chances of winning have dipped by .6%, or increased by that margin, or some such figure based in part on the latest polling data.

Pay no attention to these probability assessments. In contrast to what Silver and others may suggest, Obama’s and Romney’s chances of winning are not fluctuating on an almost daily or weekly basis.  Instead, if the past is any predictor, by Labor Day, or the traditional start of the general election campaign, their odds of winning will be relatively fixed, barring a major campaign disaster or significant exogenous shock to the political system.

This is not to say, however, that polls will remain stable after Labor Day.  Instead, you are likely to see some fluctuations in trial heat polls throughout the fall months, although they will eventually converge so that by the eve of Election Day, the polls will provide an accurate indication of the election results.  At that point, of course, the forecast models based on polls, like Silver’s, will also prove accurate.  Prior to that, however, you ought not put much stock in what the polls are telling us, nor in any forecast model that incorporates them.

Indeed, many (but not all) political science forecast models eschew the use of public opinion polls altogether.  The reason is that they don’t provide any additional information to help us understand why voters decide as they do. As Doug Hibbs, whose “Bread and Peace” model is one of the more accurate predictors of presidential elections, writes, “Attitudinal-opinion poll variables are themselves affected by objective fundamentals, and consequently they supply no insight into the root causes of voting behavior, even though they may provide good predictions of election results.”  In other words, at some point polls will prove useful for telling us who will win the election, but they don’t tell us why.  And that is really what matters to political scientists, if not to pundits like Silver.

The why, of course, as longtime readers will know by heart, is rooted in the election fundamentals that determine how most people vote.  Those fundamentals include the state of the economy, whether the nation is at war, how long a particular party has been in power, the relative position of the two candidates on the ideological spectrum, and the underlying partisan preferences of voters going into the election.  Most of these factors are in place by Labor Day, and by constructing measures for them, political scientists can produce a reasonably reliable forecast of who will win the popular vote come November.  More sophisticated analyses will also make an Electoral College projection, although this is subject to a bit more uncertainty.
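
To give a sense of what a fundamentals-only forecast looks like in practice, here is a minimal sketch: regress the incumbent party's share of the two-party vote on a handful of pre-Labor Day indicators measured over past elections, then plug in the current year's values. The variable choices echo the list above, but the data and coefficients here are purely illustrative placeholders, not real election results or any published model's estimates.

```python
import numpy as np

# Columns: first-half GDP growth (%), incumbent-party terms in office,
# and the sitting president's net approval (%). Rows are hypothetical past elections.
X = np.array([
    [3.1, 1,  15],
    [0.8, 2, -10],
    [2.4, 1,   5],
    [-0.5, 2, -20],
    [4.0, 1,  25],
    [1.6, 3,  -5],
])
y = np.array([53.4, 46.5, 51.2, 44.8, 55.0, 48.9])  # incumbent-party two-party vote share

# Ordinary least squares with an intercept, fit to the "past" elections.
X_design = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Forecast for a new election year with hypothetical fundamentals.
new_year = np.array([1.0, 1.8, 1, 2])   # intercept, GDP growth, terms, net approval
print("Predicted incumbent-party vote share:", round(float(new_year @ coefs), 1))
```

The real published models differ in which fundamentals they include and how they measure them, but they share this structure: a small number of pre-campaign variables, a fit to past elections, and a point prediction with an honest error band around it.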

But if these fundamentals are in place, why do the polls vary so much?  Gary King and Andrew Gelman addressed this in an article they published a couple of decades ago, but whose findings, I think, still hold today.  Simply put, it is because voters are responding to the pollsters’ questions without having fully considered the candidates in terms of these fundamentals. And this is why, despite my claim that elections are driven by fundamentals that are largely in place by Labor Day, campaigns still matter. However, they don’t matter in the ways that journalists would have us believe: voters aren’t changing their minds in reaction to the latest gaffe, or debate results, or campaign ad.  Instead, campaigns matter because they inform voters about the fundamentals in ways that allow them to judge which candidate, based on his ideology and issue stance, better addresses the voter’s interests.  Early in the campaign, however, most potential voters simply aren’t informed regarding either candidate positions or the fundamentals more generally, so they respond to surveys on the basis of incomplete information that is often colored by media coverage.  But eventually, as they begin to focus on the race itself, this media “noise” becomes much less important, and polls will increasingly reflect voters’  “true” preferences, based on the fundamentals. And that is why Silver’s model, eventually, will prove accurate, even though it probably isn’t telling us much about the two candidates’ relative chances today, or during the next several months.

As political scientists, then, we simply need to measure those fundamentals, and then assume that as voters become “enlightened”, they will vote in ways that we expect them to vote.  And, more often than not, we are right – at least within a specified margin of error!  Now, if a candidate goes “off message” – see Al Gore in 2000 – and doesn’t play to the fundamentals, then our forecast models can go significantly wrong.  And if an election is very close – and this may well be the case in 2012 – our models will lack the precision necessary to project a winner.  But you should view this as a strength – unlike some pundits who breathlessly inform us that Obama’s Electoral College vote has dropped by .02% – political scientists are sensitive to, and try to specify, the uncertainty with which they present their forecasts.  It is no use pretending our models are more accurate than they are.  Sometimes an election is too close to call, based on the fundamentals alone.

The bottom line is that despite what the media says, polls – and the forecast models such as Silver’s that incorporate them right now – aren’t worth much more than entertainment value, and they won’t be worth more than that for several months to come.  As we near Election Day, of course, it will be a different matter.  By then, however, you won’t need a fancy forecast model that incorporates a dozen variables in some “top secret” formula to predict the winner.  Nor, for that matter, do you need any political science theory. Instead, as Sam Wang has shown, a simple state-based polling model is all you need to predict the presidential Electoral College vote.  (Wang’s model was, to my knowledge, the most parsimonious and accurate one out there for 2008 among those based primarily on polling data.)  Of course, this won’t tell you why a candidate won.  For that, you listen to political scientists, not pundits.  (In Wang’s defense, he’s not pretending to do anything more than predict the winner based on polling data alone.)
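
As an illustration of just how little machinery that kind of state-poll approach requires, here is a stripped-down sketch: take the median of each state's recent polls and award its electoral votes to whoever leads. The numbers below are made up, and this omits the probability calculations in Wang's actual meta-analysis; it is only meant to show the bare bones of a poll-based point forecast.

```python
from statistics import median

# Hypothetical recent poll margins (Obama minus Romney, in points) for a few states.
recent_polls = {
    "OH": [3, 2, 5, 1],
    "FL": [-1, 1, 0, -2],
    "VA": [2, 0, 3, 1],
    "NC": [-3, -4, -1, -2],
}
electoral_votes = {"OH": 18, "FL": 29, "VA": 13, "NC": 15}
totals = {"Obama": 243, "Romney": 220}   # placeholder totals for the states not listed

for state, margins in recent_polls.items():
    leader = "Obama" if median(margins) > 0 else "Romney"
    totals[leader] += electoral_votes[state]

print(totals)   # whoever reaches 270 is the projected winner
```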

So, pay no heed the next time a pundit tells you that, based on the latest polls, Obama’s win probability has dropped by .5%.  It may fuel the water cooler conversation – but it won’t tell us anything about who is going to win in 2012, and why.