Brace yourselves. With the two main party nominees established, we are now entering the poll-driven media phase of presidential electoral politics. For the next several months, media coverage will be dominated by analyses of trial heat polls pitting the two candidates head-to-head. Journalists will use this “hard news” to paint a picture of the campaign on an almost daily basis – who is up, who is not, which campaign frame is working, how the latest gaffe has hurt (or has not hurt) a particular candidate. Pundits specializing in election forecasting, like the New York Times’ Nate Silver, meanwhile, will use these polls to create weekly forecasts that purport to tell us the probability that Obama or Romney will emerge victorious in November.
Be forewarned: because polls are so volatile this early in the campaign, it may appear that, consistent with journalists’ coverage, candidates’ fortunes are changing on a weekly, if not daily, basis in response to gaffes, debates, candidate messaging or other highly idiosyncratic factors. And that, in turn, will affect the win probabilities estimated by pundits like Silver. Every week, Silver and others will issue a report suggesting that Obama’s chances of winning have dipped by .6%, or increased by that margin, or some such figure based in part on the latest polling data.
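Much of that apparent week-to-week movement requires no explanation at all beyond sampling error. As a rough illustration (the 50% support level and 1,000-person sample are hypothetical, not drawn from any actual poll), here is a simulation of ten “weekly” polls of an electorate whose preferences never change:

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.50   # hypothetical: the electorate's "real" split never moves
SAMPLE_SIZE = 1000    # a typical national trial-heat poll

def run_poll(true_support, n):
    """Simulate one poll: n respondents, each independently favoring the candidate."""
    favorable = sum(1 for _ in range(n) if random.random() < true_support)
    return favorable / n

# Ten "weekly" polls of an electorate whose preferences never change.
results = [run_poll(TRUE_SUPPORT, SAMPLE_SIZE) for _ in range(10)]
for week, share in enumerate(results, 1):
    print(f"Week {week}: {share:.1%}")

# The spread across these polls comes entirely from sampling error --
# roughly +/- 3 points at the 95% level for n = 1000:
moe = 1.96 * (TRUE_SUPPORT * (1 - TRUE_SUPPORT) / SAMPLE_SIZE) ** 0.5
print(f"Margin of error: +/- {moe:.1%}")
```

Even with nothing happening in the campaign, the simulated polls bounce around by a point or two each week – exactly the kind of movement that gets reported as a candidate surging or slipping.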
Pay no attention to these probability assessments. In contrast to what Silver and others may suggest, Obama’s and Romney’s chances of winning are not fluctuating on an almost daily or weekly basis. Instead, if the past is any predictor, by Labor Day, or the traditional start of the general election campaign, their odds of winning will be relatively fixed, barring a major campaign disaster or significant exogenous shock to the political system.
This is not to say, however, that polls will remain stable after Labor Day. Instead, you are likely to see some fluctuations in trial heat polls throughout the fall months, although they will eventually converge so that by the eve of Election Day, the polls will provide an accurate indication of the election results. At that point, of course, the forecast models based on polls, like Silver’s, will also prove accurate. Prior to that, however, you ought not to put much stock in what the polls are telling us, nor in any forecast model that incorporates them.
Indeed, many (but not all) political science forecast models eschew the use of public opinion polls altogether. The reason is that they don’t provide any additional information to help us understand why voters decide as they do. As Doug Hibbs, whose “Bread and Peace” model is one of the more accurate predictors of presidential elections, writes, “Attitudinal-opinion poll variables are themselves affected by objective fundamentals, and consequently they supply no insight into the root causes of voting behavior, even though they may provide good predictions of election results.” In other words, at some point polls will prove useful for telling us who will win the election, but they don’t tell us why. And that is really what matters to political scientists, if not to pundits like Silver.
The why, of course, as longtime readers will know by heart, is rooted in the election fundamentals that determine how most people vote. Those fundamentals include the state of the economy, whether the nation is at war, how long a particular party has been in power, the relative position of the two candidates on the ideological spectrum, and the underlying partisan preferences of voters going into the election. Most of these factors are in place by Labor Day, and by constructing measures for them, political scientists can produce a reasonably reliable forecast of who will win the popular vote come November. More sophisticated analyses will also make an Electoral College projection, although this is subject to a bit more uncertainty.
But if these fundamentals are in place, why do the polls vary so much? Gary King and Andrew Gelman addressed this in an article they published a couple of decades ago, but whose findings, I think, still hold today. Simply put, it is because voters are responding to the pollsters’ questions without having fully considered the candidates in terms of these fundamentals. And this is why, despite my claim that elections are driven by fundamentals that are largely in place by Labor Day, campaigns still matter. However, they don’t matter in the ways that journalists would have us believe: voters aren’t changing their minds in reaction to the latest gaffe, or debate results, or campaign ad. Instead, campaigns matter because they inform voters about the fundamentals in ways that allow them to judge which candidate, based on his ideology and issue stance, better addresses the voter’s interests. Early in the campaign, however, most potential voters simply aren’t informed regarding either candidate positions or the fundamentals more generally, so they respond to surveys on the basis of incomplete information that is often colored by media coverage. But eventually, as they begin to focus on the race itself, this media “noise” becomes much less important, and polls will increasingly reflect voters’ “true” preferences, based on the fundamentals. And that is why Silver’s model, eventually, will prove accurate, even though it probably isn’t telling us much about the two candidates’ relative chances today, or during the next several months.
As political scientists, then, we simply need to measure those fundamentals, and then assume that as voters become “enlightened”, they will vote in ways that we expect them to vote. And, more often than not, we are right – at least within a specified margin of error! Now, if a candidate goes “off message” – see Al Gore in 2000 – and doesn’t play to the fundamentals, then our forecast models can go significantly wrong. And if an election is very close – and this may well be the case in 2012 – our models will lack the precision necessary to project a winner. But you should view this as a strength – unlike some pundits who breathlessly inform us that Obama’s Electoral College vote has dropped by .02% – political scientists are sensitive to, and try to specify, the uncertainty with which they present their forecasts. It is no use pretending our models are more accurate than they are. Sometimes an election is too close to call, based on the fundamentals alone.
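To see why a close election defeats even a good model, consider a stylized sketch of how a forecast’s point estimate and its error term combine into a win probability. The two-point error figure and the normal-error assumption here are illustrative, not taken from any specific published model:

```python
import math

def win_probability(predicted_share, forecast_rmse):
    """Probability the candidate tops 50% of the two-party vote,
    assuming a normally distributed forecast error (a stylized
    stand-in for how fundamentals models report uncertainty)."""
    z = (predicted_share - 0.50) / forecast_rmse
    # Standard normal CDF computed via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A fundamentals model with roughly a 2-point root-mean-squared error:
for share in (0.505, 0.52, 0.55):
    p = win_probability(share, 0.02)
    print(f"Predicted share {share:.1%} -> win probability {p:.0%}")
```

With a predicted 50.5% vote share and a two-point error, the model gives the candidate only about a 60% chance of winning – practically a coin flip, which is exactly the honest answer in a race that close.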
The bottom line is that despite what the media says, polls – and the forecast models such as Silver’s that incorporate them right now – aren’t worth much more than entertainment value, and they won’t be worth more than that for several months to come. As we near Election Day, of course, it will be a different matter. By then, however, you won’t need a fancy forecast model that incorporates a dozen variables in some “top secret” formula to predict the winner. Nor, for that matter, do you need any political science theory. Instead, as Sam Wang has shown, a simple state-based polling model is all you need to predict the presidential Electoral College vote. (Wang’s model was, to my knowledge, the most parsimonious and accurate one out there for 2008 among those based primarily on polling data.) Of course, this won’t tell you why a candidate won. For that, you listen to political scientists, not pundits. (In Wang’s defense, he’s not pretending to do anything more than predict the winner based on polling data alone.)
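For readers curious what a parsimonious state-based aggregation looks like in practice, here is a simplified sketch in the spirit of that approach – take the median of each state’s recent polls, convert it into a state win probability, and build up the exact distribution of electoral votes. The state names, electoral-vote counts, poll margins, and three-point error spread are all made up for illustration; this is not Wang’s actual code or data:

```python
import math
from statistics import median

# Hypothetical inputs: (electoral votes, recent poll margins for candidate A).
# Illustrative numbers only -- not real states or real 2012 polling.
states = {
    "A-leaning": (55, [4.0, 6.0, 5.0]),
    "Tossup-1":  (29, [1.0, -1.0, 0.5]),
    "Tossup-2":  (18, [-0.5, 0.0, 1.5]),
    "B-leaning": (38, [-5.0, -4.0, -6.0]),
}

def state_win_prob(margins, sigma=3.0):
    """Median poll margin -> P(candidate A carries the state),
    assuming a normal polling error with an assumed 3-point spread."""
    m = median(margins)
    return 0.5 * (1 + math.erf(m / (sigma * math.sqrt(2))))

# Exact distribution over electoral-vote totals, built by convolving
# each state's win/lose outcome into the running distribution:
# dist[ev] = probability candidate A wins exactly ev electoral votes.
dist = {0: 1.0}
for ev, margins in states.values():
    p = state_win_prob(margins)
    new = {}
    for total, prob in dist.items():
        new[total + ev] = new.get(total + ev, 0.0) + prob * p
        new[total] = new.get(total, 0.0) + prob * (1 - p)
    dist = new

expected_ev = sum(ev * p for ev, p in dist.items())
print(f"Expected electoral votes for A: {expected_ev:.1f} of 140")
```

Note what this sketch does not contain: no economic data, no incumbency variable, no theory at all – just polls and arithmetic. That is precisely the point: close to Election Day, the polls alone suffice to predict the outcome, even though they cannot explain it.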
So, pay no heed the next time a pundit tells you that, based on the latest polls, Obama’s win probability has dropped by .5%. It may fuel the water cooler conversation – but it won’t tell us anything about who is going to win in 2012, and why.