Take Two, Drew! Linzer Unveils His Senate Forecast Model

Longtime readers will know that I was a big fan of Drew Linzer’s 2012 presidential forecast model at his Votamatic website, and not just because he was kind enough to fly out here to give a talk to Middlebury students in the middle of the presidential campaign. Linzer’s model, you will recall, correctly predicted the electoral vote outcome in every state in the 2012 presidential election, making his final Electoral College prediction of Obama 332, Romney 206 the most accurate (along with political scientist Simon Jackman’s) of the transparent forecast models of which I’m aware for that cycle. Now Linzer is back with another forecast model (two, actually!) covering both Senate and gubernatorial races in the current election cycle and, as in 2012, he is once again opening up the model so we can see the moving parts.

As I’ve said repeatedly, for political scientists getting a prediction right is not the ultimate objective – it’s knowing why the prediction was right that matters. That is why it is imperative that we understand the assumptions built into a model, and why I generally only discuss forecast models that are transparent to outside inspection. Of course, for most of you the bottom line – who is going to win – is what really matters! Fortunately, I’m pretty sure Linzer’s model will satisfy both our needs.

What do we find when we look at Linzer’s model? Interestingly (and here I’m subject to Drew’s correction), his Senate forecast model appears to differ from his presidential model. In 2012, Linzer combined a fundamentals-based forecast with state-level polling data to generate his presidential prediction. Essentially, he began by establishing a baseline forecast using Alan Abramowitz’s original Time for a Change model, which estimates the presidential popular vote using three variables: the incumbent president’s net approval rating at the end of June, the change in real GDP in the second quarter of the election year, and a first-term incumbency advantage. Drew then updated that baseline with state-level polling data, which factored more heavily into his prediction as the campaign progressed, so that by Election Day his forecast was based almost entirely on polls.
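
To make that two-stage logic concrete, here is a minimal sketch in Python of the general approach: a Time-for-a-Change-style regression supplies the baseline, and a poll average gets progressively more weight as Election Day nears. Everything in it is illustrative: the coefficients are placeholders rather than Abramowitz’s published estimates, the function names are mine, and the simple linear weighting stands in for the Bayesian updating Linzer actually used.

```python
# Illustrative sketch only -- not Linzer's actual code and not Abramowitz's
# published coefficients. It shows the general shape of a fundamentals
# baseline that is gradually overridden by polling data.

def fundamentals_baseline(net_approval, q2_gdp_growth, first_term_incumbent,
                          intercept=48.0, b_approval=0.10, b_gdp=0.60,
                          b_incumbent=2.5):
    """Time-for-a-Change-style regression with placeholder coefficients.

    Returns the incumbent party's predicted share of the two-party vote.
    """
    return (intercept
            + b_approval * net_approval                 # net approval, late June
            + b_gdp * q2_gdp_growth                     # Q2 real GDP growth
            + b_incumbent * int(first_term_incumbent))  # first-term bonus


def blended_forecast(baseline, poll_average, days_until_election, horizon=150):
    """Blend the baseline and the polls, leaning harder on polls as the race nears.

    A linear weight is a stand-in for the Bayesian updating in the real model.
    """
    w_polls = max(0.0, min(1.0, 1.0 - days_until_election / horizon))
    return w_polls * poll_average + (1.0 - w_polls) * baseline


if __name__ == "__main__":
    baseline = fundamentals_baseline(net_approval=-1.0, q2_gdp_growth=1.3,
                                     first_term_incumbent=True)
    for days in (150, 75, 0):   # far out, mid-campaign, Election Day
        print(days, round(blended_forecast(baseline, 51.0, days), 2))
```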

However, for his 2014 Senate forecast at the DailyKos website, Linzer is no longer incorporating any “fundamentals” into his model. Instead, he appears to rely entirely on state-level polling data. Why the change in methodology from his phenomenally successful presidential forecast model? My guess is that Linzer is less confident that there is a Senate-oriented fundamentals model that matches the Abramowitz Time for a Change model in accuracy. That is, he does not believe his forecast will be improved, even at this early date, by incorporating structural elements such as measures of the economy, presidential approval ratings, or generic ballot results, beyond what the state-level Senate polls already tell him. If so, I can see the logic to this: in contrast to a presidential election, there are many more moving parts in trying to estimate which party will win a Senate majority. To begin with, 36 Senate races means many more candidates than just two, and they are operating in distinct local political contexts rather than the more nationalized electoral environment that prevails in a presidential election year. Linzer’s approach assumes that state-level polling will do a good enough job of picking up these state-level variations without the added noise that fundamentals would introduce into his prediction model. In a presidential election, by contrast, with only two major candidates and greater correlation in the vote across states due to the more nationalized environment, incorporating fundamentals as a starting point probably makes sense.
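
For readers who want a feel for what a polls-only state estimate involves, here is a rough sketch in Python. The poll numbers, the recency and sample-size weighting, and the normal error assumption are all invented for illustration; Linzer’s actual poll-tracking model is considerably more sophisticated than this.

```python
# A toy polls-only estimate for a single Senate race: average recent polls,
# weighting newer and larger surveys more heavily, then translate the
# Democratic share into a win probability. All numbers below are made up.
import math

def poll_average(polls, decay=14.0):
    """polls: list of (dem_two_party_share, sample_size, days_old)."""
    num = den = 0.0
    for dem_share, n, age in polls:
        w = math.sqrt(n) * math.exp(-age / decay)   # newer, larger polls count more
        num += w * dem_share
        den += w
    return num / den

def win_probability(dem_share, sd=3.0):
    """Chance the Democrat wins, assuming roughly normal forecast error
    with an (assumed) standard deviation of `sd` points."""
    z = (dem_share - 50.0) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical polls for a hypothetical race: (Dem share, sample size, days old)
recent_polls = [(49.0, 600, 3), (47.5, 900, 10), (48.5, 450, 20)]
avg = poll_average(recent_polls)
print(round(avg, 1), round(win_probability(avg), 2))
```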

To put this another way, to justify incorporating fundamentals into his Senate forecast model, Linzer would have to assume that some structural variable influences the election results but is not being picked up in the polling data. Since he has no baseline fundamentals tempering his Senate forecast even this early in the campaign, his prediction is based entirely on polls right from the start. (This may be one reason – a paucity of good Senate polls prior to Labor Day – that he has gotten a relatively late start on the prediction game compared to some of the other forecasters.)

Note that Linzer’s approach differs from the “mixed” forecast models presented at the New York Times Upshot or the Monkey Cage Election Lab sites, both of which incorporate structural factors in addition to polling data. As a result, you should not be surprised to see Linzer’s initial Senate forecast differ from what these other models are predicting. And that is the case – as of today, the Linzer-based DailyKos model gives Democrats about a 56% chance of holding onto the Senate. That’s a bit more optimistic for Democrats than most of the models that include a structural component. For example, at last look the Monkey Cage’s Election Lab forecast model gives Democrats a 47% chance of retaining their majority, while the New York Times Upshot model gives them only a 33% chance.
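
For what it is worth, a headline figure like “a 56% chance of holding the Senate” is usually produced by simulating every race from its estimated win probability and counting how often one party reaches a majority. The sketch below shows the idea; the inputs are hypothetical, the races are treated as independent, and none of this is the actual code behind the DailyKos, Election Lab, or Upshot models, which generally allow for correlated errors across states rather than treating races as independent.

```python
# Toy Monte Carlo for chamber control: draw each competitive race from its
# win probability and count the share of simulations in which Democrats
# reach 50 seats (the Vice President breaks ties). Inputs are hypothetical.
import random

def senate_control_probability(race_probs, safe_dem_seats,
                               n_sims=100_000, seed=2014):
    random.seed(seed)
    wins = 0
    for _ in range(n_sims):
        seats = safe_dem_seats + sum(random.random() < p for p in race_probs)
        if seats >= 50:                      # 50 seats plus the VP's tiebreaker
            wins += 1
    return wins / n_sims

# Ten made-up competitive races and 45 seats assumed not to be in doubt.
competitive = [0.65, 0.55, 0.50, 0.45, 0.40, 0.60, 0.35, 0.70, 0.48, 0.52]
print(senate_control_probability(competitive, safe_dem_seats=45))
```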

Before you email me with the inevitable “what about this forecast model?” question, let me conclude with two reminders. First, given the uncertainty in the models at this early date, you – unlike the pundits who are even now ready to “unskew” the models – should not place too much stock in the difference between a 56% and a 33% chance of retaining a majority. As we get closer to Election Day, most of the structurally based forecast models will likely rely increasingly on polling data, and I expect their forecasts to move closer to Drew’s. But Drew’s current estimate is going to change as well, as his model incorporates more and better Senate polling data. (One variable to watch is the impact of pollsters switching from registered-voter to likely-voter samples as Election Day draws nigh.) Barring some dramatic poll-changing event, such as an escalation of the U.S. military presence in Iraq (or Ukraine?), all signs point to an extremely tight race for control of the Senate in 2014. At this point, I would be skeptical of any forecast model that suggests otherwise.

By the way, since I will likely be blogging more and more about these various election forecasting models, you might be interested in hearing Drew assess how reliable presidential forecast models are (hint: he’s not a huge fan of fundamentals-only models). Some of his criticisms apply to the Senate fundamentals-based models as well. As you can see in this video, however, Lynn Vavreck and I are both a bit more optimistic about political scientists’ understanding of elections. The panel took place at the Dirksen Senate Office Building this past spring.

[youtube.com/watch?feature=player_detailpage&v=VU69zcAW5uk]

 

5 comments

  1. Ah! It’s post-Labor Day and time for the midterm insanity to begin. Thanks for the details to start out the season.

  2. Tarsi – It’s likely to be more insane this midterm for a couple of reasons. First, there are so many more forecast models, and second, the race for Senate control in particular looks like it is going to be too close for the forecast models to call with any certainty.

  3. Thanks for this generous overview. You’ve got it exactly right. For the presidential race, I started from fundamentals because factors like presidential approval, incumbency, uniform swing, etc., were informative about the state-level election outcomes in a systematic way. Unfortunately, I don’t have the same confidence in the predictive accuracy of fundamentals-based models of Senate or gubernatorial elections. It did mean I had to wait longer to get started. So instead of using a model that assumes state-level attitudes will “revert” to a long-term forecast, we simply put a random walk on our estimates of future voter preferences. The result is a polls-only model that contains what we think is a reasonable amount of uncertainty about how each race could evolve between now and Election Day.

    As for that visit to Middlebury in 2012, the pleasure was all mine! Seeing Vermont in the autumn was fantastic.

  4. Professor Dickinson: greetings from Dubai! The new blog format looks great.

    What does the literature say regarding the relationship between fundamentals and Congressional/gubernatorial elections? Do the “time for a change” factors (incumbency, economy, and approval rating) have an effect at the state or district level, or are national-level effects more important (or maybe the fundamentals just don’t matter that much)? I’m especially curious if anyone’s examined the effect that local economic performance might have.

  5. Hi Max,

    Dubai? I’ll expect periodic updates on the view from there re: American politics. As for congressional elections, as you might expect there are a variety of forecast models floating around. Incumbency and the fact that it is a midterm are two important factors. For the House, it is difficult to get district-specific economic measures, so generally forecasters rely on national economic measures to predict overall shifts in seats, rather than make district-specific predictions. It is possible to get some state-based economic measures (a former Midd student did so for his Electoral College forecast) but I’m not sure how many forecasters bother with them in their Senate forecasts. Let me check on this.
