Interpreting the polls: Likely versus registered voter surveys

I will have only brief opportunities to post over the next several days, since I am traveling to give an election talk, but I wanted to respond to several of your comments and follow up on some points from my last posting about how to interpret surveys. Those of you following the RCP (RealClearPolitics) site will have seen that the Republican post-convention bounce seems to have peaked, although McCain retains a slight lead – within the margin of error – in most polls. However, a new tracking poll – the Diageo/Hotline tracker (Diageo, the sponsor, is in the drinks business) – has Obama up 4% over McCain. Even factoring in the margin of error (about 3% on their poll), this makes Diageo an outlier among the other recent polls, which have McCain in the lead. How can this be?
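As an aside, the reported margin of error is simple arithmetic: for a percentage near 50, it is about 1.96 standard errors. Here is a minimal sketch in Python, assuming a simple random sample and a 95% confidence level (the sample size of 1,000 is illustrative, not Diageo’s actual figure):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of roughly 1,000 respondents yields about the +/-3 points reported.
print(round(margin_of_error(1000), 3))  # 0.031
```

Note that this margin applies to each candidate’s share separately; the uncertainty on the gap between two candidates is larger – roughly double when the race is close.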

The answer, I suspect, is that Diageo is sampling registered voters, while most other major pollsters winnow their samples to focus on likely voters. Which is more accurate? This is still a matter of debate, but most pollsters believe that as you get closer to the election, the likely voter model is a bit more accurate. So why is McCain doing better in the likely voter models? As I explained in my last post, Republicans historically turn out at higher rates than their registration numbers would indicate, so pollsters adjust their likely voter models to oversample Republicans. Since Diageo doesn’t publish crosstabs, I can’t tell how many Republicans are in their sample, but my guess is that it is fewer than pollsters using a likely voter model are including. More generally, if you average the RCP national polls that use likely voters against those that use registered voters, you’ll see a slight advantage for Obama in the registered voter surveys, while McCain does better in the likely voter surveys.
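To make concrete how the same interviews can produce different toplines depending on the assumed electorate, here is a rough sketch in Python. Every number below is hypothetical – invented support rates and party compositions, not figures from any actual poll:

```python
# Hypothetical support within each party-ID group (not from any real poll).
obama = {"Dem": 0.85, "Rep": 0.08, "Ind": 0.48}
mccain = {"Dem": 0.10, "Rep": 0.88, "Ind": 0.44}

# Two assumed electorate compositions: all registered voters vs. a likely
# voter screen that gives Republicans relatively more weight.
compositions = {
    "registered": {"Dem": 0.38, "Rep": 0.32, "Ind": 0.30},
    "likely":     {"Dem": 0.36, "Rep": 0.36, "Ind": 0.28},
}

def topline(support, composition):
    """Weight each group's support by its assumed share of the electorate."""
    return sum(support[g] * composition[g] for g in composition)

for label, comp in compositions.items():
    print(f"{label}: Obama {topline(obama, comp):.1%}, "
          f"McCain {topline(mccain, comp):.1%}")
# registered: Obama 49.3%, McCain 45.2%
# likely:     Obama 46.9%, McCain 47.6%
```

Same respondents, different assumed electorates: Obama leads among registered voters while McCain edges ahead under the likely voter screen – the pattern we see in the actual averages.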

This is a reminder of two points I’ve made earlier. First, the RCP rolling average lumps together polls that use different methodologies, so use it with caution. Second, if the pollsters using likely voter models are in fact underestimating Democratic turnout, or overestimating Republican turnout, they could be underestimating Obama’s actual level of support. What I will try to do as the campaign continues is look at individual surveys to see whether there are obvious differences in the results that can be traced to their sampling techniques. This might provide clues as to whether some pollsters are missing hidden Obama support.
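One simple check along these lines is to average the two kinds of polls separately rather than lumping them together as RCP does. A sketch of that comparison, again with invented poll numbers:

```python
# Hypothetical national polls; "LV" = likely voters, "RV" = registered voters.
polls = [
    {"pollster": "A", "universe": "LV", "obama": 45, "mccain": 48},
    {"pollster": "B", "universe": "LV", "obama": 46, "mccain": 47},
    {"pollster": "C", "universe": "RV", "obama": 48, "mccain": 44},
    {"pollster": "D", "universe": "RV", "obama": 47, "mccain": 45},
]

def avg_obama_margin(polls, universe):
    """Average Obama-minus-McCain margin among polls of one sampling universe."""
    margins = [p["obama"] - p["mccain"] for p in polls if p["universe"] == universe]
    return sum(margins) / len(margins)

print("LV average Obama margin:", avg_obama_margin(polls, "LV"))  # -2.0
print("RV average Obama margin:", avg_obama_margin(polls, "RV"))  # 3.0
```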

On a related issue, several of you have suggested that because so many people rely solely on cell phones, telephone surveys that use only landlines may be inaccurate. In 2004 about 7% of households were cell-phone-only. That total may be as high as 15% today (I need to double check these figures). And statistics show that younger and more affluent people are more likely to rely on cell phones. So doesn’t that suggest that pollsters calling landlines are likely undersampling Obama voters? Not necessarily. In 2004 several studies estimated whether the failure to sample cell phones was distorting survey results. It turns out it wasn’t. The reason is that pollsters were very careful to weight their samples by age. This meant they captured enough young people in their samples – even without calling cell phones – to accurately predict the impact of the youth vote in the 2004 election. As long as they do so in this election campaign – as long as they accurately weight the demographic most likely to rely on cell phones – the failure to actually survey cell phone users should not bias the results. Note also that some firms – such as Gallup – are now including cell phones in their samples. Again, the lesson is to pay attention to sampling techniques – not all polls are alike, although RCP treats them all alike.
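Here is a rough sketch of that age-weighting step. Pollsters’ actual procedures are more elaborate, and every figure below – the age distributions and the support rates – is hypothetical:

```python
# Target age distribution (e.g., from Census data) vs. the distribution
# actually reached by a landline survey, where young respondents are scarce.
population = {"18-29": 0.17, "30-49": 0.38, "50-64": 0.26, "65+": 0.19}
sample     = {"18-29": 0.10, "30-49": 0.35, "50-64": 0.30, "65+": 0.25}

# Post-stratification weight for each group: target share / achieved share.
weights = {g: population[g] / sample[g] for g in population}

# Hypothetical Obama support by age group among the respondents reached.
obama = {"18-29": 0.60, "30-49": 0.50, "50-64": 0.46, "65+": 0.42}

unweighted = sum(sample[g] * obama[g] for g in sample)
weighted   = sum(sample[g] * weights[g] * obama[g] for g in sample)
print(f"unweighted: {unweighted:.1%}, age-weighted: {weighted:.1%}")
# unweighted: 47.8%, age-weighted: 49.1%
```

The weighting restores the population’s age mix using the young respondents who were reached – which works so long as young people on landlines vote like young people who are cell-phone-only.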

Finally, we have one more indication of the Palin effect on the race. Obama adviser David Axelrod claimed in the Washington Post today that the post-convention Republican bounce, as reflected in the national polls, is really only picking up movement toward McCain in red states. In fact, this is only partly true – while Axelrod is right that the red states have become “redder”, McCain has also gained about 4% in the battleground states since picking Palin. That’s why, in many of the online electoral vote calculators, McCain has now pulled even with or ahead of Obama: the Palin pick is swinging some independents and women into the McCain column. But will that bounce stick in light of “troopergate” and increased scrutiny of Palin? More on that in a bit.

3 comments

  1. What about ‘caller ID’-based differences…

    I, for one, have caller ID and do not answer calls I don’t recognize as coming from someone I know. As such, I’m wondering: what are the demographics of caller ID subscribers?

  2. For anyone who wants more evidence of the difference between polls of registered voters and polls of likely voters, look no further than the most recent news from Virginia at

    http://www.realclearpolitics.com/epolls/2008/president/va/virginia_mccain_vs_obama-551.html

    Wednesday’s CNN poll of registered voters has McCain up by 9%, while a PPP poll (PPP is a Democratic pollster) of likely voters has Obama leading McCain by 2%. Similar differences appear in the last round of polls in the state.

    Interestingly, the VA polls actually indicate the opposite of the trend Professor Dickinson mentioned in his blog about the national polls. In Virginia, Obama actually seems to be polling better among likely voters than among the larger category of registered voters. I don’t have time to check the relationship in every state, but a quick glance at New Mexico’s polls shows that Dickinson’s theory holds there (I think it holds in most states, actually):
    http://www.realclearpolitics.com/epolls/2008/president/nm/new_mexico_mccain_vs_obama-448.html

    Nonetheless, the example of Virginia may be an interesting one to consider. I’ll try to think of something that might explain what’s going on there, but I’d definitely be interested in your thoughts!

  3. George – Whenever pollsters conduct a telephone survey, they have a protocol to compensate for potential nonresponse bias, including any produced by people who screen their calls using caller ID. In general, the nonresponse rate for telephone surveys – whether due to cell phones, call screening with caller ID, or answering machines – has gone up over the last two decades, so that today up to 40% of calls get a “non-response.” This makes drawing a truly random sample more difficult, especially if the nonresponse is not randomly distributed but instead falls within a particular demographic. Pollsters are aware of this and have protocols for dealing with it. In the best-case scenario, then, nonresponse bias shouldn’t affect survey outcomes, although it has surely made surveying more difficult.
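    To illustrate why this matters – and what the adjustment buys – here is a rough sketch with invented response rates and support figures; pollsters’ actual protocols are more sophisticated:

    ```python
    # Two demographic groups, each half the population, with different
    # (hypothetical) response rates and candidate support.
    pop_share     = {"young": 0.5, "old": 0.5}
    response_rate = {"young": 0.4, "old": 0.8}
    support       = {"young": 0.60, "old": 0.44}

    # Differential nonresponse skews who ends up in the raw sample.
    raw = {g: pop_share[g] * response_rate[g] for g in pop_share}
    total = sum(raw.values())
    raw_share = {g: r / total for g, r in raw.items()}

    naive = sum(raw_share[g] * support[g] for g in raw_share)

    # Weighting each respondent by 1 / response rate restores the
    # population composition.
    adjusted = sum(pop_share[g] * support[g] for g in pop_share)
    print(f"naive: {naive:.1%}, adjusted: {adjusted:.1%}")
    # naive: 49.3%, adjusted: 52.0%
    ```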
