
It Really Is a Science

Politico’s David Catanese has picked up (story here) on the Research 2000-Markos Moulitsas fraud story I blogged about yesterday.  In addition to the Blanche Lincoln-Bill Halter Arkansas Democratic Senate primary, Catanese reminds us that Research 2000 also published a poll in May suggesting Tom Campbell was soundly whipping Carly Fiorina (up 15%) in the California Republican Senate contest.  In fact, Fiorina crushed Campbell by 34%.

Catanese cites the story because these polls – even though they were wrong – helped drive the media narrative in both races.  He hints that they may have changed the campaign dynamics in ways that, had the races been closer, might even have altered the outcomes.

My purpose in writing about this again, however, is slightly different.  I’m less concerned with the impact of these inaccurate polls on the races, and more interested in the debate over whether the company defrauded the Daily Kos founder by manipulating (making up?) data and generally failing to do what Kos hired it to do.  As someone who is a voracious consumer of polling data on this blog (as longtime readers know) but who does not produce surveys, I have a very strong interest in understanding the details of this story.  I want to be sure that the results I present to you are credible.

Of course, I realize that not all of you share my fascination with the nuts and bolts of polling, so I’m not going to delve too deeply into the details of the fraud case.  But I thought it might be interesting for some of you (particularly my political science students!) to get a sense of the evidence that is being cited against Research 2000.  For example, here are a couple of points of contention that are being debated.

1. Minor details in the survey data released by Research 2000, such as the trailing digits in some of the cross-tabs, suggest the figures were not produced by a random sampling process but were created by other means. For instance, if the result for men in a particular category (say, the percentage of men who approved of Obama) ended in an even number, the corresponding result for women in that category also ended in an even number. This happened far too frequently to occur by chance; as Mark Blumenthal points out, one would have a better chance of winning the lottery than of seeing this pattern in the trailing digits (the first sketch after this list illustrates the arithmetic). But does this prove fraud?  Not necessarily – as others point out, it might be a function of how the data were stored and retrieved, a coding error, or some other statistical quirk in how the numbers were processed.

2. Nate Silver in this post argues that the results of Research 2000’s daily tracking poll did not vary as much as they should have under a simple random sampling procedure. Without going into too much detail, the argument is similar to the idea that if you flip a coin repeatedly and keep track of heads versus tails, you will get a 50/50 split on average, but with a statistically predictable variation around that average.  Research 2000’s results show much less than the expected variation, which Silver takes as evidence that the tracking numbers were manipulated. But as Doug Rivers notes, Silver’s assumption that Research 2000 used simple random sampling is almost surely wrong.  Most polling is done through stratified sampling – that is, the pollster breaks the sample into subgroups, such as Republicans and Democrats, or men and women, and then randomly samples from each subgroup.  Stratification reduces the variability of the final estimate around the sampling mean, so Research 2000’s daily tracking results would naturally bounce around less than they would under the simple (not stratified) random sampling Silver assumes (the second sketch below illustrates the difference).
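
To make the first point concrete, here is a minimal sketch (in Python) of why matched trailing-digit parity is so suspicious. It is not Research 2000’s data or Blumenthal’s actual analysis; the subgroup sizes, approval rates, and number of cross-tabs are all invented for illustration. Under genuine random sampling, the men’s and women’s percentages are effectively independent, so their last digits should share parity only about half the time – and the chance of matching in every one of hundreds of cross-tabs collapses toward zero.

```python
# Toy illustration only: all sample sizes, approval rates, and the number of
# cross-tabs below are hypothetical, not Research 2000's actual figures.
import random

random.seed(0)

def simulated_crosstab_pct(n_respondents: int, true_rate: float) -> int:
    """Whole-number percentage from one simulated subsample of respondents."""
    approvals = sum(random.random() < true_rate for _ in range(n_respondents))
    return round(100 * approvals / n_respondents)

n_crosstabs = 1000   # illustrative number of published cross-tabs
matches = 0
for _ in range(n_crosstabs):
    men = simulated_crosstab_pct(300, 0.52)     # hypothetical subgroup size / rate
    women = simulated_crosstab_pct(300, 0.55)   # hypothetical subgroup size / rate
    if men % 2 == women % 2:                    # trailing digits share parity
        matches += 1

print(f"parity matched in {matches}/{n_crosstabs} simulated cross-tabs "
      f"({matches / n_crosstabs:.1%}); matching in ALL of them ≈ 0.5**{n_crosstabs}")
```

Run it and you should see roughly a 50% match rate, which is why a published record in which the parities match essentially every time looks so unlike the output of a random sampling process.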
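And here is a minimal sketch of Rivers’ counterpoint on the second item, again with invented numbers: fixing each subgroup’s share of the sample every day (stratified sampling) removes one source of random variation, so the day-to-day tracking numbers bounce around less than a simple random sample of the same size would.

```python
# Toy illustration only: the party shares, approval rates, sample size, and
# number of tracking days are all assumptions made up for this sketch.
import random
import statistics

random.seed(1)

POP_SHARE = {"Dem": 0.35, "Rep": 0.30, "Ind": 0.35}   # hypothetical population mix
APPROVAL  = {"Dem": 0.80, "Rep": 0.15, "Ind": 0.45}   # hypothetical approval by group
N = 1000                                               # respondents per daily poll

def simple_random_poll() -> float:
    """Draw each respondent's party at random, then their approval."""
    approve = 0
    for _ in range(N):
        r = random.random()
        party = "Dem" if r < 0.35 else ("Rep" if r < 0.65 else "Ind")
        approve += random.random() < APPROVAL[party]
    return 100 * approve / N

def stratified_poll() -> float:
    """Fix each party's share of the sample, then sample within each stratum."""
    approve, total = 0, 0
    for party, share in POP_SHARE.items():
        n_stratum = round(N * share)
        total += n_stratum
        approve += sum(random.random() < APPROVAL[party] for _ in range(n_stratum))
    return 100 * approve / total

days = 200   # simulated days of daily tracking
srs_results   = [simple_random_poll() for _ in range(days)]
strat_results = [stratified_poll() for _ in range(days)]
print(f"simple random sampling: sd = {statistics.stdev(srs_results):.2f} points")
print(f"stratified sampling:    sd = {statistics.stdev(strat_results):.2f} points")
```

The stratified series comes out with a noticeably smaller day-to-day standard deviation, which is Rivers’ point: lower-than-expected variance is not, by itself, proof of manipulation if the pollster stratified the sample.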

These are just a couple of points that are being debated in much greater detail than I suggest here, and by some really smart people. I realize that this sounds like a lot of “inside baseball” that is not of interest to all of you.  However, I urge those who are interested (particularly my students) to click on some of the links I’ve placed here and work through the arguments.  The math is actually very straightforward – the more difficult part, really, is wading through the jargon that commentators use.

These examples serve a larger point, however, which is why I’ve returned to this topic for a second day.  None of what has been presented so far conclusively proves Research 2000’s guilt or innocence.  The ongoing debate is a reminder that there’s a peer-review process at work here, in which those with the expertise to tackle these issues are doing so in a very transparent, albeit somewhat messy, manner. Because of the wonders of the internets, you can actually track the debate virtually in real time. It’s a fascinating process to watch, not least because of the interaction of established statisticians with up-and-coming grad students hoping to make a name for themselves. There’s lots of give and take, some of it acrimonious, but most of it refreshingly free of personal animosity (albeit nerdy in the extreme).

And it’s a reminder of why I enjoy writing this blog – I get to piggyback on the work of a bunch of people who are smarter than me and bring you the results in ways that are, I hope, relevant to the news you read – like polling results.

I expect that because of this debate over Research 2000’s methods, polling will be both more transparent and more credible – at least that’s the hope.  So come November, when I roll out my pre-midterm election forecasts based in part on polling data, you’ll have some confidence that there really is a peer-reviewed science at work on which I’m basing my arguments, and that I’m not just making it up as I go along.