Tag Archives: ezra klein

Five Wrong Lessons From the Vox’s “11 Political Lessons From Eric Cantor’s Loss”

Political punditry – the art of expressing instant commentary on political events in an authoritative manner – has never been for the faint of heart. But the task has grown both more competitive and more public with the growth of the interverse and the proliferation of political blogs and online media. These developments have increased pundits’ access to political events and related information, which in turn makes it easier to produce informed punditry. That is all for the good. But these changes have also ratcheted up the pressure for pundits to present their punditry quickly, in interesting and easily accessible form, in order to attract and keep an audience. This sometimes comes at the cost of accuracy. That, in turn, has made it easier for smug political scientists like me, safely ensconced on a deck nestled in the hills of Vermont and protected by rabid woodchucks, to, with scotch in hand, blast the latest punditry for its misreading of data/unclear logic/faulty methodology/all-of-the-above.

Which I am about to do again.

Those of you in the twitterverse will remember that I sent out several tweets on Tuesday night, during the Cantor election implosion, criticizing what I thought were some incorrect conclusions Ezra Klein was drawing in his “11 political lessons from Eric Cantor’s loss”. Now lest anyone accuse me of hating on Ezra and The Vox, rest assured that I think he and his minions do wonderful work at their site, and under difficult circumstances, given that their stated mission is to “explain everything you need to know, in two minutes.” Moreover, he is among the best of the punditocracy at drawing on political science research as much as his available time warrants. Finally, Klein notes at the outset of this particular column that his are “provisional” thoughts. So we should cut him some slack at the outset.

With those caveats, let me direct my ire at five of Ezra’s 11 lessons, roughly in the order in which they were presented.

Lesson two is that “Republicans” are not the same as “Republican primary voters.” Klein writes, “It’s possible and even likely that the vast majority of Republicans in Virginia’s 7th District liked Cantor just fine.” Klein’s point, which he develops in a later column, is that Cantor ran a horrible campaign and failed to turn out these sympathetic Republican voters, thus sealing his loss. The problem with this claim is that, according to this PPP poll, Cantor was in fact deeply unpopular among most Republicans in the district. Keeping in mind that Tuesday’s Republican primary turnout was up by some 20,000 over when Cantor won his primary challenge two years earlier, it is not clear that his loss came because he ran a poor campaign and the “right” voters did not turn out. Clearly he had deeper problems, rooted in the perception that he was out of touch with his district, that a clever campaign was not going to overcome.

2:04 UPDATE.  This “day after” poll of Republican voters in Cantor’s district is completely consistent with what I wrote above, and with my initial post on this issue taking Chuck Todd, Chris Cillizza and others to task for focusing on immigration as the key to Cantor’s defeat. The key finding is that “Immigration was not a major factor in Rep. Cantor’s defeat. Among those who voted for David Brat, 22% cite immigration as the main reason for their vote, while 77% cite other factors. Chief among those other factors cited by Brat voters were the idea that Cantor ‘was too focused on national politics instead of local needs,’ and that Cantor had ‘lost touch with voters.’”

Lesson three is that “Immigration reform is dead and Hillary Clinton’s presidential hopes are so, so alive.” Lesson six makes a similar point: that the likelihood of a Democrat winning the presidency in 2016 went up because of Cantor’s defeat. I hope I’ve persuaded you in my previous post that, based in part on the same PPP poll, support for immigration reform did not cause Cantor’s defeat, and that it is not clear how Republicans will interpret his loss, given that other candidates who support immigration reform, like Senator Lindsey Graham, easily fought off primary challenges. (Moreover, for what it is worth, New York Democratic Senator Chuck Schumer is claiming that Cantor’s ouster has made immigration reform more likely, not less.)

Similarly, the idea that an upset in one Republican House primary with 12% turnout has somehow improved Democrats’ presidential hopes in 2016 seems to me to be a very big reach. The logic seems to be that the more moderate Republican candidates like Marco Rubio and Jeb Bush (assuming they run) will find it much harder in the aftermath of Cantor’s loss to win their party’s nomination unless they move Right by, for example, opposing immigration reform, which in turn will make them less likely to win the general election. Or something like that. That presumes, however, that Republican candidates, their consultants, party activists and the media all draw the lesson from Cantor’s defeat that Klein and other media pundits want us to draw – that it was all due to immigration – and that their response come 2016 will be conditioned on that one belief. But it is not clear to me that Cantor’s loss changed many priors: those who oppose immigration reform will swear he lost because he was on the fence on this issue, while those who support reform will say immigration didn’t cause his loss. In short, I don’t see a huge shift in beliefs based on this one electoral result, once the media chatter dies down and pundits move on to other controversies. Party activists with strong partisan priors, like those who participate in primaries, tend to interpret events through their existing predispositions rather than change attitudes to conform to those events.

Klein’s lesson ten is that Cantor’s defeat by a Tea Party-backed candidate indicates that so-called “reform conservatism doesn’t have much of a constituency, even among Republican primary voters.” There is a prevailing tendency among pundits to describe the Tea Party as either the dominant force in Republican Party politics or a fringe element of looney ‘toons with dwindling influence. As I’ve written extensively before, they are neither. While the number of voters who are active in Tea Party politics is quite small, many of the movement’s core beliefs, particularly those dealing with budget politics and the deficit, resonate with fully a quarter or more of American voters. So the Tea Party will wield some influence in the Republican nominating process, but not enough to dictate the party’s results. It probably bears repeating that in both 2008 and 2012 the Republicans chose the more moderate candidate, who in both cases overcame strong challenges from the party’s Right.

The final one of Klein’s lessons I want to discuss is that Cantor’s defeat, alongside the losses suffered by other prominent Republicans in recent years like Dick Lugar and Mike Castle, “mean no Republican is safe. And that means that as rare as successful Tea Party challenges are, every elected Republican needs to guard against them.” Well, yes – but rest assured that most Republicans did not need Cantor’s loss to teach them this lesson.

Years ago, while a junior faculty member at The World’s Greatest University, a senior colleague informed me that a Ph.D. candidate had just failed his oral defense – a shocking outcome both because this student was extremely smart and because graduate students almost never failed their orals. I asked my senior colleague what the student had said when informed that he had failed. The senior colleague paused, smiled, and then replied, “He said, ‘I thought no one ever failed these!’ This, of course, is exactly the point.”

And that’s the real lesson here. House reelection rates are high – 95% or more – not because the incumbents don’t worry about losing. They are high because all most of them do is worry about losing. In this respect, Cantor’s loss doesn’t tell them anything new, and is not likely to change behavior that is already premised on the belief that House incumbents are, as Tom Mann puts it, “unsafe at any margin”.

Ok. Cue the woodchuck.


Is the Public Stupid? When A Chart Is Not Worth A Thousand Words

This item from yesterday’s Washington Post caught my eye as a useful teaching moment.  Ezra Klein posted this graph, based on this most recent Washington Post/ABC Poll,  under the label “A chart is worth a thousand words”:

It’s not clear what Klein’s point is, but presumably he means to point out just how illogical voters are, since they trust Democrats more than Republicans on handling the economy and to make the “right decisions”, and yet a plurality (by a slim percentage) are planning to vote Republicans into office.

There’s only one problem with this graph. If you actually go to the data in the poll from which Klein constructed it, you’ll see that the first two bars are based on a sample of all adults, while the last bar, which graphs the partisan breakdown of responses to the question “who do you plan to vote for?” is based on a sample of only registered voters.

Longtime readers have heard this refrain before, but it bears repeating for those who have just tuned in:  samples based on registered voters tend to skew slightly more Republican than do samples based on all adults.  The basic reason is that samples of all adults include more Democratic supporters who are less likely to vote. Or, to put it another way, Klein is comparing apples to oranges.

How big is the difference? It varies, of course, from poll to poll depending on sampling procedures, etc. Ideally, of course, we’d test the premise by having polling outfits conduct split sample surveys that compare response rates of both likely and registered voters to all adults.   That’s expensive, however, so most polling outfits do one or the other.  (Presumably WaPo switched to registered voters from all adults on the “who are you planning to vote for?” question because for this question they wanted to sample those most likely to vote, rather than all adults.)

However, we can get some leverage on the issue by looking at past Washington Post polls that have polled both groups in close temporal proximity, if not at the same time.   Here’s an example.

18. (ASKED OF REGISTERED VOTERS) If the election for the U.S.
House of Representatives in November were being held today,
would you vote for (the Democratic candidate) or (the Republican
candidate) in your congressional district? (IF OTHER, NEITHER,
DK, REF) Would you lean toward the (Democratic candidate) or
toward the (Republican candidate)?
NET LEANED VOTE PREFERENCE
               Dem     Rep     Other    Neither    Will not       No
               cand.   cand.   (vol.)    (vol.)   vote (vol.)   opinion
7/11/10  RV     46      47       *         2           *           5
6/6/10   RV     47      44       2         2           1           4
4/25/10  RV     48      43       1         2           1           6
3/26/10  RV     48      44       1         2           *           4
2/8/10   RV     45      48       *         3           *           4
10/18/09 All    51      39       1         3           2           5

Note the big difference when WaPo switches from sampling all adults in October 2009 to a sample of registered voters in February 2010.  Democrats go from being preferred by 12% to being behind by 3% – a net shift of 15%.  Yes, the polls were four months apart, so it’s possible voters’ preferences simply shifted that much in the intervening time.  Note that we don’t see any comparable shift in subsequent months, however.
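The net-shift arithmetic can be sketched in a few lines (a minimal illustration; the helper name is mine, and the numbers come from the WaPo table above):

```python
# Net shift in the Democratic-vs-Republican margin between two
# poll samples, using the WaPo figures quoted above.

def margin(dem, rep):
    """Democratic margin in percentage points (Dem minus Rep)."""
    return dem - rep

# 10/18/09, all adults: Dem 51, Rep 39
all_adults = margin(51, 39)   # +12 points
# 2/8/10, registered voters: Dem 45, Rep 48
registered = margin(45, 48)   # -3 points

# The net swing toward Republicans across the two samples
net_shift = all_adults - registered
print(all_adults, registered, net_shift)  # 12 -3 15
```

The same calculation applied to the September/October 2002 economy question below (Dem +8 among all adults, Rep +5 among likely voters) yields the 13-point net switch discussed in the text.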

Next, let’s look at previous WaPo/ABC polls that asked respondents which party they trusted more to handle the economy.  Fortunately, in September, 2002 WaPo asked a sample of all adults “Which political party, the (Democrats) or the (Republicans), do you trust to do a better job handling the economy?”  In the following month, they asked this of likely voters and then two months later of all adults again. Note that likely voters tend to skew Republican even more than registered voters. In September, the random sample of all adults indicated that they trusted the Democrats more, by 8%. The next month, when WaPo sampled only likely voters, the country changed its mind and now trusted Republicans more by 5%. That is, Republicans picked up 9% while Democrats dropped 4% in the switch from sampling all adults to sampling likely voters – a net switch of 13%.  Two months later, WaPo went back to sampling all adults, and Democrats closed the trust gap, essentially matching Republicans’ support.  The following table summarizes the results:

Date         Democrat   Republican   Both   Neither   No Opinion
12/15/02        44          45         4       6          1
10/27/02 LV     43          48         3       3          2
9/26/02         47          39         3       6          5

Again, it’s possible that the public’s view toward the two parties’ ability to handle the economy changed from September to October, and then shifted back from October to December.  But it is more likely, in my view, that the change reflects the different responses one receives when sampling all adults versus sampling likely voters.

This difference in partisan response rates across surveys of all adults, registered voters, and likely voters permeates all polls.  As evidence, look at this recent Times poll asking

“There will be an election for U.S. Congress in November. If you had to decide today, would you vote for the Democratic candidate in your district or the Republican candidate?”

Fortunately, the Times asked this of both likely voters and registered voters at the same time.  Here are the responses:

Population Sampled    Democrat   Republican   Tea Party   Other   Unsure
Likely Voters            43          42           1         2       12
Registered Voters        47          43           1         3        6

Even when considering registered and likely voters, we see a slight Republican bias in the likely voter response.  Of course, the differences are small and close to the poll’s margin of error, so we can’t be sure the difference is driven by the different population samples.  But we can’t dismiss it either.  More generally, if you look at the dozens of polls that have asked versions of this “who will you vote for?” question this year, Republicans do better in surveys of likely voters versus registered voters, and better among registered voters than among all adults.  You can see for yourself here, by looking at the specific polls under the  2010 midterm section.

Now, let’s return to the original numbers on which Klein based his graph. The first table has the results for the question “who do you trust to make the right decision” asked of all adults.

Date Democrat Republican Both Neither No Opinion
7/11/10 42 34 3 17 5

And here are the percentages of the responses to “who do you trust to handle the economy?” again asked of all adults. (Respondents favoring Democrats in the first column, those favoring Republicans in the second.  I’ve omitted the “just some” and “not at all” categories to be consistent with Klein’s chart.)

7/11/10 32 26

What happens if you shift the net results to these two questions, say, about 5% toward the Republicans, which is consistent with the likely impact of surveying registered voters as opposed to all adults?  Suddenly, given the 4% margin of error, Republicans are virtually tied with Democrats in terms of which party is preferred by voters for handling the economy, and for making the right decisions.  That is, the finding Klein cites in his chart – Republicans and Democrats in a dead heat among registered voters in the 2010 midterms – seems quite consistent with the survey results for these two questions.
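To make that back-of-the-envelope adjustment concrete, here is a small sketch.  The 5-point shift and 4-point margin of error are the figures from the text; the helper name is mine, and this is illustrative arithmetic, not a formal reweighting:

```python
# Shift the all-adults results ~5 points toward Republicans to
# approximate a registered-voter sample, then compare the
# remaining gap against the poll's margin of error.

MOE = 4  # poll's margin of error, in percentage points

def adjusted_gap(dem, rep, shift=5):
    """Dem-minus-Rep gap after shifting `shift` points toward the GOP."""
    return (dem - rep) - shift

# "Right decisions": Dem 42, Rep 34 among all adults
decisions = adjusted_gap(42, 34)   # 8 - 5 = 3 points
# "Handle the economy": Dem 32, Rep 26 among all adults
economy = adjusted_gap(32, 26)     # 6 - 5 = 1 point

for name, gap in [("right decisions", decisions), ("economy", economy)]:
    verdict = "within MoE" if abs(gap) <= MOE else "outside MoE"
    print(name, gap, verdict)
```

Both adjusted gaps fall inside the 4-point margin of error, which is the sense in which the two parties end up “virtually tied.”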

My point isn’t that Klein is wrong.  In fact, the public could be acting illogically by voting for the party they trust least to handle the economy or to make the right decisions.  However, the difference he cites might just be a function of sampling different populations. We can’t be sure.  If I’m Klein, I would likely point this out rather than present a chart in a way that implies the public is, as one person commenting put it, “Stupid.”

I should be clear: I’m not accusing Klein of any chicanery here.  It’s possible he didn’t notice that his first two survey responses were based on samples of all adults, while the third was based on a sample of registered voters.  More likely, in my view, is that he saw the chance to flag an “Aha!” moment, which makes for a good column, and simply didn’t bother checking the underlying data. Whatever  the explanation, it is a reminder (yes, I know, you’ve heard it from me a thousand times) that you can’t simply take a columnist’s word about how to interpret polling results.  You have to look at the poll itself.

Bottom line?  A chart may be worth a thousand words – but sometimes it doesn’t say anything at all.