Tag Archives: nate silver

Dear Nate Silver: IT’S NOT MY FAULT!

Once again I am reminded of the power – both good and bad – of social media.

My post today discussing the results from the latest Senate forecast models is up here at U.S. News.  I should point out that I very much appreciate the opportunity to post there – it enables me to reach a wider audience and generates more feedback here at my regular Presidential Power site. So I encourage you to check the U.S. News site out on a regular basis – it features some great writers, including former Middlebury student Rob Schlesinger.

But you should also know that I don’t write the titles to my posts there, and I certainly don’t get to create the twitter feed U.S. News uses to publicize the posts. So, when a tweet goes out from U.S. News linking to my post, as it just did, that reads “No, Nate Silver can’t predict who will win the Senate http://ow.ly/CAl3Q via @MattDickinson44”, and when CBS news correspondent Major Garrett retweets it to his more than 111,702 followers and when pollster Frank Luntz then forwards the link to his 48,000 followers, including Nate Silver, in this way – “@USNewsOpinion Them’s fightin’ words. (@MattDickinson44 vs. @NateSilver538)” – just remember: IT’S NOT MY FAULT!

In fact, if you read the post (please do!) you’ll see that I actually did not single out Nate Silver in any way that could be perceived as a knock on his forecasting abilities. Instead, I pointed out that the purely poll-based forecasting models have, over the last month, begun converging with the models that include fundamentals, exactly as I predicted they would in my previous post here. My other point, however, was that even though all the major forecast models that I follow are now giving Republicans a better than 50% chance of gaining enough seats to take a Senate majority, that is not the same thing as saying Republican control come November 4th is now a lock. Because so many of the Senate races remain close, with polling averages showing less than a 3% difference between the two candidates, and because the outcomes of those close races will affect who has a Senate majority, I don’t think the forecast models can really tell us right now who will control the Senate. With less than three weeks to go, there is still too much variability in the polling and, with the races so tight, a greater chance that an unpredictable event will influence the outcome. This isn’t a critique of the models – indeed, most of them (the Washington Post model is a notable exception) favor the Republicans by relatively slim margins at this point, as this table indicates (purely poll-based forecasts in italics).

Put another way, the models are simply not precise enough for us to have much confidence regarding who will control the Senate on Nov. 4 based on the data we have today.
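
To see why, consider a back-of-the-envelope illustration. This is my own toy calculation – emphatically not a reconstruction of any of the models listed above – and the 3-point polling error is just an assumed ballpark figure:

```python
# Toy illustration only -- not any forecaster's actual model. Suppose a polling
# average shows a 2-point lead, and the true Election Day margin is roughly normal
# around that average with a ~3-point standard error.
from statistics import NormalDist

def win_probability(poll_margin_pct, error_sd_pct=3.0):
    """P(the polling leader wins) under a simple normal-error model."""
    return 1.0 - NormalDist(mu=poll_margin_pct, sigma=error_sd_pct).cdf(0.0)

print(round(win_probability(2.0), 2))        # ~0.75 for a single 2-point race

# If the majority turns on several such races, the chance they all break the
# "expected" way is smaller still (treating them, unrealistically, as independent):
print(round(win_probability(2.0) ** 4, 2))   # ~0.31 for four 2-point races
```

Under those assumptions a 2-point leader is roughly a 3-to-1 favorite – hardly a sure thing – and the probability that every close race breaks the expected way shrinks quickly as the number of such races grows.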

Two years back Nate Silver and I had a very constructive exchange regarding his presidential forecast model. (Interestingly, Sam Wang – whose forecast model Silver recently critiqued – also joined in on that previous exchange.) My major critique then was the lack of transparency in Silver’s model, which made it difficult for others to decipher the logic driving his predictions. For political scientists, forecasting is less an opportunity to showcase our predictive skills than a means to achieve a better understanding of elections, so it is imperative that we know how forecasts are constructed in order to assess their results. Since my earlier critique, however, Silver has come a long way toward showing us some of the moving parts in his models, as his post here explaining his Senate forecasts demonstrates. The gold standard in this regard, though, is Wang, who generously provides the code for his forecasting model at his Princeton Election Consortium site.

So, please, take a peek at my latest post at U.S. News. And if you don’t hear from me for a while, it’s because I’m ducking the incoming twitter blast that is surely coming my way.

Addendum 8:44 p.m. Frank Luntz responds to my twitter-based defense: “Yes, I too have had my own headache with a clickbait headline this week.” Here’s the background to his experience with what he calls clickbait.

In Defense of Nate Silver (Sort Of)

Nate Silver’s new 538 website opened last week to generally lackluster reviews. “It just isn’t working,” according to Tyler Cowen. Paul Krugman agrees: “I’m sorry to say that I had the same reaction. Here’s hoping that Nate Silver and company up their game, soon.” Ryan Cooper is more blunt: “To summarize: it’s terrible.” What do all these critics find so objectionable?

One major criticism is that the problems and related questions that dominate the news, particularly in the political arena, are not always amenable to the type of unrelenting statistical analysis that Silver and his minions emphasize.  As the New Republic’s Leon Wieseltier puts it, “Many of the issues that we debate are not issues of fact but issues of value. There is no numerical answer to the question of whether men should be allowed to marry men, and the question of whether the government should help the weak, and the question of whether we should intervene against genocide.”

Related to this is a fear that Silver’s claim that he just does analysis, and not advocacy, masks the truth that his punditry is no more bias free than that of any other pundit.  As Cooper puts it, “In an attempt to focus solely on objective analysis, Silver is ignoring one of the hardest-won journalistic lessons of the last decade — there is no such thing as ideology-free journalism.”

I confess that I find both objections less than compelling.  To begin, no political scientist that I know would disagree with Wieseltier’s observation regarding the fact-value distinction.  Indeed, in his classic study of decisionmaking in organizations, the late, great Herbert Simon (he won a Nobel Prize for his study of decisionmaking) observed, “It is a fundamental premise of this study that ethical terms are not completely reducible to factual terms.”  But, Simon cautioned, to assert that there may be an ethical component to an administrator’s decisions is not to say that those decisions involve only ethical elements.  Put another way, political scientists may not be able to tell voters whether electing Barack Obama over Mitt Romney is a better (or worse) outcome for the nation, or the world.  But we can say something about what the determinants of the presidential vote are likely to be, and what election outcome it is likely to produce.  If I understand Silver correctly, that’s all he and his team are trying to do at this new website.  Should government help the weak?  I doubt Silver knows the answer.  But he might be able to tell us if government can help the weak.

Similarly, I doubt that Silver believes he views his analyses through a value-free ideological lens. Anyone who read his FiveThirtyEight column at the New York Times understands what Silver’s political views are. But rather than compensating for one’s implicit biases, political or otherwise, by – as Cooper advocates – “wear[ing] your ideology on your sleeve,” I’d argue instead that there is merit in trying very hard to prevent those biases from contaminating one’s analysis. One way to do so is to be explicit about the theoretical and methodological assumptions built into one’s analysis. This approach differs from, and is more useful than, analysis that is explicitly harnessed to the cause of advocacy. There is something to be said for disciplined thinking designed to discover underlying truths, no matter how inconvenient. And that means not only clarifying the assumptions built into one’s analysis – it also means trying to specify how certain one is about one’s conclusions. How confident am I in my prediction that Obama will beat Romney?

It is on this last point, I think, that I come closest to agreeing with Silver’s critics like Krugman, who worry that without explicit theorizing, Silver’s data-driven research may tell us less than we think. As Krugman writes: “But you can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking. If you think the data are speaking for themselves, what you’re really doing is implicit theorizing, which is a really bad idea (because you can’t test your assumptions if you don’t even know what you’re assuming.)”

Krugman echoes a point I’ve made before about Silver’s work: that unlike political scientists, he is not fully transparent about what goes into his analyses, such as his presidential forecast models. Without a glimpse at the moving parts, we can’t be sure what we are learning. It is one thing to say that Obama will win the Electoral College vote, but we need a theory to understand why he won that vote. In Silver’s defense, however, he is not pretending to be a political scientist – and why should he? He has a wider audience and (presumably) earns more money than any political scientist I know. If he wants to hide the ingredients that go into his “special” forecast brew in order to make it appear more satisfying (and more original!), I say more power to him. As a career move, it certainly has served him well to this point.

I confess that I believe some of the carping by pundits regarding Silver’s website is a reaction to his contrarian and what some perceive to be condescending attitude toward the work of mainstream journalists whose writings grace the major newspaper op-ed pages. In that vein, here Silver is describing how his new site differs from the work of the best known columnists: “Uhhhh, you know … the op-ed columnists at the New York Times, Washington Post, and Wall Street Journal are probably the most hedgehog-like people. They don’t permit a lot of complexity in their thinking. They pull threads together from very weak evidence and draw grand conclusions based on them. They’re ironically very predictable from week to week. If you know the subject that Thomas Friedman or whatever is writing about, you don’t have to read the column. You can kind of auto-script it, basically.”

It is true that Silver has always presented his work with a certain “Look Ma, no hands!” flair that in some cases overstates the novelty, and effectiveness, of what he is doing.  (See, for instance, how his model stacks up to political scientists’ when it comes to forecasting the last midterm elections.)  He has made a living by accentuating – exaggerating? – the difference between his data-driven analyses and what he sees as the hedgehog-like tendencies of more conventional columnists.  I understand why columnists resent Silver’s tone. (I confess that my tone when criticizing pundits has sometimes crossed that line as well!) But I also think that he’s not entirely incorrect in his criticisms of conventional punditry.  Too often it does harness data – if it uses data at all – to the cause of advocacy.

The bottom line?  If you want a data-based take on the likely outcome of political events, like the upcoming midterm elections, Silver’s site is probably as good as any. (And here I would recommend the work of Harry Enten at Silver’s site). But if you want to understand why those outcomes occur, there are better places to start.

Of course, if you want your political analysis spiced up with a dose of the plucky, in-your-face speak-truth-to-power attitude exemplified by this young analyst, then you’ve come to the right place:

My son


Dickinson and Silver, Take Two

Whether he did so out of frustration or some other emotion, I want to thank Nate Silver for taking time from his busy schedule to respond (twice!) to my critique of poll-based forecasting models similar to his. This type of exchange is common for academics, and I always find it helpful in clarifying my understanding of others’ work. Based on the email and twitter feedback, I think that’s been the case here – my readers (and I hope Nate’s too!) have benefited from Nate’s willingness to peel back the cover – at least a little! – on the box containing his forecast model, and I urge him to take up the recommendations from others to unveil the full model. That would go a long way toward answering some of the criticisms raised here and elsewhere.

Because of the interest in this topic, I want to take an additional minute here to respond to a few of the specific points Nate made in his comments to my previous post, as well as try to answer others’ comments. As I hope you’ll see, we are not actually too far apart in our understanding of what makes for a useful forecast model, at least in principle. The differences have more to do with the purpose for, and the transparency with which, these forecast models are constructed. Indeed, much of what passes for disagreement here arises because political scientists are used to examining the details of others’ work and putting it to the test. That’s how the discipline advances.

To begin, Nate observes, “As a discipline, political science has done fairly poorly at prediction (see Tetlock for more, or the poor out-of-sample forecasting performance of the ‘fundamentals based’ presidential forecasting models.)” There is a degree of truth here, but as several of my professional colleagues have pointed out, Nate’s blanket indictment ignores the fact that some forecast models perform better than others. A few perform quite well, in fact. More importantly, however, the way to improve an underperforming forecast model is by making the theory better – not by jettisoning theory altogether.

And this brings me to Nate’s initial point in his last comment: “For instance, I find the whole distinction between theory/explanation and forecasting/prediction to be extremely problematic.”  I’m not quite sure what he means by “problematic”, but this gets to the heart of what political scientists do: we are all about theory and explanation.  Anyone can fit a regression based on a few variables to a series of past election results and call it a forecast model. (Indeed, this is the very critique Nate makes of some political science forecast models!) But for most political scientists, this is a very unsatisfying exercise, and not the purpose for constructing these forecast models in the first place.  Yes, we want to predict the outcome of the election correctly (and most of the best political scientists’ models do that quite consistently, contrary to what Silver’s comment implies), but prediction is best seen as a means for testing how well we understand what caused a particular election outcome.  And we often learn more when it turns out that our forecast model misses the mark, as they did for most scholars in the 2000 presidential  election, and again in the 2010 congressional midterms, when almost every political science forecast model of which I’m aware underestimated the Republican House seat gain (as did Nate’s model).  Those misses make us go back to examine the assumptions built into our forecast models and ask, “What went wrong? What did we miss? Is this an idiosyncratic event, or does it suggest deeper flaws in the underlying model?”

The key point here is you have to have a theory with which to start.  Now, if I’m following Nate correctly, he does start, at least implicitly, with a baseline structural forecast very similar to what political scientists use, so presumably he constructed that according to some notion of how elections work.  However, so far as I know, Nate has never specified the parameters associated with that baseline, nor the basis on which it was constructed. (For instance, on what prior elections, if any, did he test the model?) It is one thing to acknowledge that the fundamentals matter.  It is another to show how you think they matter, and to what degree. This lack of transparency (political scientists are big on transparency!) is problematic for a couple of reasons.  First, it makes it difficult to assess the uncertainty associated with his weekly forecast updates.  Let me be clear (since a couple of commenters raised this issue), I have no principled objection to updating forecast projections based on new information. (Drew Linzer does something similar in this paper, but in a more transparent and theoretically grounded manner.) But I’d like to be confident that these updates are meaningful, given a model’s level of precision.  As of now, it’s hard to determine that looking at Nate’s model.

Second, and more problematic for me, is the point I raised in my previous post.  If I understand Nate correctly, he updates his model by increasingly relying on polling data, until by Election Day his projection is based almost entirely on polls.  If your goal is simply to call the election correctly, there’s nothing wrong with this.  But I’m not sure how abandoning the initial structural model advances our theoretical understanding of election dynamics.  One could, of course, go back and adjust the baseline structural model according to the latest election results, but if it is not grounded on some understanding of election dynamics, this seems rather ad hoc.  Again, it may be that I’m not fair to Nate with this critique – but it’s hard to tell without seeing his model in full.

Lest I sound too critical of Nate’s approach, let me point out that his concluding statement in his last comment points, at least in principle, in the direction of common ground: “Essentially, the model uses these propositions as Bayesian priors. It starts out ‘believing’ in them, but concedes that the theory is probably wrong if, by the time we’ve gotten to Election Day, the polls and the theory are out of line.”   In practice, however, it seems to me that by Election Day Nate has pretty much conceded that the theory is wrong, or at least not very useful.  That’s fine for forecasting purposes, but not as good for what we as political scientists are trying to do, which is to understand why elections in general turn out as they do.  Even Linzer’s Bayesian forecast model, which is updated based on the latest polling, retains its structural component up through election day, at least in those states with minimal polling data (if I’m reading Drew’s paper correctly).  And, as I noted in my previous post, most one-shot structural models assume that as we approach Election Day, opinion polls will move closer to our model’s prediction. There will always be some error, of course, but that’s how we test the model.
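
For readers who want to see the prior-plus-polls logic in miniature, here is a bare-bones sketch. To be clear, this is not Nate’s model – he has not published its parameters – and every number below is invented for illustration. The sketch simply treats a fundamentals forecast as a normal prior on the two-party vote share and updates it with a polling average whose precision grows as Election Day approaches, so the combined estimate drifts toward the polls in just the way described above:

```python
# A hedged sketch of the general idea -- NOT Silver's actual model, whose parameters
# are not public. A "fundamentals" forecast serves as a normal prior on the incumbent's
# two-party vote share; the polling average is a second estimate whose uncertainty
# shrinks as Election Day nears, so the combined forecast drifts toward the polls.

def combine(prior_mean, prior_sd, poll_mean, poll_sd):
    """Precision-weighted (conjugate normal) combination of prior and polling average."""
    w_prior = 1.0 / prior_sd ** 2
    w_poll = 1.0 / poll_sd ** 2
    post_mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
    post_sd = (w_prior + w_poll) ** -0.5
    return post_mean, post_sd

# Invented numbers: a structural model says 51.5% with a 2-point sd; the polling
# average says 49.5%, with its uncertainty shrinking from 3 points in June to
# 1 point on the eve of the election.
for label, poll_sd in [("June", 3.0), ("Labor Day", 2.0), ("Election eve", 1.0)]:
    mean, sd = combine(51.5, 2.0, 49.5, poll_sd)
    print(f"{label}: forecast {mean:.1f}% (sd {sd:.1f})")   # drifts from ~50.9 toward 49.5
```

If I am reading Drew’s paper correctly, his model embeds the same precision-weighting intuition, but with the structural component specified explicitly and retained through Election Day – which is precisely the kind of transparency I am asking for.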

(Drew’s work reminds me that one advantage scholars have today – and a reason why Bayesian-based forecast models can be so much more accurate than more traditional one-shot structural models – is the proliferation of state-based polling. Two decades ago I doubt political scientists could engage in the type of Bayesian updating typical of more recent models, simply because there wasn’t a lot of polling data available. I’ll spend a lot of time during the next few months dissecting the various flaws in the polls, but, used properly, they are really very useful for predicting election outcomes.)

I have other quibbles. For example, I wish he would address Hibbs’ argument that adding attitudinal data to forecast models isn’t theoretically justified. And if you do add them, how are they incorporated into the initial baseline model – what are the underlying assumptions? And I could also make a defense of parsimony when it comes to constructing models. But, rather than repeat myself, I’ll leave it here for now and let others weigh in.


Pay No Attention To Those Polls (Or To Forecast Models Based On Them)

Brace yourselves. With the two main party nominees established, we are now entering the poll-driven media phase of presidential electoral politics.  For the next several months, media coverage will be dominated by analyses of trial heat polls pitting the two candidates head-to-head.  Journalists will use this “hard news” to paint a picture of the campaign on an almost daily basis  – who is up, who is not, which campaign frame is working, how the latest gaffe has hurt (or has not hurt) a particular candidate.  Pundits specializing in election forecasting, like the New York Times’ Nate Silver, meanwhile, will use these polls to create weekly forecasts that purport to tell us the probability that Obama or Romney will emerge victorious in November.

Be forewarned: because polls are so volatile this early in the campaign, it may appear that, consistent with journalists’ coverage, candidates’ fortunes are changing on a weekly, if not daily, basis in response to gaffes, debates, candidate messaging or other highly idiosyncratic factors. And that, in turn, will affect the win probabilities estimated by pundits like Silver. Every week, Silver and others will issue a report suggesting that Obama’s chances of winning have dipped by .6%, or increased by that margin, or some such figure based in part on the latest polling data.

Pay no attention to these probability assessments. In contrast to what Silver and others may suggest, Obama’s and Romney’s chances of winning are not fluctuating on an almost daily or weekly basis.  Instead, if the past is any predictor, by Labor Day, or the traditional start of the general election campaign, their odds of winning will be relatively fixed, barring a major campaign disaster or significant exogenous shock to the political system.

This is not to say, however, that polls will remain stable after Labor Day. Instead, you are likely to see some fluctuations in trial heat polls throughout the fall months, although they will eventually converge so that by the eve of Election Day, the polls will provide an accurate indication of the election results. At that point, of course, the forecast models based on polls, like Silver’s, will also prove accurate. Prior to that, however, you ought not to put much stock in what the polls are telling us, nor in any forecast model that incorporates them.

Indeed, many (but not all) political science forecast models eschew the use of public opinion polls altogether. The reason is that they don’t provide any additional information to help us understand why voters decide as they do. As Doug Hibbs, whose “Bread and Peace” model is one of the more accurate predictors of presidential elections, writes, “Attitudinal-opinion poll variables are themselves affected by objective fundamentals, and consequently they supply no insight into the root causes of voting behavior, even though they may provide good predictions of election results.” In other words, at some point polls will prove useful for telling us who will win the election, but they don’t tell us why. And that is really what matters to political scientists, if not to pundits like Silver.

The why, of course, as longtime readers will know by heart, is rooted in the election fundamentals that determine how most people vote.  Those fundamentals include the state of the economy, whether the nation is at war, how long a particular party has been in power, the relative position of the two candidates on the ideological spectrum, and the underlying partisan preferences of voters going into the election.  Most of these factors are in place by Labor Day, and by constructing measures for them, political scientists can produce a reasonably reliable forecast of who will win the popular vote come November.  More sophisticated analyses will also make an Electoral College projection, although this is subject to a bit more uncertainty.
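
For the curious, the mechanics of a fundamentals-based forecast boil down to something like the following sketch. The data below are invented placeholders rather than real election results, and actual models – Hibbs’ “Bread and Peace” among them – differ in which fundamentals they include and how they measure them:

```python
# A minimal sketch of a fundamentals-based forecast -- invented data, illustrative only.
# Regress the incumbent party's two-party vote share on pre-Labor Day fundamentals
# from past elections, then plug in the current year's values to produce a forecast.
import numpy as np

# Hypothetical past elections: [real income growth %, incumbent-party terms in office,
# president's net approval in June]
X_past = np.array([
    [ 3.1, 1,  15.0],
    [ 0.8, 2,  -5.0],
    [ 2.2, 1,   8.0],
    [-0.5, 2, -12.0],
    [ 1.9, 1,   3.0],
    [ 2.8, 3,  -2.0],
])
y_past = np.array([53.4, 48.9, 52.1, 46.3, 51.0, 49.5])   # vote shares (also invented)

X = np.column_stack([np.ones(len(X_past)), X_past])        # add an intercept
coefs, *_ = np.linalg.lstsq(X, y_past, rcond=None)

this_year = np.array([1.0, 1.2, 2, -1.0])                  # intercept + assumed current values
forecast = this_year @ coefs
residual_sd = np.std(y_past - X @ coefs, ddof=X.shape[1])
print(f"Forecast: {forecast:.1f}% of the two-party vote (+/- roughly {2 * residual_sd:.1f})")
```

The point is that everything on the right-hand side of that regression is knowable by Labor Day, which is why the forecast does not bounce around with the weekly trial heat polls.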

But if these fundamentals are in place, why do the polls vary so much?  Gary King and Andrew Gelman addressed this in an article they published a couple of decades ago, but whose findings, I think, still hold today.  Simply put, it is because voters are responding to the pollsters’ questions without having fully considered the candidates in terms of these fundamentals. And this is why, despite my claim that elections are driven by fundamentals that are largely in place by Labor Day, campaigns still matter. However, they don’t matter in the ways that journalists would have us believe: voters aren’t changing their minds in reaction to the latest gaffe, or debate results, or campaign ad.  Instead, campaigns matter because they inform voters about the fundamentals in ways that allow them to judge which candidate, based on his ideology and issue stance, better addresses the voter’s interests.  Early in the campaign, however, most potential voters simply aren’t informed regarding either candidate positions or the fundamentals more generally, so they respond to surveys on the basis of incomplete information that is often colored by media coverage.  But eventually, as they begin to focus on the race itself, this media “noise” becomes much less important, and polls will increasingly reflect voters’  “true” preferences, based on the fundamentals. And that is why Silver’s model, eventually, will prove accurate, even though it probably isn’t telling us much about the two candidates’ relative chances today, or during the next several months.

As political scientists, then, we simply need to measure those fundamentals, and then assume that as voters become “enlightened”, they will vote in ways that we expect them to vote. And, more often than not, we are right – at least within a specified margin of error! Now, if a candidate goes “off message” – see Al Gore in 2000 – and doesn’t play to the fundamentals, then our forecast models can go significantly wrong. And if an election is very close – and this may well be the case in 2012 – our models will lack the precision necessary to project a winner. But you should view this as a strength – unlike some pundits who breathlessly inform us that Obama’s Electoral College vote has dropped by .02%, political scientists are sensitive to, and try to specify, the uncertainty with which they present their forecasts. It is no use pretending our models are more accurate than they are. Sometimes an election is too close to call, based on the fundamentals alone.

The bottom line is that despite what the media says, polls – and the forecast models such as Silver’s that incorporate them right now – aren’t worth much more than entertainment value, and they won’t be worth more than that for several months to come.  As we near Election Day, of course, it will be a different matter.  By then, however, you won’t need a fancy forecast model that incorporates a dozen variables in some “top secret” formula to predict the winner.  Nor, for that matter, do you need any political science theory. Instead, as Sam Wang has shown, a simple state-based polling model is all you need to predict the presidential Electoral College vote.  (Wang’s model was, to my knowledge, the most parsimonious and accurate one out there for 2008 among those based primarily on polling data.)  Of course, this won’t tell you why a candidate won.  For that, you listen to political scientists, not pundits.  (In Wang’s defense, he’s not pretending to do anything more than predict the winner based on polling data alone.)
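
For those wondering what a “simple state-based polling model” amounts to, here is a stripped-down sketch in that spirit – emphatically not Wang’s actual code, and with invented states, polls, and electoral vote counts:

```python
# A stripped-down sketch of state-poll aggregation -- not Wang's published model.
# Take the median Dem-minus-Rep margin of recent polls in each state, award the
# state's electoral votes to the leader of that median, and sum. All data invented.
from statistics import median

state_polls = {   # state -> (electoral votes, recent Dem-minus-Rep poll margins)
    "State A": (29, [+2.0, +1.5, +3.0]),
    "State B": (18, [-1.0, -2.5, -0.5]),
    "State C": (10, [+0.5, -0.5, +1.0]),
}

dem_ev = sum(ev for ev, margins in state_polls.values() if median(margins) > 0)
rep_ev = sum(ev for ev, margins in state_polls.values() if median(margins) < 0)
print(f"Dem electoral votes: {dem_ev}, Rep electoral votes: {rep_ev}")
```

As I understand it, Wang’s actual method goes further – converting state poll medians into win probabilities and computing a full distribution of electoral vote outcomes – but the parsimony is the point: no fundamentals, no theory, just polls.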

So, pay no heed the next time a pundit tells you that, based on the latest polls, Obama’s win probability has dropped by .5%.  It may fuel the water cooler conversation – but it won’t tell us anything about who is going to win in 2012, and why.

Predictions, Predictions: Congress and the Courts

How reliable is the generic ballot survey question in helping forecast the size of the Democratic losses that will occur in November? As we move closer to November 2, a growing proportion of my blog posts will undoubtedly focus on the midterm elections. Many of these posts will likely mention the generic ballot question that is asked by a number of polling firms. As most of you know, this question typically takes some version of the following form: “Looking ahead to the Congressional elections in November, which party do you plan to vote for if the election were being held today?” The survey results, as many of you have heard me say, are actually a useful indicator of the likely outcome of the November midterm election.

But how useful? In a recent blog post, Nate Silver at fivethirtyeight.com raised important questions about the utility of the generic ballot question results. Silver writes: “It might be the case that the generic ballot is fairly stable, but that doesn’t necessarily mean it’s all that useful an indicator. In addition to the fact that the consensus of polls (however careful we are about calibrating it) might be off in one or the other direction, there’s also the fact that the thing which the generic ballot is ostensibly trying to predict — the national House popular vote — is relatively irrelevant to the disposition of the chamber, or the number of seats that each party earns.”

Political scientist Alan Abramowitz (via Brendan Nyhan) takes issue with Silver’s comments, calling portions of them “pretty silly.” I wouldn’t characterize Silver’s comments as silly, but I certainly disagree with portions of his post, particularly his claim that the national House popular vote is “relatively irrelevant to the …number of seats each party earns.” Although it is not a perfectly precise indicator of how many seats a particular party will win (or lose) come November, it does provide a useful approximation. Similarly, the generic ballot question does help us predict, within a margin of error, what the likely popular vote will be come November. To be sure, in some midterms it has proved more useful than others. But it is not irrelevant.

The bottom line, then, is that the generic ballot results, properly understood, help us predict the November midterm results. To see why, it helps to understand how political scientists factor the results of this question into their midterm forecast models.

When a generic ballot survey comes back with results showing, as the most recent Gallup poll does, that Republicans are favored over Democrats by 7% (50% to 43%), that doesn’t necessarily mean Republicans will win 7% more congressional seats.

Indeed, as Silver correctly points out, the generic ballot result can’t even tell us how many overall votes each party will get in the midterm, never mind how they will do in each of the 435 districts.  But political scientists understand this.  As it turns out, they don’t necessarily have to perform a district-by-district level analysis to get a decent handle on the total number of seats each party will win.  Instead political scientists can use the generic ballot results as one factor in statistical models that also take into account the existing political context as well as “structural” attributes associated with a midterm election.  So, for example, Abramowitz’s midterm forecast model uses only four variables to predict the midterm outcome, as measured by seats lost/gained:  the results of the generic ballot question, presidential popularity (as measured by the Gallup poll), the number of seats held by Republicans going into the midterm election, and a simple indicator variable that signals this is a midterm election, rather than a presidential one.  (This last variable is important because the president’s party typically loses seats in the midterm election.)

How do we know that the Abramowitz model (or others like it) is reliable?  Because it is constructed using previous midterm election results.  In effect, by looking at previous midterms, Abramowitz can build an overall “generic” statistical model that says, on average, how important each of these four variables is in determining the midterm election results.  Then, by plugging the current values for each variable into the existing model, he comes up with a forecast.  Political scientists have constructed different forecast models, but they typically all include some mix of variables that measure voters’ partisan sentiment, structural attributes associated with the midterm, and some indicator of the environment (political and otherwise) in which the election is being held.
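
To make the “plugging in the current values” step concrete, here is what using such a model looks like once it has been estimated on past midterms. The functional form follows the four variables just described, but every coefficient below is invented for illustration – these are not Abramowitz’s published estimates:

```python
# A hedged sketch of a generic-ballot seat-change model in use. The structure mirrors
# the four variables described above, but all coefficients are invented placeholders,
# NOT Abramowitz's published estimates.

def predicted_gop_seat_change(generic_ballot_margin, pres_net_approval,
                              gop_seats_held, is_midterm):
    """Predicted Republican House seat change; hypothetical coefficients throughout."""
    return (25.0                               # intercept (invented)
            + 2.0 * generic_ballot_margin      # GOP lead on the generic ballot, in points
            - 0.5 * pres_net_approval          # the (Democratic) president's net approval
            - 0.15 * gop_seats_held            # fewer seats left to gain when already holding many
            + 10.0 * is_midterm)               # midterm penalty for the president's party

# Illustrative inputs: generic ballot +7 GOP (as in the Gallup poll above), an assumed
# net approval of -5, 178 GOP seats going into the election, midterm year:
print(predicted_gop_seat_change(7, -5, 178, 1))   # ~ +25 seats under these made-up numbers
```

The output is only a point estimate, of course; as the next paragraph notes, any honest forecast also carries a margin of error around that number.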

Now, these models aren’t completely foolproof. They are predicated on the assumption that the factors influencing the current midterm will behave pretty much as they have in the past. Say something unexpected happens – terrorists attack the World Trade Center, for example. That may distort the relative importance of some variables, throwing the forecasts slightly off. Moreover, even without unexpected events, there is always some uncertainty involved with these forecasts – some unexplained variance for which the model cannot account.

But it would be wrong to suggest, as Silver does, that the generic ballot question is “irrelevant.”  In fact, it is a highly relevant and useful predictor of the midterm outcome – as long as it is evaluated within an overall understanding of the factors that drive midterm elections.  And right now, it favors the Republicans by 7% – not a good sign for Democrats at all.

I will have much more to say about the generic ballot in the next several posts. Before doing so, however, there’s another prediction that I need to discuss: Elena Kagan’s Senate confirmation vote to the U.S. Supreme Court. Long-time readers will recall that in an earlier post I set the over/under for the Kagan no votes at 35. In a sign that my predictive powers may be slipping (remember that I nailed Sotomayor’s exact confirmation vote), Kagan was confirmed with 37 votes in opposition, so I was two votes off. (If you must know, it was Nelson and Brown.) Will Loubier, on the other hand, hit the final vote margin squarely on the head, thus winning a Presidential Power “It’s the Fundamentals, Stupid!” t-shirt. Here’s Will, looking justly proud of his prognosticative abilities:

For the envious among you, note that I’ll be giving away another t-shirt in the “predict the midterm outcome” sweepstakes.  Stay tuned!