As I mentioned last week, Jay DeSart and I have done some work on using September state-wide trial-heat polls to predict presidential election outcomes in the states. We use the Democratic candidate's share of the two-party vote in state-level September polls (averaged across publicly available polls), along with a lagged vote variable, to predict state-level outcomes. While the lagged vote variable is important to our model, most of the predictive power comes from the September poll average. Some of the data on September polls are presented below.
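For anyone curious about the mechanics, here is a bare-bones sketch of that kind of two-predictor regression. This is not our actual estimation code, and the numbers are made up purely to show the structure: the November Democratic share of the two-party vote is regressed on the September poll average and the lagged vote.

# Illustrative sketch only -- not the actual model code or data.
# nov_vote ~ intercept + sept_poll + lagged_vote, estimated by ordinary least squares.
import numpy as np

# One row per state; all numbers are hypothetical.
sept_poll   = np.array([52.0, 47.5, 55.3, 44.1, 50.2])  # Dem share of two-party Sept poll average
lagged_vote = np.array([51.0, 46.0, 57.0, 43.5, 49.0])  # Dem two-party share in the previous election
nov_vote    = np.array([53.1, 46.8, 56.0, 44.9, 50.5])  # actual Dem two-party share in November

X = np.column_stack([np.ones_like(sept_poll), sept_poll, lagged_vote])
coef, *_ = np.linalg.lstsq(X, nov_vote, rcond=None)      # [intercept, b_poll, b_lag]

predicted = X @ coef
print("coefficients:", coef)
print("states predicted to go Democratic:", predicted > 50.0)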
First, let's look at some scatterplots of the relationship between September polls and November votes from 1992 to 2004:
1992: Just so-so. No doubt one of the problems with September poll accuracy in 1992 was Perot's re-entry into the race in October. Even so, point estimates from September polls called the correct winner in 39 of 50 states.
1996: Better--just six errant calls (one of the two tied poll results was allocated as a "correct" prediction and the other as "incorrect").
2000: Still good--a stronger correlation and, again, six errant calls.
2004: Even better--just two errant calls (WI and NH).
The overall accuracy of September polls from 1992 to 2004 (below) is pretty impressive. The September poll average called the wrong winner in only 25 of the 200 election outcomes (12.5%). And if you toss out 1992 on the basis of Perot's October re-entry, the polls were "wrong" in only 14 of the remaining 150 cases (9.3%).
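The tally itself is simple bookkeeping: a state counts as a wrong call when the poll average and the actual vote fall on opposite sides of 50 percent. A quick sketch of that calculation, again with made-up numbers (poll averages at exactly 50 would need the kind of split allocation described above):

import numpy as np

sept_poll = np.array([52.0, 47.5, 49.0, 44.1])  # hypothetical Dem two-party Sept poll averages
nov_vote  = np.array([53.1, 50.8, 48.0, 44.9])  # hypothetical actual Dem two-party vote shares

poll_calls_dem = sept_poll > 50.0
state_went_dem = nov_vote > 50.0
wrong = poll_calls_dem != state_went_dem
print(f"wrong calls: {wrong.sum()} of {wrong.size} ({100 * wrong.mean():.1f}%)")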
One interesting thing to note from last week's post, though, is that in 2004 the correlation between earlier (May and June) state polls and the eventual outcomes was almost as strong as the correlation between the September polls and the eventual outcomes. The big caveat, however, is that the May and June polls only included results for 23 and 21 states, respectively.