Few Pollsters Get High Marks in ’08 Presidential Races
Everyone seems to conduct polls these days, but not everyone conducts good ones. That’s the message I drew after reviewing dozens of presidential polls conducted from Iowa through Indiana and North Carolina.
It’s beyond time for those of us who write about politics to evaluate the seemingly endless number of polls conducted in the race for the White House. A few thoughtful souls monitor and write about polls on a regular basis, including ABC News’ Gary Langer and Mark Blumenthal at Pollster.com, a wonderful Web site that everyone should read regularly. Unfortunately, too many people mindlessly accept any and all numbers, treating them as if they are equally accurate.
I did not examine every poll conducted in every state. Instead, I looked at most of the high-profile contests, especially where a considerable number of different polls were available on RealClearPolitics.com (from which I gathered the numbers that follow). I stopped looking at GOP primaries after it became clear that Arizona Sen. John McCain would be his party’s nominee.
I also looked at the so-called RealClearPolitics average in each state, which the same Web site compiles from a handful of recent polls.
Ultimately, I focused on five different polls that have received considerable attention: American Research Group, Rasmussen Reports, the Reuters/Zogby/C-SPAN poll, SurveyUSA and the Suffolk University poll.
The worst-performing poll has been Suffolk.
Suffolk University’s pre-primary survey correctly predicted the winner in only five of nine contests. It was wrong in both New Hampshire primaries, the California Democratic primary and, incredibly, the Democratic primary in Massachusetts, the state where the university is located.
In the other five contests in which Suffolk polled, the results were quite good, within a couple of points of the actual results. But in polling, being right about half the time isn’t a record to be proud of.
Rasmussen, Reuters/Zogby/C-SPAN and ARG produced better results — but not by much. Each firm picked the winner a little under two-thirds of the time.
ARG correctly picked the winner in seven contests but blew four. It missed the Iowa and New Hampshire Democratic contests badly (everyone botched New Hampshire), missed the South Carolina Republican contest badly and picked the wrong winner in the Michigan GOP race.
Rasmussen got 11 primaries right and six wrong (the Democratic races in New Hampshire, California, Missouri and Texas, and the GOP primaries in California and Alabama), a mediocre record at best. Even more disconcerting, in five of the primaries that Rasmussen got “right,” the firm was embarrassingly far off from the actual vote.
For example, while Rasmussen’s last poll in Massachusetts had Sen. Hillary Rodham Clinton (D-N.Y.) up by 6 points, she actually won by more than 15 points. The firm had Sen. Barack Obama (D-Ill.) up by 4 in Wisconsin; he went on to win Wisconsin by more than 17 points.
The Reuters/Zogby/C-SPAN poll’s overall results mirror Rasmussen’s. Zogby’s California Republican and Democratic polls were an embarrassment. The firm was way off in the Granite State Democratic race, but it wasn’t alone. The firm’s Ohio poll was bad (showing a tie while Clinton won by a comfortable 10 points), and it picked the wrong winner in the Michigan GOP and Indiana Democratic contests.
But unlike Rasmussen, when Zogby got the right winner, the firm usually came pretty close to the winner’s margin, as in North Carolina on Tuesday.
The best pollster, by a wide margin, was SurveyUSA, which coincidentally has conducted some House and Senate race polls for this newspaper.
I’ll admit to not being a fan of SurveyUSA’s automated polling, and some of the firm’s past results have struck me as simply wrong. But in examining presidential primary polling, SurveyUSA stands well above its competition.
SurveyUSA called 11 of 14 races correctly, missing the Missouri Democratic primary badly and picking the wrong winner in two close races, the GOP contest in Alabama and the Democratic primary in Texas. The firm did not poll in the New Hampshire Democratic primary, which was missed by every major polling firm.
Until this week, SurveyUSA could boast that not only did it pick the winners, it also forecast the margin of victory. The poll was spot on in the South Carolina and Missouri GOP races and in the California and Ohio Democratic contests, for example.
This week, however, SurveyUSA got the winners in the two Democratic primaries but was far off in predicting the Clinton margin in Indiana and the Obama margin in North Carolina.
Finally, RealClearPolitics’ own “RCP average,” which averages a handful of recent polls, improved on the individual polls that I checked. The average got only four of 21 contests “wrong”: Democratic primaries in New Hampshire, California and Missouri, as well as the GOP primary in Alabama. But even in some contests where it picked the winner, the RCP average was far off the final margin, including in Massachusetts, Georgia and Wisconsin.
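For readers curious about the mechanics, the poll averaging described above is simple arithmetic: take each candidate’s share in a handful of recent polls and average them. A minimal sketch, using hypothetical firm names and numbers rather than the actual 2008 figures:

```python
# Illustrative sketch of a simple polling average in the spirit of the
# "RCP average." The poll names and percentages below are hypothetical.

def polling_average(polls):
    """Average each candidate's share across a set of recent polls."""
    candidates = polls[0]["results"].keys()
    return {
        name: sum(p["results"][name] for p in polls) / len(polls)
        for name in candidates
    }

recent_polls = [
    {"firm": "Poll A", "results": {"Clinton": 48.0, "Obama": 44.0}},
    {"firm": "Poll B", "results": {"Clinton": 46.0, "Obama": 47.0}},
    {"firm": "Poll C", "results": {"Clinton": 47.0, "Obama": 45.0}},
]

avg = polling_average(recent_polls)
print(avg)  # Clinton averages 47.0, Obama about 45.3
```

Averaging tends to wash out any single firm’s house effect, which is one reason the composite outperformed most individual pollsters here.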
Polling is a tricky business, and even the most methodologically rigorous firms get things wrong. That’s why every poll comes with a statistical margin of error.
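That margin of error follows directly from the sample size. For a simple random sample, the standard textbook formula is z * sqrt(p(1-p)/n); a minimal sketch, with a hypothetical sample size of 600 likely voters:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n. p=0.5 is the worst case, giving the widest interval; z=1.96
    is the standard 95% confidence multiplier."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical statewide poll of 600 likely voters:
moe = margin_of_error(600)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 4.0 points
```

Note that this covers only sampling error; the misses cataloged above, like the New Hampshire Democratic primary, came largely from other sources, such as turnout models and late-deciding voters, which no margin of error captures.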
On the other hand, I have to ask why pollsters with such dubious track records continue to get as much attention as they do. “Given how wrong some of these firms have been,” one partisan pollster told me recently, “they wouldn’t be rehired next time if they worked for a candidate.”
Of course, I know why such pollsters survive and why they are hired and rehired. Most of what is written or aired on TV is for mere amusement, and people in charge of Web sites, hosting TV programs or booking guests don’t know much about politics or methodology, or they don’t care about those things. For them, it’s about the sizzle, not the steak — even if the steak isn’t worth eating.
Stuart Rothenberg is editor of The Rothenberg Political Report.