Measuring the Bounce Is Harder Than It Looks
Sometimes, two polls aren’t any better than one.
While we all want to measure changes of public opinion over time, doing so depends on the accuracy of not just one poll, but of two. If either of those surveys is amiss, the “change” from the first to the second can misstate the movement (or lack of it).
For example, two early polls seeking to measure Sen. John Kerry’s (D-Mass.) “bounce” after the Democratic National Convention raise interesting questions about the change that they uncovered.
Let me state up front that I’m not challenging the methodology or fundamental reliability of the two surveys, or of the pollsters involved in them. I’m merely offering a more general caution about reading too much into surveys — even those conducted by the same organization over time.
The surveys in question are a CNN/USA Today/Gallup poll conducted July 30-31 and an ABC News/Washington Post poll conducted Aug. 1.
CNN/USA Today/Gallup sought to uncover changes in public opinion by comparing the new results with a July 19-21 poll. ABC/Washington Post post-convention numbers were compared with the same pollsters’ July 25 survey.
The most recent ABC/Post poll shows Kerry leading President Bush 50 percent to 44 percent among registered voters, a 4-point bounce from the pre-convention poll, which had Bush leading 48 percent to 46 percent. Among likely voters, Bush’s 50 percent to 46 percent pre-convention lead turned into a very believable 49 percent to 47 percent Kerry advantage.
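The arithmetic behind a “bounce” is worth making explicit, because two different numbers are in play: the change in a candidate’s own share (the figure usually reported as the bounce) and the swing in the head-to-head margin, which is roughly twice as large. A minimal sketch using the ABC/Post registered-voter numbers above:

```python
def bounce(pre_share, post_share):
    """Bounce as commonly reported: change in the candidate's own share."""
    return post_share - pre_share

def margin_swing(pre_kerry, pre_bush, post_kerry, post_bush):
    """Change in the head-to-head margin -- a larger-looking number."""
    return (post_kerry - post_bush) - (pre_kerry - pre_bush)

# ABC/Post registered voters: pre-convention Kerry 46, Bush 48;
# post-convention Kerry 50, Bush 44.
print(bounce(46, 50))                 # 4-point bounce in Kerry's share
print(margin_swing(46, 48, 50, 44))   # 8-point swing in the margin
```

The same underlying movement can thus be described as a 4-point bounce or an 8-point swing, depending on which measure a story leads with.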
All of those numbers make sense. They pass my “smell test.”
So do the favorable/unfavorable numbers. Kerry’s favorables went up and his unfavorables went down, just as you would expect. For Bush, the opposite occurred, again as you would expect after four days of Democratic rhetoric and message.
My question about the ABC/Post numbers comes when looking at the question about who voters trust on the economy.
By a solid 52 percent to 41 percent, registered voters prefer Kerry to handle the economy. That’s a marked change from the poll’s pre-convention numbers, when a plurality (47 percent to 46 percent) trusted Bush. Voilà, a significant Kerry bounce on the economy.
The problem in assessing the bounce is the July 25 data. That poll stands out as the only ABC/Post survey since April in which Kerry wasn’t more trusted on the economy.
In the July 11 poll, Kerry held a 50 percent to 44 percent advantage on the economy, and in the June 20 poll, Kerry’s advantage was a nearly identical 50 percent to 45 percent. Both polls surveyed registered voters, just like the post-convention survey.
Was the July 25 poll an aberration? Did it understate Kerry’s standing on his ability to handle the economy, therefore creating the appearance of a Kerry post-convention bounce when there wasn’t one? Or did the late-July poll pick up a Bush bounce on the economy — a possibility given the economy’s improvement over the previous few months — that subsequently dissipated?
Of course, we can’t know for certain, and that uncertainty should make us cautious in interpreting what the results imply.
But if one ABC/Post poll question on the economy is an eyebrow-raiser, the Gallup post-convention ballot is simply impossible to swallow.
According to the Gallup numbers, Kerry fell behind Bush (50 percent to 47 percent) in post-convention polling of likely voters after leading him in polling (49 percent to 47 percent) conducted before the Democratic National Convention.
This simply isn’t logical, and I don’t believe that the president is in a better position after Boston than he was before the Democratic convention. Nothing happened at the convention that would have alienated potential Kerry supporters or made Bush more appealing to either undecided voters or swing voters.
Of course, both sets of numbers are within the survey’s margin of error, so from a statistical point of view both show a dead heat. But we all know that reporters don’t emphasize the margin of error, and the natural conclusion when two polls show different leaders is that something has changed — even though statistically that may not be the case.
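The statistical point is easy to see with a back-of-the-envelope calculation. For a simple random sample, the 95 percent margin of error on a candidate’s share is about 1.96 standard errors of a proportion. The sample size below is a hypothetical 800 likely voters, since the column doesn’t give Gallup’s actual n:

```python
import math

def margin_of_error(share_pct, n, z=1.96):
    """Approximate 95% margin of error for a poll share, in points."""
    p = share_pct / 100
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Hypothetical n of 800 likely voters (the column doesn't report
# Gallup's actual sample size).
moe = margin_of_error(49, 800)
print(round(moe, 1))  # roughly +/-3.5 points on each share
```

With each share carrying roughly 3.5 points of sampling error, a 49 percent to 47 percent lead in one poll and a 47 percent to 50 percent deficit in the next are statistically indistinguishable, which is exactly why the “leader flipped” framing can mislead.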
Most observers wisely caution against reading too much into changing poll numbers when the surveys have been conducted by different polling organizations, since they invariably use slightly different methodologies. But even comparing poll data from the same organization can lead to questionable conclusions.
Stuart Rothenberg is editor of the Rothenberg Political Report.