House Majority Leader Eric Cantor lost his primary in convincing fashion, and his defeat raises two separate questions: How did he lose? And why was it a surprise?
One thing that everyone should be able to agree on is that Cantor’s loss was historic. According to CQ Roll Call research, he is the first majority leader to lose in a primary since the creation of the position in 1899.
So how did Cantor lose? The initial postmortems on the race point to a variety of potential reasons: Cantor was too moderate on immigration, ran ineffective campaign ads, had poor internal polling, suffered from a local fight over the state central committee, was too focused on becoming the next speaker, was part of the unpopular Republican leadership, had ineffective constituent service, didn’t spend enough time in the district and wasn’t a good stylistic match for the primary electorate.
In reality, these all could have (and seem likely to have) contributed to his loss in some way. And in the days ahead, there are likely to be more deep dives analyzing the nuts and bolts of the race.
But the other fascinating question is: Given the growth of the political analysis industry, why didn’t anyone see the upset coming?
On one level, Randolph-Macon College economics professor Dave Brat’s victory over the second-highest ranking Republican in the House fits the insider versus outsider, establishment versus anti-establishment narrative very neatly. But that’s about where the similarities to other successful challengers start and end.
Cantor was not caught off-guard in the race. Unlike some of his colleagues who lost in 2012 primaries, including Ohio’s Jean Schmidt, Oklahoma’s John Sullivan, and Florida’s Cliff Stearns, Cantor ran a campaign. He began airing attack ads against Brat in April.
Even if his television ads missed their mark, incumbents who have taken their races seriously at an early stage usually are victorious, particularly when they outspend their challengers by a huge margin, which is what happened in Virginia. According to pre-primary reports detailing activity through May 21, Cantor spent more than $4.8 million and had $1.5 million in the bank. Brat spent $122,000 and had $84,000 on hand.
Anti-establishment groups remained on the sidelines. Usually a successful challenger needs a boost from outside groups such as the Club for Growth, FreedomWorks, the Senate Conservatives Fund (which is open to playing in House races), and the Madison Project. Those groups weren’t involved, so it seemed unlikely Brat would get over the top without them.
At the same time, the typical establishment groups such as the U.S. Chamber of Commerce or the Republican Main Street Partnership were not involved for Cantor. In the past, that absence has been a sign that an incumbent wasn’t vulnerable, but that clearly was not the case in this race. Not asking for help could have been Cantor’s biggest and final mistake.
That also means the tea party still hasn’t shown an ability to defeat the establishment at full power this cycle. That could happen on June 24 in Mississippi.
Without some of the usual indicators of primary intensity and competitiveness, we were left to turn to polling. And there wasn’t much to turn to.
From a handicapping perspective, the surprise ultimately rested on the survey research. There was a shortage of polling data, and the surveys that were available turned out to be very wrong and quite misleading.
The Cantor campaign released a now-dated poll by McLaughlin & Associates, conducted May 27-28, which showed the congressman with a 62 percent to 28 percent advantage. The only other public poll was an automated survey which showed Cantor ahead by 13 points, but the firm has been in business for about 15 minutes, so there was little reason to put a lot of stock in that single poll. In any case, both surveys showed Cantor over 50 percent on the ballot and comfortably ahead. The congressman lost by at least 10 points on Tuesday.
At the end of a race, polling data should be the most important piece in handicapping and projecting the outcome. In the case of Cantor’s primary, the limited survey data misinformed the analysis and led to a surprise result.