Mayor Michael Bloomberg was coasting to victory. A day before the 2009 general election in New York City, a Quinnipiac University poll showed the mayor with a comfortable 12-point lead over Bill Thompson, the Democratic challenger. A Marist College poll, released four days before the election, gave Bloomberg a 15-point advantage. Despite some grumbling about the mayor’s decision to overturn term limits, he seemed to have locked up another four years in City Hall.
But as the returns started streaming in on election night, the mayor’s aura of invincibility evaporated. The initial numbers were enough to declare Bloomberg the victor, but with only slightly more than 50 percent of the vote. Thompson, who took 46 percent, came within five points of a stunning upset.
In the postmortem of the race, pollsters were criticized for promulgating the notion that Bloomberg’s victory had been inevitable. Some blamed the polls for hurting Thompson’s chances by limiting his coverage in the media, deterring potential supporters from getting involved in the campaign and dampening his voters’ enthusiasm to cast their ballots.
“I think the history of a lot of the public polls has shown that they’re wildly inaccurate,” Thompson said in a recent interview. “The one thing that the people of New York City have learned over a period of years and in different elections is that they’re just wrong. The public polls are just incredibly wrong, and I don’t think people put a lot of faith in them any longer.”
Pollsters brushed off the criticism, pointing to a variety of factors that they said made it hard to accurately measure voter sentiment and turnout. But they weren’t the only ones polling the race. The Thompson campaign released its own internal poll the same week that the Quinnipiac and Marist polls came out, which found its candidate lagging Bloomberg by just eight points, 46 percent to 38 percent. Unlike the two public polls, the internal poll detected a “significant shift” in the race as voters, who had been moving toward Bloomberg, were now breaking for Thompson. The campaign also noted that its poll showed that “the undecided voters are disproportionately minorities, which favors Thompson.”
“What you clearly had is a strategic attempt by the Bloomberg campaign to sell the inevitability of the mayor’s re-election in an attempt to suppress Democratic voters,” said Eduardo Castell, who was Thompson’s campaign manager. “Unfortunately, the media and Quinnipiac and Marist were complicit in that. That made our job a lot harder, when we knew, both through the activity that was happening in the streets and our internal polls, that it was a close race.”
Thompson, who is again running for mayor this year, isn’t the first candidate to complain about polls that miss the mark in New York City, nor was his 2009 race the first in which the projected numbers ended up differing noticeably from the ultimate outcome.
In 1989, David Dinkins’ 14-point lead over Rudolph Giuliani in a late mayoral poll dwindled to just two points in the actual voting. In 2001, only one out of six pollsters had Bloomberg beating Mark Green. In 2005, the final polls had Bloomberg up by 34 to 38 points—twice his actual 19-point margin of victory. In 2009, Quinnipiac misjudged the Democratic race for public advocate, both in the primary and the runoff.
So this year, as the crowded field of citywide candidates collides with a cascade of election polls, questions regarding the polls’ accuracy are certain to return—especially when candidates don’t like their findings. And as for the odds that pollsters’ predictions will more closely match the election returns this fall, it’s anybody’s guess.
* * *
Pollsters are quick to defend their work. For one thing, their polls rarely fail to identify the winning candidate. During the 2005 and 2009 mayoral races, they predicted a greater margin of victory for Bloomberg than he ultimately enjoyed, but they correctly picked him as the winner. Quinnipiac’s final pre-election poll in the 2001 mayoral race had Bloomberg and Green tied at 42 percent, which was a bit off the mark but captured some of Bloomberg’s late surge.
It is also a challenge to track a race involving a better-known incumbent running against a lesser-known candidate, some pollsters say. While voters typically have more settled views of the incumbent, the challenger’s relative lack of exposure can leave voters undecided and make it hard to predict which candidate they will eventually vote for.
Bloomberg was a highly recognizable figure by 2009, and he found himself making little headway in the months leading up to the election, even as he poured tens of millions of dollars into the race. The Quinnipiac and Marist polls showed him hovering around 50 percent for months, dipping no lower than 47 percent and rising no higher than 54 percent.
“As I recall, our Bloomberg number was within a point or two of what he ended up with,” said Lee Miringoff, director of The Marist College Institute for Public Opinion. “The difference was in the Thompson number. I think when you have a situation when you have a well-known incumbent and a lesser known challenger, the undecideds tend to gravitate to the challenger. In essence, they had already rejected the incumbent. They just needed to get more comfortable with the challenger.”
Several pollsters maintained that polls predicting a blowout could also alter a race in a self-correcting way. When such polls are published and widely reported, the front-runner’s supporters may feel a sense of complacency and skip a trip to the polls. In 2005, when Quinnipiac had Bloomberg beating Fernando Ferrer 68–30, and Marist had him up 64–30, some Bloomberg voters may have decided to stay home. In the end, the mayor won with 58 percent, while 39 percent voted for Ferrer.
Micheline Blum, the director of Baruch College Survey Research, argued that this dynamic was also at play in 2009, when Bloomberg supporters were less than thrilled about voting for him. Democrats were tired of voting for someone on another ballot line, and voters in both parties were disillusioned by his decision to overturn term limits.
“When someone’s way ahead like that at the end, their supporters, even though they think they’re going to vote for them and normally vote, at the very last minute they think, ‘He doesn’t need my vote, he’s going to win anyway,’ ” Blum said. “They won’t kill themselves to get there. On the other hand, the underdog’s supporters don’t want him to be humiliated and wiped out, and they want to make a statement, so they’re a little more motivated in those situations where you’re seeing someone 20 points ahead.”
Then there are the caveats that pollsters raise about any poll. Sampling can be a challenge since today’s voters are less likely to pick up the phone, and when they do they’re less likely to answer questions from a stranger. The margin of error always allows for a certain amount of wiggle room. Final pre-election polls are conducted several days or a week or more before an election, giving the candidates time to make gaffes or connect with more voters.
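The “wiggle room” pollsters cite has a standard formula behind it. As a rough sketch (the textbook calculation for a simple random sample; real polls also adjust for design effects and weighting, and the poll size below is an assumption for illustration):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p measured on n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical citywide poll of 1,000 likely voters with a candidate at 50 percent:
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

Note that the margin applies to each candidate’s number separately, so the gap between two candidates can swing by roughly twice that amount.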
A candidate may also have momentum that continues after the final polls. In 2001 Bloomberg’s come-from-behind surge was captured by some polls, but only SurveyUSA showed him actually beating Mark Green.
Jay Leve, the CEO of SurveyUSA, said that most polls that year had Mark Green cruising to victory, the conventional wisdom being that the Democratic nominee would win easily in New York City.
“We were very, very fearful of those numbers,” Leve said. “We were so far out on a limb saying that Green would lose, and that Bloomberg, this comparatively unknown businessman, would win, we didn’t sleep that night. And then the next day Bloomberg won by three points.”
While Quinnipiac and Marist misjudged the 2001, 2005 and 2009 mayoral races, they can still point to their strong overall track records as two of the country’s premier pollsters. Marist’s recent presidential polls have performed remarkably well; Quinnipiac accurately predicted President Obama’s victory in the key 2012 swing states of Florida, Ohio and Virginia, and came close to nailing the final margins.
“Here you have a case where even two of the country’s best pollsters, which just happen to be located close to New York City, could manage to miss by a number of points, and still you’d have to say there are plenty of other pollsters who are inferior to Marist and Quinnipiac in their approach, in their rigor and in their methodology,” Leve said. “You could never say that they cut corners. These guys are really buttoned up and trying really hard. In the end it shows you what an imprecise science that public opinion polling turns out to be.”
Leve’s SurveyUSA poll got less attention in the 2009 mayoral race than Quinnipiac and Marist, but it outperformed them that year, too. Its poll, released a day before the vote, showed Bloomberg with 52 percent and Thompson with 43 percent, within three points of the actual outcome for each candidate and well within the poll’s margin of error.
Yet for Quinnipiac and Marist, the challenges at the local level, combined with the impressive records they have established with their national and state polls, only raise more questions. What is it about New York City that makes it such a tough place to figure out what voters are really thinking?
* * *
Part of the problem of polling can be chalked up to the fact that local candidates in New York City have a harder time getting people to vote, getting their message across or even developing much name recognition. National candidates like Mitt Romney and Barack Obama are household names. Bill Thompson, Fernando Ferrer and Mark Green? Not so much.
This year more than half of the city’s voters still don’t know enough about Bill de Blasio, New York City’s public advocate, to have an opinion of him, according to one recent poll, even though he is one of the more prominent mayoral candidates. However, de Blasio should know as much as anyone that his low name recognition and his underdog status in the polls are no reason to count him out just yet.
For much of his 2009 public advocate campaign, de Blasio was struggling to break into double digits in the polls. In late July Quinnipiac showed him with only 10 percent of the vote among Democrats, compared with 37 percent for Mark Green, the former public advocate and mayoral candidate, and 13 percent for Norman Siegel, a civil rights lawyer. About three weeks out, de Blasio’s share had inched up to 14 points, putting him solidly in second place. Then, in mid-September, he won 33 percent in the primary, edging Green by a point but falling short of the 40 percent threshold needed for an outright victory.
The following week, Quinnipiac had Green and de Blasio tied at 46 percent in the runoff, with 7 percent undecided. On Election Day just six days later, de Blasio hammered Green with 62 percent of the vote.
Jef Pollock, the president of the political consulting firm Global Strategy Group, which did polls for de Blasio in 2009, said that the public polls in the race didn’t necessarily get it wrong. The final Quinnipiac poll in the public advocate primary came a full 20 days before the actual vote, “which is a ton of time in a race like that, because all of the communicating happened between that time,” he said. “Every single thing that happened in that campaign happens in the last four weeks in the public advocate’s race. Nothing happens before that.”
George Arzt, a political consultant who handled de Blasio’s campaign communications in 2009, said he paid little attention to the polling that year, viewing it as little more than a curiosity.
“Sometimes I feel that polls are a mixture of a little bit of science and a little bit of voodoo,” he said. “Don’t forget that you throw in your weighting, what you think is the number of the group coming out. You don’t really know what’s going on in the communities. You have to seriously go out there.”
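The “weighting” Arzt refers to is the step where a pollster rescales the raw sample to match an assumed electorate, and his point is that the assumption drives the answer. A minimal sketch, with every number below invented for illustration:

```python
# Hypothetical post-stratification weighting. All shares below are made up.
sample = {"white": 0.55, "black": 0.25, "hispanic": 0.20}      # share of raw respondents
electorate = {"white": 0.45, "black": 0.30, "hispanic": 0.25}  # pollster's turnout model
support = {"white": 0.30, "black": 0.70, "hispanic": 0.55}     # candidate support per group

# Each group is weighted by (expected turnout share / sampled share),
# so the weighted topline reflects the assumed electorate, not the raw sample.
weights = {g: electorate[g] / sample[g] for g in sample}
topline = sum(support[g] * sample[g] * weights[g] for g in sample)
print(f"weighted support: {topline:.1%}")
```

Change the turnout model (say, more Asian or black voters showing up than expected, as in the Liu race) and the same raw interviews produce a different topline.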
Arzt, who also worked on John Liu’s comptroller campaign in 2009, said that pollsters misjudged that race as well by failing to account for the heavy turnout among Asians as well as “a lot of African-American support that no one guessed.” In the Democratic primary for comptroller, Quinnipiac had Liu in a dead heat with fellow Council Members David Yassky and Melinda Katz, but he went on to beat Yassky by 7 points. In a runoff, Quinnipiac had him up 49–43 over Yassky. Liu won with 56 percent of the vote.
“How would you know that John Liu was a rock star in the Asian community?” Arzt asked. “Would you know? Did they poll those groups? Did they know that they were going to come out in such vast numbers?”
* * *
New York City has a long history of confounding pollsters. Polls were put under the microscope following the 1989 mayoral race between David Dinkins and Rudolph Giuliani, as well as in their 1993 rematch. In the week leading up to the 1989 general election, Gallup issued a poll that gave Dinkins a 14-point lead. On Election Day Dinkins barely eked out a win, garnering just 50 percent to Giuliani’s 48 percent.
Larry Hugick, a researcher with Princeton Survey Research Associates at the time, attributed the discrepancy to racial factors. Hugick found that in elections pitting a black Democrat against a white Republican, Democratic-leaning white voters were more likely than others to say they had not decided whom to vote for. The theory was that these white voters were reluctant to admit that they planned to vote for the white candidate—choosing race over party—since they didn’t want to appear to be racist.
Hugick argued that the same effect was at play in the 1993 race between Mayor Dinkins and Giuliani, although to a lesser degree. In that contest, a Harris poll conducted a few days before the election had the two men tied, at 47 percent apiece. In the actual vote, Giuliani won with 51 percent. In 1993, the lessons of 1989 had not been forgotten, Hugick noted.
“Journalists did not forget that Dinkins won by a razor-thin margin the last time despite double-digit leads in the final pre-election polls,” he wrote in a recap of the 1993 election. “Thus, they were less likely to uncritically accept any set of poll results as reality.”
There is a similar skepticism leading up to the 2013 election, thanks to the shaky performance of several pollsters in the last few mayoral races. A number of experts have concluded that the phenomenon Hugick described is no longer a factor—and some doubt that it ever was—but recent polls have still had trouble reaching minority voters and predicting their turnout. And in a city where minorities outnumber non-Hispanic whites, the under-counting of minority voters can amplify errors.
In 2009 Quinnipiac’s final election poll showed Thompson with 62 percent of the black vote and 43 percent of the Hispanic vote. Marist had Thompson with just 53 percent of the black vote and 36 percent of the Latino vote. But on Election Day exit polls showed Thompson winning a commanding 76 percent of the black vote along with 55 percent of Hispanics. As for whites, both polls accurately projected that two thirds would vote for Bloomberg.
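The arithmetic behind Gyory’s point is simple: a miss within a large subgroup moves the citywide number in proportion to that group’s share of the electorate. A back-of-the-envelope sketch (the 25 percent electorate share below is an assumption for illustration, not a figure from the article):

```python
def topline_shift(group_share: float, polled_support: float, actual_support: float) -> float:
    """Points of citywide error contributed by misjudging one subgroup's support."""
    return group_share * (actual_support - polled_support)

# Quinnipiac had Thompson at 62% among black voters; exit polls showed 76%.
# Assuming black voters made up roughly a quarter of the electorate:
shift = topline_shift(0.25, 0.62, 0.76)
print(f"{shift * 100:+.1f} points citywide")  # about +3.5 points
```

A few points here and a few more from a similar miss among Hispanic voters is enough to turn a projected blowout into a near upset.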
“We now have a firm minority majority in the electorate, which for whatever reason, pollsters like Quinnipiac have had a very bad track record tracking,” said Bruce Gyory, a political consultant and an adjunct political science professor at SUNY Albany. “When you underestimate growing shares of the electorate, and they break by that landslide of a margin, you have problems. So it leaves me scratching my head as to why we’re anointing front-runners based on polls in New York City that empirically would need you to have to have collective amnesia if you were going to think they were predictive of outcome.”
One target for criticism is the polls’ “likely voter” methodology. Public pollsters typically find respondents by randomly dialing phone numbers by area code, asking whether a person is a registered voter, then using a series of questions, such as whether the person has voted before and plans to vote again, to arrive at a subset of “likely voters.”
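In practice a likely-voter screen works like a scoring filter over those answers. A minimal sketch (the questions and cutoff below are hypothetical; each pollster guards its actual model):

```python
# Hypothetical likely-voter screen. Questions and the cutoff are invented.
def is_likely_voter(answers: dict) -> bool:
    score = 0
    if answers.get("registered"):              score += 1
    if answers.get("voted_last_election"):     score += 1
    if answers.get("plans_to_vote"):           score += 1
    if answers.get("following_race_closely"):  score += 1
    return score >= 3  # arbitrary cutoff for illustration

# A committed voter who isn't following the race closely still passes here,
# but a stricter cutoff or heavier weight on attention would drop them.
respondent = {"registered": True, "voted_last_election": True,
              "plans_to_vote": True, "following_race_closely": False}
print(is_likely_voter(respondent))  # True
```

The choice of questions and cutoff is exactly where the biases discussed below (attention, polling-place knowledge) can creep in.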
Pollsters often report the results based on the larger sample of registered voters early on in a campaign, then switch to the smaller likely voter sample closer to Election Day. But in some New York City polls, the results from the full registered voter sample have proven to be more accurate. Marist’s registered voter results were closer to the mark than its likely voter results in the 2005 mayoral race. Quinnipiac does not publish its registered voter results once it switches to its likely voter model, in an effort to avoid confusion.
“Here’s the problem: The pollsters, for understandable reasons, like to project the likely voter data because that makes them seem as if they’re Merlin in King Arthur’s Court, and they and they alone have divined who’s going to vote,” Gyory said.
There are risks inherent in the questions used to determine likely voters, as well as the number of questions that have to be answered a certain way to designate a voter as “likely.”
Pollsters may ask how closely a respondent is following an election, for example, to help determine his or her likelihood of voting. The answer to the question can prove misleading, however, if a voter is strongly committed to a particular candidate but isn’t paying much attention to a race.
Joel Benenson, who has signed on as the pollster for Council Speaker Christine Quinn’s mayoral campaign, said some polls made this mistake during the 2012 Obama-Romney race, resulting in the false expectation that fewer young voters would turn out.
“One of the biases that occurred against President Obama in a lot of these so-called likely voter samples was if you were an Obama voter and you made up your mind in July, why did you have to pay attention to the election? You didn’t,” said Benenson, who was Obama’s lead pollster in 2008 and 2012. “Pollsters need to carefully think through the questions they use in determining likelihood to vote.”
Benenson said some polls also suffer from a bias against renters and transient voters because of likely voter models that put too much emphasis on where a person voted in the last election or whether they know where to vote.
“Sometimes traditional metrics create a bias on a national level against urban voters,” Benenson said. “Now, whether that’s happening in New York City, I can’t say. I haven’t looked at their likely voter screeners closely, but you could inadvertently ask a question, like some polls have asked, ‘Have you voted in this polling place before? Do you know where your polling place is?’ Well, you know, in an age when people can look on their cell phones and the Board of Elections will tell you where to vote, you don’t have to know where your polling place is.”
Another potential problem that may have hurt Quinnipiac in 2009 is that it did not include cell phone lists. Calling cell phones, in addition to landline numbers, is standard practice today, and failing to do so tends to undercount both young and minority voters. Quinnipiac is planning to include more cell phones than ever this year.
Yet even when cell phones are included, pollsters cannot capture a subset of voters who have moved to the city but have kept cell phone numbers with non–New York City area codes.
“If they don’t have a landline and they’re cell phone only, and they have that cell phone from another state, that could be an issue,” said Doug Schwartz, director of the Quinnipiac University Poll. “But I don’t think it’s going to be a big issue. I don’t think that it’s that prevalent that it’s going to throw off the polls, but it is something that pollsters have to deal with.”
* * *
Schwartz said he expects Quinnipiac’s mayoral polls to be more accurate this year and on a par with its strong performance on the state level in gauging presidential elections. Using cell phones and reverting to the standard random dialing approach instead of the registered voter files it tried using in 2009 should help, he said. Some pollsters also suggested that the lack of an incumbent this time around could improve projections.
“We’re always taking a look back at our likely voter model, and we’re trying to tweak things, to see if that will help us be more accurate,” Schwartz said. “We’re going to be looking over how we did, and what we might do to improve our performance.”
The polls released so far in the 2013 mayoral race largely reflect name recognition. While it is early on and there is still time for another candidate to gain ground, there is also little doubt that Quinn is the front-runner.
Gyory, the SUNY Albany professor, suggested that both Thompson and Liu, a likely mayoral candidate, could be underestimated in the polling so far because both are minorities who have strong track records in the minority community, which pollsters have had a hard time counting.
Two recent Democratic mayoral primaries are instructive when it comes to the fluid nature of early polls. Mark Green held a double-digit lead over Fernando Ferrer for most of the 2001 primary, but Ferrer narrowed the gap in the final weeks and beat Green by four points. Green then narrowly edged Ferrer in the runoff.
Four years later, in 2005, Ferrer was the early Democratic front-runner, much like Quinn is now. In a March 2 Quinnipiac poll that year, Ferrer garnered a solid 40 percent. C. Virginia Fields, the Manhattan borough president, had 14 percent, while Council Speaker Gifford Miller and U.S. Rep. Anthony Weiner each had 12 percent. By the September primary vote, Weiner had narrowed the gap enough to prevent Ferrer from getting 40 percent.
“I thought the public polling for mayor pretty well captured a late move by Anthony Weiner and a runoff for Ferrer and Weiner, which is where it ended up, though Weiner chose to concede before the runoff,” said Benenson, Quinn’s pollster.
Still, Benenson criticized the glut of election polls and questioned what purpose they serve.
“How much polling do we need?” he asked. “Is the point of it just to promote the name of a university or a media organization? Or is the point of it to really help inform the electorate about the decisions that they’re making? The challenge today, with the proliferation of polling by so many entities and an unending appetite in the media for the latest poll, is how do all these institutions do this in a more responsible way that’s contributing to a more informed electorate and not just being the brand of the poll of the day?”
Benenson, who declined to criticize Marist and Quinnipiac directly, said that the degree to which public pollsters had missed in predicting the margin of victory should not be easily dismissed.
“From our perspective, if we’re off 10 points in a poll, we’re held accountable by our client,” he said. “You’re also held accountable by other people in the campaign, because you have a different strategy you’re running if a poll says you’re down 15 points versus five points. Who are you targeting? How are you campaigning at the end? What are you saying? All of those things are affected by what the state of the race is and what the state of play is.”
Pollock, the president of Global Strategy Group, said that the public polls right now, which show Quinn way ahead and Thompson, de Blasio and Liu bunched up in second place, match the private polling he has heard about.
“I don’t think anybody doubts that Chris Quinn right now is in the lead in the polls because she has higher name ID than everybody else. That’s the advantage of her position,” Pollock said. “Who’s going to make the move from second place to first, if anybody can, and can she get to 40 or not? Those are the real questions. Both of those are possibilities and open questions.”