Poll Power
by Scott Keeter
As the votes were counted on the night of this past January’s New Hampshire Democratic presidential primary, pollsters and other professionals in the political game began to grapple with an uncomfortable fact: Virtually all of them had been dead wrong. Despite unanimous poll results predicting a Barack Obama victory (by an average of eight points) on the heels of Senator Obama’s surprising triumph in the Iowa caucuses, Hillary Clinton was going to emerge the winner.
The New Hampshire debacle was not the most significant failure in the history of public-opinion polling, but it joined a list of major embarrassments that includes the disastrous Florida exit polling in the 2000 presidential election, which prompted several networks to project an Al Gore victory, and the national polls in the 1948 race, which led to perhaps the most famous headline in U.S. political history: “Dewey Defeats Truman.” After intense criticism for previous failures and equally intense efforts by pollsters to improve their techniques, this was not supposed to happen.
New Hampshire gave new life to many nagging doubts about polling and criticisms of its role in American politics. Are polls really accurate? Can surveys of small groups of people give a true reading of what a much larger group thinks? What about bias? Don’t pollsters stack the deck?
At a deeper level, the unease about polling grows out of fears about its impact on democracy. On the strength of exit polls in the 1980 presidential election, for example, the TV networks projected a Ronald Reagan victory—and Jimmy Carter conceded—even though people in the West still had time to vote. Critics charged that this premature call may have dissuaded some westerners from taking the trouble to cast their ballots. There is also a more generalized suspicion that polls (and journalists) induce political passivity by telling Americans what they think. As the New Hampshire story unfolded on January 8, former television news anchor Tom Brokaw seemed to have this idea on his mind when he said, with a bit of exasperation, that professional political observers should simply “wait for the voters” instead of “making judgments before the polls have closed and trying to stampede, in effect, the process.”
At the same time, some worry that polls put too much power in the hands of an uninformed public, and that they reduce political leaders to slavish followers of public opinion. In the White House, efforts to systematically track public opinion date back to the dawn of modern polling, during the administration of Franklin D. Roosevelt, and nobody seems to get very far in American politics today without a poll-savvy Dick Morris or Karl Rove whispering in his or her ear.
But while there may be reason to worry about the public’s political competence, a far more serious threat to democracy arises from the large disparities in income, education, and other resources needed to participate effectively in politics. Compared with most other Western democracies, the United States has a more pronounced class skew in voter turnout and other forms of political participation, with the affluent much more politically active than those who are less well off. This uneven distribution of political engagement is what makes public-opinion polls especially valuable. Far from undermining democracy, they enhance it: They make it more democratic. As Harvard political scientist Sidney Verba observed in 1995, “Surveys produce just what democracy is supposed to produce—equal representation of all citizens. The sample survey is rigorously egalitarian; it is designed so that each citizen has an equal chance to participate and an equal voice when participating.”
Elections are blunt instruments for transmitting the public will. One candidate wins, the other loses. Did the victor prevail because he or she proposed a compelling agenda of new policies, or simply because the alternative was less acceptable? On the day after his reelection in 2004, President George W. Bush declared, “I earned capital in the campaign, political capital, and now I intend to spend it.” The president’s troubles in his second term indicate that this reading of his mandate was incorrect, as he vigorously pursued many policies on which the public was, at best, divided. Opposition to the war in Iraq grew in 2005. Most voters did not want to see private accounts created in the Social Security system. Seven in 10 disapproved of Bush’s personal intervention in the case of Terri Schiavo, the brain-damaged Florida woman who was removed from life support.
Obviously, polls do not always stop politicians from going their own way—and they should not always do so—but without polls we would not even know how disconnected official actions are from public opinion. Bush’s actions were not unlike those of many other political leaders who mistook a narrow victory for a mandate. In such cases, polling can provide a useful check. Between elections, polls provide guidance to legislators, the executive branch, journalists, and the public itself about what the public wants and what it will stand for.
There is no question that modern American politics is drenched in public-opinion polling. More than 20 entities, from the Gallup Organization and the Pew Research Center (where I work) to the relatively new “robo-poll” firms, such as Rasmussen Reports, with their computerized telephone surveys, regularly conduct national political polls and make the results available to the public. Dozens more work at the state and local levels. The total number of surveys conducted in a campaign is large, but impossible to count with certainty. Leaving aside all the research carried out for the campaign organizations, parties, and interest groups, at least 50 national opinion polls were released to the public in the month before the 2004 presidential election.
All told, including surveys by business, foundations, and others, marketing and public-opinion research is an $8.6 billion industry, according to one recent estimate. But in addition to being a big business and an integral part of America’s political machinery, survey research has become an academic discipline in its own right, with journals such as Public Opinion Quarterly and contributions from scholars in related fields such as sociology and political science. People in the field are grappling with a host of problems: Fewer Americans are willing to participate in polls, and a growing number can be reached only by cell phone, which makes them harder and costlier to interview. There are also knotty intellectual and methodological challenges, such as improving the accuracy of polls on matters, from drug and alcohol use to sexual behavior, about which many people are unwilling to be frank.
This phenomenon of “social desirability bias” is central to one theory about the failure in New Hampshire. Polling is a transaction between humans, and people may not answer a question honestly if they think the person interviewing them will judge them negatively. They regularly overreport their virtues, such as church attendance and charitable giving, and underreport their vices. When the American Society for Microbiology asked people whether they washed their hands after using the toilet, 94 percent declared that they always did. But when researchers watched what actually happened in public restrooms, they found that only 68 percent did. In New Hampshire, it is possible that people who feared they would be branded racists didn’t tell pollsters they were going to vote against Obama, even if race had nothing to do with their choice, while others simply avoided pollsters.
The race factor is well documented in the history of polling. In 1982, Los Angeles mayor Tom Bradley, an African American, reached Election Day in his race for California’s governorship with a six-point lead in the polls but lost to white Republican George Deukmejian by less than one percent. Virginia gubernatorial candidate L. Douglas Wilder was luckier in 1989, pulling out a narrow victory after leading by five to 10 points in the final polls. But the so-called Bradley effect seems to have died out after the 1990s—perhaps because of generational and attitudinal change. In five statewide contests in 2006 that featured black and white candidates, polls were very accurate. So race probably wasn’t a factor in the New Hampshire surveys. In the 2008 primaries that followed New Hampshire, polls sometimes overestimated and sometimes underestimated Obama’s support. The only clear pattern was that his strength was underestimated in states with large black populations, chiefly because Obama got a higher percentage of the black vote than the polls indicated he would.
We may never know what went wrong in New Hampshire. It is possible that the unique circumstances, with intense media scrutiny just days after the Iowa caucuses and two very popular candidates, created an extraordinary dynamic.
Exit polls were not the problem in New Hampshire, but in the past they have occasionally been a source of great controversy. In addition to the erroneous early call of Florida for Gore in 2000, leaks of early exit poll results in 2004 that showed John Kerry leading caused a sharp drop in the stock market and wild mood swings among partisans on both sides. Though the TV news organizations that largely fund the polls did not make any incorrect calls on election night, the leaks led them to agree to keep future exit poll results sealed until 5 p.m. (EST) on Election Day. Now the networks’ poll analysts are literally locked in a windowless “quarantine room” and deprived of all communication with the outside world. There were no leaks in the 2006 elections or the 2008 primaries.
The more serious challenge in conducting exit polls today is the growing number of voters who cast their ballots before Election Day, by absentee ballot or through early-voting procedures. In Oregon, all voters vote by mail, and in several other states more than a quarter of the votes will be cast early. Telephone surveys can fill in a picture of these voters, but as voting in advance spreads, so does the share of the electorate that exit polls alone cannot reach.
Exit polls have other limitations as well: Respondents must fill out paper forms, limiting the number and complexity of questions that can be asked. Yet they provide a window on voter psychology that no other method allows. Interviews are conducted immediately after people leave the voting booth, offering a more definitive accounting than other methods of who turned out and what motivated their choices.
Pollsters often hear the accusation that they can manipulate results, and it is true: They can. In a 1992 effort to gauge the impact of wording questions differently, for example, a New York Times poll offered two different questions about antipoverty efforts. When asked if they favored spending more money for “welfare,” only 23 percent of the respondents said yes; asked if they favored spending more on “assistance to the poor,” nearly two-thirds said yes. Pollsters working for groups that advocate particular viewpoints or solutions may be under pressure to find favorable results, and it is possible for them to formulate questions that get the most favorable response. (In fact, it is exceedingly difficult to write clear, unbiased, comprehensible questions, and pollsters will be the first to admit that they don’t always get it right.) Or, less ethically, pollsters can simply suppress results unfavorable to the client’s point of view. But most pollsters belong to associations with formal codes of ethics, and, more important, have a strong interest in maintaining their reputations, which is especially true for polling organizations that work in the public sphere.
Despite all the grumbling about polling, hard evidence that the public dislikes it is difficult to find. Pollsters, of course, have asked. More than three-fourths of respondents in a 1998 Pew Research Center study agreed that surveys on social and political issues serve a useful purpose. Still, there seems to be widespread skepticism about poll results. Another Pew study, for example, found that two-thirds of respondents didn’t believe that surveys of a small part of the population can yield an accurate picture of the whole population’s views.
We pollsters have a stock reply to this criticism: If you don’t believe in random sampling, ask your doctor to take all of your blood next time you need a blood test. Sampling is used in many fields—by accountants looking for fraud, medical researchers, and manufacturers testing for quality. The key is that every person in the population has a chance of being included, and that pollsters have a way to calculate that chance. The usual method of sampling the public is through random digit dialing, which gives every home telephone number in the United States an equal chance of being included. (Internet polls posted on websites do not have random samples, since people volunteer for them and are thus very different from the average—much more engaged in public affairs, more ideological in their views, and not very typical demographically.)
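To make the random-digit-dialing idea concrete, here is a minimal sketch in Python. The exchange prefixes are hypothetical; a real sampling frame is built from telephone-company data on which banks of numbers are actually in service. The point is simply that the procedure gives every number in the frame the same chance of selection.

```python
import random

# Hypothetical working area-code/exchange prefixes; a real random-digit-dial
# frame is assembled from data on active banks of telephone numbers.
KNOWN_PREFIXES = ["603-224", "603-225", "603-228"]

def rdd_sample(n):
    """Draw n phone numbers so that every number in the frame has an
    equal chance of selection: 1 / (len(KNOWN_PREFIXES) * 10,000)."""
    numbers = []
    for _ in range(n):
        prefix = random.choice(KNOWN_PREFIXES)  # each exchange equally likely
        suffix = random.randint(0, 9999)        # each of its 10,000 lines equally likely
        numbers.append(f"{prefix}-{suffix:04d}")
    return numbers

print(rdd_sample(3))  # e.g., ['603-225-0412', '603-228-7710', '603-224-3859']
```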
Still, even with random sampling, some types of people are a little more likely than others to participate in polls. Statistical weighting—which gives greater clout to the answers of people from demographic groups that are underrepresented in the survey and less to the overrepresented—can mitigate most of this bias. Because it typically increases the contribution of people with lower levels of education and income, weighting tends to increase the percentage of those who say they will vote Democratic. In a July Pew poll, the unweighted horse-race result among registered voters gave Obama a one-point advantage over John McCain, 44 percent to 43 percent. The weighted result was a five-point lead, 47 percent to 42 percent.
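The arithmetic behind weighting can be laid out in a toy sketch. The education split, sample counts, and candidate preferences below are invented for illustration, not figures from the Pew poll: each group’s weight is its share of the population divided by its share of the sample.

```python
# Toy post-stratification example with made-up numbers.
population_share = {"college_grad": 0.30, "no_degree": 0.70}  # assumed census targets
sample_counts   = {"college_grad": 500,  "no_degree": 500}    # a 1,000-person sample
support_for_A   = {"college_grad": 0.40, "no_degree": 0.48}   # share backing candidate A

n = sum(sample_counts.values())
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}
# college_grad -> 0.6 (overrepresented), no_degree -> 1.4 (underrepresented)

# Unweighted estimate: every respondent counts equally.
unweighted = sum(sample_counts[g] * support_for_A[g] for g in sample_counts) / n

# Weighted estimate: underrepresented groups count for more.
weighted = sum(sample_counts[g] * weights[g] * support_for_A[g]
               for g in sample_counts) / n

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
# unweighted: 44.0%, weighted: 45.6%
```

Real surveys weight on several characteristics at once, but the principle is the same: the answers of people like those who were missed count for a bit more.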
Weighting does not cure all ills. People who are interested in the topic of the survey are more likely to participate, potentially leading polls to overstate how involved the public is in a subject, whether it is sports, politics, or technology. Weighting can only partially adjust for this, since interest in a topic may not be closely related to demographic factors. This is one of the reasons why post-election polls often overstate the percentage of the public that turned out to vote. (The charge that there is a liberal bias in telephone polls because conservatives are less likely to participate in surveys sponsored by the mainstream media, however, has been shown to be incorrect by experiments in which extraordinary efforts were made to ensure a high response rate. There was no ideological difference in the results.)
Participation rates have become a more generalized problem for pollsters in recent years. Americans are overwhelmed by demands on their time and are bombarded with requests of all kinds, and they are increasingly using technologies such as voice mail and call blocking. As a result, survey response rates have declined sharply. The Pew Research Center’s response rates are now around 22 percent, down from about 36 percent 10 years ago. That is fairly typical of the polling industry. As pollsters work harder to recruit participants, costs rise. The average political survey may require calling 15,000 numbers to identify approximately 5,000 working telephone numbers, of which about 1,000 will produce a person who agrees to be interviewed. Altogether, this effort will require 30,000 to 40,000 phone calls. It is difficult to provide an average cost, but a typical telephone survey with a response rate of 20 to 25 percent and good quality control (including extensive interviewer training, questionnaire testing, and close supervision of the interviewing process) can cost $40 to $50 per interview or more.
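The arithmetic behind those figures can be made explicit. In this sketch the average number of dialing attempts per number and the per-interview cost are assumptions chosen to fall within the ranges quoted above.

```python
# Back-of-the-envelope numbers from the text, plus two assumptions:
# an average of 2.3 dialing attempts per number and a $45 cost per interview.
numbers_dialed  = 15_000   # telephone numbers in the sample
working_numbers = 5_000    # of which roughly a third turn out to be working
interviews      = 1_000    # completed interviews

attempts_per_number = 2.3  # assumption: reputable polls redial several times
cost_per_interview  = 45   # assumption: within the $40-$50 range cited

total_calls   = numbers_dialed * attempts_per_number  # ~34,500 calls
response_rate = interviews / working_numbers          # 0.20, i.e., 20 percent
total_cost    = interviews * cost_per_interview       # ~$45,000 for one survey

print(f"{total_calls:,.0f} calls, {response_rate:.0%} response rate, ${total_cost:,}")
```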
Rising costs may have serious consequences, since they increase the temptation to cut corners. For example, reputable pollsters typically make multiple calls to each telephone number to obtain an interview. It is cheaper to dial fresh numbers and interview whoever is available and willing to talk, but that approach risks biasing the sample toward people who are usually at home and willing to participate. Another cost-saving measure is the use of interactive voice response technology, or “robo-polling,” in which a computer dials numbers and a recorded voice conducts the survey. About one-third of all published polls in the Democratic primary elections this year and a majority of the published statewide general-election polls completed by mid-September were robo-polls. Overall, they performed well in the 2006 elections and the 2008 primaries, achieving an accuracy rate comparable to that of conventional telephone surveys. But they typically have to include very few questions, which limits their value for shedding light on what’s behind voters’ positions.
Another problem facing telephone polling is that a growing number of people are out of reach because they have a cell phone and no landline—currently 15 percent of adults, according to U.S. government studies. Cell-only Americans tend to be much younger than average, more likely to be members of a minority racial or ethnic group, and less likely to be married or own a home. Pollsters are responding; most of the major media polling organizations are now adding cell phones to the samples for some surveys. And, for now, experimentation by Pew and other survey organizations is finding that surveys that include cell-only respondents get the same results on most topics as those without cell phone samples. This is because the kinds of people who are reachable only on cell phones—the young, the unmarried, renters, minorities—have the same kinds of attitudes as similar individuals reached on landline phones. But no one knows how long this will hold true.
Whatever their pitfalls, election polls face the ultimate measure of accountability: reality. By that standard, their track record is very good. In 2004, nearly every national pollster correctly forecast that Bush would win in a close election, and the average of the polls predicted a Bush total within a few tenths of a percentage point of what he achieved. Among statewide polls in races for governor and U.S. Senate, 90 percent correctly forecast the winner, and many that did not were still within the margin of sampling error. The record in 2000 was similar, though that was an even closer election.
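For reference, the margin of sampling error mentioned here has a standard formula. The sketch below computes the conventional 95 percent margin for a proportion from a simple random sample; it ignores the design effects introduced by weighting, which enlarge the margin somewhat.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of sampling error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll with a candidate at 50 percent:
print(f"+/- {margin_of_error(0.50, 1000):.1%}")  # +/- 3.1%
```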
It is doubtful that the Founding Fathers would have taken much comfort in the reliability of survey research. They were skeptical of public opinion and fearful of direct democracy, believing, as James Madison artfully declared, that the public’s views should be “refine[d] and enlarge[d] . . . by passing them through the medium of a chosen body of citizens, whose wisdom may best discern the true interest of their country, and whose patriotism and love of justice will be least likely to sacrifice it to temporary or partial considerations.”
That skepticism is shared today by those who argue that the public simply does not know enough to form rational opinions on most issues of the day. But political leaders have to divine the public’s views from somewhere in order to “refine and enlarge” them. If the public is too ill informed to be consulted through surveys, why bother consulting it through elections?
There are four essential arguments that support the case for a greater role for the public and public opinion in political life. First, while some citizens may be uninformed or irrational, collective public opinion as expressed in polls is rational and responsive to the events and needs of the times. Much as juries reach accurate decisions after pooling the perspectives and knowledge of a range of individual members, collective preferences in polls reflect an averaging of the perspectives of many different kinds of people that offsets the errors introduced by the uninformed.
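A small simulation shows why averaging can rescue a noisy public, under the crucial assumption that individual errors are random rather than systematically biased. The numbers here are invented purely for illustration.

```python
import random

random.seed(1)
truth = 62.0  # some true quantity, say an index of economic conditions (0-100)

# Each respondent's judgment is the truth plus a large dose of random error.
respondents = [truth + random.gauss(0, 25) for _ in range(1000)]

mean_individual_error = sum(abs(r - truth) for r in respondents) / len(respondents)
collective_average    = sum(respondents) / len(respondents)

print(f"typical individual error:    {mean_individual_error:.1f} points")  # ~20 points
print(f"error of collective average: {abs(collective_average - truth):.1f} points")  # ~1 point
```

If the errors are correlated, that is, if many people are misled in the same direction, averaging does far less work; that is the well-known limit of this argument.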
Second, people are able to make effective use of “information shortcuts” to develop opinions and reach voting decisions that are consistent with their underlying values, even when they don’t have detailed knowledge about the issues. Party affiliation is perhaps the most useful shortcut, allowing voters to select candidates likely to be ideologically in tune with them even if they know little about where the candidates stand on a range of specific issues. Voters also take cues from trusted interest groups and organizations, such as the National Rifle Association, Planned Parenthood, or the League of Conservation Voters.
A third argument is that citizens are more knowledgeable than they seem. As psychologists have noted, people often cannot cite the specific factual information on which they base judgments, whether the subject is politics, movies, or even other people. But that does not mean they made their judgments in the absence of information. Rather, it reflects the fact that people often use facts to form impressions and then forget the facts while remembering the overall impression. I may recall that I liked watching The Usual Suspects and not recall who starred in it or the specifics of the plot. But if I watch it again, I am likely to reach the same conclusion about it.
Finally, opinion polls plumb other important questions apart from people’s views on complex decisions about public policy. They gauge assessments of the state of the national and local economies, the health care system, the importance of one issue versus another, and people’s day-to-day experiences and struggles. On these matters, the views of people with less political sophistication and knowledge can be as important as those of the better informed.
None of this is to say that shortcuts or collective public opinion always compensate for the failures of the citizenry, or that there is no room for improvement. But the larger point is that the public is better able to make meaningful distinctions than many elites assume. When news of a possible affair between President Bill Clinton and former White House intern Monica Lewinsky began to seep out in January 1998, the common judgment in political Washington was that Clinton’s presidency would be over if the charges proved to be true. The public would demand that the president resign or be removed, it was said. The charges did turn out to be true, but the predictions were wrong. From the very beginning, Americans told pollsters they opposed the idea of Clinton resigning or being impeached. Majorities described themselves as “disgusted” by the affair, but also said that special counsel Kenneth Starr should drop his investigation. The public was able to separate its judgments about Clinton the leader from those about Clinton the person. Indeed, Clinton’s job approval rating went up after the scandal broke: “It is not an exaggeration to say that these judgments saved Clinton’s presidency,” said my Pew colleague Andrew Kohut. “And it is inconceivable to think that public opinion could have had such an impact in an era prior to the emergence of the media polls.”
While political professionals must be attuned to public sentiment in order to survive, their perceptions are sometimes wrong. Polling can be an invaluable antidote in such situations. That doesn’t mean that leaders will always heed it. Later in 1998, polling showed strong opposition to the Republican Congress’s impeachment proceedings, but the GOP pressed on. It paid dearly for its persistence in the congressional elections that fall.
Clinton himself, the master of “triangulation,” embodies for some critics another fear about polls—that they will turn leaders into followers or panderers. In subtler form, this is a concern that polls provide an ultimately unreliable expression of the public mind, and because of their apparent authority as “the voice of the people” get more weight than they deserve. There is no doubt that politicians sometimes bend with the political wind—as they should in a democracy—but there is very little evidence that they slavishly follow polls; the Republican Party’s performance in the Clinton scandal, pressing ahead with impeachment in the face of clear public opposition, is a good example of politicians doing anything but pandering. If anything, the opposite worry is better founded, according to a study by political scientists Lawrence R. Jacobs and Robert Y. Shapiro. In Politicians Don’t Pander (2000), they wrote: “What concerns us are indications of declining responsiveness to public opinion and the growing list of policies on which politicians of both major political parties ignore public opinion and supply no explicit justification for it.”
Indeed, Clinton himself misjudged the potential for a public backlash when he moved ahead, early in his first term, with a plan to ease the ban on homosexuals serving in the military. Polls showed that the public was, at best, divided on this question. That’s not to say that the military’s prohibition of service by gays and lesbians was right, but Clinton bucked strong opposition without adequately preparing public opinion for the change. The ensuing controversy weakened him and contributed to the troubles he and his party faced the following year in the 1994 midterm elections.
The leaders of the impeachment drive during Clinton’s second term were insulated from public opinion, in part because they represented states or districts that were homogeneously conservative and thus unlikely to rebuke them for reaching beyond what the general public would support, Jacobs and Shapiro say. This pattern is increasingly typical of a Washington populated by legislators who are from highly gerrymandered districts and can be pushed to extremes by partisan interest groups that demand ideological loyalty as the price for avoiding a challenge in the political primary before the next election.
Even when they turn to opinion polls, politicians may use them less for guidance than for manipulation—to help them craft rhetoric that will allow them to avoid conforming to majority opinion when it conflicts with their personal or ideological goals. This is not always a bad thing, but it is ironic that polling has made it much easier for officials to minimize the influence of public opinion when it serves their interests to do so.
For all their flaws, polls are a unique source of information about America’s citizenry—not just their opinions on issues but also their experiences, life circumstances, priorities, and hopes and fears. All of these elements of everyday Americans’ lives are potentially relevant to the making of policy, and—compared with phone calls and elections—polls provide a fair and detailed accounting of them.
The eminent political scientist V. O. Key once defined public opinion as “those opinions held by private persons which governments find it prudent to heed.” Though by no means a perfect instrument, polls make it possible for more opinions, held by a broader and more representative range of citizens, to be known to the government and thus, potentially, heeded.

Scott Keeter is director of survey research for the Pew Research Center in Washington, D.C. A political scientist and survey methodologist, he is the author of A New Engagement? Political Participation, Civic Life, and the Changing American Citizen (2006), among other books.

Reprinted from the Autumn 2008 issue of The Wilson Quarterly.