Some guidelines for evaluating campaign polls
By MICHAEL McKEON
As the 1988 election year unfolds, the release of polling data examining the presidential, congressional and local races becomes almost a daily occurrence. Here are some guidelines to keep in mind when evaluating this information.
Basically there are two types of polls that the media cover: the media poll that is paid for by a news organization and the candidate poll that is paid for by the candidate himself or by groups supporting the candidate.
When examining either type of poll, two things to look for are the sample size and who is being polled, because sample size can be very deceiving unless you know who is being polled. This is particularly true with media polls. Because news organizations have set budgets for the amount of polling they can do, they have to get as many stories out of the data as possible. Toward this end, many media organizations will do a combined primary and general election poll. This way stories can be done on who is leading in the Democratic primary, who is leading in the Republican primary and who will win the general election.
To generate all this "news," media organizations often use a large sample size, usually more than a thousand interviews spread across the state. This "large" sample size can be very deceiving, however, when results are given for some of the specific races. A good example of this type of "primary/general" polling occurred during the 1986 Illinois Democratic primary. Several news organizations did surveys with a sample size of 1,000 voters statewide, and depending on the survey, between 500 and 600 respondents said they planned to vote in the Democratic primary election and expressed a preference in each of the contested Democratic races. In a Democratic primary, 500 to 600 interviews would appear to be an adequate sample to gauge the races, but in fact the numbers were almost specious. Because the 1,000 interviews were done statewide, the sample was reported to reflect the mood of all Illinois voters, and the 500 respondents who said they would vote in the Democratic primary thus represented 50 percent of all voters in the state. The fact is that only 17 percent of registered voters actually voted in the 1986 Democratic primary.
This discrepancy means either that the overall sample group contained too many Democrats or that about 66 percent of the respondents who told pollsters they would vote in the Democratic primary did not do so come election day. If the first explanation were correct, Democrats would have been leading Republicans by large margins in the general election portion of the polls. That was not the case, however, since Gov. James R. Thompson was leading both Democratic candidates. The second explanation would mean that so many non-participants were allowed into the Democratic primary sample group that the numbers became almost useless.
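The arithmetic behind that 66 percent figure can be checked directly. This is a minimal sketch using only the numbers reported above; the variable names are illustrative, not from the original polls:

```python
# Figures reported for the 1986 Illinois Democratic primary polling.
sample_size = 1000      # statewide interviews in the combined poll
said_primary = 500      # respondents who said they would vote in the primary
actual_turnout = 0.17   # share of registered voters who actually voted

# Turnout implied by the poll if the sample mirrored the state.
implied_turnout = said_primary / sample_size          # 0.50

# In a truly representative sample of 1,000, only this many respondents
# would have been genuine primary voters:
expected_voters = actual_turnout * sample_size        # 170

# Share of self-declared primary voters who did not show up on election day:
no_show_share = (said_primary - expected_voters) / said_primary

print(f"Implied turnout: {implied_turnout:.0%}")      # 50%
print(f"No-show share:   {no_show_share:.0%}")        # 66%
```

The gap between the implied 50 percent turnout and the actual 17 percent is what makes the 500-interview subsample look adequate on paper while being nearly useless in practice.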
May 1988 | Illinois Issues | 38
The other type of poll that receives a lot of coverage is released by the candidate. Why do candidates release or leak polling data to the media? The most obvious answer is that it shows them winning or gaining momentum in the campaign. This is not always the case. Some candidates don't like to release polls that show them too far ahead because they feel that workers and supporters will begin to think the race is already won and let down a bit, which can be disastrous. Other candidates like to release polls showing them with a big lead because they feel it will hurt their opponents' ability to raise money.
Whatever the reason for releasing these polls, there are two questions to ask when evaluating them: Was the polling done by a polling firm or was it done "in house"? What type of questions were asked? The problem with in-house polls is that the interviews are conducted by people who are affiliated with the candidate's campaign. No matter how well-schooled these individuals are or how unbiased they try to be, loyalty to their candidate almost always taints the results.
In examining the types of questions asked, the most misleading is the "candidate A, candidate B" question. Usually used when a relatively unknown candidate is running against a well-known incumbent, the question works this way: The positives and negatives of the candidates are described and then the respondent is asked which candidate he would vote for. On the positive side, candidate A (the nonincumbent) is usually described as bright, with new ideas; on the negative side, he's not a political insider and does not know how the system works. Candidate B (the incumbent) is described positively as a political figure who knows his way around and how to get a deal done, and negatively as a person many believe is a political hack. Obviously, candidate A does very well on this question.
The results are usually presented to contributors and the media in this way: "Our poll shows that when the facts are known about both candidates, our candidate ["A"] will win and here are the numbers to prove it." The inherent weakness of this question is that even if the facts about candidates A and B are true, the voters at large are not going to be presented with this information the way it was stated in the question during the poll.
In the heat of any campaign there are all kinds of charges and countercharges between candidates that blur their messages. When poll results and their potential discrepancies are added to the daily media campaign reports, any voter could get confused or, even worse, feel deceived. □
Michael McKeon is head of McKeon & Associates, a national polling organization.