Election season is the time for lots of polls, which makes it a good time to highlight what they can teach us about our surveys!
Basically, a poll is just a type of survey, so the principles that make polling powerful also apply to surveys.
Can the Results of Polls Be Trusted?
I just read an article today on USAtoday.com that stated:
[Sarah] Palin said she did not realize until the night of the election that the ticket would probably lose and was initially surprised at the margin of the loss. Palin said she had felt that voters would in large measure pick the Republicans, despite the polls.
The fact is, the pre-election polls were spot-on, and exit polling enabled the newsrooms to call Pennsylvania for Obama before even 20% of the precincts there had reported, ending any hope for the McCain-Palin ticket.
I, for one, wasn't surprised by the results.
Polls have been predicting elections for decades, and the good pollsters have perfected their methods. Those are the same methods you and I should be using for our surveys.
What Makes Polls Work?
First, it takes good survey design to get good results. There are three kinds of bias to avoid when designing your poll (or survey):
- Selection bias
- Non-response bias
- Question bias
Avoiding Selection Bias With Surveys
Selection bias occurs when you select participants in such a way that you end up with a disproportionate number from a group with known preferences on the poll topic.
For example, suppose a political poll drew its random sample only from big cities (New York, Chicago, and Los Angeles). Big cities are known to favor Democrats and thus are not representative of the country as a whole.
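To see how much a skewed sampling frame can distort an estimate, here is a minimal simulation. The city/non-city split and the support rates are made-up numbers for illustration, not real polling data:

```python
import random

random.seed(42)

# Hypothetical electorate: 30% live in big cities (60% favor party A),
# 70% live elsewhere (45% favor party A).
# True overall support: 0.3*0.60 + 0.7*0.45 = 0.495
population = (
    [("city", random.random() < 0.60) for _ in range(30_000)]
    + [("other", random.random() < 0.45) for _ in range(70_000)]
)

def support(sample):
    """Fraction of the sample that favors party A."""
    return sum(vote for _, vote in sample) / len(sample)

# Unbiased: a simple random sample from the whole population
fair = random.sample(population, 1200)

# Biased: a sample drawn only from big-city residents
cities = [p for p in population if p[0] == "city"]
skewed = random.sample(cities, 1200)

print(f"true support:  {support(population):.3f}")
print(f"random sample: {support(fair):.3f}")
print(f"city-only:     {support(skewed):.3f}")
```

The city-only sample lands near 60% support no matter how large it is; drawing more biased respondents never fixes a biased frame.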
Eliminating Non-Response Bias
Non-response bias is bias associated with the group that does not respond to your poll. If that group tends to have different preferences on the poll topic from those that do, then bias is introduced.
For example, if we polled the public about their reactions to responding to polls, those who answer would most likely have different views than those who choose not to respond. There is always some non-response, but telephone surveys often avoid this bias because whether someone answers the phone is usually independent of the survey topic.
The Question Bias Quandary
Question bias is the bias introduced by the way your questions are asked. Leading or misleading questions bias your results (checking the question wording is a quick way to weed out surveys that lack credibility).
For example, "National health care in Canada has led to substandard care; do you think the USA should have national health care?"
Real World Polling Problems
Second, there are a couple of real world issues that pollsters think about to make sure they hit the target.
One, polls are snapshots in time, and people can change their views or shift their opinions, so pollsters take this into account by setting the frequency of polling to match the likelihood of changing views. For presidential elections they even do daily tracking polls! Since most polls try to predict the future, change can cause problems.
This is why "Exit Polls" are so much more accurate. An exit poll asks participants how they DID vote not how they WILL vote.
The issue of evolving viewpoints is eliminated for an exit poll.
Two, pollsters plan for chance. If hundreds of polls are done, then just by chance you would expect a small number of them to be off.
That is why pollsters look at polls over time.
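A quick simulation illustrates the point: even perfectly conducted polls at a 95% confidence level should miss the true value about 5% of the time, purely by chance. The true support figure and the number of polls here are made up for the sketch:

```python
import math
import random

random.seed(1)

TRUE_P = 0.52   # assumed true support in the population
N = 1200        # respondents per poll
POLLS = 400     # number of independent polls to simulate

misses = 0
for _ in range(POLLS):
    # One simulated poll: N voters, each favoring the candidate
    # with probability TRUE_P.
    hits = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = hits / N
    # Standard 95% margin of error for a proportion.
    moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N)
    if abs(p_hat - TRUE_P) > moe:
        misses += 1

print(f"{misses} of {POLLS} polls missed the true value ({misses / POLLS:.1%})")
```

Roughly one poll in twenty falls outside its own stated margin of error even though nothing went wrong, which is exactly why a single outlier poll means little next to the trend.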
Sampling Lessons From Polls
Third and finally, polls rely heavily on sampling to get the results they do!
One of the hardest things for a layperson to understand is how a poll of 1200 voters can accurately predict the votes of 20 million people. The fact is, that is the power of sampling.
If the biases mentioned above are avoided, then the sampling error is the only thing left to consider.
First of all, the accuracy of a sample does not depend on the size of the target population (20 million); it depends only on the sample size (1200).
The margin of error for a sample of 1200 is 3-4%, which is the accuracy needed for political polls. The margin of error for a sample of 250 is 7-9%.
Most surveys do not warrant the level of accuracy needed for polls.
That is why I usually recommend sample sizes between 200 and 300 for surveys. Increasing accuracy beyond what is needed increases costs (and resources) without increasing information.
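The textbook formula behind these figures is worth seeing. This is a sketch assuming a simple random sample at 95% confidence (real polls often quote slightly wider margins to cover design effects and weighting); notice that the population size never appears in it:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# The formula only uses n, so it gives the same answer whether the
# target population is 20 thousand or 20 million.
for n in (250, 300, 1200):
    print(f"n = {n:>5}: ±{margin_of_error(n):.1%}")
```

A sample of 1200 comes in just under ±3%, while 250-300 respondents land around ±6%, which is plenty for most survey decisions.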
Sample Sizes for Polls vs. Surveys
To illustrate why a poll needs a sample of 1200 while your survey may only need a sample of 300, consider a political poll where the actual population is divided 52% to 46% (the final percentages in this year's election).
A margin of error in the range of 3-4% is clearly needed to predict a winner when sampling.
Now consider your satisfaction survey that shows 52% satisfied customers and 46% dissatisfied customers.
Does our reaction to these results change if we switch the numbers to 46% satisfied and 52% dissatisfied? Hardly; either way the action is the same: you need to improve customer satisfaction!
In the case of the political poll, however, 52% to 46% is vastly different from the reverse!
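One way to check this numerically, assuming a simple random sample and the usual normal approximation for the difference between two shares from the same poll:

```python
import math

def lead_is_significant(p1, p2, n, z=1.96):
    """Can a sample of n distinguish leader p1 from runner-up p2?
    Uses the standard error of the difference between two shares
    measured in the same multinomial sample."""
    diff = p1 - p2
    se = math.sqrt((p1 + p2 - diff**2) / n)
    return diff > z * se

for n in (300, 1200):
    verdict = "call the race" if lead_is_significant(0.52, 0.46, n) else "too close to call"
    print(f"n = {n:>5}: 52% vs 46% -> {verdict}")
```

With 300 respondents the 6-point lead is within the noise, but at 1200 it clears the 95% threshold, which is why pollsters pay for the larger sample and your satisfaction survey usually does not need to.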
So, think about some of these issues for your next survey and you might want to consider using the power of sampling!