February 2013

Opinion polls (surveys) are used extensively to understand almost everything that cannot be looked up as a scientific fact. Polls are reported every day on subjects ranging from what flavor toothpaste kids like most (Colgate probably already funded this poll), to what voters think is the best way to get out of an economic recession. Opinion polls help ascertain the policies, products, services, and leaders that affect our daily lives. In litigation, surveys are often used in trademark, unfair advertising, and other business disputes to help resolve issues involving customer behavior.

Sources of Data Bias

Like anything, surveys can be misused and misinterpreted. Survey or poll inaccuracies generally fall into the following five categories:

  1. Sampling error – Controlling this entails correctly identifying the population being represented, and drawing a sample large enough to obtain reliable results. Our online interactive tool can help calculate a proper sample size; a sketch of the standard calculation appears after this list.
  2. Coverage bias – The method used to collect the sample may not be representative of the population to which the conclusions are directed. For example, assume a poll is being conducted by phone. Some people have only cell phones and no landline. Yet, it is unlawful in the United States for pollsters to make unsolicited calls to phones whose owners may be charged for taking the call. Thus, cell phone-only users will not be included in polling samples conducted via phone. If the subset of the population with landline phones differs from the rest of the population, the poll results will be skewed.
  3. Non-response (Selection) bias – Some people may not answer calls from strangers, or may refuse to participate, perhaps because of the time involved. Because of this selection bias, the characteristics of those who agree to be interviewed may differ significantly from those who decline.
  4. Response bias – Answers given by respondents may not reflect their true beliefs, perhaps because of embarrassment. This bias can sometimes be controlled by the wording and/or order of poll questions.
  5. Wording and order of questions – The (i) wording of questions, (ii) order of questions, and (iii) number and form of the alternative answers offered may influence poll results.
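
The sample-size arithmetic behind item 1 is standard. Our online tool is not reproduced here, but the following minimal Python sketch shows the usual calculation (Cochran's formula, with an optional finite-population correction); the function name and default values are illustrative only, not the tool's actual interface.

```python
import math

def sample_size(margin_of_error=0.05, z_score=1.96,
                proportion=0.5, population=None):
    """Estimate the respondents needed to measure a proportion.

    margin_of_error -- desired precision (0.05 means +/- 5 points)
    z_score         -- 1.96 corresponds to a 95% confidence level
    proportion      -- expected proportion; 0.5 is the most conservative
    population      -- if given, apply the finite-population correction
    """
    # Cochran's formula for a large (effectively infinite) population
    n = (z_score ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    if population is not None:
        # Smaller populations need proportionally fewer respondents
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                 # 385 respondents for +/-5% at 95% confidence
print(sample_size(population=2000))  # 323 when sampling from a population of 2,000
```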

When surveying attitudes, opinions, and/or projections of future behavior, collection bias is difficult to control. Responses to questions can vary based on factors such as:

  1. The respondent’s perception of the person asking the questions (most respondents react to the person as well as the question), and
  2. The setting or environment in which the questions are asked (e.g., one’s own living room filling out a paper survey, versus a telephone survey, versus being stopped in a shopping mall).

The technical aspects of data collection (i.e., the first four items above) are usually handled well (or as well as is possible) by the major national polling organizations (e.g., Gallup, Pew Research, Rasmussen). However, we are surprised by how frequently these technical collection matters are handled poorly by customized surveys done in support of either marketing claims or litigation claims.

Drafting Survey Questions

Despite these various challenges involving sampling and collection, question wording and question order (i.e., context/placement) are usually the largest sources of bias. The goal is for each question to have a clear, consistent, and unbiased meaning and intent. Small differences in question wording or order can produce significantly different results between seemingly similar surveys.

Here are examples of potential question biases from presidential election polls and policy issues:

  1. When the question is read to respondents, pollsters may or may not include (i) the names of the vice presidential candidates along with the presidential candidates, and (ii) the party affiliation of each candidate. These inclusions or exclusions affect some respondents’ answers. One solution is to phrase the question in a way that mimics the voting experience (i.e., the way the voter would normally see the names when reading the ballot in the voting booth).
  2. Studies indicate that respondents are more likely to support a person (i) described as one of the “leading candidates”, and/or (ii) listed at the beginning of the choices rather than towards the end. A suggested solution is to field multiple survey versions in which the listing order of the choices is rotated, as shown in the sketch following this list.
  3. Policy issues have an even wider range of wording options. For example, when asking whether respondents favor or oppose programs such as food stamps and Section 8 housing grants, should they be described as “welfare” or as “programs for the poor”? Should the 2010 health care reform (i.e., the Patient Protection and Affordable Care Act) be described as “ObamaCare”, “health care reform”, or “health care system overhaul”? Each of these word choices may influence the responses, with such differences varying based on each respondent’s ethnic and economic demographics.
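
To make the rotation remedy in item 2 concrete, here is a minimal Python sketch (the candidate names are hypothetical) that generates one survey version per rotation, so each choice appears first exactly once. The versions would then be assigned evenly across respondents so that position effects cancel out.

```python
def rotated_orders(choices):
    """Yield every rotation of the answer choices, producing one
    survey version in which each choice leads exactly once."""
    for i in range(len(choices)):
        yield choices[i:] + choices[:i]

candidates = ["Candidate A", "Candidate B", "Candidate C"]
for version, order in enumerate(rotated_orders(candidates), start=1):
    print(f"Version {version}: {', '.join(order)}")
```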

There is substantial research that attempts (i) to measure the impact of question wording differences and (ii) to develop methods that minimize differences in the way respondents interpret what is being asked. Some of the items to consider when formulating survey questions include:

  1. Did you ask enough questions to cover the necessary aspects of the issue(s)?
  2. Are the questions worded neutrally (without taking sides on an issue)?
  3. Is the order of the questions logical? General questions should usually be asked before specific questions. For example, overall job approval should be asked before specific questions are asked that remind respondents about the leader’s successes or failures.
  4. Do questions asked early in the survey have any unintended effects on how respondents answer later questions?