January 2014

In the course of affirming the district court’s decision in Kraft Foods Group Brands LLC v. Cracker Barrel Old Country Store, Inc., 2013 WL 6017396, Judge Posner went a step further.  While upholding the injunction, he ended his assessment with some comments “for future reference” when it comes to consumer surveys offered to demonstrate consumer confusion in support of a trademark infringement claim:

“Consumer surveys conducted by party-hired expert witnesses are prone to bias. There is such a wide choice of survey designs, none fool-proof, involving such issues as sample selection and size, presentation of the allegedly confusing products to the consumers involved in the survey, and phrasing of questions in a way that is intended to elicit the surveyor’s desired response—confusion or lack thereof—from the survey respondents…. All too often ‘experts abandon objectivity and become advocates for the side that hired them’…. [I]t’s clear that caution is required in the screening of proposed experts on consumer surveys.”

There is certainly the potential for surveys to be misused and/or misinterpreted. Consistent with Judge Posner’s advice, care must be taken to ensure that your survey results accurately represent the true views and beliefs of the underlying population. Otherwise, they may be disregarded due to bias (real or perceived) or simply poor design. Survey inaccuracies generally fall into the following five categories:

  1. Sampling error – Reliable results require correct identification of the represented population and a sufficiently large sample. Our online interactive tool can help calculate a proper sample size.
  2. Coverage bias – Occurs when the method used to collect the sample causes it not to be representative of the population to which the conclusions are directed. For example, in a phone survey, one must assess whether the subset of the population with landline phones (pollsters are not legally allowed to call cell phones unsolicited) differs from the rest of the population, and whether this may skew the poll results.
  3. Non-response (Selection) bias – It is likely that certain individuals will choose not to participate. While this may occur for a variety of reasons, selection bias arises when important characteristics of those who agree to be interviewed differ significantly from those of the people who decline.
  4. Response bias – Occurs when respondents are not honest and/or accurate in their responses, perhaps because of discomfort with the question. This bias can sometimes be controlled by the wording and/or order of poll questions.
  5. Wording and order of questions – The (i) wording of questions, (ii) order of questions, and (iii) number and the form of alternative answers offered may influence poll results.
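For the first category above, the sample-size arithmetic behind tools like the one mentioned can be sketched in a few lines of Python. This is a simplified illustration for a simple random sample estimating a proportion; the function name and defaults are our own, with p = 0.5 used as the conservative worst-case assumption:

```python
import math
from statistics import NormalDist

def required_sample_size(margin_of_error: float,
                         confidence: float = 0.95,
                         proportion: float = 0.5) -> int:
    """Minimum sample size for estimating a population proportion.

    Standard formula: n = z^2 * p * (1 - p) / e^2, where z is the
    two-tailed critical value for the chosen confidence level.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. ~1.96 at 95%
    n = (z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

# A +/-5% margin of error at 95% confidence:
print(required_sample_size(0.05))   # 385 respondents
```

Note that this formula assumes a simple random sample from a large population; stratified or quota samples, or small populations, call for adjusted calculations.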

The technical aspects of data collection (i.e., the first four items above) require planning and design that is too often mishandled in customized surveys done in support of either marketing or litigation claims. An even larger bias concern exists with question wording and question order (i.e., context/placement). Small differences in question wording or order can produce significantly different results from seemingly similar surveys.

There is substantial research that attempts (i) to measure the impact of question wording differences and (ii) to develop methods that minimize differences in the way respondents interpret what is being asked. Some of the items to consider when formulating survey questions include:

  • Did you ask enough questions to cover all necessary aspects of the issue(s)?
  • Are the questions worded neutrally (without taking sides on an issue)?
  • Is the order of the questions logical? General questions should usually be asked before specific questions.
  • Do questions asked early in the survey have any unintended effects on how respondents answer subsequent questions (aka “order effects”)?
  • Are the questions written in clear, unambiguous, concise language to ensure that all respondents, regardless of educational level, understand them?
  • Did you ask one question at a time? Questions that require respondents to evaluate more than one concept (aka double-barreled questions) often lead to respondent confusion, and/or confusion in interpreting the results.

As demonstrated above, the courts recognize the potential for surveys and their evaluation to be seriously flawed.  A careful practitioner should always take steps to ensure proper methodology and wording/order of the questions.  Similarly, when evaluating litigation surveys, careful analysis of the sampling techniques, survey instrument, and data analysis should demonstrate that the results are not biased in favor of any particular position.

Fulcrum Inquiry performs economic and statistical consulting. We prepare and analyze surveys as a means of obtaining data for our consulting assignments when needed information is not otherwise available.