While designing surveys, it is important to minimize potential bias as much as possible in order to get high-quality data (read more on different types of survey bias here).
One of the sneakiest culprits to consider is question order bias. A poorly designed survey will first ask you "Which did you enjoy watching more - Barbie or Oppenheimer?" and then proceed to ask "What was your favourite movie of 2023?" (I'm suddenly struggling to think of names other than the two movies that were just fed to me). Although the idea may seem intuitive, we often underestimate how the sequencing of survey questions can introduce bias.
In this article, we highlight some less obvious ways in which changes to question order can compromise your survey results. These examples come from case studies and controlled experiments we ran on our platform, and they affected the results in significant ways.
But first, what is question order bias?
Question order bias in surveys refers to a type of response bias that occurs when the order of questions in a survey influences the way respondents answer them.
The bias can arise from:
(i) The question's inherent position within the survey, for instance whether it appears at the beginning or towards the latter half.
If you have a set of questions that don't follow a logical flow, for example a series of statements you want respondents to rate in no particular order, it is good to randomise their order, because respondent motivation and fatigue can fluctuate as they progress through the survey.
(ii) The context that precedes the question, since how a respondent answers can be influenced by the question(s) or context that came before it.
Consider asking respondents the following two questions in this order:
- What is your favourite movie?
- What is your favourite pastime?
There is a good chance that respondents might be tempted to answer “watching a movie” to the second question even if it wasn’t top of mind. It is always good to place broader questions (in this case, favourite pastime) before more specific ones (favourite movie), so that you move from general to specific.
Another example would be concept tests. If you show the concepts in the same order to every respondent, you always run the risk of the first concept influencing respondents' perception of the subsequent concepts. To avoid this, researchers randomise the order in which concepts are shown so that any position effect cancels out across the sample.
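To make this concrete, here is a minimal sketch of how per-respondent randomisation could be implemented if you were rendering a survey programmatically. The concept texts and respondent IDs are placeholders, and most survey platforms offer an equivalent built-in setting.

```python
import random

# Hypothetical concepts (or rating statements) to show each respondent.
CONCEPTS = [
    "Concept A: value-for-money positioning",
    "Concept B: premium positioning",
    "Concept C: sustainability positioning",
]

def build_presentation_order(respondent_id: str, seed: str = "concept-test-v1") -> list:
    """Return a per-respondent shuffle of the concepts.

    Seeding with the respondent ID keeps each respondent's order stable
    across page reloads, while different respondents see different orders,
    so any first-position effect averages out across the sample.
    """
    rng = random.Random(f"{seed}:{respondent_id}")
    order = list(CONCEPTS)
    rng.shuffle(order)
    return order

if __name__ == "__main__":
    print(build_presentation_order("resp-101"))
    print(build_presentation_order("resp-102"))
```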
These are relatively straightforward examples of order bias. However, order bias can also creep into surveys in less predictable ways. Below we highlight findings from actual studies, which we verified through controlled experiments.
Case Study 1: Yes/no questions at the start
Context
We ran a survey to gauge awareness of a particular influential report on the digital technology industry among the general population. The survey was run across multiple markets.
Problem
Vietnam's results caught us off guard. It was surprising to find that 25% of the respondents claimed to have read the report. This implies that 1 in 4 individuals from the general public had engaged with a highly technical business report. The prevalence of this response was noticeably higher in Vietnam compared to other Southeast Asian markets. We began to suspect that the questionnaire design might have played a role in influencing these results.
Hypothesis
One of our hypotheses was that, since this was the very first question asked in the survey, there could be an over-selection of “Yes, I’ve read the report” at play. It is possible that Vietnamese respondents mistook it for a screener question (commonly used at the beginning of a survey to assess eligibility) and feared their answer would affect their ability to qualify for the rest of the survey.
Experiment
To investigate, we reran the survey in Vietnam, shuffling the question order so that this was no longer the first question asked and moving it to the middle of the survey (while making sure it didn't disrupt the logical flow of the other questions). We also targeted a lookalike audience to ensure that the demographics were comparable. The results were intriguing. The selection rate for "Yes, I've read the report" dropped by almost half, from 25% to 12%, aligning more closely with other Southeast Asian markets. This is a factual question, so responses should not differ depending on its position in the survey or the context that precedes it. What we suspect is that respondents assumed questions at the start of the survey were screener/qualifier questions, which led them to over-select “Yes, I’ve read the report” in order to stay eligible for the rest of the survey.
<insert image here>
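As an aside, a quick way to check that a drop like 25% to 12% is unlikely to be sampling noise is a two-proportion z-test. The sketch below uses hypothetical sample sizes of 500 per run, since the actual counts aren't reproduced here.

```python
from math import sqrt

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Z statistic for the difference between two independent proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 25% of 500 said "Yes" in the original run, 12% of 500 in the rerun.
z = two_proportion_z(yes_a=125, n_a=500, yes_b=60, n_b=500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap is unlikely to be chance alone
```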
Learning
It is always worthwhile to examine the questions at the start of your survey.
Here are a few tips to keep in mind if the questions at the start of your survey are not screener questions:
- Reassure your respondents that they are eligible for the entire survey and that how they answer won't affect their eligibility. If they have already been targeted based on certain criteria, giving them some context on why they were chosen for the survey also helps.
- It also helps to nudge your respondents to provide honest, carefully thought-through answers.
- If you have a question at the start that you are worried might lead to over-selection of a favourable answer, you may even consider starting with a couple of dummy questions to warm them up and ease them in.
- Avoid presenting questions with binary (yes/no) options. These questions tend to be leading and/or lead to over-selection of a "favourable" answer. Instead, present questions with multiple answer options.
(i) When possible, try framing the question as a multi-select instead of a yes/no question. For instance, if you want to target a survey only to regular purchasers of skincare products, you may frame the question as below so that it is not immediately obvious which answer qualifies a respondent for the survey. It is also good practice to clean out respondents who select all options (a simple check for this is sketched after the example).
<insert image>
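For illustration, here is a minimal sketch of how such a clean-out might be done once responses are collected. The option columns and data are hypothetical; each multi-select option is stored as a 0/1 flag per respondent.

```python
import pandas as pd

OPTION_COLS = ["skincare", "haircare", "supplements", "fragrance", "cosmetics"]

# Hypothetical multi-select screener responses (1 = option selected).
responses = pd.DataFrame(
    {
        "respondent_id": [1, 2, 3, 4],
        "skincare":      [1, 1, 0, 1],
        "haircare":      [0, 1, 0, 1],
        "supplements":   [0, 1, 0, 1],
        "fragrance":     [0, 1, 1, 1],
        "cosmetics":     [1, 1, 0, 1],
    }
)

# Flag respondents who ticked every single option and drop them.
selected_all = responses[OPTION_COLS].sum(axis=1) == len(OPTION_COLS)
clean = responses[~selected_all]

print(f"Removed {selected_all.sum()} of {len(responses)} respondents")
```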
Case Study 2: Question order in longitudinal tracker surveys
Context
Since the order of questions in a survey can greatly influence how respondents answer, maintaining a consistent question order is paramount when conducting tracker surveys across different time points.
Tracker surveys, also known as longitudinal surveys, are research studies conducted over multiple time points (waves) to track changes and trends in the attitudes, behaviours, or opinions of a target population.
Here is an example from a case study. A brand we were working with was keen on tracking changes in brand perception over time, so we set up a brand tracking survey for them that was run once every month.
One of the question sets involved presenting the respondents with a series of attributes related to the brand and measuring their agreement levels.
For example,
Q. To what extent do you agree or disagree with the statement: "Brand X’s products and services are relevant to me."
- Strongly agree
- Agree
- Neither agree nor disagree
- Disagree
- Strongly disagree
Problem
When we compared changes in brand perception between the first two waves, the top-2-box agreement levels (the share of respondents selecting "Strongly agree" or "Agree") for the various brand perception statements were generally positive and within +/- 5% of each other. However, in the third wave we saw a sharp decline across all the attributes.
This made us skeptical. Was there a genuine decline in brand perception, perhaps triggered by some negative event related to the brand, or were methodological factors contributing to the difference in results?
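For readers less familiar with the metric, a minimal sketch of how top-2-box agreement might be computed per wave is below. The data here is hypothetical and only meant to show the calculation.

```python
import pandas as pd

TOP_2 = {"Strongly agree", "Agree"}

# Hypothetical responses to "Brand X's products and services are relevant to me"
# across three monthly waves.
df = pd.DataFrame(
    {
        "wave": ["Wave 1"] * 4 + ["Wave 2"] * 4 + ["Wave 3"] * 4,
        "answer": [
            "Agree", "Strongly agree", "Neither agree nor disagree", "Agree",
            "Agree", "Agree", "Strongly agree", "Disagree",
            "Disagree", "Neither agree nor disagree", "Agree", "Disagree",
        ],
    }
)

# Percentage of respondents in each wave choosing one of the top two scale points.
top2 = (
    df.assign(is_top2=df["answer"].isin(TOP_2))
      .groupby("wave")["is_top2"]
      .mean()
      .mul(100)
      .round(1)
)
print(top2)  # a sharp drop in the Wave 3 figure would mirror what we observed
```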
Hypothesis
Upon further examination, we hypothesized that the considerable drop in brand perception in the latest wave could be attributed to a change in question order, since changes had been made to the questionnaire before the third wave.
A closer look revealed that in the original questionnaire, before the agreement questions on brand perception, respondents were (i) exposed to the brand's positively worded tagline and asked questions about it, (ii) asked questions that required them to think more deeply and articulate what the organisation does and stands for, and (iii) asked whether they had heard anything positive or negative about the brand. In the modified version, however, the brand perception questions appeared much earlier in the survey.
Experiment
To test whether the drastic change in brand perception was a result of re-ordering of questions, we reran the survey in the original sequence. The rerun was conducted during the same month as the last wave and was targeted to an audience with a similar demographic make-up to ensure comparability.
We also ran a set of controlled experiments to isolate the effects of each of the changes made to the survey and to assess whether they individually had any impact on the brand perception questions.
Results
The results were intriguing! The rerun revealed that restoring the original question order had a considerable impact on the results, bringing them to within +/- 6% of the previous wave (as opposed to differences of up to +/- 23% between the re-ordered survey and the previous wave).
The controlled experiments further revealed that each of the following types of preceding question led to more positive ratings on the subsequent brand attribute questions:
- showing brand-related taglines
- prompting respondents to think deeply about the brand (using open-end questions)
- getting them to think about recent news/ad exposure
Learning
In the case of trackers, keep question order consistent across waves to avoid unexpected trend breaks:
- Changing the order of questions might lead to drastic changes in trends
- One cannot always predict how question order affects responses, so it is good to keep the order consistent across waves so that conditions remain constant
- If you do switch up the order, run a pilot test first with a smaller sample size, or run the old version in parallel as a backup
If you have a set of statement agreement questions or attribute perception questions on a brand, think about the questions that come before them:
- The context set before brand perception questions is important
- For example, showing an image that communicates strong brand values can affect how respondents answer questions about the brand later. Steer clear of this right before perception questions.
- Open-end questions that encourage a respondent to think deeply about the brand (what it does, what it stands for) can affect how they answer brand attribute/value association questions later
- Questions on recent news/ad exposure and open-ended questions about positive/negative news can influence how they answer brand attribute/value association questions later
Marketing research agencies like Milieu Insight leverage digital survey tools to help companies make informed decisions.