Survey questions with context ambiguities are usually a sign that the researcher doesn’t understand either the context the respondent might draw upon when answering the question or the context within which the company might use the data. One way to address both kinds of context ambiguity is through cognitive interviewing: having test respondents think aloud while they answer survey questions. The interviews help ensure that respondents are interpreting questions the way researchers intended and that the questions measure what researchers intended. Here’s an example that shows why cognitive interviews are usually a good idea. It is from a survey asking about a large multinational bank.
Personally, my life’s dreams can’t be achieved through “prioritizing financial goals”. They, in fact, aren’t related to financial goals, and that is not the fault of this bank. Also, in general, I dislike large, traditional banks with branches, and therefore if I did need to prioritize financial goals, it wouldn’t be with this bank. But I digress.
If I recall correctly, this survey did not include a question asking whether I was a customer of this bank. That means responses might represent either brand perception or an opinion about a relationship that actually exists, which are entirely different things in the context of potential use of the data. And without distinguishing customers from non-customers, there is no way to remove this context ambiguity.
Let’s talk about the meaning of “context”
For market researchers conducting surveys, we typically talk in a relatively narrow way about context effects. This means that in a survey, preceding questions might influence or bias how a respondent answers a later question. If conducting an international survey, we also talk about cultural context.
UX researchers and designers also think a lot about context around the usage scenarios of their products. This could be physical location, time, situation: anything that influences how users might perceive or respond to their product.
So market researchers typically think of context in terms of mental models whereas UX folks think of context as external influencers. With this example, if I were to wear my market research hat, I’d want to make sure that something like a question asking about bank fees wasn’t asked immediately before this question. With my UX researcher hat on, I’d wonder if users can find the “your life’s dreams” link on their member homepage online.
Why should you care about this?
Context, or lack of context, impacts the insights and actionability of a survey’s findings. The end result is that you can simply report on frequencies and cross tabs, but you can’t say anything more. Context problems also make it more difficult to play a consultant role by linking your survey results to client organization metrics external to your survey data, or to provide recommendations on how to best influence customer purchasing behavior.
My second example is from a survey provided to customers of a specialized fast-casual restaurant with locations in the Pacific and Pacific Northwest parts of the US.
I might be missing the context of use that the researchers have in mind, but to me, the mode of transportation seems obviously important in such a question. The same amount of travel time might represent a routine trip in one transportation context and a special effort in another, revealing different customer intentions. But the survey did not ask about that. Instead of asking, the researchers might add a variable to the data on the backend, such as whether the location is urban or suburban, or the zip code of the store, and infer from that whether the respondent walked, drove, or used public transportation. A more precise backend method would be to code each store location as an area where people take public transportation frequently or an area where people drive themselves; the researchers would then have a general idea, based on the frequency of responses, about whether a store is a “car” location or a “public transportation” location. But such inferences are best avoided if possible, by simply asking respondents directly.
Let me illustrate the point by using myself as an example.
In the past few years, I have lived in a city with fantastic public transportation (Washington, D.C.), a city where everyone drives (Las Vegas), and a city that has okay public transportation and is easily walkable in some parts, but most people still have a car (Seattle). So I would have interpreted these time ranges differently in each city.
Las Vegas: I’d have considered a drive of 20 minutes or more really far. If this question is trying to determine how much passion I have for this restaurant, it would be a lot.
Washington D.C.: Taking the metro (i.e., subway) for 20 minutes or more is perfectly acceptable, and so is walking. Or more likely, I would do a combination of both and I would barely notice a 15-minute metro ride followed by a 10-minute walk.
Seattle: Sitting on a bus for 20 minutes or more would mean I really love this restaurant, particularly since I live downtown. A 20-30 minute walk is typical.
With public transit in DC and Seattle, wait times matter too. Depending on the day and time, I might wait for 10-15 minutes. I would not be bothered by this wait time; I’d consider it normal.
If you can’t conduct cognitive interviews with real respondents, then try to get inside the head of a respondent and channel them. Think about the internal and external context of each question and how your respondent might think about it. Try to get a friend, colleague at work, or spouse to do a cognitive interview with you: sometimes testing with one is better than with none.