Category Archives: Survey Design

Survey Reporting Basics, An Example of What Not to Do: King County’s Employee Survey

I’m writing a series of posts to provide guidance to people who are learning the basics of market research. I’ll be using real-life examples of both good and bad research, analytics, and reporting.

In this first post about reporting survey data, we’ll discuss when it is appropriate to report a Top Box Score (i.e., the % of respondents who gave a “5”) or a Top 2 Box Score (i.e., the % of respondents who gave a “4” or a “5”). I’ll use King County’s employee survey as an example of what not to do.

King County conducts an annual employee engagement survey of its approximately 14,000 employees. Below is how they asked the question in 2012.

Q. King County employees are treated with respect, regardless of their race, gender, sexual orientation, gender identity or expression, color, marital status, religion, ancestry, national origin, disability or age. (Using a scale where “1” means “I strongly disagree” and “5” means “I strongly agree”.)

Here are the results from their 2012 report (with my mark-ups):

[Chart: 2012 survey results]

My interpretation of this chart, using a Top Box score, is that 56% of employees are treated with respect but 44% (more than 4 out of every 10 employees) aren’t treated with respect for some reason at work. That is nearly half of all employees! To present the data more positively, as King County did, you could use a Top 2 Box score and say: “…more than 90% [answered] positively”. NO. JUST NO. Why? Continue reading
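The arithmetic behind the two scores is simple. Here’s a minimal sketch in Python; the counts are hypothetical, chosen only to roughly mirror the percentages discussed above, and are not King County’s actual response data:

```python
# Compute Top Box and Top 2 Box scores from a 5-point scale distribution.
# Counts below are hypothetical (not King County's actual numbers).
responses = {1: 80, 2: 120, 3: 700, 4: 3500, 5: 5600}
total = sum(responses.values())

top_box = responses[5] / total * 100            # % who answered "5"
top2_box = (responses[4] + responses[5]) / total * 100  # % who answered "4" or "5"

print(f"Top Box:   {top_box:.0f}%")
print(f"Top 2 Box: {top2_box:.0f}%")
```

The same distribution yields very different headlines depending on which score you report, which is exactly why the choice of score needs to match the claim you’re making.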


Survey Basics: Instagram Example

I received a survey from Instagram asking about which features I use. Can you spot the small, meaningful omission in the question below?

[Image: Instagram survey question]

Continue reading

Why Context Matters in Survey Research

Survey questions with context ambiguities are usually a sign that the researcher doesn’t understand either the context the respondent might draw upon when answering the question or the context within which the company might use the data. One way to address both kinds of context ambiguity is through cognitive interviewing: having test respondents think aloud while they answer survey questions. The interviews help ensure that respondents are interpreting questions the way researchers intended and that the questions are measuring what the researchers intended. Here’s an example that shows why cognitive interviews are usually a good idea. It is from a survey asking about a large multinational bank.


Personally, my life’s dreams can’t be achieved through “prioritizing financial goals”. They, in fact, aren’t related to financial goals, and that is not the fault of this bank. Continue reading

Question Writing 101 from Joe Dumas

I created and presented this summary as part of my usability testing class (HCDE 517) at the University of Washington. It covers basic question writing concepts that are applicable to both market researchers and usability researchers.

For me, the highlight of this article was Joe’s discussion of labeling the end points of scales. He reminds even us experienced question writers that one’s choice of scale end points might activate two different cognitive structures instead of one. Measuring something as being “difficult” is not the same as measuring it as being “not at all easy”.

Sauro’s 4 Steps to Translating a Questionnaire

The process for translating questionnaires is one thing that is exactly the same in Market and Usability Research. Jeff Sauro, a highly regarded usability professional, posted “4 Steps to Translating a Questionnaire” on his blog last month. Continue reading

How Not to Ask About Educational Attainment

Researchers frequently need to ask about educational attainment in surveys. This question is often asked imprecisely. The example below (adapted from a survey about computer systems) includes one of my all-time top pet peeves: a “post-graduate degree” response option in surveys targeted at Americans. It is not a term commonly used when talking about higher education in the United States.


Continue reading

The One Question that Rules Them All

It seems an increasing number of companies have been jumping on the “likelihood to recommend” Net Promoter question bandwagon, but are not willing to go so far as letting it be the one question they need to ask; the question is usually buried in a much longer survey. Even worse, sloppy surveys seem to modify the original form of the question in ways that break the validity of the Net Promoter model. Below is one such example (click to enlarge):

Continue reading
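For context on what modifications put at risk: the standard Net Promoter model asks the likelihood-to-recommend question on a 0–10 scale, classifies 9–10 as promoters and 0–6 as detractors, and reports the difference in their percentages. A minimal sketch, using made-up ratings:

```python
# Standard Net Promoter Score calculation (0-10 scale).
# Ratings below are made up for illustration.
ratings = [10, 9, 9, 8, 7, 6, 10, 5, 9, 3]

promoters = sum(1 for r in ratings if r >= 9)   # 9s and 10s
detractors = sum(1 for r in ratings if r <= 6)  # 0 through 6
nps = (promoters - detractors) / len(ratings) * 100

print(f"NPS: {nps:.0f}")
```

Change the scale, the wording, or the cut points and the promoter/detractor classification no longer means what the model’s benchmarks assume, which is why the modified versions in sloppy surveys break comparability.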