On Surveys
Originally published on medium.com on February 23, 2015.
Surveys are the most dangerous research tool—misunderstood and misused. They frequently straddle the qualitative and quantitative, and at their worst represent the worst of both.
In tort law the attractive nuisance doctrine refers to a hazardous object likely to attract those who are unable to appreciate the risk posed by the object. In the world of design research, surveys can be just such a nuisance.
Easy Feels True
It is too easy to run a survey. That is why surveys are so dangerous. They are so easy to create and so easy to distribute, and the results are so easy to tally. And our poor human brains are such that information that is easier for us to process and comprehend feels more true. This is our cognitive bias. This ease makes survey results feel true and valid, no matter how false and misleading. And that ease is hard to argue with.
A lot of important decisions are made based on surveys. When faced with a choice, or a group of disparate opinions, running a survey can feel like the most efficient way to find a direction or to settle arguments (and to shirk responsibility for the outcome). Which feature should we build next? We can’t decide ourselves, so let’s run a survey. What should we call our product? We can’t decide ourselves, so let’s run a survey.
Easy Feels Right
The problem with all this ease is that other ways of finding an answer, ways that seem more difficult, get shut out. Talking to real people and analyzing the results? That sounds time-consuming and messy and hard. Coming up with a set of questions and blasting it out to thousands of people gets you quantifiable responses with no human contact. Easy!
In my opinion, it’s much, much harder to write a good survey than to conduct good qualitative user research. Given a decently representative research participant, you could sit down, shut up, turn on the recorder, and get good data just by letting them talk. (The screening process that gets you that participant is a topic for another day.) But if you write bad survey questions, you get bad data at scale with no chance of recovery. This is why I completely sidestepped surveys in writing Just Enough Research.
What makes a survey bad? If the data you get back isn’t actually useful input to the decision you need to make, or if it doesn’t reflect reality, that is a bad survey. This could happen if respondents didn’t give true answers, or if the questions are impossible to answer truthfully, or if the questions don’t map to the information you need, or if you ask leading or confusing questions.
Often asking a question directly is the worst way to get a true and useful answer to that question. Because humans.
Bad Surveys Don’t Smell
A bad survey won’t tell you it’s bad. It’s actually really hard to find out that a bad survey is bad—or to tell whether you have written a good or bad set of questions. Bad code will have bugs. A bad interface design will fail a usability test. It’s possible to tell whether you are having a bad user interview right away. Feedback from a bad survey can only come in the form of a second source of information contradicting your analysis of the survey results.
Most seductively, surveys yield responses that are easy to count, and counting things feels so certain and objective and truthful.
Even if you are counting lies.
And once a statistic gets out—such as “75% of users surveyed said that they love videos that autoplay on page load”—that simple “fact” will burrow into the brains of decision-makers and set up shop.
From time to time, people write to me with their questions about research. Usually these questions are more about politics than methodologies. A while back this showed up in my inbox:
“Direct interaction with users is prohibited by my organization, but I have been allowed to conduct a simple survey by email to identify usability issues.”
Tears, tears of sympathy and frustration streamed down my face. This is so emblematic, so typical, so counterproductive. The rest of the question was, of course, “What do I do?”
User research and usability are about observed human behavior. The way to identify usability issues is to usability test. I mean, if you need to maintain a sterile barrier between your staff and your customers, at least use usertesting.com. The allowable solution is like using surveys to pass notes through a wall between the designers and the actual users. This doesn’t increase empathy.
Too many organizations treat direct user research like a breach of protocol. I understand that there are very sensitive situations, often involving health data or financial data. But you can do user research and never interact with actual customers. If you actually care about getting real data rather than covering some corporate ass, you can recruit people who are a behavioral match for the target and never reveal your identity.
A survey is a survey. A survey shouldn’t be a fallback for when you can’t do the right type of research.
Sometimes we treat data gathering like a child in a fairy tale who has been sent out to gather mushrooms for dinner. It’s getting late and the mushrooms are far away on the other side of the river. And you don’t want to get your feet wet. But look, there are all these rocks right here. The rocks look kind of like mushrooms. So maybe no one will notice. And then you’re all sitting around the table pretending you’re eating mushroom soup and crunching on rocks.
A lot of people in a lot of conference rooms are pretending that the easiest way to gather data is the most useful. And choking down the results.
Customer Satisfaction Is a Lie
A popular topic for surveys is “satisfaction.” Customer satisfaction has become the most widely used metric in companies’ efforts to measure and manage customer loyalty.
A customer satisfaction score is an abstraction, and an inaccurate one. According to the MIT Sloan Management Review, changes in customers’ satisfaction levels explain less than 1% of the variation in changes in their share of spending in a given category. Now, 1% is statistically significant, but not huge. (For scale: explaining 1% of the variance corresponds to a correlation of only about 0.1.)
And Bloomberg Businessweek wrote that “Customer-service scores have no relevance to stock market returns…the most-hated companies perform better than their beloved peers.” So much of the evidence indicates that this is just not a meaningful business metric, but rather a very satisfying one to measure.
And now, a new company has made a business out of helping businesses with websites quantify a fuzzy, possibly meaningless metric.
“My boss is a convert to Foresee. She was apparently very skeptical of it at first, but she’s a very analytical person and was converted by its promise of being able to quantify unquantifiable data—like ‘satisfaction.’”
This is another cry for help I received not too long ago.
The boss in question is “a very analytical person.” This means that she is a person with a bias towards quantitative data. The designer who wrote to me was concerned about the potential of pop-up surveys to wreck the very customer experience they were trying to measure.
There’s a whole industry based on customer satisfaction. And when there is an industry that makes money from the existence of a metric, that makes me skeptical of the metric. Because as a customer, I find these pop-up surveys a fairly unsatisfying use of screen space.
Here is a ForeSee customer satisfaction survey (NOT for my correspondent’s employer). These are the questions that sounded good to ask, and that seem to map to best practices.
But this is complete hogwash.
Rate the options available for navigating? What does that mean? What actual business success metric does that map to? Rate the number of clicks—on a ten-point scale? I couldn’t do that. I suspect many people choose the number of clicks they remember rather than a rating.
And accuracy of information? How is a site user not currently operating in god mode supposed to rate how accurate the information is? What does a “7” for information accuracy even mean? None of this speaks to what the website is actually for or how actual humans think or make decisions.
And, most importantly, the sleight of hand here is that these customer satisfaction questions are qualitative questions presented in a quantitative style. This is some customer research alchemy right here. So, you are counting on the uncountable while the folks selling these surveys are counting their money. Enjoy your phlogiston.
I am not advising anyone to run a jerk company with terrible service. I want everyone making products to make great products, and to know which things to measure in order to do that.
I want everyone to see customer loyalty for what it is—habit. And to be more successful creating loyalty, you need to measure the things that build habit.
Approach with Caution
When you are choosing research methods, and are considering surveys, there is one key question you need to answer for yourself:
Will the people I’m surveying be willing and able to provide a truthful answer to my question?
And as I say again and again, and will never tire of repeating: never ask people what they like or don’t like. Liking is a reported mental state, and a reported mental state doesn’t necessarily correspond to any behavior.
Avoid asking people to remember anything further back than a few days. I mean, we’ve all been listening to Serial, right? People are lazy, forgetful creatures of habit. If you ask about something that happened too far back in time, you are going to get a low-quality answer.
And especially, never ask people to make a prediction of their own future behavior. They will make that prediction based on wishful thinking or social desirability. And this, I think, is the most popular survey question of all:
How likely are you to purchase the thing I am selling in the next 6 months?
No one can answer that. At best you could get two truthful answers: 1) possibly, or 2) not at all.
So, yeah, surveys are great because you can quantify the results.
But you have to ask, what are you quantifying? Is it an actual quantity of something, e.g. how many, how often—or is it a stealth quality like appeal, ease, or appropriateness, trying to pass itself off as something measurable?
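To make that distinction concrete, here is a minimal sketch in Python, with invented data (the event log, the ratings, and all the names are hypothetical). Counting observed behavior answers “how many” and “how often.” Averaging a ten-point “satisfaction” rating produces a number too, but the thing being numbered is a reported mental state.

```python
from statistics import mean

# Hypothetical event log: observed behavior, one entry per purchase.
# Counting these answers "how many" and "how often" -- an actual quantity.
purchases_last_30_days = ["2015-02-01", "2015-02-09", "2015-02-20"]
purchase_count = len(purchases_last_30_days)  # 3 purchases actually happened

# Hypothetical survey responses: "rate your satisfaction from 1 to 10".
# Averaging these also produces a number, but what it counts is a
# reported mental state -- a stealth quality dressed up as a measurement.
satisfaction_ratings = [7, 9, 3, 8, 8]
average_satisfaction = mean(satisfaction_ratings)  # 7.0, of... something

print(f"Purchases in the last 30 days: {purchase_count}")
print(f"'Average satisfaction': {average_satisfaction}")
```

Both print a number. Only the first one counts something that actually happened.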
In order to make any sort of decisions, and to gather information to inform decisions, the first thing you have to do is define success. You cannot derive that definition from a bunch of numbers.
To write a good survey, you need to be very clear on what you want to know and why a survey is the right way to get that information. And then you have to write very clear questions.
If you are using a survey to ask for qualitative information, be clear about that and know that you’ll be getting thin information with no context. You won’t be able to probe into the all-important “why” behind a response.
If you are treating a survey like a quantitative input, you can only ask questions that the respondents can be relied on to count. You must be honest about the type of data you are able to collect, or don’t bother.
And stay away from those weird 10-point scales. They do not reflect reality.
How to put together a good survey is a topic worthy of a book, or a graduate degree. Right here, I just want to get you to swear you aren’t going to be casual about them if you are going to be basing important decisions on them.
“At its core, all business is about making bets on human behavior.”
—Ben Wiseman, The Wall Street Journal
The whole reason to bother going to the trouble of gathering information to inform decisions is that ultimately you want those decisions to lead to some sort of measurable success.
Making bets based on insights from observed human behavior can be far more effective than basing bets on bad surveys. So go forth, be better, and be careful about your data gathering. The most measurable data might not be the most valuable.