Get A Better UX Metric From Your NPS Survey Data

by Jared M. Spool

A few weeks ago, I wrote a somewhat controversial analysis of Net Promoter Score, a business metric employed in many organizations. Many who were critical of my article said that if I was going to tell them they couldn’t use their beloved instrument anymore, I should have provided a replacement. While there is no replacement for the numeric score, there is a way to get value out of the survey used to collect the Net Promoter Score data.

Keep in mind that this method for getting value from an NPS survey isn’t easy. As you’ll read, it involves a series of difficult steps.

However, it’s not rocket science either. (NASA has been a client of ours, and they’ve confirmed it’s not rocket science. They have very strict definitions, and this does not match them.)

Here’s our process for getting value from NPS surveys:

Step 1: Collect Up The Surveys

We go after every NPS survey we can find. (Sometimes they are scattered throughout the organization and not well organized or coordinated.)

Step 2: Throw Away The Net Promoter Score

That number the survey respondents gave has no value. It’s like the skin of the mango. There’s nothing good about it. Just throw it away.

Step 3: Put The NPS Feedback Answers In One Place

These are all the qualitative answers that “explain why” the respondent gave the score they did. NPS proponents and haters alike agree that these feedback answers are where the real gold is.

We put them in a big spreadsheet. Or better yet, we write them on sticky notes for sorting.
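
To make this concrete, here’s a minimal Python sketch of pulling those answers into one place. The folder name, the CSV format, and the feedback column are all assumptions for illustration; your organization’s survey exports will almost certainly be shaped differently.

    # Minimal sketch: gather free-text NPS answers from scattered CSV exports
    # into one combined file. The folder name ("nps_exports") and the column
    # name ("feedback") are hypothetical -- match them to your own exports.
    from pathlib import Path
    import csv

    rows = []
    for export in Path("nps_exports").glob("*.csv"):
        with export.open(newline="", encoding="utf-8") as f:
            for record in csv.DictReader(f):
                feedback = (record.get("feedback") or "").strip()
                if feedback:  # keep only respondents who explained their score
                    rows.append({"source": export.name, "feedback": feedback})

    # Write the combined feedback to a single file for sorting in Step 4.
    with open("all_feedback.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["source", "feedback"])
        writer.writeheader()
        writer.writerows(rows)

Notice the numeric score never makes it into the combined file. Per Step 2, it’s already in the trash.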

Step 4: Sort The Feedback, Looking For Patterns

We look for repeated feedback about any trouble areas, such as the process being too long, checkout being complicated, or the customer service number being hard to find. We start grouping the feedback under similar trouble areas.

Some feedback may mention more than one trouble area. When we find those, we duplicate the feedback and classify it under each area.
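
If the sorting happens in a spreadsheet rather than on a wall of sticky notes, that duplication rule is easy to sketch. The comments and trouble-area tags below are invented examples:

    # Sketch of the duplication rule: feedback tagged with several trouble
    # areas becomes one entry per area, so every group stays complete.
    from collections import defaultdict

    feedback_tags = [
        ("Checkout took forever and I couldn't find the support number.",
         ["checkout-complicated", "support-number-hard-to-find"]),
        ("Way too many steps before I could place my order.",
         ["process-too-long"]),
    ]

    groups = defaultdict(list)
    for text, areas in feedback_tags:
        for area in areas:  # duplicate the feedback under each area it mentions
            groups[area].append(text)

    for area, comments in groups.items():
        print(f"{area}: {len(comments)} comment(s)")

Sticky notes work the same way: write a duplicate note for each area the feedback mentions.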

Step 5: Pick The Most Fun Trouble Areas To Explore

Ideally, we find something that, if we made it a better experience, would move an important needle, such as more sales or more new customers.

Step 6: Reach Out To The People Who Wrote That Feedback

We try to learn everything we can about where their feedback came from. What were they trying to do? How did the problem come up? What effect did it have on their experience?

Step 7: Reach Out To Customer Service About The Trouble Area

Our goal in this step is to learn more about the issue we’re interested in.

  • Is it something that happens frequently?
  • Are there different variations that appear?
  • Can we identify what triggers the frustration for our users?
  • Can we talk to customers who have contacted support about the issue?

Step 8: Try To See It Happening

Can we see this problem happening in the wild? Can we find a way to see it happening with our own eyes?

Maybe we can visit customers, or conduct usability tests designed to trigger it. When it happens, we want to talk in depth with those users to find out more details.

This is a great opportunity to get other teammates exposed to our users.

Step 9: Look For ‘Footprints’ Of The Trouble Area

Now that we’ve seen it in the wild, we look for repeatable patterns that could show up in the analytics data. This data could tell us if others are experiencing the same issue.

Our goal is to answer a seemingly simple question: How often does this happen across our customer base? (You can read more about identifying footprints in analytics.)
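
As an invented example: suppose the sessions in Step 8 showed frustrated users bouncing between the cart and the checkout form. Counting that footprint in a raw event log might look something like this sketch, where the event names, the log shape, and the three-hop threshold are all assumptions:

    # Hypothetical footprint: sessions that ping-pong between "cart" and
    # "checkout" three or more times. Event names, log shape, and the
    # threshold are all assumptions for illustration.
    from collections import defaultdict

    events = [  # (session_id, page) pairs parsed from an analytics export
        ("s1", "cart"), ("s1", "checkout"), ("s1", "cart"), ("s1", "checkout"),
        ("s2", "home"), ("s2", "cart"), ("s2", "checkout"), ("s2", "confirm"),
    ]

    hops = defaultdict(int)
    last_page = {}
    for session, page in events:
        if {last_page.get(session), page} == {"cart", "checkout"}:
            hops[session] += 1  # one back-and-forth hop
        last_page[session] = page

    sessions = {s for s, _ in events}
    affected = {s for s, n in hops.items() if n >= 3}
    print(f"{len(affected)} of {len(sessions)} sessions show the footprint")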

Step 10: Quantify The Cost Of Frustration, If Possible

Frustration almost always shows up on an organization’s Profit and Loss statement. We look at how much money the organization is losing to lost sales or increased support costs. Using (often readily-available) business numbers, we calculate that money based on the frequency we found in our analytics research.

This is how we calculate the Cost of Frustration. For example, we multiply the number of support calls about the trouble area by the average cost of each call, which tells us the total amount of money spent supporting that trouble. (This is how we uncovered the ‘$300,000,000 Button’ problem.)
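
The arithmetic itself fits in a few lines. Every number below is a made-up placeholder; the point is the shape of the calculation, not the figures:

    # Back-of-the-envelope Cost of Frustration. All numbers are hypothetical
    # placeholders -- substitute your own support and analytics figures.
    support_calls_per_month = 1_200   # calls traced to this trouble area
    cost_per_call = 14.50             # fully loaded cost of one support call
    abandoned_orders_per_month = 300  # estimated from the analytics footprint
    average_order_value = 85.00

    support_cost = support_calls_per_month * cost_per_call
    lost_revenue = abandoned_orders_per_month * average_order_value

    print(f"Support cost:              ${support_cost:,.2f}/month")
    print(f"Lost revenue:              ${lost_revenue:,.2f}/month")
    print(f"Total cost of frustration: ${support_cost + lost_revenue:,.2f}/month")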

Step 11: Report What You’ve Found

Work isn’t done until it’s been reported. We give a solid presentation to the stakeholders and executives. We started with the NPS survey data and ended with a simple metric those executives can understand: Lost Revenues.

As I mentioned, this process isn’t easy. There’s heavy work in every step. Yet, the result of that hard work is a clear connection between the business and the customer’s delight or frustration. This is a UX metric that makes a difference.

 

About the Author

Jared M. Spool is a co-founder of Center Centre and the founder of UIE. In 2016, with Dr. Leslie Jensen-Inman, he opened Center Centre, a new design school in Chattanooga, TN, to create the next generation of industry-ready UX Designers. They created a revolutionary approach to vocational training, infusing Jared’s decades of UX experience with Leslie’s mastery of experience-based learning methodologies.

Enroll in Our Four-Week Live Course on Outcome-Driven UX Metrics.

Establish your team’s 2025 UX metrics and goals by investing just 4 hours a week in our new Outcome-Driven UX Metrics course, featuring 8 hours of pre-recorded lectures and 8 hours of live coaching sessions with Jared.

You’ll learn to set inspiring UX goals, boost your team’s strategic impact, and receive personalized coaching, all while gaining access to a community of 51,000+ UX leaders.

Join this course and establish your UX Metrics today.