Measuring Experiences, Not Product Use
We’ve seen it many times now. The right UX metrics can be powerfully influential for UX leaders.
Armed with the right metrics, UX leaders can vividly demonstrate the immense value of a well-crafted user experience. Executives and senior stakeholders — those who approve budgets and set delivery deadlines — start setting high-level corporate objectives around improving those metrics, making it more likely they’ll approve bigger budgets and set more accommodating deadlines.
UX leaders use these metrics and goals to inspire their development and product team peers to prioritize UX, tempering the strong desire to rush something into delivery before it’s ready. Everyone focuses on delivering great experiences for customers and users.
Great metrics are crucial to aligning everyone on the goals of what better user experiences can bring: the improvement in the lives of your users and customers.
What happens without the right UX metrics?
We’ve also seen, far too often, that most UX leaders don’t pick the right metrics. They neglect to pick metrics that are inspirational. Instead, they gravitate toward metrics whose only redeeming quality is that they’re simple to measure.
These metrics don’t win over executives and stakeholders. Having chosen the wrong metrics, UX leaders miss their opportunity to impress these folks. As a result, they end up seeing these metrics as numbers they’re supposed to produce to check off the objective of having something — anything — to report that’s measurable.
Product and development peers ignore these metrics, too. They don’t find them inspiring and quickly push them aside when the winds shift. They’re inclined to move the goalposts whenever someone has a new, sexy idea for a product enhancement. (“AI all the things!”)
Choosing the right metrics that inspire and influence everyone throughout the organization is challenging. It’s easy to grasp any old metric, and that’s what many UX leaders do.
However, it becomes easier once you realize there’s a difference between UX metrics, which focus on someone’s experience, and basic metrics, which simply report how the product is used. Measuring your users’ and customers’ experiences is the secret to employing metrics that inspire and influence. And that’s what we’ll explore here today.
Case study: Building an Applicant Tracking System.
Imagine you’re leading the effort to build a system to track people who apply for jobs, something the Human Resources industry often calls an Applicant Tracking System or ATS. You map out how a user might apply for a job using your ATS:
- Land on the Start Your Application page.
- Create an account for the ATS so the applicant can track their status in the system.
- Enter basic personal information about the applicant, such as name and contact info.
- Upload a prepared résumé into the ATS, which the system scans and parses.
- Correct or amend any work history information from the scanned résumé.
- Correct or amend any educational background information from the résumé.
- Certify that the information entered is accurate.
- Submit the application to apply for the position.
Part of your leadership responsibility is demonstrating how your team’s UX efforts are improving the ATS. So, you pick some measurements to track that could show how well the ATS works for applicants:
- You can count the number of people who arrive at the Start Your Application page.
- You can also count the number of applications that get submitted.
- Using these two measurements, you can calculate an application rate: the number of applications submitted divided by the number of people who start the process. (See the code sketch after this list.)
- You can also count the number of people visiting intermediate pages in the process, which might indicate where people “drop out.”
- Using the contact information, you could reach out to each applicant and ask them if they were satisfied with the process.
- You might even pull out your exec’s favorite measurement, Net Promoter Score (AKA NPS), and ask if your applicants would recommend your ATS to their friends and family members. (Not that that’s weird or anything.)
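To make these product-use measurements concrete, here’s a minimal sketch in Python. The event-log shape and the page names (`start_your_application`, `application_submitted`) are hypothetical placeholders I’ve made up for illustration, not any real ATS’s API.

```python
def funnel_metrics(events):
    """events: iterable of (visitor_id, page) tuples from page-view logs."""
    visitors_by_page = {}
    for visitor_id, page in events:
        visitors_by_page.setdefault(page, set()).add(visitor_id)

    started = len(visitors_by_page.get("start_your_application", set()))
    submitted = len(visitors_by_page.get("application_submitted", set()))

    # Application rate: submitted applications divided by people who started.
    application_rate = submitted / started if started else 0.0

    # Unique visitors per page hint at where people "drop out."
    page_counts = {page: len(ids) for page, ids in visitors_by_page.items()}

    return application_rate, page_counts
```

Notice that nothing in this sketch, or in the data behind it, says anything about how applying felt. That gap is where we’re headed.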
What can these measurements tell you about your UX?
Each of these measurements will generate data. However, it’s unclear what that data is trying to tell you. Is your design great for the user? Or should you improve it? Should you set a success goal of reaching a 100% application rate?
Looking at the data for the last few weeks, you might see an application rate of less than 100%. Only some applicants who start the process finish it by submitting their application.
That means something. What it means is not clear.
Should all those people have finished? After all, why would they have started the process if they hadn’t intended to apply?
Some people may stop the application process because they realize their résumé isn’t current or doesn’t adequately express their qualifications for the position. That’s not the fault of your design.
Your design could be working well since it suggested that the applicant do more necessary work before applying. In that case, not applying was a successful outcome for that user.
When you try to set a goal for the next release, an application rate of 100% wouldn’t be the right choice. It needs to be less than 100%. But how much less? Nothing in the metrics themselves can tell you what goal to choose.
What about tracking where people drop off? It’ll be hard to know what any drop-offs mean without knowing what percentage of applications should go through. After all, applicants who have realized they’re not ready to submit their application must stop somewhere in the application process. So, monitoring drop-off points doesn’t help your team know what to do better.
There’s always the satisfaction and NPS data. If your users are 100% satisfied or give you the highest NPS score, does that mean there’s nothing to improve? And if they aren’t, what might you do differently?
Can you even trust they’ve answered those survey questions honestly? After all, they’re hoping to get a job from you. The applicant might be afraid that the HR department would remove them from consideration if they told you what they thought of the ATS process.
What do all these measurements tell you about your ATS’s user experience? Unfortunately, not much.
And if you can’t see what these measurements say, neither can the people you work with. Your UX metrics won’t demonstrate the value of your work to your executives and senior stakeholders. They won’t inspire your development and product peers to prioritize UX efforts when they think about what they’re delivering next.
There are better choices than these metrics if you want to make your work visible and inspirational to your organization.
Where do you find the right metrics? In the users’ experiences.
The metrics we’ve discussed up until now all measure the use of the product. They count when someone clicks or loads a page. It’s all about the applicant tracking system, not about any user’s experience with the ATS.
Measuring the use of the product isn’t bad. However, it does not measure the UX of the product. The UX is the user experience, and these metrics have nothing experiential in them.
You can’t tell from these metrics if someone is having a great experience or a poor experience. If someone already has excellent experiences with the design you’ve offered, why change it? If they’re having a poor experience, you’ll need to make it better for them.
That’s what experiential metrics do. This class of metrics guides you to prioritize those design changes that will best improve your users’ experiences. However, if you’re not measuring the users’ experiences, you can’t tell what needs to be a priority.
More importantly, your executives and senior stakeholders, who are looking at the metrics you give them, aren’t seeing anything specific to UX. (No wonder it’s hard for them to value UX. You’re not making it visible to them.)
When measuring the product’s use — and not the experience — it’s also difficult for our development and product management peers to prioritize UX work against other things with clear benefits, like shiny new features. They might think new features will move the product-use metrics more than UX improvements. Why bother making UX improvements when there are new features to build? (“AI all the things!”)
This focus on product-use metrics is why convincing your developers and product managers to take UX seriously is hard. You’re not measuring the right things.
But what if you could? What if you could show the value of your UX effort and inspire your peers with your metrics?
You must measure your product’s experience, not your product’s use.
Case study: Workday’s ATS experience.
In the months before I sat down to write this article, I noticed that people I follow on LinkedIn were complaining about applying to positions hosted in Workday’s Applicant Tracking System. They didn’t complain about other ATS platforms, just Workday’s. I found this curious.
So, I asked about it. I posted a question on LinkedIn: “If you’ve had to use Workday to apply for an open position, what was your experience like?”
Frankly, I didn’t expect much. Maybe a small group of people grumbling the way people do about anything they don’t particularly like.
I was surprised when, within a week, I received more than 300 responses. Almost all of them detailed specific problems with the Workday application process. From these responses, it seems the system creates some problematic challenges for applicants.
[Disclosure: Workday has never been a client of Center Centre. And, after I publish this, there’s a good chance they won’t ever be. I’m sure their team has excellent UX folks, and I’m assuming they’re aware of many of the issues below. There are probably excellent reasons why these issues exist in their customers’ implementations. They may have even fixed many of these problems before you read this. We’re not judging the Workday team here.]
People complained that when they apply for positions at more than one company, Workday makes them create a new account (including a unique password) for each one, which can be challenging to keep track of. The system also has data entry issues, even though it “reads” the applicant’s résumé: commenters reported that it introduces errors into their job history after parsing the résumé, errors the applicant must detect and fix.
Many complained that the application process was far more laborious and time-consuming than on other platforms. And for all this effort, it offered no real benefit over Workday’s competitors.
All of these commenters described their poor experience applying for jobs using Workday. [My post was biased. I intentionally didn’t encourage people with great experiences to respond because it was the descriptions of poor experiences I was most interested in.] It was fascinating how much their explanations of what made the experiences poor overlapped.
One commenter, Lauren, was kind enough to detail her frustrations in a series of steps:
Step 1: Oh this company is hiring! I’ve always wanted to work for them!
Step 2: hit that sweet, sweet apply button.
Step 3: get taken to the JD on the company website.
Step 4: hit the apply button again.
Step 5: Crap. It’s a Workday portal. They ask you to create a login. (How many people already have a login for a jobs website for a single company? Why am I confronted with the login screen when most users are not returning users?) Did I mention that I loathe Workday?
Step 6: confirm my email (at this point, what was that awesome job anyway?).
Step 7: sometimes they take me to the JD to hit the apply button again. Sometimes I have to find the JD to hit that apply button again in my brand new verified single-use user profile.
Step 8: of course, I would love to upload my resume.
Step 9: cool, you’ll pull in my work history from my resume? That’s nice.
Step 10: after carefully reading through the job history that you populated, 80% of it is wrong. I guess I will copy and paste from LinkedIn or my resume doc myself. That was a stupid waste of time.
Step 11: try to move forward from my work history.
Step 12: Gandalf-style error: YOU SHALL NOT PASS.
Step 13: scroll through the page to try and find the error. I don’t see it. Click on the error message. Gibberish. There is something about dates being wrong? Which dates?
Step 14: scroll through the dates.
Step 15: I did not update one of my work experiences from where they prefilled. They flip-flopped the dates.
Step 16: Rage editing.
Step 17: submit work history… And it worked! Mini celebration.
Step 18: manually fill in education because this thing is … Not good.
Step 19-21: demographic info and legal stuff.
Step 22: I promise I didn’t lie but who knows what Workday did to my information. I hope I didn’t lie.
Step 23: submit. What did I even apply for?
Many of the other commenters’ posts echoed Lauren’s various complaints. Her comment garnered more than 100 reactions from people, signaling that it reflected their own experiences.
Adding users’ experiences on top of their journeys.
If you look closely, you’ll see that Lauren’s journey follows the same steps as the journey through our imaginary Applicant Tracking System. What Lauren adds to the description is her experience during that journey.
An experience is like a journey (which we can think of as a sequence of noteworthy user events), except it also includes what the user feels at each moment, on a scale from extreme frustration to extreme delight. As you read through Lauren’s experience, it’s easy to sense when she’s frustrated and when she’s delighted. (If you sat beside her while she walked through this process, you’d have no trouble picking up her feelings.)
The frustrating portions of her experience are of the most interest. Those are the places where we could make Lauren’s life better.
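To make that concrete, here’s a small sketch of the difference between a journey and an experience. The step names and the -2 to +2 feeling scale are illustrative assumptions, not a standard instrument; they simply encode Lauren’s journey plus how she felt at each step.

```python
from dataclasses import dataclass

@dataclass
class ExperienceStep:
    event: str    # a noteworthy user event in the journey
    feeling: int  # -2 extreme frustration .. 0 neutral .. +2 extreme delight

# Lauren's journey, annotated with her (approximate) feelings.
lauren = [
    ExperienceStep("find the job posting", +2),             # "I've always wanted to work for them!"
    ExperienceStep("create yet another account", -2),       # "Did I mention that I loathe Workday?"
    ExperienceStep("upload the résumé", +1),                # "you'll pull in my work history? That's nice."
    ExperienceStep("correct the parsed work history", -2),  # "80% of it is wrong."
    ExperienceStep("submit the work history", +1),          # "Mini celebration."
]

# The steps where feeling drops below zero are where design changes
# would most improve Lauren's life.
frustrations = [step.event for step in lauren if step.feeling < 0]
```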
Lauren wasn’t the only person to share these frustrations about creating accounts, correcting the résumé upload, and dealing with educational background issues. Many people pinpointed the same details as the reason they found Workday frustrating.
Therefore, fixing the problems in Lauren’s description would fix many people’s problems. That’s important because it signals that Lauren’s experience is an excellent basis for experience metrics.
Measuring the users’ experiences.
The initial metrics we identified for our ATS, such as the application rate and the visits to intermediate pages, wouldn’t reflect what Lauren and others experienced. Those metrics wouldn’t give you any hints on where to make the product better.
However, by observing (or, in this case, reading) what Lauren was experiencing, you can quickly identify many opportunities to improve the application process. Your team could create a less complex login, make the résumé upload less error-prone, enhance the delivery of error messages, or fix the educational background choices.
A measurement is an observation of a quantity or a change. You can stand two children next to each other and see which one is taller. That’s a measurement.
An instrument is what you use to determine the quantity of the measurement. In many cases, just eyeballing it is good enough. You don’t need a ruler when you can see which kid is taller. You only need the ruler if the kids’ heights are too close for visual observation, which happens rarely.
Measuring experience isn’t any different. From Lauren’s description above, you can quickly tell when she is frustrated or delighted with each process step. You don’t need an instrument to measure those changes.
The experience baseline: The current experience.
Good measurement practice starts with establishing a baseline. If you want to see how much your kids will grow this year, measure them at the beginning of the year. (Or at some other known point, like on their birthday.) That first measurement is your baseline.
Measuring experience also needs to have a baseline. Lauren’s experience is our baseline for this thought experiment.
Lauren’s experience is what we call the current experience. The current experience is the baseline for measuring any improvements your teams will make to your product.
To establish a baseline for a measured experience, you need only observe two things: when someone becomes frustrated in their journey and when they become delighted. These two transitions, between frustration and delight, are the core of measuring experience.
“You can observe a lot just by watching.” — Yogi Berra
You can observe the transition between frustration and delight by watching someone try to do what your product helps them do. There’s not much more to it. (It’s even more effective if you get your executives, senior stakeholders, development, and product peers to watch with you.)
Case study: The résumé uploader.
Lauren is very frustrated by the résumé uploader. Every time she uploads her résumé, even if she’s uploaded it to Workday a dozen times before for other jobs, the system misreads it and introduces errors into her work history. She then has to spend her precious time finding and correcting all the mistakes the uploader made.
Imagine your team wants to fix the résumé uploader problem for Lauren. How do you know this would be a good use of your team’s time? After all, it’s likely a complex problem to solve. What if Lauren is the only person who is having this problem?
One approach is to observe a few more people applying for jobs. My LinkedIn post shows Lauren wasn’t the only one with this uploader experience. Watching a few others will quickly tell you if this is a common issue.
However, you could also instrument the problem. Remember, we use instruments (like a ruler) when the differences are difficult to see through direct observation. If millions of people upload résumés, you probably need more to go on than just a handful of people you observed.
With your knowledge of Lauren’s experience (and that of others you’ve observed with the same issue), you can look for the patterns of behavior that indicate a problem. With the résumé uploader, each person uploaded their résumé, then spent more than five minutes, often as much as 45, making changes to their work history before moving to the next stage in the journey.
Instrumenting for scale and frequency.
With the help of your development team, you could instrument your ATS to measure how long applicants spend editing their work history after uploading a résumé. You could also measure the number of people who abandon the application process before moving to the next step. If your instrumentation tools are sophisticated enough, you could even detect whether a single individual makes similar edits on subsequent job applications, indicating they’re stuck in a loop of uploading and editing every time they apply for a new job.
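Here’s a rough sketch of what that instrumentation might compute. The event names (`resume_parse_complete`, `work_history_submitted`) are hypothetical; a real implementation would pull timestamped events from your own analytics pipeline.

```python
from datetime import datetime

def work_history_edit_minutes(events):
    """events: one applicant's (timestamp, event_name) pairs, in order.
    Returns minutes spent editing work history after the upload, or None
    if the applicant never uploaded or abandoned before finishing
    (abandonments get counted separately as drop-offs)."""
    upload_done = next((t for t, name in events
                        if name == "resume_parse_complete"), None)
    editing_done = next((t for t, name in events
                         if name == "work_history_submitted"), None)
    if upload_done is None or editing_done is None:
        return None
    return (editing_done - upload_done).total_seconds() / 60

# Example: an applicant who spent 22 minutes correcting the upload.
events = [
    (datetime(2024, 5, 1, 9, 0), "resume_parse_complete"),
    (datetime(2024, 5, 1, 9, 22), "work_history_submitted"),
]
work_history_edit_minutes(events)  # -> 22.0
```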
These measurements would indicate how many people are running into similar issues with the résumé uploader as Lauren. The data would give your team a sense of the problem’s scale and how often it occurs.
Let’s say your ATS processes a million applications daily worldwide. Your new measurement instruments tell you that one out of every four applicants spends more than five minutes updating their work history after uploading their résumé. Your data shows that the longest edit session lasted almost two hours, while the average for those who spend over five minutes is about 22 minutes.
Those 25% of your applicants collectively spend 5,500,000 minutes a day, or just over 33 million hours a year, correcting mistakes from the uploader. That’s a lot of time.
Now, imagine your team believes they can improve the smarts in the résumé uploader enough to get that average editing time down to about four minutes for almost everyone. That’s 27 million cumulative hours a year you’d save for all the Laurens of the world.
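Here’s the back-of-envelope arithmetic behind those figures, using the numbers from the scenario above:

```python
applications_per_day = 1_000_000
affected_share = 0.25        # one out of every four applicants
avg_edit_minutes = 22        # average for those who spend over 5 minutes

minutes_per_day = applications_per_day * affected_share * avg_edit_minutes
# 250,000 applicants x 22 minutes = 5,500,000 minutes a day

hours_per_year = minutes_per_day * 365 / 60
# about 33.5 million hours a year correcting uploader mistakes

improved_avg_minutes = 4     # the hoped-for average after the fix
improved_hours_per_year = (applications_per_day * affected_share
                           * improved_avg_minutes * 365 / 60)
hours_saved = hours_per_year - improved_hours_per_year
# about 27.4 million cumulative hours a year saved
```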
Your executive team might get excited over improvements of this scale. These improvements translate directly into marketing promotional material and add to the customer’s perceived value of your product. They show the kind of investment worth making in the product, even if increasing the uploader’s smarts will take a lot of work.
Plus, your development and product peers will see how their hard work translates into clear benefits for the organization. The scale of those benefits will inspire them to prioritize these kinds of UX improvements in their upcoming sprints.
A new approach with tremendous value.
Here’s what you’ve done: you’ve taken a few observations of your users’ experiences and turned them into measurements at scale. You looked for patterns in the direct observations, then instrumented the product to detect those same patterns automatically.
The most potent results come from pairing the experiential measurements from your observations with these tailored product-use measurements from your instruments. Together, they show the depth of the experience issues and how much impact the solutions will have.
Setting targets for better experiences.
“If you don’t know where you’re going, you might end up somewhere else.” — Yogi Berra
At some point, you’ll want to establish a goal for your UX efforts. It’s great to make incremental improvements, such as improving the résumé uploader. However, there are times when revolution is better than evolution.
Let’s say you’d like your organization to deliver a much better job-seeker experience a year from now. That’s great, but what does “much better” translate into?
That’s where using your experience metrics to set a target comes in. As you observe your users’ current experiences, you can look for patterns with significant improvement opportunities.
Imagine you notice that many of today’s out-of-work job seekers apply to dozens of positions daily. Each application takes approximately 30 minutes, and you’ve noticed some of these folks fill their week with hundreds of applications.
Looking at the experience from all sides.
On the hiring side, you’ve also noticed that many posted jobs get hundreds of applications immediately. And many of those applications are people who aren’t very qualified. It takes a lot of work for the hiring team to sift through the unqualified applicants to get to the qualified few.
All this time and labor is expensive, and much of it feels unproductive. What if you could fix that?
Perhaps you could create a single place where your hiring teams can describe the position, what skills they’re seeking, and the comparable experience they’d like to see in their top candidates. Similarly, job seekers could upload their résumés, highlight their best skills, and describe their most promising career experience. Your new tool would match the seekers with the jobs they’re most qualified for.
What would that be worth to your organization’s customers? You could calculate the value using the same data your instrumentation already produces.
Because you’ve been measuring the experiences of your users (both the job seekers and the hiring team members), you now know what the target experiences would be for this new application system. You know the frustrations in today’s hiring process. You can establish how you’ll eliminate that frustration and replace it with the delight of quickly identifying highly qualified candidates for every new position.
The more you know about the current experience, the easier it becomes to establish precise success criteria, such as:
- A single login for each job seeker, independent of the number of positions they’re being considered for.
- Less than five minutes to correct any résumé uploading errors in their work history and educational background.
- Clear feedback to the job seeker showing which positions they are most qualified for and why.
These success criteria become the ‘definition of done’ for the new project, making UX the main driver of this innovative approach to hiring. Instrumentation will clearly show which customers would benefit most from these innovations, which will power the sales and marketing efforts.
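One way to keep that definition of done honest is to express the criteria as thresholds your instrumentation can check. This is just a sketch; the metric names below are hypothetical placeholders for whatever your analytics actually reports.

```python
# Success criteria as measurable thresholds.
SUCCESS_CRITERIA = {
    # one login per job seeker, however many positions they pursue
    "accounts_per_job_seeker": lambda v: v == 1,
    # under five minutes to correct résumé-upload errors
    "resume_correction_minutes": lambda v: v < 5.0,
    # job seekers see which positions fit them, and why
    "match_feedback_shown_rate": lambda v: v >= 0.99,
}

def definition_of_done(measured):
    """measured: dict mapping metric name -> value from instrumentation."""
    # A missing metric yields NaN, which fails every comparison above.
    return all(check(measured.get(name, float("nan")))
               for name, check in SUCCESS_CRITERIA.items())
```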
Experience measurements promote strategic UX.
The conventional approach of measuring how the product is used produces data that doesn’t help your team prioritize UX. It’s hard to explain to your executives and senior stakeholders why product-use numbers matter. Your peers don’t see how your UX efforts will be the best way to improve those metrics.
These conventional metrics don’t make the battle of doing your job as a UX leader any easier. They’re not helpful.
On the other hand, the switch to measuring experience immediately makes UX a priority. Executives and senior stakeholders instantly see the costs of poor UX and the benefits of great UX. Your development and product management peers become inspired to work with your team to improve the product substantially.
Measuring experience is a strategic approach to UX metrics. It makes user and customer experience top of mind for everyone in your organization. It’s what your organization means when it says customer-centricity is the priority, even if it doesn’t realize it.
If you’re still measuring the use of your product and not the experience of using your product, you’re missing out on making UX a significant priority.