Observing What Didn’t Happen

by Will Schroeder

Sherlock Holmes once deduced the solution to a case when he noticed that something didn’t happen: a dog didn’t bark at night, proving to Holmes that there was no intruder.

During usability tests, everyone notices when a user fails because a feature breaks down. We don’t need Holmes to solve these! But when expected things don’t happen, or illogical things do happen, it can mean that developers didn’t understand what the users needed, or how they would use the product.

We try not to miss these non-events. We find them in two ways:

  • Establishing what we expect the users to do beforehand.
  • Looking for behavior patterns that “don’t make sense.”

Know What You Expect

Before testing, we ask the developers how they expect users to accomplish the tasks and what features they expect users to use. In prototyping, we only build what we expect the user to use. When users don’t use a feature in the prototype, it’s a clue that something didn’t happen.

The Case of the Fourth Option

In one test, the web site developers expected users to choose a path to information by selecting an option that classified their work: manager, webmaster, or designer. Early in the testing, users consistently chose a fourth option, the “documentation” link, which was not what the developers wanted or expected. It did the job without forcing users to choose a category.

Through successive revisions we made the fourth option less and less visible, yet users continued to choose it. We didn’t stop to ask why users were avoiding the three preferred options until we removed the fourth option entirely. At that point, users began to balk; some even refused to proceed.

Only at this point did we discover the fundamental problem: users did not want to characterize themselves or their work into limited categories. If we had paid more attention to what users were not doing—instead of trying to force them to do it—we would have discovered the problem much sooner.

The Mouse Trap

We tested a prototype application that let network managers perform an operation two different ways, by using a right-mouse menu or by directly dragging and dropping. The developers had put considerable effort into implementing drag-and-drop, so we agreed to note when users actually used it. They didn’t. Users employed only the right-mouse menu to complete their tasks.

By watching the users and talking with them after the test, we learned that they didn’t know about the drag-and-drop feature. The opening screen showed them a message about the right-mouse menu, but there was no message about drag-and-drop. Even after we told them about drag-and-drop, the users said they preferred the right-mouse menu. So the developers kept the drag-and-drop feature, but decided that making it work perfectly was a much lower priority.

The Unseen Tab

In developing a complex process-modeling application, the designers decided to add print options to the tabbed dialog used to “run” the process. They thought users would welcome this as a convenience. When user after user ignored the tab and went right to File|Print, the developers decided to spare themselves the effort.

Déjà Non Vu

(Or, as they don’t say in France, “I’ve seen that not happen before.”) Interesting behavior—and non-behavior—patterns often surface after the test when we ask ourselves, “What else didn’t take place?” This is more subtle than the examples above.

Users Don’t Learn Structure

While evaluating Help methods, we asked users to look for help in an application’s three online books. The users failed every task.

As we reviewed the results, we realized that not one user had even looked at the books’ tables of contents, even though we do sometimes see users consult the table of contents in printed books. From our observations, we inferred that no one tried to learn the structure of the online books or how they differed from each other.

We repeatedly saw them use the search feature—often in the wrong book—and this led to failure. Based on what we didn’t see, we recommended that the developers add terms to the index, rather than focus on the table of contents.

It’s Hard to Build Communities

We tested a search engine site at the Brimfield Antique Fair in Massachusetts. The antique dealers and collectors at this fair are a close-knit group. However, when we asked them to use a prototype web site to find what they were looking for at the fair, we noticed one thing that didn’t happen: none of them used the site’s “community” features.

Instead, they almost always searched on a topic related to their business, and found their “community” in the results list. We were surprised to see that nearly every link prompted comments such as, “I know her,” or “I didn’t know he had a site,” or “How did they get into the top 10?”

These users already had a community based on the type of antiques they collect or sell, so they didn’t need a web site to create one for them.

Original Syntax

In a series of tests of different web search engines, users never learned the syntax that each search engine required. Instead, they brought their own syntax—a simple list of unpunctuated descriptive words—and tried it in every search box that appeared. Even when their searches failed, none of the users tried either the Help or the Advanced Search features.

This pattern mystified us until we ran another test, asking knowledgeable users to search for things they wanted and knew about. They also tried the same approach—a simple list of unpunctuated descriptive words—but it worked! They found what they were looking for.

The difference? This second group of users knew a lot about the subject area, and knew which words to use in the search. Apparently the problem was with the keywords, not the syntax. The developers learned that users, particularly knowledgeable users, may not need the advanced search feature. Therefore, the team was less concerned about how frequently this feature was used and was able to focus on other, more important issues that surfaced in the testing.

About the Author

Will is the Principal Usability Specialist at MathWorks. He specializes in usability testing and analysis, interaction design, design planning, and management. Previously he was a Principal Consultant and Researcher here at UIE.
