The CAA: A Wicked Good Design Technique
Here, in the nether-regions of the Greater Boston area, we have a linguistic habit of leaving the letter ‘R’ out of words like ‘car’ (pronounced caa) or ‘Harvard’ (pronounced haavaad). It’s what makes us special.
Around our office, CAA has another meaning: Category Agreement Analysis. This turns out to be a ‘wicked good’ (another New England-ism) technique to help designers arrive at a usable information architecture.
CAA, like its big brother, card sorting, helps determine what the top-level categories for a site should be. Also like card sorting, CAA tells us what category structure is most natural to users.
Card sorting involves writing the content on index cards and asking users to sort them into logical piles; those piles then become the architecture of the site. In other words, it tells us how to organize content. CAA is similar, except there aren’t any cards.
Instead, the primary CAA instrument is a survey. To prepare the survey, you write down the most important content elements in a list. For example, if you are working on a technical support site, you might list all the different types of questions people want answered (possibly gathered from your call center logs). If you’re working on an intranet site, you might list content elements, such as the travel expense policy or critical sales information.
The survey lists each item separately in the left column. In the middle column, you place a fill-in box labeled “Main Category.” In the right column, you place another fill-in box labeled “Second-level Category (if necessary).”
Then, to complete your data collection, all you need to do is distribute the survey to users. We find that you can get decent results with 100 users, but 300-400 is better. In addition to recruiting more users, varying your distribution techniques (such as using remote offices or handing out surveys in your retail stores) will help you capture the perspectives of the broader community.
Once you’ve collected the data, analysis is quite simple. For each element in your content list, you tally how many users suggested each category name, then check whether the top 2 or 3 terms together account for 70% or better agreement.
For example, when we asked people where they’d expect to find Iguana Food on a pet supply site, 53% told us they’d expect the major category to be “Reptile”. (We were pleasantly surprised that 53% of Americans knew that iguanas are reptiles.) Another 20% suggested “Food”. The combined 73% agreement on these two terms told us that a category called “Reptile Food and Supplies” would do very well.
Contrast that with what people told us when we asked for the major categories for “Recordable Audio CDs” at a site selling computer supplies. The most agreement we got was 19% for “Accessories”. (Almost everything a computer supplies site sells, such as cartridges, mice, and cables, can be called “accessories”.) 16% said “Storage”, 13% said “CDs”, another 13% said “Media”, and 10% said “Audio”. There was no consensus on any of the categories.
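If it helps to see the counting rule spelled out, here’s a minimal sketch in Python. The function name, the top-2 cutoff, and the 70% check are our illustrative choices, and the tallies simply reuse the percentages from the two examples above as stand-in counts out of 100 responses:

```python
from collections import Counter

def top_terms_agreement(tallies, total, top_n=2):
    """Combined share of the top_n most-suggested category names for one content item.

    tallies -- dict mapping each suggested category name to how many users wrote it in
    total   -- total number of survey responses collected for this item
    """
    top = Counter(tallies).most_common(top_n)
    share = sum(count for _, count in top) / total
    return share, top

# The article's reported percentages, treated here as counts out of 100 responses;
# categories the article didn't name are simply left out of the tallies.
items = {
    "Iguana Food": {"Reptile": 53, "Food": 20},
    "Recordable Audio CDs": {"Accessories": 19, "Storage": 16, "CDs": 13,
                             "Media": 13, "Audio": 10},
}

for item, tallies in items.items():
    share, top = top_terms_agreement(tallies, total=100)
    verdict = "consensus" if share >= 0.70 else "no consensus"
    print(f"{item}: top terms {top} -> {share:.0%} combined agreement ({verdict})")
```

Run as-is, the iguana item clears the bar at 73% combined agreement, while the recordable CDs item tops out at 35%, matching the conclusions above.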
When you have consensus on one or two category names, the users are telling you clearly what the category for that content should be. Our research shows that you can use that category with confidence because users will understand what they’ll find when they click on it.
However, as in the case of our recordable audio CDs, if you don’t have consensus, you need to shift to a different strategy: category names that explain what users will find underneath. In essence, your top-level design has to teach users where they can find the specific content they are seeking.
If you’ve done card sorting before, you’ll notice CAA is different in two ways. First, in a typical card sort you supply the category names and ask users to sort the cards under them. Another popular alternative is to have users name the categories after they’ve sorted the cards into piles.
CAA is more fluid: instead of presenting users with a notion of structure and telling them the names of categories, it uses a spontaneous, word-association style that doesn’t restrict users to designing a category structure. With CAA, you’re looking to measure agreement on terms, not to design the entire hierarchy.
Second, CAA uses far more participants. Card sorting, which is time intensive to set up and run, is really only practical with a dozen or so participants. CAA takes advantage of sampling a much larger portion of the population.
The two techniques complement each other, and you can use them together. When creating a new design, you might start with CAA to get a sense of what you’re dealing with: do users agree on the top-level terms? If they do, you can then use card sorting to help work out the details of the hierarchy.
CAA is a good way to identify the key terms that users think of when dealing with the content on your site. You can quickly employ this easy technique without much setup or analysis cost. It’s a wicked good tool to put into your designer’s toolbox.