
Home Alone? How Content Aggregators Change Navigation and Control of Content

by Joshua Porter

Originally published in Digital Web Magazine on November 3, 2004.

Jason Kottke is fantastic at aggregating content. Every time I read his latest list of links on Kottke.org, I find some tidbit of information that interests me, one I probably wouldn’t have read otherwise. How does he choose content, I wonder? (Recently, his ideas and links about what Google is doing have been particularly interesting.) Some of Kottke’s links don’t interest me at all. But it’s not hard to weed those out. I scan over them quickly, and forget I ever saw them.

Every time someone makes a list, be it a list of links on a blog like Kottke’s or a grocery list, content is aggregated. The act of aggregating content (usually content that is alike in some way) makes it more understandable. Instead of looking at a whole field of information, you choose smaller, more logical subsets of it in the hopes of understanding those. After you’ve done that, you can apply what you’ve learned to the whole, or even just a larger subset.

Aggregation lies at the heart of the Web. It has to, given the amount of information that the Web contains. Were it not for aggregation, all the world’s information would be on a single Web page in a single domain. Wouldn’t that be exciting? (And painful!)

Aggregated content can be viewed on a spectrum, with human-aggregated content on one end and machine-aggregated content on the other. The difference lies in how the content is chosen, which can range from a very strict machine algorithm to the whim of a human who simply “felt like it.” [1]

Search engines are the most common type of machine aggregator. They send out spiders to crawl the Web and index pages, then let users submit queries against that index. Big search engines such as Yahoo! and Google attempt to aggregate the entire Web, while more specialized services such as Blogdex aggregate only a certain subset of it: the pages published by blogs.

Blogs themselves, however, are examples of human-aggregated content, because a human makes an explicit choice about what content to include. [2] Other examples include news sites that aggregate stories from the Associated Press and the blog feeds people collect in RSS/Atom readers like Shrook, NetNewsWire, or FeedDemon. Still others include browser bookmarks, the “links” section of your Web site, and even political sites that link to stories undermining opponents.

Aggregation hinges on gathering content from other domains. This dramatically affects the search for content. Users no longer need to start their search in the domain where the content lies. In fact, they almost never do.

What About Starting from the Home Page?

With all these aggregators providing new places to start our searches for content, what will become of the home page? The hallowed ground of the home page is the most contested space in the history of the Web, and millions of valuable hours have been spent discussing its design and refining its content.

Whether or not it is important to users, the home page holds such a place in the minds of designers that it usually gets the top spot in the hierarchy of information. The reason for doing this is not entirely clear. It may be because home pages are the first pages to be indexed by search engines. Or perhaps everybody knows that the home page is (or should be) an index of what can be found on the site, so it becomes as good a place as any to start designing.

Whatever the reason, current practice gives the home page the highest priority. For example, the recent redesign effort of the Boxes and Arrows site places the home page on its own at the highest level of the hierarchy, as shown in an early draft of its IA.

In this “home page as the starting point” paradigm, the possible routes to a hypothetical Web page holding a user’s target content will look something like the following:

[Figure: on-site navigation view]

Content Aggregators Change Navigation

Despite our long hours and good intentions, content aggregators throw this site-centric idea out the window. They allow users to bypass a large portion of the design, the very portion whose sole purpose is to get them to the target content. In this way the information architecture the designer envisioned may go unused, with users never clicking on the carefully crafted navigation links, never using the location-specific breadcrumbs, and in some cases never even seeing the much-fretted-over home page.

In these cases, users navigate entirely outside the site containing the target content. The only page they see is the one the aggregator links to. So the IA that actually gets users to the target content page isn’t the one on the site they end up on; it’s the aggregator’s. [3]

[Figure: distributed navigation view]

This “distributed navigation” idea is not new. In fact, linking at the page level between sites is the essence of the Web and always has been. Why, then, is so much of our design focus spent on figuring out how certain pages fit within our own biased and limited store of information when those pages are very often used in a completely different, distributed context? Put another way, why do we assume that our site is enough for our users’ domain-ignorant needs, rarely considering how our content fits into the larger, aggregated architecture of the Web?

What does it mean that our content is increasingly becoming part of an IA that is not of our own making? Should we be concerned that aggregators are increasingly allowing users to find their own ways to use our content how they see fit?

In a word, no, because this is what users always do. They make content work for them. Or, in some cases, content providers change to accommodate users. For example, Microsoft very recently decided to stop showing complete articles in its developer network blog feed. Apparently, the update requests from the aggregator programs took up too much bandwidth (programs often update every few minutes in the hopes of discovering new content). So Microsoft decided to show only the first 500 characters of articles instead of the full-length texts. They quickly reversed this decision, however, when users complained bitterly that they didn’t want to have to leave their aggregator program to read the rest of the content.

A Shift in Control

Aggregators are promoting a shift in the control of content. They’re challenging the idea that we as designers control public access to information in our domains, that users must view things in the way we prescribe, and that our hierarchy is best to present our content. This change is also suggesting that we need the help of others to market our own ideas. It is plausible that another’s approach to our information may be working better than our own.

More concretely, it means that the skill set of designers and information architects will have to be augmented. In addition to the skills we have now and the current ways of producing IA, we’ll need whatever skills are necessary to get our content onto the rapidly changing aggregators our audiences prefer. This includes an element of the unknown: a discovery of how we can create and organize content optimized for aggregation systems that don’t yet exist.

While strategizing for the unknown can be a fool’s errand, here are a few basic things I think can help us design for the coming of aggregators.

Embrace Web Standards

One easy way to get started is by learning Web standards, which were built from the ground up to allow documents to exist, have unique meaning, and be found in a distributed network. One example of how Web standards support the contexts created by aggregators is the use of the id attribute as a linking mechanism. The id attribute allows anyone to link to any element within your page, not just to the page itself. This lets content aggregators choose the level of depth necessary for the context they’re supporting.
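As a minimal sketch (the URL, heading, and id value here are hypothetical), a fragment identifier built on an id lets an aggregator point readers at exactly the section it is recommending:

    <!-- On your page, say http://example.com/articles/aggregation.html -->
    <h2 id="machine-aggregators">Machine aggregators</h2>
    <p>Search engines send out spiders to crawl the Web and index pages.</p>

    <!-- On the aggregator's page, anywhere else on the Web -->
    <a href="http://example.com/articles/aggregation.html#machine-aggregators">
      A good overview of machine aggregators
    </a>

The more elements that carry meaningful ids, the finer the granularity an aggregator can link to.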

Focus on the Page Level

Content-rich pages, not navigation pages, are the focus of aggregators. Because of this, we cannot count on any context a user would have built up by starting at the home page and moving through the navigation pages. This reinforces the basic idea that each Web page has a unique URI for a reason: It contains a unique set of content declared by its title, described by its headers, and discussed in its paragraphs. Because our IA may not be used, we will have to put more trust in the aggregators (both human and machine) to create the supporting IA. In a sense, our pages will live on their own.
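A page built to live on its own might look like the minimal sketch below (the URI, title, and headings are hypothetical); everything an aggregator or search engine needs to understand it is declared in the page itself rather than inherited from the surrounding navigation:

    <!-- Served at a stable, unique URI, say http://example.com/articles/faceted-classification.html -->
    <html lang="en">
      <head>
        <title>Faceted Classification for Content-Rich Sites</title>
      </head>
      <body>
        <h1>Faceted Classification for Content-Rich Sites</h1>
        <h2 id="why-one-hierarchy-fails">Why one hierarchy rarely fits every user</h2>
        <p>Readers arriving from an aggregator see only this page, so the page
           must declare its own subject, structure, and context.</p>
      </body>
    </html>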

Design for Different Aggregator Types

Different aggregator types will affect our design as well. The field of search engine optimization is growing fast. However, the way humans aggregate content is far less discoverable than the way machines do. This means we’ll have to come up with new strategies to get our content aggregated by the people who can help drive visitors to our sites. For bloggers this is already becoming part of the daily routine, often characterized (unfortunately) by superficial comments on someone else’s blog written primarily to garner click-throughs.

This makes the social aspect of design more obvious. Perhaps leveraging the social aspect is as simple as getting an RSS feed up on your site, because you know that many of the people who read your type of content are finding more of their content that way. Or, perhaps it’s a little more involved, like cultivating a relationship with the person whose blog you would really like a link from.

Or perhaps you can leverage it by simply asking people. Anil Dash, of Movable Type fame, recently won a contest against supposed “search engine optimization companies” by leveraging the popularity of his blog. He simply asked people to help him out and link to his site using the words “nigritude ultramarine.”
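Getting a feed up, the first of those options, can be as simple as publishing a small XML file and advertising it from your pages. Here is a minimal sketch, with hypothetical URLs and titles:

    <!-- In the <head> of your pages: an autodiscovery link so aggregators can find the feed -->
    <link rel="alternate" type="application/rss+xml"
          title="Example Site articles" href="http://example.com/feed.xml" />

    <!-- feed.xml: a minimal RSS 2.0 feed with a single item -->
    <?xml version="1.0" encoding="UTF-8"?>
    <rss version="2.0">
      <channel>
        <title>Example Site articles</title>
        <link>http://example.com/</link>
        <description>Articles about designing for an aggregated Web</description>
        <item>
          <title>Designing for content aggregators</title>
          <link>http://example.com/articles/designing-for-aggregators.html</link>
          <description>Aggregators let users reach content without ever seeing a home page.</description>
        </item>
      </channel>
    </rss>

Whether you publish full text or excerpts in the description is exactly the trade-off Microsoft wrestled with above.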

Move Toward User-driven Aggregation Systems

More generally, site designs will move toward more flexible aggregation systems. Instead of a rigid navigation system that gives users a pre-defined hierarchy of choices, we’ll see many more user-driven systems. Faceted classification systems (like the articles section on Digital Web Magazine’s site) are an example of this. These are essentially a special kind of aggregation system that lets users aggregate content according to the facets inherent in it. In contrast to a one-hierarchy-fits-all approach, faceted systems let the users choose the navigation scheme that fits them best.
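As a rough sketch of what that looks like in markup (the URLs and facet values are hypothetical), each facet value is simply a link, so users, and aggregators, can slice the same pool of articles along whichever dimension suits them:

    <!-- A hypothetical faceted navigation block for an articles section -->
    <ul>
      <li>By topic:
        <a href="/articles?topic=information-architecture">Information Architecture</a>,
        <a href="/articles?topic=web-standards">Web Standards</a>
      </li>
      <li>By author:
        <a href="/articles?author=porter">Joshua Porter</a>
      </li>
      <li>By year:
        <a href="/articles?year=2004">2004</a>
      </li>
    </ul>
    <!-- Facets combine freely, e.g. /articles?topic=web-standards&year=2004 -->

Because each combination of facets is a stable, linkable URL, these user-built views are just as easy to aggregate as any page a designer laid out by hand.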

The overall effect of “distributed navigation” brought about by content aggregators is that we’re witnessing the control of content shift from designers to users. Users are finding new, highly effective aggregators much to their liking, and in doing so are bypassing much of what we’ve built for them. In one sense it’s scary, because we won’t be able to control the user experience as much. In another sense it’s rather exciting. We’re becoming caretakers of content, creating quality Web pages to be judged on their own merit in an ever-aggregating world.


Many thanks to Professor Bill Hart-Davidson of Michigan State University, who provided me with valuable feedback over the course of our conversations about this topic.

Note: For the definition of information architecture I used the following from Information Architecture for the World Wide Web (2nd edition): “The combination of organization, labeling, and navigation schemes within an information system.”

[1] Note that aggregation is different from sorting or ranking. I’m using this definition: “To gather into a mass, sum, or whole.” Sorting and ranking are things you can do to the aggregate in order to increase (or decrease) its usefulness.

[2] The distinction between human- and machine-aggregated content proves rather arbitrary when put to the test. It could be argued, for instance, that a machine that aggregates content is really aggregating according to the will of the human who created it.

[3] Of course, human behavior is rarely this formulaic, and assuming that site-specific IAs will become unnecessary is a possible but hasty conclusion, I think. In most cases users will not view just a single page but will view however many pages are necessary to complete their task. I believe aggregation technologies, as they improve, are pushing that number toward one. In those cases, the challenge for the designer is to get users to continue on the site even after their original goal has been satisfied.

About the Author

Joshua Porter is the brains behind the popular design blog, Bokardo.com, and wrote the book Designing for the Social Web. Having worked as a Research Consultant at User Interface Engineering for five years, he started Bokardo Design in 2007, where he focuses exclusively on social web applications. His expertise on designing social experiences is sought by companies around the world.
