You Need to Manage Digital Projects for Outcomes, Not Outputs
An excerpt from Sense & Respond (Harvard Business Press), printed with permission from the author. This excerpt was originally published Feb 6, 2017 on Harvard Business Review.
When is a project finished? For most of us, it seems pretty simple: when we ship the product or launch the service. But we need to take a step back and consider what “done” really means.
Most teams in business work to create a defined output. But just because we’ve finished making a thing doesn’t mean that thing is going to create economic value for us. If we want to talk about success, we need to talk about outcomes, not just outputs. And as the world continues to digitize and almost every product and service becomes more driven by (or at least integrated with) software, this need grows even stronger.
For example, we may ask a vendor to create a website for us. Our goal might be to sell more of our products online. The vendor can make the website, deliver it on time and on budget, and even make it beautiful to look at and easy to use, but it may not achieve our goal, which is to sell more of our products online. The website is the output. The project may be “done.” But if the outcome — selling more products — hasn’t been achieved, then we have not been successful.
Most companies manage projects in terms of outputs, not outcomes. This means that most companies are settling for “done” rather than doing the hard work of targeting success.
Defining Done as Successful
In some situations these ideas are the same thing or have such a clear, well-understood relationship that they might as well be the same thing. This is frequently the case in industrial production. Because of the way industrial products are designed and engineered, you know that when your production line is spitting out Model T cars, you can be reasonably certain they will work as designed. And because of years of sales history, you can be reasonably certain that you will be successful: You will sell roughly the number of cars you expected to. Managers working in this context can be forgiven for thinking that their job is simply to finish making something.
With software, however, the relationship between “we’ve finished building it” and “it has the effect we intended” is much less clear. Will our newly redesigned website actually encourage sharing, for example, or will the redesign have unintended consequences? It’s very difficult to know without building and testing the system. And, in contrast to industrial production, we’re not making many instances of one product. Instead, we’re creating a single system — or a set of interconnected systems that behave as one system — and we are often in the position of not knowing whether the thing we’re making will work as planned until we’re done.
This problem of uncertainty, combined with the nature of software, means that managing our projects in terms of outputs is simply not an effective strategy in the digital world. And yet our management culture and tools are set up to work in terms of outputs.
Using the Alternative to Output: Outcomes
The old cliché in marketing is true: Customers don’t want a quarter-inch drill. They want a quarter-inch hole. In other words, they care about the end result, and don’t really care about the means. The same is true of managers: They don’t care how they achieve their business goals; they just want to achieve them.
In the world of digital products and services, uncertainty becomes an important player and breaks the link between the quarter-inch drill and the quarter-inch hole. Some managers try to overcome the problems caused by uncertainty by planning in increasingly greater detail. This is the impulse that leads to detailed requirements and specification documents, but, as we’ve come to understand, this tactic rarely works in software.
It turns out that this problem — the way our plans are disrupted by uncertainty, and the fallacy of responding with ever-more-detailed plans — is something that military commanders have understood for hundreds (if not thousands) of years. They’ve developed a system of military leadership called mission command, an alternative to rigid systems of leadership that specify in great detail what troops should do in battle. Mission command is a flexible system that allows leaders to set goals and objectives and leave detailed decision making to the people doing the fighting. Writing in The Art of Action, Stephen Bungay traces these ideas as they were developed in the Prussian military in the 1800s and describes the system that those leaders developed to deal with the uncertainty of the battlefield.
Mission command is built on three important principles that guide the way leaders direct their people.
- Do not command more than necessary or plan beyond foreseeable circumstances.
- Communicate to every unit as much of the higher intent as is necessary to achieve the purpose.
- Ensure that everyone retains freedom of decision within bounds.
For our purposes, this means that we would direct our teams by specifying the outcome we seek (our intent), allowing our teams to pursue this outcome with a great deal of (but not unlimited) discretion, and expecting that our plans will need to be adjusted as we pursue them.
Case Study: Putting This into Practice
In 2014 the Taproot Foundation wanted to create a digital service that would connect nonprofit organizations with skilled professionals who wanted to donate their services. Think of it as a matchmaking service for volunteers. Taproot had to work with vendors, and ended up choosing our firm for the project.
In our early conversations, Taproot leaders described the system that they wanted to build in terms of its features: It would have a way for volunteers to sign up, a way for volunteers to list their skills, a way for nonprofit organizations to look up volunteers based on these skills, and so on. We were concerned about this feature list. It was a long list, and although each item seemed reasonable, we thought we might be able to deliver more value faster with a smaller set of features.
To shift the conversation away from features, we asked, “What will a successful system accomplish? If we had to prove to ourselves that the system was worth the investment, what data would we use?” This conversation led to some clear, concrete answers. First of all, the system needed to be up and running by a specific date, about four months away. The foundation participates in an annual event to celebrate the industry, and executives wanted to have a demonstrated success that they could show off to funders at that event. We asked, “What does up and running mean?” Again, the answers were concrete: We need to have X participants active on the volunteer side, and Y participants active on the organization side. Because the point of the service would be to match volunteers with organizations so that they could work on projects together, we should have made Z matches, and a certain percentage of those matches should have yielded successful, completed projects.
This was our success metric: X and Y participants; Z matches; percentage of completed projects. (We actually set specific numerical targets, but we’re using variables here.)
Next, we asked, “If we can create this system and achieve these targets without building any of the features in your wish list, is that OK?” This was a harder conversation.
The executives signing the contract were understandably concerned. What guarantee did they have that we would complete the project?
This is the bind that executives and managers face. As they negotiate with partners, they are bound to protect their organizations. They need to find contractual language that ensures the partners will deliver. The problem with contracts, though, is that to make them work, managers are forced to settle for the protection they find in the concrete language of features: You build feature A, and we will pay you amount B. But this linguistic certainty is a false hope. It guarantees only that your vendor will get to “done,” as in, “The feature is done.” It does not guarantee that the set of features you can describe in a contract will make you successful. On the other side, vendors are understandably hesitant to sign up to achieve an outcome, mostly because vendors rarely control all of the variables that contribute to project success or failure. Thus both sides settle for a compromise that offers the safety of “done” while at the same time creating constraints that tend to predict failure rather than create the freedom that breeds success.
Our contract with Taproot, then, contained not only a list of desired features but also a list of desired outcomes. It included: The system will connect volunteers to organizations [at the following rate]; it will allow these parties to find each other, communicate well with each other, and report on the success of their projects; it will do so at [the following rates] and by [the following date]; etc. Of course, there was also some legalese. But this compromise — listing the features we thought were important, but being clear about outcomes and agreeing in advance that outcomes are more important — is the key to managing with outcomes instead of output.
The team decided that the most important milestone was to get the system up and running. Rather than wait four months, the length of the project, they decided to launch as quickly as possible, going live to a pilot audience within one month. They launched a radically simplified version of the service, one with very few automated features. The Taproot team knew it would need more automation if it wanted the system to scale, but it also knew automation could come later. Launching early achieved two goals. First, it ensured that the team would have something to show to funders at the annual event. This was a hugely important marketing and sales goal. But launching early addressed an even more important goal: It allowed the team to learn what features it would actually need in order to operate the system at scale. In other words, it allowed the team to establish a sense-and-respond loop — a two-way conversation with the market that would guide the growth of the service.
The project planners had imagined, for example, that the skilled volunteers would need to be able to create profiles on the service. Organizations would then browse the profiles to find volunteers they liked. This turned out to be exactly wrong. When the team tried to get volunteers to make profiles, they responded with indifference. The team realized that, in order to make the system work, volunteers had to be motivated to participate; they needed to find projects that they were passionate about. In order to do this, the system needed project listings, not volunteer listings. In other words, the team had to reverse the mechanics of the system, because the initial plans were wrong.
By the second month of the project, the team had built the system with the revised mechanics. Then they concentrated on tuning the system, identifying the details of the business processes needed and building software to support those processes. How would the team make it easy for organizations to list their projects? How would team members make sure the listings were motivating to volunteers? How simple could they make the contact system? How simple could they make the meeting scheduler? At the end of the four-month project, the team had a system that had been up and running for three months and that far exceeded the performance goals written into the contract.
This project worked because the team followed the principles of mission command, which is based on outcomes, not outputs. Give teams a strategy and a set of outcomes to achieve, along with a set of constraints, and then give them the freedom to use their firsthand knowledge of the situation to solve the problem. This approach to project leadership is not common, but we see it more frequently on startup teams and in smaller organizations. Scaling the approach to multiple teams and to larger organizations can be a difficult, subtle challenge, requiring careful balance between central planning and decentralized authority. But it is quickly becoming a necessary shift in our software-driven world.