Summary:
3 methods for increasing UX quality by exploring and testing diverse design ideas work even better when you use them together.
This is an updated version of an article written by Jakob Nielsen in 2011.
There’s no one perfect user-interface design, and you can’t get good usability by simply shipping your one best idea. You have to try (and test) multiple design ideas. Iterative design, parallel design, and competitive testing are three different ways to consider design alternatives. While each of them independently can improve a design’s usability, combining them enables you to explore the best ideas at a lower cost than sticking to a single approach.
Iterative Design
An “iteration” is an intentional repetition of a step in the design process with the goal of improving the design at that stage. For each version of the design, you conduct a usability evaluation (such as user testing or heuristic evaluation) and revise the next version based on the usability findings.
Iterative design is the:
- simplest process model (a linear progression)
- oldest foundation for user-centered design (UCD)
- cheapest (you often can iterate in a few hours)
- most progressive because you can keep incrementally improving for as many iterations as your budget allows (competitive and parallel testing are usually one-shot components of a design project)
How Many Iterations?
We recommend at least 2 iterations. These 2 iterations correspond to 3 versions: the first draft design (which we know is never good enough), followed by 2 redesigns. However, it’s better to incorporate 5–10 iterations or more, particularly when testing weekly.
Of course, one iteration (that is, a single redesign, for a total of 2 design versions) is still better than shipping your best guess without usability-derived improvements. Experience shows that the first redesign will have many remaining usability problems, which is why it’s best to plan for at least two iterations.
More iterations are better: you’ll be hard-pressed to find anyone who has iterated so much that the last iterations yielded no usability improvements. Our 1993 study measured usability improvements of 38% per iteration. Those metrics came from traditional application development; for websites, improvements are typically bigger. In one case study, the targeted KPI improved by 233% across 6 iterations (7 design versions, with 6 iterations between them), corresponding to 22% per iteration. The key lesson from this latter case study is that it’s best to keep iterating, because the gains compound.
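The arithmetic above can be sanity-checked with a short calculation, since per-iteration gains compound multiplicatively (a minimal sketch; the 38% and 22% figures come from the studies cited above):

```python
def total_gain(per_iteration_gain: float, iterations: int) -> float:
    """Cumulative improvement over the starting version when each
    iteration improves usability by the same relative amount.
    E.g., 0.22 per iteration over 6 iterations -> ~2.30 (~230%)."""
    return (1 + per_iteration_gain) ** iterations - 1

# 22% per iteration, compounded over 6 iterations, roughly reproduces
# the 233% improvement reported in the website case study:
print(round(total_gain(0.22, 6) * 100))  # ~230 (percent)
```

This is why "keep iterating" pays: each 22% gain is applied to an already-improved baseline, not to the original design.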
To get many iterations within a limited budget and timeline, you can use discount usability methods: create paper prototypes for the early design versions, planning about one day per iteration. In later versions, you can gradually proceed to higher-fidelity renderings of the user interface, but there’s no reason to worry about fine details of the graphics in early stages, especially when you’re likely to rip the entire workflow apart between versions.
Simple user testing (5 users or fewer) will suffice because you’ll conduct additional testing in later iterations.
Limitations of Iterative Design
A classic critique of iterative design is that it might encourage “hill-climbing” — that is, working toward a local maximum rather than discovering a superior solution in a completely different design space. While iterative design is a highly effective technique to create usability gains, it’s true that it limits us to improving a single solution. If you start out in the wrong part of the design space, you might not end up where you’d really like to go. While the criticism is valid, the vast majority of user-interface design projects typically employ components or features that already have established, well-documented best practices.
Of course, superior solutions that exceed current best practices are possible; after all, we haven’t seen the perfect user interface yet. But most designers would be happy to nearly double their business metrics. Simply polishing a design’s usability through iterative design has extremely high ROI and is often preferable to the larger investment needed for higher gains.
That said, to avoid this problem of fixation on a single design solution, it may help to start with a parallel-design step before proceeding with iterative design.
Parallel Design
During parallel design, you create multiple alternative designs at the same time. These designs could be produced by a single designer or by multiple designers, each assigned a different design direction and producing one draft design.
In any case, to stay within a reasonable budget, all parallel versions should be created quickly and cheaply. They don’t need to embody a complete design of all features and pages. Instead, for a website or intranet, you can design a few key pages and, for an application, you can design just the top features. Ideally, you should spend just a few days designing each version and refine them only to the level of rough wireframes.
Although you should create a minimum of 3 different design alternatives, it’s not worth the effort to design many more. 5 is probably the maximum.
Once you have all the parallel versions, subject them to user testing. Each test participant can test 2 or 3 versions. Any more and users get fatigued and can’t articulate the differences. Of course, you should alternate which version they test first because users are fresh (and unbiased) only on their first attempt. When they try the second or third UI that solves the same problem, people inevitably transfer their experience from using the previous version(s). Still, it’s worth having users try a few versions so that they can do a compare-and-contrast at the end of the session.
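One simple way to alternate which version each participant sees first is a rotation of the version order, so that every design gets an equal share of "fresh" first impressions. A sketch (the version names and participant count are hypothetical; in practice you would also cap each participant at 2–3 versions, as noted above):

```python
def rotated_orders(versions: list[str], n_participants: int) -> list[list[str]]:
    """Give each participant a test order that rotates which version
    comes first, so every version is seen fresh equally often."""
    k = len(versions)
    orders = []
    for p in range(n_participants):
        start = p % k  # shift the starting version by one per participant
        orders.append(versions[start:] + versions[:start])
    return orders

# 3 parallel versions, 6 participants: each version is tested first
# by exactly 2 participants.
for order in rotated_orders(["A", "B", "C"], 6):
    print(order)
```

With unequal participant counts, a full counterbalanced (Latin-square) schedule is the more rigorous option, but a simple rotation like this is usually enough for small qualitative studies.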
Unlike with A/B or multivariate testing, the goal of testing with parallel design is not necessarily to identify “a winner” from the parallel designs. Instead, the goal is to create a single merged design that uses the best ideas from each of the parallel versions. Finally, proceed with iterative design (as above) to further refine the merged design.
In 1996, we conducted a research study of parallel design, in which we evaluated 3 different approaches:
- Out of 4 parallel versions, simply pick the best one and iterate on it. This approach resulted in measured usability 56% higher than the average of the original 4 designs.
- Follow the recommended process and use a merged design, instead of picking a winner. Here, measured usability was 70% higher, giving us an additional 14% gain from including the best ideas of the “losing” designs.
- Continue iterating from the merged design. After one iteration, measured usability was 152% higher than the average of the original designs. (So the extra iteration improved the merged design’s usability by a further 48%, calculated as 2.52/1.70. This is within the expected range of gains from iterative design.)
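The arithmetic behind those percentages can be checked directly; the factors below simply restate the study results quoted above as multiples of the baseline:

```python
# Usability expressed as a multiple of the average of the 4 original designs.
best_picked = 1.56  # pick-the-winner approach: 56% higher than baseline
merged      = 1.70  # merged design: 70% higher than baseline
iterated    = 2.52  # merged design after one iteration: 152% higher

# Extra gain from merging rather than picking a winner:
print(round((merged - best_picked) * 100))  # 14 (percentage points)

# Gain of the extra iteration relative to the merged design itself:
print(round((iterated / merged - 1) * 100))  # 48 (percent)
```

Note the two gains are measured differently: the 14% is a difference in percentage points over the baseline, while the 48% is relative to the merged design, which is why it is computed as a ratio.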
Of course, there is no empirical reason to stop with one iteration after the merged design, but budget constraints will often dictate how many iterations are possible. As stated above, we still recommend at least 2–3 iterations to ensure measurable usability improvements.
Competitive Testing
In a competitive usability study, you test your own design and 3–4 designs from your competitors. The process looks the same as for parallel design, except that the original design alternatives are preexisting sites or apps as opposed to wireframes you create specifically for the study.
Competitive testing is not the same as a competitive review. While both are forms of competitive evaluation, competitive testing requires that designs be tested with actual users.
The benefit of competitive testing is also the same as that of parallel design: you gain insight into user behaviors with a broad range of design options before you commit to a design that you’ll refine through iterative design.
Competitive testing is also advantageous in that you don’t spend resources creating early design alternatives. For example, when designing a website, you can simply pick from among the ones available on the web. Competitive testing doesn’t work as well for intranets and other domains where you can’t easily access other companies’ designs.
Just as with parallel design, a competitive test shouldn’t simply be a benchmark to anoint a “winner.” Sure, it can get the competitive juices flowing in most companies to learn that a hated competitor scores, say, 45% higher on key usability metrics. Such numbers can spur executive action, but quantitative measurements provide weaker insights than qualitative research, and they are often more costly to obtain, given the need for larger sample sizes. A more profitable goal for competitive studies is to understand how users behave and why; to learn what features they like or find confusing across a range of currently popular designs; and to discover opportunities to serve unmet needs.
Many design teams skip competitive testing because of the added expense of testing several sites. But this step is well worth the cost because it’s the best way to gain deep insights into users’ needs before you attempt to design something to address these needs.
Competitive testing is particularly important if you’re using an Agile development methodology because you often won’t have time for deep explorations during individual sprints. You can do the competitive study before starting your development project because you’re testing existing sites instead of new designs. You can later reach back to the insights when you need to make fast decisions during a sprint. Insights from competitive testing thus serve as money in the bank that you can withdraw when you’re in a pinch.
Exploring Design Diversity Produces Better UX
All 3 methods — iterative, parallel, and competitive — work for the same reason: Instead of being limited to your one best idea, you try a range of designs and see which ones work with your customers in user testing. The methods represent different ways of exploring diverse ideas and progressing your designs in different directions. This is important because there are so many dimensions in interaction design that the resulting design space is incredibly vast.
In the ideal process, you’d first conduct competitive testing to get deep insights into user needs and behaviors with the class of functionality you’re designing. Next, you’d proceed to parallel design to explore a wide range of solutions to this design problem. Finally, you’d go through many rounds of iterative design to polish your chosen solution to a high level of user experience quality. And, at each step, you should be sure to judge the designs based on empirical observations of real user behavior instead of your own preferences. (Remember: “You are not the user.”)
Combining these 3 methods prevents you from being stuck with your one best idea and maximizes your chances of finding something even better.