Guidelines for product usability testing

Bryan McClain · May 23, 2011 · Short URL: https://vator.tv/n/1aaa

This is not just a test - it's primetime

When engaging in any form of product usability testing, you should keep certain very important guidelines in mind. One guideline that user researchers commonly overlook is testing with a version or mockup that is free of glitches, bugs, or known errors. In essence, you want what you’re testing to be ready for primetime. We have found it is very common for companies to test with incomplete builds of a product that are rife with known issues. We always advocate using a clean build or mockup of a product, because of the negative consequences we’ve encountered in the past. Of course, it is always possible to test with a buggy build, but it is very important to be aware that testing with a product that has known issues can extend a usability study’s schedule, compromise the accuracy of its results, and inflate its cost.

Users fixate on glitches, bugs, and errors

When you put a product that has glitches, bugs, or errors in front of users, they usually discover them; Figure 1 represents such a glitch. When they do, they tend to fixate on them and miss actual usability issues, especially those that are more subtle. This phenomenon is one reason it is important to do iterative testing to uncover all of a user interface design’s critical usability issues. However, even if you are using an iterative testing approach, participants’ encountering many glitches could increase the number of test iterations you’ll need to do. If you are relying on just one or two iterations of usability testing to get your product ready, you are almost certainly going to miss quite a few issues whenever participants encounter errors.

Figure 1—Encountering a glitch

When you are doing concept testing, errors can distort participants’ true reactions to a concept. Rather than responding to the concept’s value proposition, participants instead respond to the quality of its implementation. When discussing barriers to adoption in a previous column, “Barriers to Adoption and How to Uncover Them,” we defined confidence as users’ believing that a product can deliver the value it promises. When you perform concept testing with a build that has obvious errors, participants’ confidence in the product suffers, and their reaction to the concept suffers in turn. Even if participants attempt to compensate for the errors by ignoring them, their reaction is still somewhat tainted, and they can end up overcompensating or imagining a product that differs radically from the intended design. In that situation, your best course of action is to ask participants about the design they imagine. But keep in mind that they are then describing untested, vague design ideas rather than providing true feedback on the concept.

Increased costs and extended schedules

Whenever we test with a buggy build, we know that the testing will take 25–50% longer than it would have with a stable build, because of the troubleshooting that tends to occur during and between test sessions. When a product freezes or crashes during a test, you must stop the session to solve the problem. This can often result in sessions’ extending far beyond their scheduled time. An hour-long session can easily become a 90-minute session, pushing back the start time for the following sessions, during which other participants may experience the same problems. At times, we’ve had to cancel sessions shortly after they started, because the complete failure of the software required that we wait for an engineer to repair or reinstall the build.

Such failures are time-consuming and costly because they extend the timeline for acquiring adequate, usable data; require additional engineering support; and can result in the loss of paid participants. The need to replace lost participants means you’ll accrue additional costs for recruiting, participant compensation, and session moderation. In addition, adding sessions to replace lost participants can put a research deliverable date in danger of slipping. That slippage could shift the development schedule, which is very costly, or force design decisions to be made without research findings, which is risky.

Obviously, it’s not always possible to get a perfectly clean, workable build. When you cannot, the best way to overcome the hurdles we’ve mentioned is to be as prepared as possible before starting usability testing.

We always test a build thoroughly prior to starting a usability study to determine its stability. It’s very helpful to know ahead of time the ways in which a product can break and whether there are incomplete sections you should avoid during testing. It’s also imperative to know whether a build could require a cold boot or even a reinstall. When we know that we are dealing with a buggy build, we can

  • compensate by scheduling additional buffer time between sessions—This allows each session to run as long as necessary; the sketch after this list shows one rough way to size that buffer.
  • recruit additional participants in case there are cancelled sessions—We can cancel the extra sessions later if they turn out to be unnecessary. It’s much easier to cancel than to scramble in an attempt to find suitable replacements.
  • have a research partner on a study if the budget allows—This allows one person to aggregate data while the other collects data or to troubleshoot while the other person performs a post-session interview or preps the next participant.
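
As a rough illustration of the buffer-time point above, the following TypeScript sketch estimates a day’s session schedule when you expect buggy-build overruns in the 25–50% range we described earlier. The function name, the data structure, and the specific figures in the example are illustrative assumptions, not a prescribed formula.

```typescript
// Rough schedule estimator for usability sessions with a buggy build.
// The 25–50% overrun range comes from the article; the names and
// structure here are illustrative assumptions only.

interface SessionPlan {
  start: number;            // minutes from the start of the test day
  plannedMinutes: number;
  worstCaseMinutes: number;
}

function planSessions(
  sessionCount: number,
  plannedMinutes: number,
  overrunFactor: number,    // e.g. 0.5 for a 50% worst-case overrun
  bufferMinutes: number     // buffer scheduled between sessions
): SessionPlan[] {
  const sessions: SessionPlan[] = [];
  let start = 0;
  for (let i = 0; i < sessionCount; i++) {
    const worstCase = Math.ceil(plannedMinutes * (1 + overrunFactor));
    sessions.push({ start, plannedMinutes, worstCaseMinutes: worstCase });
    // Schedule the next session assuming the worst case plus the buffer,
    // so a single long session doesn't cascade into every later slot.
    start += worstCase + bufferMinutes;
  }
  return sessions;
}

// Example: six 60-minute sessions, 50% worst-case overrun, 15-minute buffers.
const plan = planSessions(6, 60, 0.5, 15);
const totalMinutes =
  plan[plan.length - 1].start + plan[plan.length - 1].worstCaseMinutes;
console.log(`Test day runs about ${Math.round(totalMinutes / 60)} hours`, plan);
```

Running the example shows six nominally one-hour sessions consuming roughly a ten-hour day once overruns and buffers are accounted for, which is exactly the kind of slippage worth anticipating before you recruit.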

Anything that you can do to streamline your process and ensure you meet your deadlines is a sensible enhancement to your test plan.

Testing with mockups or prototypes

For usability testing, if you don’t have software to test, fake it. In most cases, testing with mockups or prototypes can provide excellent, actionable data. This holds true whether you are using a simple, clickable Flash demo for usability testing or paper prototypes for concept testing. By limiting development to a product’s front end, you can quickly create a user interface prototype that is adequate for usability testing.

If a designer or researcher on your team is familiar with simple Flash development or HTML/CSS prototyping, you can develop a prototype with minimal support from Engineering. There are also some software solutions available from companies like Balsamiq and Napkee that allow just about anyone to produce a clickable HTML prototype like the one in Figure 2.

Figure 2—A prototype made using Balsamiq Mockup

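If you don’t have a tool like Balsamiq or Napkee at hand, even a handful of exported screen images and a few lines of script can serve as a clickable prototype. The sketch below is a minimal, hypothetical example in TypeScript for the browser: it assumes you have static mockup images, and the screen names, image paths, and hotspot coordinates are made up for illustration. It simply swaps screens when a participant clicks a defined region.

```typescript
// Minimal clickable-prototype runner: shows one mockup image at a time and
// navigates to another screen when the participant clicks a hotspot region.
// Screen names, image paths, and hotspot coordinates are illustrative.

interface Hotspot {
  x: number; y: number; width: number; height: number; // region in CSS pixels
  targetScreen: string;                                  // screen to navigate to
}

interface Screen {
  image: string;        // path to the exported mockup image
  hotspots: Hotspot[];
}

const screens: Record<string, Screen> = {
  home:   { image: "mockups/home.png",
            hotspots: [{ x: 20, y: 40, width: 160, height: 40, targetScreen: "search" }] },
  search: { image: "mockups/search.png",
            hotspots: [{ x: 20, y: 10, width: 80, height: 30, targetScreen: "home" }] },
};

const img = document.createElement("img");
document.body.appendChild(img);

function show(screenName: string): void {
  const screen = screens[screenName];
  img.src = screen.image;
  img.onclick = (event: MouseEvent) => {
    const rect = img.getBoundingClientRect();
    const x = event.clientX - rect.left;
    const y = event.clientY - rect.top;
    // Navigate if the click falls inside any hotspot; otherwise do nothing,
    // which is itself useful data about where participants expect to click.
    for (const spot of screen.hotspots) {
      if (x >= spot.x && x <= spot.x + spot.width &&
          y >= spot.y && y <= spot.y + spot.height) {
        show(spot.targetScreen);
        return;
      }
    }
  };
}

show("home");
```

Because only the front end exists, a broken flow just means a hotspot leads nowhere; there is no build to crash or reinstall mid-session.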

When taking this kind of approach, you should match your test-session design to the fidelity of the mockup or prototype. If you have paper prototypes or fairly simple clickable prototypes, focus primarily on core features, brand messaging, and perceived value. If you have a more complete prototype, you can progress to a more rigorous test of additional features and their added value.

When devising an end-to-end research plan, we typically start with need-finding research such as ethnography, home visits, or interviews, then do concept testing using paper prototypes to assess the value proposition, brand messaging, and feature set. As the design progresses, we test with paper prototypes, incorporating simple test tasks that address a product’s core functionality. Eventually, we’ll transition to testing that core functionality using low-fidelity, clickable mockups. The next step is more robust usability testing with medium-to-high-fidelity prototypes. Finally, we move to testing a reasonably stable build of the actual product, doing in-depth usability tests or following more advanced testing methods such as competitive benchmarking. We’ve found that this kind of iterative testing schedule is extremely effective in providing actionable design intelligence.

Conclusion

It’s never a great option to test with buggy or unstable builds, because doing so can compromise your data collection, complicate your study’s logistics, and potentially impact your study’s budget and schedule. You can test mockups or prototypes of various types as alternatives to testing incomplete builds, but it is important to design your study to be compatible with the fidelity of the mockups or prototypes you are using.

When a prototype simply won’t do the job and you need to use a build that you know has errors, it’s important to plan for the problems that are likely to arise. Before starting your study, test the build as extensively as you can, note the areas in which you encounter difficulties, and plan for troubleshooting. In your test plan, it’s also important to accommodate the possibility of cancelled or extended sessions by recruiting extra participants, including extra buffer time between sessions, and working with a research partner or a team of researchers.

As user research professionals, our goal is always to provide accurate, actionable research findings on schedule and on budget. For testing, we recommend that our clients provide a build that is ready for primetime. But, if that can’t happen, we rely on the tools we’ve described and a little creativity, so we can anticipate problems, quickly find solutions when we encounter them, and keep our research objectives on track.

(Demetrius Madrigal contributed to this article)

(Image source: Usabilis)