Getting Good Intel: How User Research Can Go Wrong

Bryan McClain · November 21, 2011 · Short URL: https://vator.tv/n/21d3

By: Demetrius Madrigal and Bryan McClain

As we’ve discussed in previous articles, the belief that some research is better than none simply isn’t true. This month, we’ll explore this belief and its possible consequences in depth.

In much the same way governments use intelligence in making military or diplomatic decisions, user research provides the information that product designers and stakeholders need to make decisions. If a government’s intelligence is accurate, leaders can make good, well-informed decisions. However, if the intelligence is inaccurate, leaders might make costly decisions that accomplish little or have negative consequences. Think of user research as design intelligence that helps product designers and stakeholders make informed decisions regarding product direction. Inaccurate design intelligence tends to lead to bad decisions, even when the best designers and savviest stakeholders make them. So, what are some of the ways in which user research can go wrong?

The Wrong Users

Perhaps the most serious problem that you might encounter is user research with the wrong participants. When you do user research with participants who are not representative of your target users for a product, any and all of your findings are likely to be incorrect. Information from the wrong participants can lead to designing a product for the wrong population of users. These could be people who are unlikely to adopt the product, do not experience the need that the product attempts to address, or are not qualified to provide accurate feedback on the product.

For example, early in our research careers, we assisted in a research study for which others had defined the protocol and done the recruiting before our involvement; our role was only to moderate the sessions. The study focused on how people drive and on technologies that could help them drive more effectively and safely. The problem was that several of the participants did not drive on a regular basis, but instead took buses or carpooled. A couple of them didn’t even have driver’s licenses! Data gathered from the wrong participants, as in this example, is not representative of your actual, intended users, so making decisions based on that data can take you in the wrong direction.

The most obvious cause of doing research with the wrong users is making mistakes when recruiting participants. To prevent this, make sure that your recruitment screener includes both descriptive and behavioral questions. For example, if you are recruiting Web developers, ask them to describe their work and what programming languages they use. An appropriate participant should be able to tell you, for example, that he does mostly back-end development and that JavaScript is his primary programming language. If you’re recruiting users who develop mobile games, be sure to ask what type of phone they own, whether they play games, and what their favorite games are. At the beginning of each research session, start off with similar questions to double-check that you’re doing research with the right participant. Even if you find out too late that someone isn’t right for your study and have to scratch that participant, it’s better than collecting inaccurate data.
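To make the distinction between descriptive and behavioral questions concrete, here is a minimal sketch, in Python, of how a screener’s qualification logic might work. Everything in it is hypothetical: the questions, the pass/fail checks, and the qualifies function are illustrations for this article, not part of any recruiting tool we use, and real screeners are typically administered by a recruiter rather than scored automatically.

```python
from typing import Callable, List, Tuple

# Hypothetical screener sketch: each entry pairs a question with a check
# the answer must pass. Descriptive questions ask who the person is;
# behavioral questions probe what the person actually does.
Question = Tuple[str, Callable[[str], bool]]

SCREENER: List[Question] = [
    # Descriptive: how candidates label their own work.
    ("How would you describe your current work?",
     lambda a: "developer" in a.lower()),
    # Behavioral: concrete evidence of the behavior you care about.
    ("Which programming languages do you use in a typical week?",
     lambda a: any(lang in a.lower()
                   for lang in ("javascript", "python", "ruby", "php"))),
    ("Describe the last project you shipped.",
     lambda a: len(a.split()) >= 10),  # a real practitioner can give specifics
]

def qualifies(answers: List[str]) -> bool:
    """A candidate qualifies only if every answer passes its check."""
    return all(check(answer)
               for (_question, check), answer in zip(SCREENER, answers))

candidate = [
    "I'm a back-end Web developer at a small agency",
    "Mostly JavaScript, with some PHP for older sites",
    "Rebuilt our checkout flow as a single-page app with client-side validation",
]
print(qualifies(candidate))  # True: descriptive and behavioral answers agree
```

The point of the behavioral checks is that they are hard to fake: someone who merely claims the job title will stumble on questions about day-to-day specifics.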

A less obvious cause of doing research with the wrong participants is inaccurate user profiles or personas. If the research that originally identified your users was flawed, the resulting personas won’t accurately describe them. Factors such as a small sample size, idiosyncratic users, too much focus on a few extremely salient findings, and investigator bias can all lead to inaccurate conclusions. If you do not correct such problems through a review process or follow-up research, the flawed personas can start a domino effect, resulting in inaccurate research throughout a product’s entire development cycle.

The Wrong Questions

Understanding the problem you’re trying to solve is fundamental to meeting user needs through product design. Conversely, failing to sufficiently understand the problem tends to lead to an inability to solve it. The biggest obstacles to gaining this understanding are your own preconceived assumptions about users, their needs, and the product. Researchers tend to form their research questions around their assumptions, so those assumptions become the foundation for all the knowledge user research yields. If your assumptions are incorrect and you never realize it, everything else that you discover will be flawed.

For example, according to Guy Kawasaki, the prevailing assumption in the early 80s was that customers simply wanted a “better, faster, and cheaper MS-DOS machine.” If Steve Jobs had never challenged this assumption, the Macintosh would never have revolutionized personal computing with its GUI and desktop metaphor. But he did, and Apple’s competitors who remained committed to MS-DOS soon realized that a market dominated first by the Mac and then by Windows had left them behind. More than that, they missed out on the chance to take the lead by thinking outside the box and creating their own innovations.

Unchallenged assumptions are dangerous, so it’s always important to challenge your assumptions in your research. When we’ve worked with the military and the FAA—organizations where lives can depend on the accuracy of our research—they’ve required that we document all of our assumptions in our test plans and final reports. Doing this helps with interpreting the findings and judging their reliability. We advise our clients to engage in a similar exercise: organize a collaborative session in which designers, engineers, researchers, and stakeholders get all of their collective assumptions onto a whiteboard, then discuss each one to determine how safe it is, whether it needs testing, and what an effective method of testing it would be. Incorporating assumption testing into existing research efforts is relatively easy, increases cost only marginally, if at all, and pays off in big ways.

The Wrong Conclusions

Once you have recruited good research participants and obtained good data by asking good research questions, the next step is interpreting the data to arrive at accurate conclusions that drive product and design decisions. However, interpreting data sometimes leads to inappropriate conclusions, which tends to occur for two primary reasons: investigator bias and overly aggressive interpretation of the data.

Investigator bias can be very difficult to overcome. Biases tend to surface as findings that seem like they should be right or that you expected to see. They become a problem when the data doesn’t actually support those conclusions. At times, research data may refute your conclusions, but in a way that is too subtle to recognize easily. As the investigator, you may not be aware of your own biases, which can make it extremely difficult to guard against this problem. The more deeply you’ve worked within a subject area, the more deeply ingrained your biases become.

One effective method of overcoming your biases is to work with a partner who can act as a check against your biases. A good partner is someone who is knowledgeable about user research methods, but isn’t deeply involved in the project or subject area of your research—and thus can maintain objectivity. Everyone has their biases, so it’s important that you serve as a check against your partner’s biases as well.

Biases are closely related to the other source of inappropriate conclusions: interpreting data too aggressively. This can happen when a participant reports a particularly interesting and salient data point, the kind of data that really gets your attention, usually because it’s something that you haven’t thought about before. However, for this very reason, it’s likely that you haven’t systematically tested the finding across multiple users.

You have to be careful not to rush into turning such a finding into a recommendation without fully vetting it. Our rule of thumb is that once is an outlier, twice is a coincidence, and three times is a trend. You want to look for trends to report rather than outliers.
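Purely as an illustration (the findings and counts below are hypothetical, not data from an actual study), the rule of thumb amounts to counting how many distinct participants independently raised each finding:

```python
from collections import Counter

def classify(count: int) -> str:
    """Rule of thumb: once is an outlier, twice is a coincidence,
    and three or more times is a trend worth reporting."""
    if count >= 3:
        return "trend"
    return "coincidence" if count == 2 else "outlier"

# Hypothetical session notes: one entry per participant who raised the finding.
findings = Counter([
    "wants offline mode", "confused by save icon", "wants offline mode",
    "confused by save icon", "wants offline mode", "dislikes color scheme",
])

for finding, count in findings.most_common():
    print(f"{finding}: seen {count}x -> {classify(count)}")
# wants offline mode: seen 3x -> trend
# confused by save icon: seen 2x -> coincidence
# dislikes color scheme: seen 1x -> outlier
```

In practice, this counting usually happens in an analysis spreadsheet or affinity diagram rather than in code, but the discipline is the same: label a finding a trend only after it recurs independently.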

You might find that someone has a really interesting idea, but upon investigation, most other users hate it. This doesn’t necessarily mean that you have to discard the idea, but it does indicate that it is a much more complex topic than you might originally have perceived. This is something you should be aware of before recommending the idea to stakeholders—or the product team may implement the idea to the detriment of the product and its users.

During your research, if you uncover something interesting with a given participant, try to assess your discovery with subsequent participants by adding it to your research protocol. If this isn’t possible, in your final report, document it as a new research question to explore further and recommend including it in follow-up research.

Conclusions

This month, we’ve looked at some of the common causes of faulty, inaccurate research findings that can end up threatening a product’s success. There are other ways in which research can go wrong, but these are the most common and tend to be the most damaging. It is extremely important that consumers of user research be able to recognize the difference between good and faulty research to maximize their chances of success in product development. The best protection against faulty research is a good research team and a good research plan. Next month, we’ll get into planning effective research for an entire product lifecycle and how doing the right research at the right time can help ensure that you’re getting good product intelligence.