Good enough never is, or is it?

Eric Ries · September 29, 2010 · Short URL: https://vator.tv/n/122a

What matters is that our pursuit of learning is ongoing and our commitment is absolute

One of the sayings I hear from talented managers in product development is, “good enough never is.” It’s inspirational, always calling the team to try harder and do better. It works to undermine excuses for poor or shoddy work. And, most importantly, it helps team members develop the courage to stand up for these values in stressful situations. Especially in teams that manage by objectives (or OKRs), the pressure to deliver is intense. Under such pressure, the temptation to cut corners, to quit prematurely, or to hand off shoddy work to another department is overwhelming. It requires courage to stand up and say: "this work is simply not good enough. Sure, we could get away with it, but that's not how we work."

Good managers work hard to create an environment where this courage thrives.

On the other hand, there are many stories of companies achieving a breakthrough by shipping something that was only "good enough." One such rumor, which I’ve heard from several sources, tells of the launch of Google Maps. The team was demoing their AJAX-powered map solution, the first of its kind, to senior management at Google. The executives were impressed, even though the team considered it still an early prototype. Larry and Sergey, so the legend goes, simply said: “it is already good enough. Ship it.” The team complied, despite their reservations and fear. And the rest is history: Google Maps was a huge success. That success was aided by the fact that it did just one thing extremely well – its lack of extra features emphasized its differentiation. Shipping sooner accentuated this difference, and it took competitors a long time to catch up.

So which is it? Is "good enough" good enough? Rules of thumb can be infuriatingly unhelpful. When should you settle for good enough and when should you push yourself to do your best?
This is precisely the dilemma that the doctrine of minimum viable product is designed to solve. And it’s really hard.
Most of us intuitively adopt a “split-the-difference” attitude when faced with recurring difficult choices. That is not a long-term solution, because it actively encourages factional strife. Everyone naturally falls somewhere on a spectrum, from “ship anything soonest” to “always build it right, no matter what it takes.” When members of a team realize that the final answer will be some kind of average, they face an overwhelming incentive to express their desires in the strongest possible terms. After all, someone else’s view will be averaged in, too, and any excesses are likely to be moderated by others.
Of course, this logic applies to members of all factions. Over time, such teams either explode due to irreconcilable differences or slow down dramatically. The latter is actually more dangerous. Divided teams usually can’t agree on facts or interpretations, yet startups rely on collective learning in order to find their way. Factional strife is learning kryptonite. I believe this is one reason why the myth of the dictatorial startup founder has such enduring appeal. Faced with these kinds of disagreements, strong arbitrary action is much superior to paralysis.
But action and paralysis are not the only options. As with many false dichotomies, there is a third way, one that gives both factions a positive message to rally around.
Without an affirmative message, managers can cause lasting harm. I certainly have. It used to make me nervous when people invoked quality, reliability, or design as an excuse to delay, even when the suggestions were well intentioned. After all, how would Craig Newmark’s life (and the rest of ours, too) be different today if he had waited to build something with a high-quality design before starting his famous list? Rather than having this argument again and again, I sometimes found it easier to play dictator on the other side, forcing teams to ship sooner than they were comfortable with. As I found out to my dismay, this is a dangerous game: in many cases, you’re asking trained professionals to violate their own code of best practices for the good of the company. Once you go down that road, you risk opening a Pandora’s box of bad behaviors. And yet, it does not have to be that way.
Almost everything we know today about how to build quality products in traditional management has its origins with W. Edwards Deming, the original quality guru. He had two concepts that are especially important to this discussion. The first is that “best efforts are not enough.” Despite what it seems in the moment, most quality problems are not caused by people slacking off or acting maliciously. (It seems that way only because of a psychological phenomenon called the fundamental attribution error.) In reality, most quality problems are systemic in nature. They have to be solved in the boardroom by making a company-wide commitment to building quality into the very systems the company uses to build products. Lean manufacturing, agile software development, and Theory of Constraints are all examples of this idea in action.
However, a commitment to quality alone is not enough. In old-school manufacturing, quality was defined as reliability: parts and products that did not wear out, break down, or fail unexpectedly. That is what makes Deming’s second insight especially prescient: he saw that “the customer is the most important part of the production line.” In other words, quality is defined in the eye of the customer, not necessarily by arbitrary standards loved by insiders to the production process. In today’s world this is increasingly important, as quality is often defined by factors beyond reliability: design, ease of use, aesthetic appeal, and convenience.
Now we come to the heart of the minimum viable product issue: how can we build quality in if we do not yet know who the customer is? All of our professional standards that lead us to want to get it right the first time – all of them were developed originally in a non-startup context, one where the customer was known in advance. Startups are different, leading to this axiom: if you do not know who the customer is, you do not know what quality is.
Which takes us right back to the original definition of minimum viable product:
the minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning with the least effort.
In other words, the minimum viable product is a test of a specific set of hypotheses, with a goal of proving or disproving them as quickly as possible. One of the most important of these hypotheses is always: what will the customer care about? How will they define quality?
One common worry is that this might lead companies to “release crap,” shipping too soon with a product of such low quality that it alienates potential customers and, in turn, causes entrepreneurs to abandon their vision. This critique combines two misunderstandings in one.
First, I want to explore the idea of releasing crap: that our product is of such low quality that we will release it, customers will hate it, and we’ll have accomplished nothing but alienating them. But notice how many hypotheses are baked into this supposedly simple scenario: we believe we have already solved the distribution problem for our product (or else how could customers try it?). We already know who to distribute the product to (or else why would we care what they think?). Naturally, we already know the standard of quality that they will use to judge our product. And, of course, we already know that they will care enough to be offended. In fact, we know so much that we already know what they will care enough about (namely, the product’s quality – as opposed to, say, missing features).
Even better, this is a falsifiable hypothesis. It is entirely possible that we will ship “crap” and have one of those embedded assumptions fail to hold. In fact, that is one of the best possible outcomes, because it will force us to learn something. What if customers actually like the “crap” product? Or what if we can’t get any of them to even try it? Or what if the features they demand are different from the ones we were planning to build? In those cases, we can’t help but learn a great deal. Remember, the minimum in minimum viable product does not mean that you should ship just anything at the earliest possible date. It means to ship as soon as it is possible to learn what you need to learn.
The second misunderstanding is a concern about what will happen if things turn out exactly as we originally predicted (namely, badly). Entrepreneurs, faced with an early defeat, might lose their commitment to seeing their vision through. I understand this fear. It is a direct consequence of the reality distortion field, that ability most visionaries have to get people to believe in a vision as if it were already true. Data can undermine this field: for founders, investors, and employees alike, it is easier to believe in a glorious future when the only numbers are zeroes.
But this fear is way overblown, in my experience. The great visionaries I’ve worked with can incorporate a commitment to iteration into their process. However, there are some important ground rules. As I wrote in Don’t Launch, it’s essential to remember that these early minimum viable product launches are not marketing launches. No press should be allowed. No vanity metrics should be looked at. If there are investors involved, they should be fully briefed on the expectation that these early efforts are designed to fail.
Again, even if they do "fail," it is improbable that they will fail in the way we originally expected. In fact, in all of the startups I have worked with, I have never seen this happen. There is always something unexpected when customers react to a product in the real world: we thought they’d be offended by low quality, but actually they refused to download it; we thought they’d share it with their friends, but actually they wanted us to provide the friends; we thought they’d care a lot about our beautiful design, but actually they wanted more features. As in any experiment, the important thing is not the bare fact that the hypothesis was invalidated. More important is to understand the reasons why. This is not an academic exercise; the goal of these experiments is to immediately get up off the mat and design the next one. And the next, and the next, until we have not just learned but proved our learning with hard facts: through the attainment of validated learning.
Minimum viable product is an attempt to get startups to simplify, but it is not itself simple. How do you know which features are essential and which should go? There is no formula; it requires judgment. Any scientific method requires the choice of a hypothesis to test. This leads to two questions:

  1. By what standard is this hypothesis to be chosen? Minimum viable product proposes a clear standard: the hypothesis that seems likely to lead to the maximum amount of validated learning.
  2. How do you train your judgment to get better over time? Again, the answer is derived from the hard-won wisdom of the scientific method: making specific, concrete predictions and then testing them with experiments is how scientists train their intuition toward the truth (see the sketch below).
(Fans of the history of science will recognize this as Thomas Kuhn’s theory of scientific paradigms. A minimum viable product is not a test of a single hypothesis; it is better understood as a product paradigm. As in science, the paradigms that survive will be those that allow practitioners to discover the most productive experiments to try, during the period Kuhn calls “normal science.” A paradigm crisis is analogous to a pivot.)
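To make the second question concrete, here is a minimal sketch, in Python, of what “specific, concrete predictions tested by experiments” might look like for a minimum viable product. The Experiment class, the 20% activation figure, and the tolerance threshold are illustrative assumptions of mine, not part of any prescribed method.

    # A hypothetical bookkeeping helper: write the prediction down before the
    # experiment runs, then compare it with what actually happened.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Experiment:
        hypothesis: str                   # what we believe customers will care about
        prediction: float                 # the concrete number we expect to see
        measured: Optional[float] = None  # filled in only after the MVP ships

        def validated(self, tolerance: float = 0.10) -> bool:
            """True if the measured result lands within tolerance of the prediction."""
            if self.measured is None:
                raise ValueError("run the experiment before judging it")
            return abs(self.measured - self.prediction) <= tolerance * self.prediction

    # Before launch: state the hypothesis and the number that would confirm it.
    exp = Experiment(
        hypothesis="Early adopters care more about speed than visual polish",
        prediction=0.20,  # e.g. 20% of visitors activate within a week
    )

    # After launch: record the real result and compare it to the prediction.
    exp.measured = 0.04
    print("validated" if exp.validated() else "invalidated: now ask why")

The point is not the code but the discipline it enforces: the prediction is written down before the launch, so the result can genuinely surprise you and force you to ask why.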
I told you it wasn’t simple. And this leads to a last criticism of minimum viable product that I hear from time to time: it’s just too complicated. Most people prefer simple, short, pithy startup advice. I remember this acutely from my debate with David Heinemeier Hansson, of 37signals fame. As I was explaining the MVP concept, I could see the look of horror on his face. His answer, to paraphrase, was something like this: “that’s way too complicated. Just build something awesome, something that you yourself would love, and ship it.”

Similar forms of this advice abound: “release early, release often,” “build something people want,” “just build it,” and so on. This Nike school of entrepreneurship is not entirely misguided. Compared to "not doing it," I think “just do it” is a superior alternative.
But the teams I meet in my travels are often one step beyond this. What do you do the day after you just did it? It really doesn’t matter whether you took a long time to build it right or just threw the first iteration over the wall. Unless you achieve instantaneous overnight success, you will be faced with difficult decisions. Pivot or persevere? Add features or remove them? Charge money or give it away for free? Freemium or subscription or advertising?
I won’t apologize for this aspect of the Lean Startup methodology. These are complicated questions. We are drawn to easy answers because we look at the landscape of successful companies with a biased lens. We see examples of startups who did things “our way” and were successful. Unfortunately, that’s true no matter which way we prefer. Even within the narrow field of giant tech companies, early products were wildly different. Compare eBay and Google, Apple and Sun, Oracle and Siebel. And, of course, there’s incredible selection bias. For every successful company we think we know that “built it right” or “shipped crap” from the start, there are plenty we’ve never heard of, because they followed that same strategy and promptly died. That’s the deep flaw in most startup advice: it argues from selective examples.
So what about the question of whether good enough really is? What’s needed, I believe, is an alternative discipline that teams can get excited about. When we’re talking about discipline, rigor in following our methodology, and continuous improvement, there is no such thing as good enough. Our pursuit of learning is ongoing and our commitment is absolute. But when it comes to the specifics of a product release, business plan, or marketing launch, all that matters is: do we have a strong hypothesis that will enable us to learn? If so, execute, iterate, and learn. We don’t need the best possible hypothesis. We don’t need the best possible plan. We need to get through the build-measure-learn feedback loop with maximum speed.

Over time, I believe we will build a new professional discipline that will seek excellence at this kind of product-centric learning. And then that new breed of managers will, I'm sure, confidently go around saying: good enough never is.
