Startups require a constant balancing act between execution and learning
Some people, when they start to realize the power of using data to inform their decisions, become obsessed with optimization. I think this idea is particularly appealing to those of us from an engineering background. By reducing the decisions we have to make to a series of quantitative questions, we can avoid a lot of real-life messiness.
Unfortunately, most decisions that confront startups lack a definitive right answer. Sometimes an early negative result from an experiment is a harbinger of doom for that product, and means it should be abandoned. Other times, it’s just an indicator that further iteration is needed. The only way to get good at these decisions is to practice making them, pay attention to what happens, compare it to what you thought would happen, and learn, learn, learn.
This has given rise to another school of thought, one that sees quantitative analysis, models, and anything involving spreadsheets as inherently anti-innovative and, therefore, anti-startup. But this is wrong, too. Spreadsheets, and predictive modeling in particular, have an important role to play in startups. It’s just very different from the role they play in other contexts.
Let’s first take a look at what happens when spreadsheets go horribly wrong. For a change of pace, I’ll take an example from a startup inside a large enterprise. Imagine a general manager who has read The Innovator’s Dilemma and related books, and is therefore trying hard to help her organization make a transition to a new product category via disruptive innovation. She knows the internal politics are tricky, but she’s navigated them well. She has a separate team, with its own culture and office, and a mandate straight from top management to innovate without regard to the company’s historic products, channels, or supply chain. So far, so good.
Still, this manager is going to spend the company’s money, and needs to be held accountable. So somebody from the CFO’s organization prepares an ROI-justification spreadsheet for this new team. Because this is a new skunkworks-type project, everyone involved is savvy enough to understand that the initial ROI is likely to be low, much lower than projects that are powered by sustaining innovation. And so the spreadsheet is built with conservative assumptions, including a final revenue target.
Everything that’s happened so far seems reasonable. And yet we’re now headed for trouble. No matter how low we make the revenue projections for this new product, it’s extremely unlikely that they are achievable. That’s because the model is based on assumptions about customers that are totally unproven. If we already knew who the customer was, how they would behave, how much they would pay, and how to reach them, this wouldn’t be a disruptive innovation. When the project winds up getting canceled for failing to meet its ROI justification, it’s natural for the entrepreneur to feel that the CFO – and that innovation-sucking spreadsheet – was the real cause.
And yet, it’s not really fair to ask that the company’s money be spent without anyone bothering to build a financial model that can be used to judge success. Certainly venture-backed startups don’t have this luxury – every business plan has a model in it. Just because entrepreneurs tend to forget about these models doesn’t mean their investors do. Companies that reliably fail to make their forecasted numbers are exceptionally prone to “management retooling.”
I think the problem with this approach is not the presence of the spreadsheet, but how it’s used. In a startup context, numbers like gross revenue are actually vanity metrics, not actionable metrics. It’s entirely possible for the startup to be a massive success without having large aggregate numbers, because the startup has succeeded in finding a passionate, but small, early adopter base whose per-customer behavior is tremendous. Similarly, it’s easy to generate large aggregate numbers by simply falling back to non-disruptive or non-sustainable tactics (see Validated learning about customers for one example). And in a corporate context, a result in which the startup proves that a particular innovation is non-viable is actually very valuable learning.
The challenge is to find a way to use spreadsheets that can reward all of these positive outcomes, while still holding the team accountable if they fail to deliver. In other words, we want to use the spreadsheet to quantify our progress using the most important unit: validated learning about customers.
The solution is to change our focus from outputs to inputs. One way to conceive of our goal in an early-stage venture is to incrementally “fill in the blanks” for the business model that we think will one day power our startup. For example, say that your business model calls for a 4% conversion rate – as ours did initially at IMVU.
After a few months of early beta at IMVU, we discovered that our actual conversion rate was about 0.4%. That’s not too surprising, because our product was pretty bad in those days. But after a few more iterations, it became clear that improvements in the product were going to drive the conversion rate up – but probably not by a factor of 10. As the product got better, we could see the rate getting closer and closer to the mythical “one percent rule.” Even that early, it became clear that 4% was not an achievable goal. Luckily, we also discovered that certain other metrics, like lifetime value (LTV) and cost per acquisition (CPA), were much better than we initially projected. Running the revised business model with these new numbers was great news – we still had a shot at a viable business.
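To make the “revised business model” idea concrete, here is a minimal sketch of a business model expressed as code rather than a spreadsheet. All the numbers are illustrative placeholders (not IMVU’s actual figures), and the `monthly_profit` function is a deliberately simplified unit-economics formula: customers acquired times the gap between lifetime value and acquisition cost.

```python
def monthly_profit(visitors, conversion_rate, ltv, cpa):
    """Profit from one month's cohort of visitors:
    customers acquired * (lifetime value - cost per acquisition)."""
    customers = visitors * conversion_rate
    return customers * (ltv - cpa)

# The original plan: 4% conversion with modest per-customer economics.
plan = monthly_profit(visitors=10_000, conversion_rate=0.04, ltv=25.0, cpa=20.0)

# What early data might show: ~0.4% conversion, but better LTV and CPA.
actual = monthly_profit(visitors=10_000, conversion_rate=0.004, ltv=60.0, cpa=15.0)

print(plan)    # 2000.0
print(actual)  # 1800.0
```

The point of the exercise: with a 10x shortfall in conversion but better-than-expected LTV and CPA, the revised model lands close to the original plan – the inputs changed, yet the business can still be viable. That is what “filling in the blanks” looks like in practice.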
That’s hardly the end of the story, since there is still a long way to go between validating a business model in micro-scale and actually building a mainstream business. But proving your assumptions with early adopters is an essential first step. It provides a baseline against which you can start to assess your long-term assumptions. If it costs $0.10 to acquire an early adopter, how much should it cost to acquire a mainstream customer? $0.50? $1.00? Maybe. But $10.00? Unlikely.
Think back to the conflict between our Innovator’s Dilemma general manager and her nemesis, the CFO. The resolution I am suggesting is that they jointly conceive of their project as filling in the missing parts of the spreadsheet, replacing assumptions and guesses with facts and informed hypotheses. As the model becomes clear, then – and only then – does it make sense to start trying to set milestones in terms of overall revenue. And as long as the startup is in learning and discovery mode – which means at least until the manager is ready to study Crossing the Chasm – these milestones will always have to be hybrids, with some validation components and some gross revenue components.
This model of joint accountability is at the heart of the lean startup, and is just as applicable to venture-backed, bootstrapped, and enterprise startups. As with most startup practices, it requires us to do a constant balancing act between execution and learning – both of which require tremendous discipline. The payoff is worth the effort.