The Four Stages of Using Data to Manage your Startup
DATA DATA DATA! There is such emphasis on data these days that sometimes, especially with founders or product leaders who come from larger companies, it can feel like people are tracking data for the sake of having imposing dashboards rather than to help them make better decisions. Data is a tool, but how you use it must evolve as your company does — over-reliance will inevitably hurt a product, but blissful ignorance will too.
To me, there are four stages in this evolution:
Stage 1: First principles >> data.
In the first stage of a company, you can’t use data to figure out what to build. In this pre-data world, you must first begin the wandering search of trying to build a product that users want (or better yet, feel like they need).
If you are successful and have something starting to work (Level one in my Hierarchy of Engagement), the surface area of your product and user base is still too small. If you relied on data to figure out what to ship next, you’d end up optimizing around a small local maximum. What you need to do instead is use first principles and user research to find the step-functions in the product experience that let you continue to grow your user base beyond that intense early-adopter, power-user group.
In my experience, for most product founders at these early stages, the next 6–12 months of roadmap is obvious. There is still so much building to do to go from that early MVP that is working to the v1 of your vision. But just before it stops being obvious is when you need to start building out your data infrastructure and thinking about Stage 2.
Stage 2: Deciding what to measure.
The second stage happens when, for the first time, you have the ability to measure engagement with your product, enough users to reach statistical significance quickly, and the ability to run experiments with what is probably a rudimentary experiment framework. The question then is: what matters?
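To make “statistical significance quickly enough” concrete, here is a minimal sketch of the kind of check a rudimentary experiment framework performs — a two-proportion z-test comparing a core-action conversion rate between control and treatment. The function name and the numbers are hypothetical, purely for illustration; any real framework would layer much more on top of this.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in conversion rates (illustrative)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 4.8% vs 5.3% conversion on 10k users per arm
z, p = two_proportion_z(480, 10_000, 530, 10_000)
```

Note what this toy example illustrates: even a half-point lift on 10,000 users per arm may not clear conventional significance thresholds — which is exactly why “enough users to reach significance quickly” is a gating condition for this stage.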
As I’ve written about before, what you measure matters. People often say that what you measure improves. While true, that overlooks how strategic the decision of what to measure is. If you get stuck measuring the wrong thing, you could end up wasting a lot of time on the wrong initiatives.
In the early days of Pinterest, we were measuring everything — DAUs, WAUs, MAUs, visits, saves (aka “repins”), clickthroughs, follows, likes, time on site, sign-ups, etc. Basically, anything that Google Analytics made easy to track, we tracked. That was the right first step, but we didn’t know what the right trade-offs were between all those actions — if a new feature drove up people saving pins, but decreased follows and likes, was that a success or a failure? Or if MAUs are going up but pins are staying flat, is that okay? Would you even notice? We had to work to get past the noise of all the default metrics in our analytics tool and figure out what mattered, and once we did, it was incredibly clarifying.
So stage 2 is about being super intentional about what is important to measure — forcing yourself to have the intellectual rigor and honesty to let go of any vanity metrics and really distill what the core action is for your product, and making sure you have an experiment dashboard that lets you understand how different segments of your users (i.e., new users, core users, casual users, dormant users) perform. Only then will you be able to truly make the right trade-offs with data. (And by the way, you may find yourself coming back to this stage again and again until you really figure out what matters.)
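The segment breakdown described above can be sketched in a few lines: given an event log, compute the core-action rate for each user segment so an experiment dashboard can show how new, core, casual, and dormant users perform separately. The event shape, segment labels, and numbers here are hypothetical, chosen only to illustrate the idea.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, segment, did_core_action)
events = [
    ("u1", "new", True), ("u2", "new", False),
    ("u3", "core", True), ("u4", "core", True),
    ("u5", "casual", False), ("u6", "dormant", False),
]

def core_action_rate_by_segment(events):
    """Rate of the core action per user segment (illustrative)."""
    totals, actions = defaultdict(int), defaultdict(int)
    for _user_id, segment, did_action in events:
        totals[segment] += 1
        actions[segment] += did_action  # True counts as 1, False as 0
    return {seg: actions[seg] / totals[seg] for seg in totals}

rates = core_action_rate_by_segment(events)
```

The point of the design is the breakdown itself: a blended average would hide that an experiment helping core users might be hurting new users, which is exactly the trade-off this stage is meant to surface.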
Stage 3: Data becomes a necessary BUT not sufficient tool.
Now that you have clarity on what matters, you can begin to make decisions using data. This is a new muscle to build, and if you want to continue to expand beyond your initial early adopters, it’s a critical one.
Most often, when you as the founder first build your product, you either understand your target customer deeply, or you are the target customer. You’re building something for a pain point you know firsthand. That’s what lets you find the red-hot need of your early adopters. But as your user base grows, an important and stealthy thing starts to happen: your understanding of your target customer, as the founder and/or early product leader, starts to dilute.
The inevitability of growth is that your product expands beyond that early adopter power user you understand deeply (and who likely represents a narrow demographic) to a broader, eventually more casual user. If you keep building for your early adopter power user, you’ll make growth harder: your product becomes more intimidating for each incremental new user, making it harder for them to convert to being an engaged core user. To continue to grow into new user groups, you need data to validate your product hypotheses. You may even need to ignore the feedback your power users are giving you (lesson #3), and just focus on how your new users are performing. Data + user research is the killer combination, and the only way to build for a user that isn’t you.
It’s this stage when companies really learn how to use data, and a new generation of tools like Amplitude (a Benchmark portfolio company), Looker, Optimizely (another Benchmark portfolio company), and others are ready for you when you are.
- I say necessary but not sufficient, because there are things that you just can’t measure with an experiment dashboard — consumer trust, how consumers internalize your brand, how complex or heavy your product feels to a new user. A single experiment or new feature rarely moves the needle on these critical assets by itself, but the cumulative effect of a bunch of experiments or features can erode them over time if you’re not vigilant. e.g., https://twitter.com/chrismessina/status/1085995973753487360. So don’t forget your first principles, user research, and of course your own judgment. You always need to take a holistic view of your product, not just the experiment you are measuring.
- There are some decisions (e.g., big strategic initiatives) where you need to be prepared to see the data, acknowledge it, and then discard it. Remember: Data is not a substitute for judgment or strategy.
Stage 4: System impact.
If you are successful… and I’m talking really successful here… there comes a time when you have to stop optimizing for a metric, and start considering the impact experiments or new features have on your product’s overall system, or, if you are REALLY big, the system outside your system. For example, the effect your relevance ranking has on democracy I mean diversity of content sources (see what I did there?). What you are looking for is how the changes you make are affecting not only the core metric the company is built around, but also the health of your system.
When you transition from a mature company that is already in Stage 3 or 4 to an earlier-stage company, the things you’ve taken for granted are often a surprise (e.g., all the tools and infrastructure an engineer at Google has). One of the important things you miss is the process of truth-seeking that went into deciding the core metrics of the business. So don’t skip Stages 1 or 2, and give yourself permission to ignore the data to find that global maximum.