Last year we started an experiment. We promised 7 SaaS companies that we would help them make data-driven decisions by answering every question they had about their user behavior and product usage.
I led a small team consisting of a web analyst, a mathematician and a data processing programmer.
We had regular calls with the companies in which we simply asked: “What do you need to know about your users so you can do your job better?”
The following are the lessons we’ve learned after more than one year of running this experiment.
When it comes to data, the opposite of “the right question” is not the “wrong question”. It’s an “ambiguous question”.
Let’s take an example. This is a question we often hear: “What’s my best traffic source?”
What does “best traffic source” mean? The question is ambiguous because we don’t know what to look for in the data to get a clear answer.
A quick but often superficial approach is to look for the highest conversion rate. That usually comes from sources that generate small amounts of traffic, and there is little you can do to increase their volume. Not really actionable.
Besides, a high conversion rate for creating accounts does not mean that many of those users actually end up using your product.
So, whenever we hear an ambiguous question, we reply with another question, and in the end we find out what is truly needed. In this case, it was: “How should I split my marketing budget between my traffic sources in order to generate the most customers in the short term?”
We had a defined goal and we knew what we needed to look for: traffic sources that could bring the most activations at the highest conversion rate.
We needed to calculate a rank for each traffic source based on these criteria so we had a way of comparing them. We then used that comparison to decide how to split the marketing budget.
We looked at all the variables that could affect the result:
- How much traffic can each source generate?
- What was the conversion rate to generate leads?
- What was the activation rate of the leads?
Taking all these variables into account we came up with an algorithm for calculating the traffic sources that gave the best chance to generate new customers.
In the end we delivered the report the customer needed: a ranking of traffic sources they could use to decide how to invest their marketing budget based on results.
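The article doesn’t give the actual scoring formula, so here is a minimal, hypothetical sketch of the idea: rank each source by its expected number of activated users, multiplying the three variables listed above. All source names and numbers are invented for illustration.

```python
# Hypothetical sketch: rank traffic sources by expected activations.
# The real algorithm isn't described in the article; this simply
# multiplies traffic volume, lead conversion rate and activation rate.

def expected_activations(visits, lead_rate, activation_rate):
    """Estimated number of activated users a source would produce."""
    return visits * lead_rate * activation_rate

# Invented example data: monthly visits and the two rates per source.
sources = {
    "organic":  {"visits": 20000, "lead_rate": 0.02, "activation_rate": 0.40},
    "adwords":  {"visits": 8000,  "lead_rate": 0.05, "activation_rate": 0.30},
    "referral": {"visits": 1500,  "lead_rate": 0.10, "activation_rate": 0.60},
}

# Sort sources from best to worst by expected activations.
ranked = sorted(
    sources.items(),
    key=lambda kv: expected_activations(**kv[1]),
    reverse=True,
)

for name, stats in ranked:
    print(name, round(expected_activations(**stats), 1))
```

A budget split could then be made proportional to each source’s score, which is one simple way to turn the ranking into an investment decision.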
Grabbing the low-hanging fruit
Data is only useful when you can apply the lessons learned and get a positive outcome. The best result we’ve seen so far was a 40% increase in activated users from the paid campaigns.
In almost every scenario we would start with an ambiguous question but always manage to get the “right” questions asked in the end. Here are just a few examples we worked with:
- What is the impact of my emails on converting free users to paid customers?
- What is the customer lifetime value per traffic channel?
- What is the influence of live chat on user upgrades?
- How long do people spend on my website before creating an account?
- How long does it take for onboarded users to become customers?
- What is the difference in activity between users who return to my product and those who churn?
Often, the insights we present to customers are not what they expect. Sometimes they are taken by complete surprise.
The natural and most common reaction we got was a skeptical one.
We love skeptics because they always challenge us and try to look for ways of disproving the results.
That meant that we always needed to make it possible to validate the results. This usually took up most of our time, but it had to be done if we wanted to deliver actions triggered by our insights.
The best way to validate the data was to make it easy to manually verify every insight.
So, if the insights said that 90 out of 100 users did a specific action after finishing the onboarding and then churned, we let the customers dive into the behavior of those 90 users so they could spot-check it at random.
This checking and double checking of the data often uncovered user behavior scenarios that no one thought about but that had a significant impact. It also uncovered issues that skewed the analysis and insights. In such cases we would fix them and then redo everything.
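As an illustration of that spot-checking workflow (the article doesn’t describe the actual tooling), drawing a reproducible random sample of the flagged users for manual review might look like this. The user IDs and sample size are invented.

```python
import random

# Hypothetical sketch: pick a random sample of the flagged users so the
# customer can manually verify the reported behavior. IDs are invented.
flagged_users = [f"user_{i}" for i in range(90)]  # the 90 users from the insight

def sample_for_review(users, k=10, seed=42):
    """Return a reproducible random sample of users to spot-check."""
    rng = random.Random(seed)  # fixed seed so reviewers see the same batch
    return rng.sample(users, k)

review_batch = sample_for_review(flagged_users)
print(review_batch)
```

Fixing the seed means everyone reviewing the insight looks at the same batch, which makes the verification itself repeatable.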
As analysts, we know that we don’t need 100% data accuracy to generate insights.
We just need precise data. CEOs, product managers and customer success people don’t work like that. If they find the slightest inaccuracy, they immediately question the validity of the whole analysis.
We ended up investing a lot of time in getting things as close as possible to 100% accuracy, just to remove this blocker from the decision making.
While all 7 companies were highly enthusiastic when we started working together, 2 of them ended up spending less and less time with us.
Those 2 companies were the ones with no internal champion who had a clear stake in the outcome.
While getting insights on how to do their jobs better sounded really good in theory, in reality they were so busy with day-to-day work that they didn’t even have time to think about what might help them do their jobs better.
In both cases, it was only when a big project or redesign was ready to launch that they would get in touch with us, ask questions and set clear tracking goals.
The other 5 companies, though, had a leading person (a product or marketing manager) who always needed to learn something more about their users.
In time, the questions became more and more specific, which made them easier to answer and the answers more actionable for the customers.
Every company dreams of small changes in their product or marketing program that would have a big impact on the bottom line.
We have actually managed to witness such moments twice so far. They were triggered by these questions:
- What is the difference in onboarding rates between organic and paid traffic?
- What are the actions that users do between the first and the second onboarding steps?
In the first case, the company iterated on their AdWords campaigns for 2 weeks until they managed to double the onboarding rate of paying users.
In the second case, they removed a button from the onboarding process that had a strong negative impact on users finishing it. That increased the overall onboarding rate by 20%.
The biggest change we’ve seen in the companies we worked with was that, once armed with data-driven insights, people would stand up firmly against the opinions of their colleagues or even superiors.
Once we even received an “urgent call”: a manager wanted to make a major change to their product without backing it up with data, namely removing a feature from the free tier.
I was asked to do a quick impact analysis of removing that feature by answering:
- What is the influence of that feature on the retention of free user accounts?
- How long does it take free users to upgrade after activating that feature?
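To give a sense of what such a quick impact analysis could look like (the actual analysis isn’t shown in the article), here is a hypothetical sketch that compares retention between free accounts that used the feature and those that didn’t. All records are invented.

```python
from statistics import mean

# Hypothetical sketch of the impact analysis: compare retention between
# free accounts that activated the feature and those that did not.
# Every record below is invented for illustration.
accounts = [
    {"used_feature": True,  "retained_90d": True},
    {"used_feature": True,  "retained_90d": True},
    {"used_feature": True,  "retained_90d": False},
    {"used_feature": False, "retained_90d": True},
    {"used_feature": False, "retained_90d": False},
    {"used_feature": False, "retained_90d": False},
]

def retention_rate(rows):
    """Share of accounts still retained after 90 days."""
    return mean(1.0 if r["retained_90d"] else 0.0 for r in rows)

with_feature = [a for a in accounts if a["used_feature"]]
without_feature = [a for a in accounts if not a["used_feature"]]

print("with feature:", retention_rate(with_feature))
print("without feature:", retention_rate(without_feature))
```

If the gap between the two rates is large, removing the feature is clearly risky; that is the kind of evidence the manager was asked to wait for.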
Once companies find a way to get answers for every question, it’s hard to stop them.
One of our concerns was that we would start to receive analysis requests just for the sake of it.
It didn’t happen. They had such tight deadlines and roadmaps that every request they had was truly meant to make the most out of every implementation or improvement.
Every single insight we delivered came from data the companies already had access to.
What they lacked was the data science expertise to process it so that they could use it to their advantage.
Our experiment was meant to explore what happened when those resources became available to different types of businesses.
We’ve learned three major lessons:
- If people don’t trust data, they don’t use it.
- Asking the right questions is just as hard as getting the answers.
- Data-driven decisions are highly rewarding, especially for the business bottom line.
It’s your turn now. Tell us your story on your road from data to decision.