Minimum Viable Test: The Framework We Use To Kill It With Our A/B Tests

We recently had the opportunity to chat with Dennis van der Heijden, the CEO of Convert, about research they did on their user base.

As a part of their research, they found that only 1 in 7 experiments conducted in-house has a winner while, for experiments run by a marketing agency, that figure increases to 1 in 3.

We did some meta-research 2 years ago and the numbers were interesting:

  • 1/7 tests have a winner for direct clients (people running e-commerce or lead gen sites)
  • 1/3 tests have a winner for agency clients (people optimizing as a business)

When we tried to explain this anecdotally, a survey showed that agencies spend only 5% of their optimization time testing and 95% on analysis, surveys, UX, mockups and hypotheses before going to testing. So we think there is a correlation between these two things.

Agencies and professional teams run around 1 test per 300,000 unique visitors a month (so for a site with 1M visitors this is around 3) when they are on an ongoing contract or part of an in-house conversion team.

Direct clients begin the first months with around 1 test per 100,000 unique visitors and then lower that over time (my guess is this also correlates with the poor results they get).

Dennis van der Heijden

CEO of Convert

That means that as many as six out of every seven tests run are a complete waste of time.

Each experiment takes a lot of development and design time, so if you’re having a hard time getting those departments to prioritize A/B tests, this is probably why.

Minimum Viable Test

It appears that, on average, companies get a winning experiment only about once every other month, although this will depend on how much traffic your website gets.

When you’re trying to focus on growing a company, that’s a long time to wait!

What if there was a faster (and better!) way to get solid results from your A/B testing? Cue drum roll: We give you “Minimum Viable Tests”.

Here at InnerTrends we’ve been playing with Minimum Viable Tests as a concept for quite some time, but it wasn’t until a 500 Startups conference that we heard Brian Balfour use this term.

Later, in a podcast, he described it as being the most efficient but valid way to get data around a hypothesis.

It all starts with the hypothesis

In a regular A/B test, someone comes up with a hypothesis. For example: “Our users don’t understand our pricing.”

The web analytics team digs into the data to pull every metric they have around this hypothesis.

In our example they would be looking for data about how people are currently interacting with the pricing page. This is labeled as the control version.

The UX team looks at this and comes up with one or more mockup variants based on what they feel will help users understand the pricing better and act on it.

Once the mockups are agreed upon by everyone involved, they are sent to the development team to implement ahead of running an A/B test to validate the hypothesis and declare a winner.

The process takes a long time (sometimes even months) and, as the research from the team at Convert shows, all too often it does not give a winner. We wondered why and how this can be fixed.

By taking a step back and reviewing every stage of the process we noticed a problem:

The variants are based on guesses, not data.

Data is available only for the control version or the current state of the page or app that people interact with.

Data tells us that there is a problem and while it might even tell us where the problem is, it won’t tell us if we guessed the right solution to the problem.

Instead, we are forced to try and guess which variants will work.

Educated guesses are a great start but, using the regular methodology, those guesses are never validated ahead of investing lots of resources building the final versions.

While the UX team will know what does not work, there can be many options around what might work.

The final options are often picked randomly or, at best, based on past experiences.

Here is where Minimum Viable Tests come into play. Before sending anything to development, answer the following question:

What is the least we can do to validate the variants we are considering?

Often these tests require only a simple change, such as tweaking the copy in a heading or making a minor design adjustment.

The goal for these tests is not necessarily to have a winner, but rather to gather the data you need to build the final A/B test.

How do you run a Minimum Viable Test?

When we run Minimum Viable Tests we look at the following requirements:

  • Minimum involvement of other departments in the process.
  • Track as many metrics related to the tests as possible. We want to learn, not to pick a winner.
  • Make sure everyone in the company knows these tests are running.

In our case, these minimum viable tests often mean applying different hacks to get to a result as soon as possible.

We use the following framework:

1. Google Tag Manager serves the tests

We write the whole test code directly in Google Tag Manager. This way we don’t involve the development team at all in this process.

We let Google Tag Manager split the traffic into 2, 3 or as many variants as we have and we make sure, with the help of cookies, that a user always sees the same variant, just like on any other A/B testing platform.
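
As a rough sketch, a Custom HTML tag along these lines can handle that split; the cookie name, variant names and data layer event below are placeholders for illustration, not a prescribed setup:

    <script>
      (function () {
        // Placeholder names: pick a cookie name, variant list and event key that fit your own test.
        var COOKIE = 'mvt_pricing_variant';
        var VARIANTS = ['control', 'variant-a', 'variant-b'];

        // Reuse the stored variant so a returning visitor always sees the same one.
        var match = document.cookie.match(new RegExp('(?:^|; )' + COOKIE + '=([^;]*)'));
        var variant = match && match[1];

        if (!variant) {
          // Split traffic evenly across the variants and remember the choice for 30 days.
          variant = VARIANTS[Math.floor(Math.random() * VARIANTS.length)];
          document.cookie = COOKIE + '=' + variant + '; path=/; max-age=' + 60 * 60 * 24 * 30;
        }

        // Expose the assignment to the rest of the container (and to analytics) via the data layer.
        window.dataLayer = window.dataLayer || [];
        window.dataLayer.push({ event: 'mvt_variant_assigned', mvtVariant: variant });
      })();
    </script>

The cookie keeps the assignment sticky between visits, and the data layer push lets the variant-specific tags fire only for the users who should see them.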

2. Manipulate the page with JavaScript

Because these tests require only minimal changes to the page, it is quite easy to make the changes directly with JavaScript, or even more easily with a library like jQuery.

Need to change a heading, the size or position of a button or a piece of text? It usually requires just a few lines of code.

Even if you don’t have the skills to do this yourself and need a developer to help you, these changes take only a matter of minutes, compared to creating a new version of the page and making changes directly in the app, which takes much longer.
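
To give an idea of the scale involved, here is a hedged sketch of one such variant tag written in plain JavaScript; the selectors and copy are invented for illustration and would differ for your own test:

    <script>
      (function () {
        // Read the variant assigned by the splitting tag (same placeholder cookie name as above).
        var match = document.cookie.match(/(?:^|; )mvt_pricing_variant=([^;]*)/);
        var variant = match ? match[1] : 'control';

        document.addEventListener('DOMContentLoaded', function () {
          // Placeholder selectors: point these at the elements your hypothesis is about.
          var heading = document.querySelector('#pricing h1');
          var cta = document.querySelector('#pricing .cta-button');
          if (!heading || !cta) return;

          if (variant === 'variant-a') {
            heading.textContent = 'Simple pricing that grows with you';
          } else if (variant === 'variant-b') {
            heading.textContent = 'Pay only for what you use';
            cta.style.fontSize = '1.25em'; // make the call to action more prominent
          }
          // The control group (or anyone without a cookie) sees the page unchanged.
        });
      })();
    </script>

If the tag fires after the page has already loaded (for example on a DOM Ready trigger in Google Tag Manager), you can run the same changes directly instead of waiting for the DOMContentLoaded event.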

3. Interaction data is tracked in web analytics tools

Every time a user interacts with any of the Minimum Viable Test variants, we log every interaction in InnerTrends so we can analyse the data and build the reports we need to support the final test.

If you need to correlate the test data with traffic sources or AdWords campaigns, pushing the data to Google Analytics will also help, although it won’t allow you to analyse individual user interactions with the test.
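
We won’t reproduce the InnerTrends tracking snippet here, but as an illustrative sketch, a small listener that pushes every interaction into the data layer, for a Google Tag Manager event tag to forward to whichever analytics tool you use, might look like this; the selector and event names are again placeholders:

    <script>
      (function () {
        // Read the assigned variant so every logged interaction can be split by it in reports.
        var match = document.cookie.match(/(?:^|; )mvt_pricing_variant=([^;]*)/);
        var variant = match ? match[1] : 'control';
        window.dataLayer = window.dataLayer || [];

        document.addEventListener('click', function (e) {
          // Placeholder selector: only clicks inside the test area are logged.
          var target = e.target.closest ? e.target.closest('#pricing a, #pricing button') : null;
          if (!target) return;

          window.dataLayer.push({
            event: 'mvt_interaction',
            mvtVariant: variant,
            mvtElement: (target.textContent || '').trim().slice(0, 50) // which element was clicked
          });
        });
      })();
    </script>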

Minimum Viable Tests Rock!

In the words of Brian Balfour, “startups MUST do Minimum Viable Tests today.”

The learning process with Minimum Viable Tests is much shorter. Instead of wasting time and money only to be left with a stalemate, you’ll validate your educated guesses before choosing to invest resources in an experiment that will give you a winner.

Have you used Minimum Viable Tests as part of your A/B testing arsenal? And, if not, what questions do you have about getting started?

Drop a note in the comments and we can figure it out together.

Looking for deep insights into how your customers use your product?

InnerTrends can help. You won’t have to be a data scientist to discover the best growth opportunities for your business; our software will take care of that for you.

Schedule a Demo with us and witness with your own eyes just how powerful InnerTrends can be.
