Why and how to replicate AB testing experiments?

AB tests are about people’s reactions, not website elements

In Reading Virtual Minds, a book that reset my view of how a website works, Joseph Carrabis makes a great case for how each element of a website that a user views or interacts with shapes their behavior.

Every path a user takes is a story and that story is shaped by each word, image, call to action, form or even design element that the user interacts with. Changing any element on that path changes the story, and often the end of the story: the conversion.

Why is the concept of story so important? In every good story all elements are linked. They are not independent.

When we test a call to action, form, text or design element, we actually test how the context, the whole story, changes for the user.

When we plan to replicate an AB test, we should not simply test the same website elements, but rather understand how the winning variant changed the story for the user, and then try to obtain the same effect.

Different websites will implement the change differently, based on their specific context, but the impact on users should be the same. That impact is what can be replicated.

Replicate the story, not the independent variables

The Internet offers a huge number of AB testing case studies, some with winning scenarios that we can only dream of.

Let’s take this case study from Which Test Won: Straighterline – Form Elements – 2015 Honorable Mention Winner, and see how we might replicate it.

The winning variant got them 70% more submission clicks, but that is not what we should be after when trying to replicate the AB test. Baffled? Allow me to explain…

We always start by fitting the AB test or experiment into one of the following buckets:

  • Clarity (most UX tests fit here) – are people experiencing the website in the way we expect them to?
  • Trust/security – would adding more elements that imply trust or a more secure environment make a difference?
  • Emotions/empathy – is the human element critical to getting users to act?
  • Engagement level – what are users’ expectations of your website or app?

The above test could fit in the engagement bucket (people are interested in the proposition of the page more than in the details of how it works and how it can help them) or in the clarity bucket (people want to know how the page can help them, and the sooner they see the form, the clearer it is for them).

In order to know which bucket fits the scenario, we would need more details about how users behave on the page. I would look for the answer to this question:

  • Did people in both variants spend comparable amounts of time going through the rest of the content on the page?

Tools like Inspectlet or Crazy Egg can help a lot in answering such questions.

If the answer is yes, then we would fit the test under the ‘clarity’ bucket. If the answer is no, we would fit it under the ‘engagement’ bucket.
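Here is a minimal sketch of how that check could look, assuming you have exported per-visitor time on the page content (in seconds) for each variant from your analytics or session-recording tool. The sample data, variable names and the use of SciPy’s Welch’s t-test are illustrative assumptions on my part, not something from the original case study:

```python
# Minimal sketch (hypothetical data): decide between the 'clarity' and
# 'engagement' buckets by comparing time spent on the rest of the page.
from statistics import mean

from scipy import stats

# Per-visitor time on the page content (in seconds), exported from an
# analytics or session-recording tool. These arrays are placeholders.
control_seconds = [42, 55, 38, 61, 47, 50, 44, 58]
variant_seconds = [40, 57, 36, 63, 45, 52, 41, 60]

# Welch's t-test: did the variant change how long people read the content?
t_stat, p_value = stats.ttest_ind(control_seconds, variant_seconds, equal_var=False)

print(f"control mean: {mean(control_seconds):.1f}s, "
      f"variant mean: {mean(variant_seconds):.1f}s, p-value: {p_value:.2f}")

if p_value > 0.05:
    # Comparable reading time in both variants -> the form's position
    # changed clarity, not interest in the content.
    print("bucket: clarity")
else:
    # Reading time changed -> the variant changed how engaged people
    # are with the rest of the page.
    print("bucket: engagement")
```

In practice you would feed in the full exported samples for each variant; the point is simply to quantify whether ‘comparable time on content’ holds before picking a bucket.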

Moving the form from the footer to the header is not the way to replicate the success of Straighterline.

To replicate this experiment, we look at a page of our own that we believe needs improvement and come up with variants that offer the same level of clarity, or the same level of engagement, as the case study from Which Test Won.

We might end up with a form in the sidebar or even change the form fields or form title in order to replicate the success of Straighterline.

Focus on the conclusion, not just on the winner

In the above case study, the hypothesis is stated as: “Combining the form request and moving it to the top of the page would increase submission clicks.”

What is missing from this case study is the why.

Straighterline had great results in increasing the number of form submissions, but knowing why would allow us to deepen our understanding of the audience.

Each website is a story, and simply moving the form from the footer to the header changes how the user perceives that story. Learning how and why this happens allows us to draw conclusions about our audience, conclusions like:

  • It is not clear to most of our users what exactly we are offering, or
  • Our users know exactly what we offer and expect us to meet their expectations in a straightforward manner.

In this example, we have two completely opposite conclusions that could be drawn from the same AB test results.

Conclusions allow us to make statements about our users (remember, we test user reactions, not website elements), which in turn allow us to understand them better.

When trying to replicate an AB test, be it your own or one from a well-documented case study, the goal is not to get the same winner or the same results. The goal is to validate the initial conclusion, one that should hold even after repeated tests.

The winning variant got them 70% more submission clicks, and we said that’s not what we should be after when replicating. What we should be after is confirming that whichever of the above conclusions is true remains true, no matter how many times, and in how many shapes, the AB test is replicated.

Replicating is not repeating

Replicating AB tests is actually an amazing method for getting to know your users, especially when you start AB testing on different segments of traffic.

The easiest experiments to replicate are your own. Take the last AB test you successfully closed, see what it says about your users, and run a different experiment that tests the same hypothesis.

The beauty of it is that you have access to all the web analytics data, which allows you to fit the experiment into the right bucket and draw initial conclusions about your users.

Take that initial conclusion and see what other AB tests would validate it. This is what ‘always be testing’ means for us: challenging the conclusion over and over again.

If you are out of ideas, try replicating winning AB tests run by others. No, don’t re-create the exact AB test they implemented. It might give you a winner, but that pales in comparison with the knowledge you gain by trying to understand why it won.

From Which Test Won to the Unbounce blog, from Visual Website Optimizer case studies to our own The Experiment, you’ll find a huge number of AB testing ideas.

When you see an idea that you think might improve the story your website tells your audience, look for clues as to why a variant won and what it says about the users. Don’t just repeat the case study; replicate how the users experienced the AB test.

The bottom line is that replicating experiments is actually one of the best methods available to validate what we know about our users, so pick one and make a start.

Which AB test will you replicate first today? And what questions do you have before making a start? Let’s talk about it in the comments.

Looking for deep insights into how your customers use your product?

InnerTrends can help. You won’t have to be a data scientist to discover the best growth opportunities for your business; our software will take care of that for you.

Schedule a Demo with us and witness with your own eyes just how powerful InnerTrends can be.
