By Oliver Walker

A/B testing sounds easy and fun, but you shouldn’t treat it like a game (it is fun though!). To make it as effective as possible you need to start learning and stop guessing your way through the CRO process. I’ll talk through some problems with simplistic approaches to A/B testing and offer solutions to help you learn more and genuinely improve your website.

Problem 1: Finger in the air testing

Compared to PPC (which requires technology, time and knowledge), Programmatic (which requires a ton of time and, as ever, knowledge) and Google Tag Manager / Analytics set-up (which requires a ton of knowledge as well as time), Conversion Rate Optimisation or A/B testing seems pretty simple. A lot of companies are likely running processes that go something like this:

  1. Look at a page on their site
  2. Think of what might be interesting to change – for example, a green button instead of a red one
  3. Set the test live
  4. Woop! Green wins, I’m awesome at A/B testing!!!

As rudimentary as this sounds, I think you (and I!) would be surprised at how many organisations and people adopt this sort of approach, even if in reality it’s more “I think we should do this…” based on someone’s gut instinct. So what’s the problem with this approach? Well, first of all, the chances of getting a winner aren’t great. There are loads of colours in the world, so knowing that green beats red is interesting, but what next? Does blue beat green? How about cyan or (my favourite) turquoise? Running a test for every colour of button is not a great runway for ongoing success. The second problem is that the learnings are limited. What can you do with the information that green buttons beat red ones? You could adopt green buttons all over the site, but even then, there might have been something specific about the test you ran which meant that in that instance green won (e.g. the contrast with the surrounding content). It’s not much more widely adoptable than that.

Problem 2: Throwing paint at the wall testing

Similar to the first problem, this is a step in the right direction followed by another step back in the wrong one. Here, even good test ideas can be corrupted. Let’s say you’ve done a smidge of user testing and one of the pieces of feedback was that users don’t like being forced into the checkout process early by an array of ‘Buy Now’ calls to action. Great – real user insight, this is a perfect first step! However, it’s corrupted by someone following this process:

  1. Look at a page on their site
  2. Awesome, we’ll change the text on the buttons to say “Find out about our services” instead of “Buy Now”
  3. …but whilst I’m here, I’m also going to test a different image on that hero banner…
  4. …and I just can’t help shake the feeling that a green button would be better than red!
  5. Set the test live
  6. Woop! Variation is a winner, I knew I was the boss of A/B testing!

This approach of throwing a few different elements at a test because you think “they’ll make the page better” is fundamentally flawed. Now we know that the variation has won – but what caused this version of the page to be more successful? You don’t know whether the call-to-action change, the image change or the colour of the button made any difference. So, whilst it’s great that you have a winner, again, your next steps and learnings are very limited. An MVT (multivariate) test would be a great solution here (even if a couple of the original ideas are highly spurious!) – there’s a rough sketch of the idea below.
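
To make the MVT suggestion concrete, here’s a minimal sketch (in Python, with hypothetical element names) of the full-factorial grid those three changes would produce. In practice you’d build this in your testing tool rather than in code, but it shows why MVT lets you measure each element separately instead of lumping them into one variation.

```python
# Hypothetical elements from the example above, expanded into a full-factorial
# multivariate (MVT) grid so each element's impact can be measured separately.
from itertools import product

cta_text = ["Buy Now", "Find out about our services"]
hero_image = ["original", "alternative"]
button_colour = ["red", "green"]

variants = list(product(cta_text, hero_image, button_colour))
for i, (cta, image, colour) in enumerate(variants, start=1):
    print(f"Variant {i}: CTA='{cta}', hero image={image}, button={colour}")

# 2 x 2 x 2 = 8 combinations: each needs enough traffic to reach significance,
# which is the usual trade-off of MVT versus a single-change A/B test.
```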

Solution: Be more scientific in your approach to testing and learn from tests!

One of the greatest benefits of A/B testing is that it allows you to learn what your users – or, even better, a segment of your users – want or would prefer on your site / app. An absolutely ideal testing process therefore looks something like this:

  1. Use quantitative and qualitative tools to identify an area for improvement on your site
  2. Use this insight to craft a hypothesis, singular in focus, to test
  3. Set the test live
  4. Woop! Variation is a winner!
  5. Use quantitative and qualitative tools to add context to the data. Woop, your test was a winner and you’ve learnt something that has wider appeal.

Let’s briefly break those stages down:

  1. It’s absolutely imperative to use data to identify where to focus testing. Web analytics tools offer a multitude of metrics to help, including bounce rate, exit rate, funnel abandonment rate, site search exit rate and site search refinement rate, along with goal conversion / ecommerce conversion rates of course. Using tools like surveys and user testing to contextualise that web analytics data is perfect for identifying where and what to test. Even if you use colleagues or passers-by off the street, it’s possible to make the approach more scientific than just sticking a web page under their nose and saying “whaddya think?”. Try to get users who fit your clients’ target market and/or structure the questions, e.g. “imagine you want to book a holiday for two people in Europe at a place with a pool. Go through this on site X and give us your thoughts”.

  2. Ensuring that hypotheses are singular in focus is key, as we saw from Problem 2. It’s absolutely fine to make more than one change in a variation, as long as every change is aiming to achieve the same thing. For instance, on a checkout process you might decide to add security logos, change the CTA text to ‘Pay Securely Now’ and make client reviews more prominent, all in an attempt to alleviate any potential user concern around your website.
  3. Finally, using web analytics, voice-of-customer tools and your CRM once your test has concluded (or as part of the test itself) will also help you understand the full result. For instance, a test might show fewer people clicking on the ‘Get in Touch’ button, but your CRM might reveal that the leads from variation visitors are of better quality.
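
The post doesn’t prescribe a statistical method, but before layering qualitative context onto a “winner” it’s worth a quick sanity check that the uplift isn’t just noise. Here’s a minimal sketch of a two-proportion z-test with entirely hypothetical numbers; most testing tools run a calculation like this for you.

```python
# Hypothetical numbers: a rough check of whether a "winning" variation is
# statistically distinguishable from control, using a two-proportion z-test.
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Control: 4,000 visitors, 200 conversions; variation: 4,000 visitors, 248 conversions.
z, p = z_test(200, 4000, 248, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests the uplift is unlikely to be noise
```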

Example in action

One of our clients offers a product that is variable in price (think holidays, insurance or pest removal) and currently doesn’t provide guide pricing. We noticed from GA that a lot of users were dropping out when the price was displayed, so we hypothesised that displaying a guide price (as many competitors do) would affect the type of person who’d start the process. So we mocked up a test and set it live. Looking at the results in our testing tool, the goal conversion rate (people clicking on the CTA on that page) was poorer when we displayed a price. However, when we segmented the data in Google Analytics, we saw that users who saw the price still had a higher conversion rate – even with the bigger drop at the first stage (the arithmetic is sketched after the list below). Great, our test was successful. But the beauty of this test was the potential for the next steps! Now we’re looking at:

  1. Was the guide price we presented the right one? Could we try a lower guide price or a higher one?
  2. Did we frame the price well enough? Could we test adding a bunch of USPs underneath the price?
  3. What implication does this have for our online marketing? Given the prevalence of PPC advertising, should we trial adding guide costs in our ads, which might result in less traffic but greater Return on Ad Spend (RoAS)? (In reality this is arguably a whole new topic!)
  4. Should we add costs to any offline marketing we do, so that users have a clearer expectation of what we offer?
  5. Should we communicate this to our phone-based customer service team, so that if a caller asks for a guide price we can provide one rather than forcing them through a certain process first?
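
To make that segmented read concrete, here’s a rough sketch of the arithmetic with entirely hypothetical figures: the variation loses on the first click but still wins end to end, which is only visible once you look beyond the single goal in the testing tool.

```python
# Hypothetical figures illustrating the segmented result described above:
# showing a price reduces click-through on the first step but can still win overall.

def end_to_end_rate(visitors, cta_clicks, completions):
    """Conversion through both stages: CTA click-through x downstream completion."""
    cta_rate = cta_clicks / visitors
    completion_rate = completions / cta_clicks
    return cta_rate * completion_rate  # equals completions / visitors

control = end_to_end_rate(visitors=10_000, cta_clicks=3_000, completions=150)    # no price shown
variation = end_to_end_rate(visitors=10_000, cta_clicks=2_200, completions=176)  # guide price shown

print(f"Control (no price): {control:.2%}")   # 1.50%
print(f"Variation (price):  {variation:.2%}")  # 1.76%
# Fewer people start the process when they see a price, but those who do are
# better qualified, so the end-to-end rate is higher for the variation.
```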

As you can see, compared to the first two scenarios we outlined, this is awesome! We’ve won and we’ve got great learnings. This is also where you can see the value in tests losing – as long as you learn, you’re winning (cheeeeeese). If, on the other hand, you lose a test where red buttons beat your variation’s green ones, you can’t do much with that.


Here at Periscopix, we live and breathe data, science and process, so if you are interested in best-practice website testing, give us a call.
