A/B Testing: The most important questions and answers at a glance!

HOW TO SUSTAINABLY OPTIMISE YOUR PERFORMANCE WITH A/B-TESTING

Why you should definitely run A/B tests and what you need to consider.

You want to optimise your conversion rate in the long term and find the right measures to do so? Then there is no way around A/B testing. Here you will find the most important tips and insights on why you should test and what to keep in mind!

1. How do I decide what to test?

First of all, decide what you want to achieve with your test, then work out which changes are necessary to reach that goal and optimise your conversions.

  • For example, do you want to start directly with the contact form because many of your potential customers drop out there, or do they never even reach it? There are many points in the questionnaire where you could lose potential customers, so first ask yourself which elements of your current questionnaire might have stopped them.
  • Often you will probably come across several things that you want to change.
    IMPORTANT: Do not make all changes at the same time!

Find out more about how conversion-optimised sliders can help you optimise performance and how to get the most out of them.

Here are a few ideas for what you can test on your slider:

Are the questions in the slider relevant and clearly formulated for your customers?
Is the colour selection appealing and suitable for the product?
Is the headline formulated in an activating way?
Are the icons or pictures designed to be as simple as possible and to match the answer? Ask yourself if the answer options could be understood with icons only.
Are fewer questions in the contact form sufficient?
Is there a convincing incentive to leave the (real) contact details?
Test the wording on your buttons or in the headline!
Do you highlight your USPs and the benefits for your customers clearly enough?
To improve the quality of the leads for your sales team, check which customer information is really relevant.
Add a lead value to your questions to find out how good your lead quality is.
Insert if-then navigation that prevents customers from having to answer unnecessary questions.
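The last two ideas, lead values and if-then navigation, can be sketched in a few lines. This is a minimal illustration with made-up answer names and point values, not the API of any specific slider product:

```python
# Hypothetical point values per answer; you define these yourself
# based on what makes a lead valuable for your sales team.
LEAD_VALUES = {
    "budget_over_10k": 30,
    "decision_maker": 25,
    "timeline_this_quarter": 20,
    "newsletter_only": 5,
}

def lead_score(answers):
    """Sum the point values of all answers a lead has given.

    Unknown answers count as zero, so new questions can be added
    without breaking the scoring.
    """
    return sum(LEAD_VALUES.get(a, 0) for a in answers)

print(lead_score(["budget_over_10k", "decision_maker"]))  # 55
```

Comparing the average score of leads from version A and version B then tells you whether a change improved lead quality, not just lead quantity.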

2. How do I prepare the test?

Get an overview of the adjustment options available to you, then decide on one for the moment. As a rule, you should choose the one that promises the greatest increase in performance. However, the effort required to prepare that test may be very high, for example because of IT or design work. In that case it can make sense to start with a different adjustment first, so that the start of the test is not delayed for too long, and to use the meantime for another test!

Remember: Test one thing at a time. If you test everything at once, you distort your results, and at the end of the test you cannot say what made it fail or why it was successful.

Once you have decided what you want to test, formulate a hypothesis.
Here is an example: “If the first question of the slider is asked in a simpler and shorter way, it is easier for the customer to answer. This increases the click-through rate of the slider”.

NOTE: Always formulate WHAT you want to change, WHY you want to change it and WHAT GOAL you want to achieve.

A few Dos and Don’ts in A/B Testing:

Do

Before starting the test, decide on the minimum time you need to test. If you don't set a time frame beforehand, doubts will quickly arise and the test will be cancelled too early. Avoid this at all costs!
Test often and a lot! Not every test is successful; maybe only one in ten is. So never give up and keep pursuing your goals!
Do not test too small! If the change in the B version is not substantial enough, the test often takes a very long time to deliver a valid result, if it ever does.
Know your goal! Before you start testing, you should always know what you want to achieve with your test!
Also test the technology! You can test not only visual elements of your website, but also technical aspects such as the speed of your website.

Don’t

Never test one after the other! If you test version A in one week and version B in the next, the test is not significant because many factors such as weather, sales, advertising campaigns, holidays, etc. influence the behaviour of your users.
Do not finish your test too early! Your test must be significant enough for you to be sure it is really meaningful. This depends especially on the number of visitors. Here is a tool to calculate the significance.
Avoid surprises and confusion! Do you have customers who visit your website regularly? If so, exclude them from the test and only show them your changes once the test is positive.
Do not let your gut feeling override the test results. Your goal is to optimise performance, not aesthetics.
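The significance such a tool checks can also be computed by hand. Here is a minimal sketch of the standard two-proportion z-test using only Python's standard library; the conversion counts are made up for illustration:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: two-sided p-value for the difference
    in conversion rate between version A and version B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Example: 100/2000 conversions for A vs. 130/2000 for B
p = ab_significance(100, 2000, 130, 2000)
print(f"p-value: {p:.3f}")  # below 0.05, so the uplift is significant
```

A p-value below 0.05 means a difference this large would occur by chance in fewer than 5% of tests, which is the usual threshold for calling a result significant.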

3. How long do I have to test?

First and foremost, your test period depends on your website traffic. The more visitors you get, the faster you get significant results – goes without saying, doesn’t it? But even with very high visitor numbers and seemingly clear results, you should test for at least 2 weeks in order to account for day-of-week fluctuations. Also keep an eye on whether special occasions such as holidays fall within your test period, as they may influence user behaviour.

Furthermore, once your test has reached a p-value below 5% (that is, a 95% significance level), you can usually end it: Here you will find an online tool to easily calculate your significance level!
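To plan the test period in advance, a common rule of thumb (roughly 95% confidence and 80% power; exact figures vary from tool to tool) estimates the visitors you need per variant:

```python
import math

def min_sample_size(baseline_rate, min_detectable_uplift):
    """Rule-of-thumb visitors needed PER VARIANT:
    n ~ 16 * p * (1 - p) / delta^2, where p is the baseline
    conversion rate and delta the absolute uplift to detect."""
    p = baseline_rate
    delta = min_detectable_uplift
    return math.ceil(16 * p * (1 - p) / delta ** 2)

# Example: 5% baseline conversion, detect an absolute uplift of 1 point
n = min_sample_size(0.05, 0.01)
print(n, "visitors per variant")  # 7600
# With e.g. 500 visitors per day split across two variants,
# that is 2 * 7600 / 500, i.e. about 31 days of testing.
```

Dividing the total required sample by your daily traffic gives a realistic minimum duration before you even start, which helps resist the urge to stop early.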

4. How do I analyse the test?

In our article Google Analytics – evaluation of a slider, we describe in detail how you can analyse our sliders in Google Analytics to see at which question your customers drop off and never reach the contact form.
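If you export the per-question view counts from such a report, the drop-off analysis itself is only a few lines. The step names and numbers below are invented for illustration:

```python
# Hypothetical per-question view counts exported from Google Analytics
funnel = [
    ("question_1", 1000),
    ("question_2", 820),
    ("question_3", 640),
    ("contact_form", 310),
]

# Compare each step with the next to find where users drop off
for (step, views), (_, next_views) in zip(funnel, funnel[1:]):
    drop = 1 - next_views / views
    print(f"{step} -> next step: {drop:.0%} drop-off")
# Prints 18%, 22% and 52%: the step before the contact form
# loses the most users, so that is where to test first.
```

The step with the largest drop-off is your best candidate for the next A/B test hypothesis.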

Once you have reached a significant result and can see whether your test was successful or not, there are three options:

Option 1: The test was successful!

Make sure that the right test version is rolled out. This sounds super simple, but even the best can roll out the wrong one – it has happened before 😛
How can you transfer the learning from these A/B tests to other areas of your performance marketing strategy?

Option 2: The test does not provide a meaningful result!

Consider whether your test version differed enough from your control version (the original). You may have to readjust and make the change in your test version more pronounced.

Option 3: The test was negative!

Understandably, this result is anything but satisfactory. But even from this you can draw conclusions for conversion optimisation and learn to understand your customers better.
Were there possibly already errors during the test preparation?
Which need of your customers have you perhaps misinterpreted?
Was the test perhaps simply delivered incorrectly? You should definitely check that too!
Get feedback from an uninvolved person. Sometimes you can’t see the forest for the trees 😉

5. Now what?

No matter what the test result is, you should definitely keep going and run more A/B tests.

  • Do not be discouraged by negative or inconclusive tests; they too are an important step towards performance optimisation.
  • Even if your test was positive and gave you an uplift, you should not rest on your laurels! Because remember, the competition never sleeps 😉
  • Work out your learnings from the test and prepare a new A/B test or update the previous test to restart it.

Only those who test again and again get to know potential customers better!
By the way, here we show you how you can use your customer information even more effectively.

Let us advise you now for your optimal solution for winning new customers!

Funnel-Marketing