HOW TO SUSTAINABLY OPTIMISE YOUR PERFORMANCE WITH A/B-TESTING
Why you should definitely run A/B tests and what you need to consider.
Do you want to optimise your conversion rate in the long term and find the right measures to do so? Then there is no way around A/B testing. Why you should definitely test and what you have to keep in mind: here you will find the most important hacks and insights!
1. How do I decide what to test?
First, decide what you want to achieve with your test, then work out which changes are needed to reach that goal and optimise your conversions.
- For example, do you want to start with the contact form because many of your potential customers drop out there, or because they never reach it in the first place? There are many points in the questionnaire where you can lose potential customers, so first ask yourself which elements of your current questionnaire might have stopped them.
- You will often come across several things that you want to change.
IMPORTANT: Do not make all changes at the same time!
Find out more about how conversion-optimised sliders can help you optimise performance and how to get the most out of them.
Here are a few ideas for what you can test on your slider:
2. How do I prepare the test?
Get an overview of the adjustment options available to you, then pick one to start with. As a rule, you should choose the one that promises the greatest increase in performance. However, if preparing that test requires significant effort, for example from IT or design, it may make sense to focus on a different adjustment first rather than delay the start of the test for too long. In that case, use the meantime for another test!
Remember: test one thing at a time. If you test everything at once, you distort your results, and at the end of the test you cannot say which change made it fail or succeed.
Once you have decided what you want to test, formulate a hypothesis.
Here is an example: “If the first question of the slider is asked in a simpler and shorter way, it is easier for the customer to answer. This increases the click-through rate of the slider”.
NOTE: Always formulate WHAT you want to change, WHY you want to change it and WHAT GOAL you want to achieve.
A few Dos and Don’ts in A/B Testing:
3. How long do I have to test?
First and foremost, your test period depends on your website traffic. The more visitors you get, the faster you reach significant results – that goes without saying, doesn’t it? But even with very high visitor numbers and seemingly clear results, you should test for at least two weeks to account for weekday fluctuations. Also keep an eye on whether special occasions such as holidays fall within your test period, as they may influence user behaviour.
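To get a feel for how much traffic you need, you can turn the question around and estimate the required sample size per variant. Here is a minimal sketch using the standard normal approximation for comparing two conversion rates; the 5% baseline rate and 20% relative uplift are made-up example numbers, not recommendations:

```python
import math

def visitors_per_variant(base_rate, relative_uplift):
    """Rough sample size per variant to detect a relative uplift,
    two-sided test at alpha = 0.05 with 80% power (normal approximation)."""
    z_alpha = 1.96  # two-sided, alpha = 0.05
    z_beta = 0.84   # power = 0.80
    p1 = base_rate
    p2 = base_rate * (1 + relative_uplift)
    # Sum of the variances of the two conversion rates
    var = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2
    return math.ceil(n)

# e.g. 5% baseline conversion rate, hoping to detect a 20% relative uplift
n = visitors_per_variant(0.05, 0.20)
print(n, "visitors per variant")
```

With numbers like these you quickly land in the thousands of visitors per variant, which is exactly why low-traffic sites need longer test periods – smaller expected uplifts or lower baseline rates push the required sample size up even further.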
Furthermore, once your test reaches a p-value below 5% (i.e. 95% statistical significance), you can usually end it: here you will find an online tool to easily calculate your significance level!
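If you would rather check significance yourself instead of relying on an online tool, a common approach is a two-proportion z-test on the conversion counts of both variants. A minimal sketch in Python; the visitor and conversion numbers are invented for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant A: 120 conversions from 2,400 visitors; variant B: 150 from 2,400
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # the result is significant if p < 0.05
```

Note that in this example the uplift looks promising but the p-value still sits above 0.05 – a good reminder of why you keep testing until significance is actually reached.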
4. How do I analyse the test?
In our article Google Analytics – evaluation of a slider, we describe in detail how you can analyse our sliders in Google Analytics to see at which question your customers drop off and never reach the contact form.
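Whichever tool you export the data from, the drop-off analysis itself is straightforward: compare the views of each step with the views of the next one. A minimal sketch in Python – the question names and view counts below are hypothetical:

```python
# Hypothetical export: unique views per slider step (e.g. from analytics events)
views = {
    "Q1: What are you looking for?": 1000,
    "Q2: What is your budget?": 720,
    "Q3: When do you want to start?": 640,
    "Contact form": 310,
}

steps = list(views.items())
drops = []
for (name_a, v_a), (name_b, v_b) in zip(steps, steps[1:]):
    rate = 1 - v_b / v_a  # share of visitors lost between two steps
    drops.append((name_a, name_b, rate))
    print(f"{name_a} -> {name_b}: {rate:.0%} drop off")
```

In this made-up funnel, the biggest loss happens between the last question and the contact form – exactly the kind of finding you would turn into your next A/B test hypothesis.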
Once your test has reached a significant result and you can see whether it was successful or not, there are three options:
Option 1: The test was successful!
Option 2: The test does not provide a meaningful result!
Option 3: The test was negative!
5. Now what?
No matter what the result is, you should definitely keep going and run more A/B tests.
- Do not be discouraged by negative or inconclusive tests; they too are an important step towards performance optimisation.
- Even if your test was positive and gave you an uplift, you should not rest on your laurels! Because remember, the competition never sleeps 😉
- Work out your learnings from the test and prepare a new A/B test or update the previous test to restart it.