Do you regularly create new variations of your adverts, or split test various landing pages on your website in order to increase conversions, phone calls or leads? You might have had some great successes with A/B testing, with the results showing one variation performing more effectively than another. But could those results be misleading?
With the late arrival of spring, along with Harry's 11 new lambs (the cute little things in the image above), we thought it would be a good time to give you a fresh view on how you test your various marketing efforts. You see, unless the results of your test are "statistically significant", your testing could be pulling the wool over your eyes (see what I did there?).
A statistically significant marketing test is one that generates enough traffic and conversions for you to be confident that the difference you're seeing is real, and not just down to random chance.
Let’s say you split test two different landing pages, and they get 100 hits each over the space of a day. One variation gathered 1 conversion while the other gained 2. You could say that the second variation won by 100%, but looking at the data, the result isn’t statistically significant. Simply put, there wasn’t enough data to be sure the difference wasn’t just chance.
A more reliable test would be to let it run for a whole week, allowing each variation to gather around 700 hits, and measure the conversions from there. If one gathered 10 leads and the other 13, you could say with more confidence that the second variation has performed better than the first.
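The arithmetic behind the day-long test above can be sketched in a few lines of Python. It shows how a 1% vs 2% conversion rate becomes a "100% win", even though the sample is far too small to trust that headline figure:

```python
# Conversion rates and relative uplift for the day-long test above.
def conversion_rate(conversions, visits):
    """Fraction of visits that converted."""
    return conversions / visits

rate_a = conversion_rate(1, 100)   # 0.01, i.e. 1%
rate_b = conversion_rate(2, 100)   # 0.02, i.e. 2%

# Relative uplift: how much better variation B *looks* than A.
# A big uplift on a tiny sample proves nothing by itself.
uplift = (rate_b - rate_a) / rate_a
print(f"Variation B uplift: {uplift:.0%}")  # Variation B uplift: 100%
```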
So how do you go about setting up a successful A/B or multivariate test in the first place? Let us show you with these three incredibly easy but effective steps:
Step 1: Set up your campaign
Once you know what you want to test — and that could be a landing page, full page print ad or a PPC campaign — you need to get specific and decide what exactly it is you’re testing.
Could it be the headline, or the call-to-action perhaps? Maybe the name of the offer, or the offer itself, is the subject of your test? Whatever it is, make sure you test only one thing at a time.
If you create two different ads for a newspaper, for example, and give each one a completely different offer, headline and design, you’re not really going to do yourself many favours in terms of knowing what it was that truly made people take action.
Step 2: Begin gathering data
Once the test is in place, it’s time to start collecting the data. Depending on what you’re testing, the metric that you measure against will be different. For instance, if you’re testing a landing page, you might want to run it for 30 days in order to collect enough data to make it statistically significant. If it’s an email you’re testing, best practice is to grab a small (but statistically meaningful) random sample of your database and send version A to one half and version B to the other.
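For the email case, the split itself is easy to script. Here's a minimal sketch in Python (the addresses are purely illustrative); the list is shuffled first so neither half is biased by, say, sign-up date:

```python
import random

def ab_split(addresses, seed=None):
    """Shuffle a mailing list and split it into two equal halves."""
    pool = list(addresses)
    random.Random(seed).shuffle(pool)  # seed only makes the split repeatable
    mid = len(pool) // 2
    return pool[:mid], pool[mid:]      # group A gets version A, group B gets version B

# Illustrative sample of 1,000 subscribers.
sample = [f"subscriber{i}@example.com" for i in range(1000)]
group_a, group_b = ab_split(sample, seed=42)
print(len(group_a), len(group_b))  # 500 500
```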
Step 3: Determine if your results are significant
We could go through a load of mathematical equations that will help you determine whether or not your test is statistically significant, but that would almost warrant a whole article in itself. Instead, an online significance calculator will do the heavy lifting for you.
In the "sample size" fields, enter the amount of visits/subscribers that saw each of your landing pages/emails/adverts etc. Under the "percentage response" fields enter the percentage of those converted (i.e. those who took the action you wanted them to take). Press the "Calculate" button and you'll instantly discover whether or not your test was statistically significant.
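If you'd rather script the check than use a web form, the same two inputs (sample size and response for each variation) feed a standard two-proportion z-test. This is a minimal sketch using only Python's standard library, not necessarily the exact method your calculator uses; the 1.96 threshold corresponds to 95% confidence:

```python
import math

def is_significant(visits_a, conv_a, visits_b, conv_b, z_threshold=1.96):
    """Two-proportion z-test: is the difference in conversion rate real?"""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    # Pooled rate under the assumption that there is no real difference.
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = abs(p_a - p_b) / se
    return z, z > z_threshold

# The day-long test from earlier: 1 vs 2 conversions on 100 hits each.
z, significant = is_significant(100, 1, 100, 2)
print(f"z = {z:.2f}, significant at 95%? {significant}")  # z = 0.58, significant at 95%? False
```

As expected, the 100-hit test falls well short of the 95% confidence bar.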
That’s really all there is to it. From here you can take action based on your findings, because you'll be confident in what you discovered.
A/B testing is incredibly easy to do; you just need to make sure your marketing analytics and call tracking systems are in place and ready to roll when you execute your test, so you can accurately measure the response.