A/B tests can be a powerful marketing tool when they are understood and used correctly. While this topic is meaty and has been studied for decades, below is (hopefully) a simple introduction to the art of A/B tests.
First, we have to start with statistics, and honestly, I’m a writer/marketer, not a statistician or a mathematician by any stretch of the imagination. Just talking about p-values and means takes me back to high school statistics, which is not a pleasant memory. Yet, before we go on about A/B testing, we do need to refresh our statistics terms a little bit—if I’m going down, I’m taking you all down with me!
Hypotheses, Control, Variation, Sample Sizes, and Time
Before we get into the terms, I’ll give you an example we’re going to work from: Holly owns an online athletic clothing store. She’s trying to determine if there is a significant difference in completed online orders when customers are offered a 10% discount (her control) versus a 15% discount (her variation). She has a customer base of 5,000 customers.
Holly’s hypothesis is that the 15% discount will attract more customers to complete orders in her online store. Using an online calculator she found through HubSpot, her sample size is 356 customers. Holly has decided to run her test for a time period of three weeks.
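If you’re curious what a calculator like that is doing under the hood, here’s a rough sketch of the standard two-proportion sample-size formula in Python. The baseline conversion rate and the lift below are made-up assumptions for illustration, not Holly’s actual numbers, so the result won’t match her 356:

```python
import math
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group for detecting a difference
    between two conversion rates (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance level
    z_beta = NormalDist().inv_cdf(power)           # critical value for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Made-up example: a 10% baseline conversion rate, hoping to detect a lift to 15%
print(sample_size_per_group(0.10, 0.15))
```

The takeaway: the smaller the difference you’re trying to detect, the more customers you need in each group, which is why those calculators ask for a baseline rate and a minimum effect size.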
What is A/B Testing?
All in all, A/B testing is simple enough to understand, yet can be a powerful tool when used correctly. Essentially, you produce two identical marketing pieces, with one variation. That one variation is what you’re testing. Whether it’s a headline, a website button, or even a discount percentage, you’ll be able to see how that one specific element affects the campaign.
Some examples of components you can test include the following:
- Forms (Lengths, Fields, etc)
- Promotions (Discount Amounts, Length of Promotion)
- Layout and Style
One thing to keep in mind is that you must release each test at the same time. If you do the ‘A’ test one week and the ‘B’ test the following week, you won’t get an accurate metric of what worked. Holly, from our example above, sent an email blast of the ‘A’ test to 178 of her customers and the ‘B’ test to the other 178 simultaneously.
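A simultaneous split like Holly’s can be as simple as shuffling the sample and cutting it in half. A minimal sketch, with placeholder email addresses standing in for her real customer list:

```python
import random

# Placeholder list standing in for Holly's 356 sampled customers
customers = [f"customer_{i}@example.com" for i in range(356)]

random.shuffle(customers)          # randomize who lands in which group
midpoint = len(customers) // 2
group_a = customers[:midpoint]     # gets the 10% discount email (control)
group_b = customers[midpoint:]     # gets the 15% discount email (variation)

print(len(group_a), len(group_b))  # 178 178
```

Random assignment matters here: if you split by signup date or alphabetically, you may bake a hidden difference into the two groups before the test even starts.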
Every Result is a Learning Opportunity
Let’s consider that after Holly runs her email A/B test, she finds out that her 10% discount performed better than her variation (the 15% discount). Logic would say this doesn’t make sense. Rather than scrapping the whole test, Holly and her team can learn a few things from her results.
Neutral and/or negative results can still provide valuable insight into your customers and how they behave, and into why the test turned out the way it did (or why your hypothesis was wrong). Don’t consider a negative result as, well, negative. Rather, glean as much information as you can from the test, and try again.
Even if a test does yield a positive result, always remember to:
Double Check Your Work
When talking about A/B testing, it is important to do something math teachers through the ages have been trying to drill into their students: double-check your work.
It is important to realize that A/B tests do have a tendency to produce false positives, i.e. false uplifts in your results. If you notice that your uplift doesn’t last over time, or if the results seem too good to be true, it’s a good idea to validate your findings by running a second test.
A/B tests can be a large investment in time, resources, and (for us, non-mathematical folks) brainpower. However, the investment does pay off when an A/B test produces insight into your customer behaviors that you wouldn’t have found out otherwise.
If you need help with the math behind A/B testing, Seapoint Digital is here to help. Our calculators are standing by.