A/B Testing in Digital Marketing
A/B testing involves creating 2 versions of a digital asset to see which one users respond to best. Examples of assets include landing pages, display ads, marketing emails, and social posts. In an A/B test, half of an audience automatically receives “version A” and half receives “version B.” The performance of each version is based on conversion rate goals, such as the percentage of people who click on a link, complete a form, or make a purchase.
A/B testing isn’t an idea that arrived with digital marketing. Long before the web, direct mail marketers were “splitting” or “bucketing” offers to see which one worked best. Digital capabilities build on the same idea but enable more specific, reliable, and faster test results.
If you’re trying to grow your business, it can be hard to tell which marketing assets resonate most with your audience. A/B testing, along with other conversion optimization strategies, lets you experiment so you can improve your content, provide better customer experiences, and reach your conversion goals faster. This guide covers the fundamentals of A/B testing.
A/B testing definition: What is A/B testing?
A/B tests, also known as split tests, allow you to compare 2 versions of something to learn which is more effective. Simply put, do your users like version A or version B better?
The concept is similar to the scientific method. If you want to find out what happens when you change one thing, you have to create a situation where only that one thing changes.
Think about the experiments you conducted in elementary school. If you plant 2 seeds in 2 cups of dirt, place one in the closet and the other by the window, and change nothing else, you’ll see different results, and you’ll know the light made the difference. That kind of experimental setup is A/B testing.
History of A/B testing
In the 1960s, marketers started to see how this kind of testing could help them understand the impact of their advertising. Would a television ad or radio spot draw more business? Are letters or postcards better for direct mail marketing?
When the internet became an integral part of the business world in the ‘90s, A/B testing went digital. Once digital marketing teams had technical resources, they began to test their strategies in real time—and on a much larger scale.
What does A/B testing involve?
A/B testing involves the use of digital solutions to test different elements of a marketing campaign. To begin A/B testing, you must have:
- A campaign to test. To A/B test a marketing campaign, you need an email, newsletter, ad, landing page, or another medium already in use.
- Elements to test. Looking at the different elements of your campaign, consider what you could change that might prompt customers to take action. Test elements individually so you can attribute any change in results to the element you changed.
- Defined goals. The goals of your A/B testing should include figuring out which version of your campaign has better results for your business. Consider the different metrics you can track, including clicks, signups, or purchases.
What is A/B testing like in the digital age?
At its core, A/B testing in marketing is the same as it's always been. You pick the factor you want to test, such as a blog post with images versus that same post without images. Then you randomly show each visitor one version or the other, controlling for other factors, and record as much data as possible: bounce rates, time spent on the page, and so on.
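Mechanically, that setup is small enough to sketch in a few lines of Python: assign each visitor a version at random, then tally what they do. This is a minimal illustration under made-up assumptions (the visitor IDs, the test name, and the conversion flags are invented rather than taken from any particular tool), and real testing software would also log bounce rates, time on page, and other metrics.

```python
import random

def assign_version(user_id: str, test_name: str = "blog-images-test") -> str:
    """Deterministically assign a visitor to version A or B so repeat visits stay consistent."""
    rng = random.Random(f"{test_name}:{user_id}")
    return rng.choice(["A", "B"])

# Tally made-up visits and conversions per version.
visitors = {"A": 0, "B": 0}
conversions = {"A": 0, "B": 0}

visits = [("visitor-1", True), ("visitor-2", False), ("visitor-3", False), ("visitor-4", True)]
for user_id, converted in visits:
    version = assign_version(user_id)
    visitors[version] += 1
    conversions[version] += int(converted)

for version in ("A", "B"):
    if visitors[version]:
        rate = conversions[version] / visitors[version]
        print(f"Version {version}: {conversions[version]}/{visitors[version]} converted ({rate:.0%})")
```

Seeding the assignment with the visitor ID means the same person always sees the same version, which keeps the experience consistent across visits.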
You can even test more than 1 variable at once, a setup often called multivariate testing. For example, if you want to evaluate the font as well as the presence of images, you could create 4 pages, one for each combination:
- Arial with images
- Arial without images
- Times New Roman with images
- Times New Roman without images
A/B testing software collects the data from experiments like this and reports the results. Then someone at your company interprets those results to decide whether it makes sense to act on them and, if so, how.
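Here's a hedged sketch of how the 4-way version above might look in code: enumerate the font-and-image combinations, then bucket each visitor into one of them. The experiment label, visitor IDs, and hash-based bucketing are illustrative assumptions, not the behavior of any specific A/B testing product.

```python
import hashlib
from itertools import product

# Every combination of the 2 variables: font x images (4 variants in total).
FONTS = ("Arial", "Times New Roman")
IMAGE_OPTIONS = ("with images", "without images")
VARIANTS = [f"{font}, {images}" for font, images in product(FONTS, IMAGE_OPTIONS)]

def bucket(user_id: str, experiment: str = "blog-font-images") -> str:
    """Hash the visitor ID so the same person always lands in the same combination."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

for uid in ("visitor-001", "visitor-002", "visitor-003"):
    print(uid, "->", bucket(uid))
```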
Why is A/B testing important?
A/B tests give you the data necessary to make the most of your marketing budget. Let's say that your boss has given you a budget to drive traffic to your site using Google Ads (formerly AdWords). You set up an A/B test that tracks the number of clicks for 3 different article titles. You run the test for a week, making sure that on any particular day and at any particular time, you’re running the same number of ads for each option.
The results from conducting this test will help you determine which title gets the most click-throughs. You can then use this data to shape your marketing campaign accordingly, improving its return on investment (ROI) more than if you'd chosen a title at random.
Minor changes, major improvements
A/B tests let you evaluate the impact of changes that are relatively inexpensive to implement. Running a Google Ads campaign can be costly, so you want every aspect of it to be as effective as possible.
Let's say that you run A/B testing on your homepage's font, text size, menu titles, links, and the positioning of the custom signup form. You test these elements 2 or 3 at a time so you don't have too many unknowns interacting with each other.
When the tests are done, you find that changing the last 3 elements (menu titles, links, and the position of the signup form) increases your conversion rate by 6% each. Your web designer implements those changes in less than an hour, and when they’re finished, you have a shot at bringing in roughly 18% more revenue than you did before.
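As a quick sanity check on that figure: adding the three 6% lifts gives 18%, and treating them as compounding gives a little more, about 19%. The snippet below is just that arithmetic, under the assumptions that the lifts are independent and that revenue scales with conversion rate.

```python
lifts = [0.06, 0.06, 0.06]  # one 6% lift per changed element

additive = sum(lifts)  # rough ballpark: 0.18 -> 18%

compounded = 1.0
for lift in lifts:
    compounded *= 1 + lift
compounded -= 1  # about 0.191 -> roughly 19.1%

print(f"Additive estimate:   {additive:.1%}")
print(f"Compounded estimate: {compounded:.1%}")
```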
Low risks, high rewards
A/B testing is not only cost-effective, it's also time-efficient. You test 2 or 3 elements and get your answer. From there, it’s easy to decide whether or not to implement a change. And if live performance doesn't hold up to your test results, you can always revert to the older version.
Making the most of traffic
If you use A/B testing to make your website as effective as it can be, you can get more conversions per visitor. The higher your conversion rate, the less time and money you'll need to spend driving traffic, because each visitor who does arrive is more likely to act.
Remember, when you improve your website, it can increase your conversion rate for both paid and non-paid traffic.
What does A/B testing work on?
When it comes to customer-facing content, there is so much you can evaluate with A/B testing.
Common targets include:
- Email campaigns
- Individual emails
- Multimedia marketing strategies
- Paid internet advertising
- Newsletters
- Website design
In each category, you can conduct A/B tests on any number of variables. If you're testing your site’s design, for example, you can try different options such as:
- Color scheme
- Layout
- Number and type of images
- Headings and subheadings
- Product pricing
- Special offers
- Call-to-action button design
- Video vs. non-video content
Essentially, almost any style or content element in a customer-facing item is testable.
How do you conduct A/B tests?
When all is said and done, the A/B testing process is just the scientific method applied to marketing, and you'll get the most out of it by treating it that way. Like a lab experiment, A/B testing begins with picking what to test. The whole process consists of 6 steps:
1. Identify a problem
Make sure you identify a specific problem. “Not enough conversions,” for instance, is too general. There are too many factors that go into whether or not a website visitor becomes a customer or whether an email recipient clicks through to your site. You need to know why your material isn't converting.
Example: You work for a women's clothing retailer that has plenty of online sales, but very few of those sales come from its email campaigns. You go to your analytics data and find that a high percentage of users are opening your emails with special offers and reading them, but few are actually converting.
2. Analyze user data
Technically, you could A/B test everything your customers see when they open your emails, but that would take a lot of time. Many of the design and content elements they encounter aren’t relevant to the problem, so use your analytics data to narrow down which element to target.
Example: People are opening your emails, so there’s nothing wrong with how you’re writing your subject lines. They’re also spending time reading them, so there’s nothing that’s making them instantly click away. Because plenty of the users who find your website from elsewhere end up becoming customers, you can tell there’s nothing wrong with how you’re presenting your products, either. This suggests that although people find your emails compelling, they’re getting lost somehow when they go to actually click through to your site.
3. Develop a hypothesis to test
Now you're really narrowing it down. Your next step is to decide exactly what you want to test and how you want to test it. Narrow your unknowns down to 1 or 2, at least to start. Then you can determine how changing that element or elements might fix the problem you're facing.
Example: You notice that the button that takes people to your online store is tucked away at the bottom of the email, below the fold. You suspect that if you bring it up to the top of the screen, you can more effectively encourage people to visit your site.
4. Test your hypothesis
Develop a new version of the test item that implements your idea. Then run an A/B test between that version and your current page with your target audience.
Example: You create a version of the email with the button positioned above the fold. You don't change its design—just its positioning. You decide to run the test for 24 hours, so you set that as your time parameter and start the test.
5. Analyze the data
Once the test is over, look at the results and see if your new email design resulted in any noticeable changes. If not, try testing a new element.
Example: Your new email increased conversions slightly, but your boss wants to know if something else could do better. Since your variable was the positioning of the button, you decide to try placing it in 2 other locations.
6. Find new challengers for your champion
The A/B testing world sometimes uses “champion” and “challenger” to refer to the current best option and new possibilities. When 2 or more options compete and one is significantly more successful, it's called the champion. You can then test that winner against other options, which are called challengers. That test might give you a new champion or reveal that the original champion was the best.
Example: You’ve A/B tested 2 versions of a landing page and found the champion between them, but there’s also a 3rd version of the page that you’d like to compare to the winner from your 1st test. The 3rd version becomes the new challenger to test against the previous champion.
Once you've run through all 6 steps, you can decide whether the improvement was significant enough to end the test and make the necessary changes. Or you can run another A/B test to evaluate the impact of another element, such as the size of the button or its color scheme.
Tips for A/B testers
Here are some pointers to help you make your A/B tests as useful as possible.
Use representative samples of your users
Any scientist will tell you that if you're running an experiment, you must ensure that your participant groups are as similar as possible. If you're testing a website, any of several automated testing tools can randomize which version each visitor sees.
If you're sending material directly to your clients or potential customers, you need to manually create comparable lists. Make the groups as equal in size as you can and—if you have access to the data—evenly distribute recipients according to gender, age, and geography. That way, variations in these factors will have minimal impact on your results.
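Here's one way you might build those comparable lists, sketched in Python. The recipient records and the balancing attribute are assumptions for illustration; the idea is simply to shuffle within each demographic bucket and then deal recipients alternately into the two groups.

```python
import random
from collections import defaultdict

def split_recipients(recipients, balance_on="region", seed=7):
    """Split recipients into two similarly sized lists, balanced on one attribute."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for person in recipients:
        buckets[person.get(balance_on, "unknown")].append(person)

    group_a, group_b = [], []
    for bucket in buckets.values():
        rng.shuffle(bucket)
        group_a.extend(bucket[0::2])  # every other recipient goes to group A...
        group_b.extend(bucket[1::2])  # ...and the rest go to group B
    return group_a, group_b

recipients = [
    {"email": "a@example.com", "region": "north"},
    {"email": "b@example.com", "region": "north"},
    {"email": "c@example.com", "region": "south"},
    {"email": "d@example.com", "region": "south"},
]
group_a, group_b = split_recipients(recipients)
print(len(group_a), "recipients in group A,", len(group_b), "in group B")
```

Shuffling inside each bucket before splitting keeps the assignment random while guaranteeing that each attribute value is spread evenly across both lists.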
Maximize your sample size
The more people you test, the more reliable your results will be. This ties into a concept that statisticians refer to as “statistical significance.”
If a result is statistically significant, it’s unlikely to have occurred by chance. For example, if you send a new version of an email to 50 people and a control version to 50 more, a 5-percentage-point increase in click-through rate means only 2 or 3 more people clicked your new version. A difference that small could easily be explained by chance, and if you perform the same test again, there’s a good chance you’ll get different results. In other words, your results were not statistically significant.
If you're able to send the same pair of emails to groups of 500, that same 5-percentage-point increase means roughly 25 more people clicked, which is far more likely to be statistically significant.
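If you want to put a number on “unlikely to have occurred by chance,” a common approach is a two-proportion significance test. The sketch below runs a chi-square test with SciPy on two made-up scenarios that mirror the example above (a similar click-through lift at two very different sample sizes). The exact counts are illustrative assumptions, and a statistician might prefer a different test or a pre-test power calculation.

```python
from scipy.stats import chi2_contingency

def click_rate_p_value(clicks_a, sends_a, clicks_b, sends_b):
    """P-value for the difference in click-through rate between two email versions."""
    table = [
        [clicks_a, sends_a - clicks_a],
        [clicks_b, sends_b - clicks_b],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# The same lift (10% vs. 16% click-through) at two very different sample sizes.
print(round(click_rate_p_value(5, 50, 8, 50), 3))      # 50 per group: weak evidence
print(round(click_rate_p_value(50, 500, 80, 500), 3))  # 500 per group: much stronger evidence
```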
Avoid common mistakes
It's tempting to create a pop-up button with a new font, a new text size, new button sizes, and new button colors. But the more new elements you add, the more muddled your results will be.
Sticking with the above example, if your new pop-up is completely different in design from the original, you're likely to see correlations that are purely coincidental. Maybe it looks like the large purple “check out” button with the dollar sign image is doing better than the small blue button it replaced. However, it's possible that only 1 of those design elements, such as the size, actually made the difference.
Remember, you can always run a new test with different elements later. Looking at that follow-up test will be easier than trying to analyze a test with 18 different variables.
Let the test end before making changes
Because A/B tests let you see the effects of a change in real time, it's tempting to end the test as soon as you see results so you can implement the new version right away. However, stopping early means your results are more likely to be incomplete and less likely to be statistically significant. Time-sensitive factors, such as the day of the week or the hour an email arrives, can skew early numbers, so wait until the end of the planned testing period to get the full benefit of randomization.
Run tests more than once
Even the best A/B testing software returns false positives because user behavior is so variable. The only way to make sure your results are accurate is to run the same test again with the same parameters.
Retesting is particularly important if your new version shows a small margin of improvement. A single false positive result matters more when there aren't as many positive results.
Also, if you run many A/B tests, it's more likely that you'll encounter a false positive. You might not be able to afford to rerun every test, but if you retest once in a while, you have a better chance of catching errors.
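The math behind that warning is simple to spell out. Assuming each test uses the conventional 5% false-positive threshold and the tests are independent (both of which are assumptions), the chance of seeing at least one false positive climbs quickly as you run more tests.

```python
ALPHA = 0.05  # conventional per-test false-positive rate

for num_tests in (1, 5, 10, 20):
    chance = 1 - (1 - ALPHA) ** num_tests
    print(f"{num_tests:>2} tests -> {chance:.0%} chance of at least one false positive")
```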
Simplify A/B testing with Mailchimp
A/B testing is an efficient and effective way to gauge your audience's response to a design or content idea because it doesn’t disturb your users’ experience or require disruptive feedback surveys. Just try something new and let the results speak for themselves.
New to A/B testing? Easily test your campaigns with Mailchimp to determine which email headers, visual elements, subjects, and copy resonate the most with your customers.