Making informed decisions is crucial for serious email marketers. If your business relies on your customers’ engagement with your email, you know that every detail, no matter how insignificant it might seem, plays a role in determining the success of a campaign.
Do your subject line and from name encourage your subscribers to open your email? Have you tried sending your email at different times to see when your audience is most likely to interact with it? Are you using the best designs and calls to action in your email to drive clickthroughs and boost ROI?
There are a lot of factors to consider. Without the right tools, testing all of the different variations can get time-consuming—and perhaps even a little overwhelming.
It’s like the restaurant scene in Mrs. Doubtfire. It’s hard to keep changing email campaign designs back and forth from “middle-aged American man” to “elderly British nanny” on one’s own. It gets confusing. That’s why we built Multivariate Testing.
Multivariate Testing—a feature available exclusively for MailChimp Pro—takes the guesswork out of email marketing by offering users a streamlined tool for testing content ideas, layout options (including templates), send time, subject line, and from name all in one place at one time. With Multivariate Testing, you’ll have the flexibility to test as many as 8 variations of a single campaign at once to definitively learn which combination of factors leads to the best engagement among your subscribers.
Your most effective campaign can be determined by click rate, open rate, total revenue, or manual selection. Multivariate Testing Campaign reports will thoroughly compile all of the data for you, so you can quickly examine the results and apply what you’ve learned to future campaigns.
Multivariate Testing is similar to MailChimp’s standard A/B Testing feature, but has been rebuilt from the ground up to offer an expanded feature set. You get improved testing flexibility, thorough reporting, and the opportunity to learn more about your audience than ever before. Instead of running separate A/B tests to determine the best subject lines, from names, send times, content, and designs for your campaign, now you can create a single multivariate test that compares a combination of these factors at once. This flexibility makes MailChimp Pro’s Multivariate Testing an efficient, time-saving solution for marketers who have large email lists and complex testing needs.
The Science Behind Multivariate Testing
Every aspect of Multivariate Testing has been thoroughly researched and thoughtfully designed with one goal in mind: to make you a more successful marketer. Our data science team analyzed millions of A/B tested emails sent by MailChimp customers and used their findings to develop recommendations to guide you through setting up multivariate tests and viewing the results. The number of available testing variations (8), the default testing duration (4 hours), and the recommended testing segment sizes (5,000 subscribers) have been strategically chosen to maximize the effectiveness and reliability of each test.
4 hour default testing duration: In the past, the default testing duration of standard MailChimp A/B Testing campaigns has been 1 full day. Our research has shown that 80-90% of an email’s opens and clicks generally occur within the first 24 hours after sending, so one day is—and will continue to be—the most effective option to ensure that most of your subscribers have had a chance to open the email. We know, however, that waiting a full day to send an email isn’t always an option. In fact, the most common duration our users set for their A/B tests was 1 hour—clicking the drop-down menu and simply changing the default from “day” to “hour” while setting up the test was a quick, appealing alternative.
It can be difficult, however, to make any determination regarding a variable’s effectiveness (or lack thereof) based on 1 hour of activity—it’s typically not enough time to collect enough clicks or opens. So, with Multivariate Testing we’ve set the default test duration to 4 hours. You can certainly adjust that if you’d like, but we’ve found that after 4 hours, nearly half of the people who are going to open, read, and interact with the email already have done so. Plus, a 4-hour window will usually allow for the “winning” email to be sent on the same day the tests run. It’s a safe middle ground that will yield reliable results.
5,000-subscriber testing segments: This recommendation also comes from studying tens of thousands of A/B Testing campaigns. Historically, when the sample size was under 5,000, the results of the test wouldn’t reliably generalize when sent to the whole list. Creating these larger testing segments will help ensure that your test data isn’t misleading, allowing you to glean the most relevant information from your audience.
Let’s say, for example, that a user with 2,000 subscribers creates an A/B test distributed in a 10/10/80 ratio, with click rate as the determining factor. In this scenario, 10% of the list—200 subscribers—would receive version A, another 10% would receive version B, and the remaining 80% would receive the best-performing campaign. For the average MailChimp customer, each variation will have a click rate of around 3%, meaning that the winning combination is being determined by the actions of approximately 6-12 subscribers out of the entire 2,000. While it’s certainly possible that the results will scale in a similar manner as the sample size grows, testing on a larger sample will help you be more confident in the final decision.
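The arithmetic above is easy to verify with a quick back-of-the-envelope sketch in plain Python (the variable names and the 3% figure come from the example, not from MailChimp itself):

```python
# Back-of-the-envelope math for the 10/10/80 example above.
list_size = 2000
test_share = 0.10    # each test segment receives 10% of the list
click_rate = 0.03    # the ~3% average click rate cited above

segment_size = int(list_size * test_share)               # 200 subscribers per variation
clicks_per_variation = round(segment_size * click_rate)  # ~6 clicks per variation

# With two variations in play, the "winner" rests on roughly 6-12 total clicks.
print(segment_size, clicks_per_variation)
```

Six clicks is a very small signal to build a decision on, which is exactly why larger testing segments produce more trustworthy results.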
8 variations per test: As with 5,000-subscriber segments, this ceiling is in place to help ensure that your tests are as effective—and reliable—as possible. When we analyzed A/B testing click rates, we were able to determine how many subscribers should be in each testing segment in order for the results to be useful. Creating more than 8 variations would result in the subscriber list being stretched thin—too thin, in most cases, to provide reliable data.
The reports generated by Multivariate Testing Campaigns are also rooted in science and designed with your success in mind, offering several levels of insight into the behavior of your audience and the effectiveness of your email. Each report details the performance of all test combinations that have been created, allowing you to quickly determine which combination of variables has yielded the best results.
If you’d like a more thorough look into the performance of each combination, simply click to explore the full campaign results, just as you would with any MailChimp campaign. The performance of each variation is tracked as well, and accompanied by confidence intervals that will help you determine the reliability of your test results.
Think of each confidence interval like a margin of error in an election—if the vote totals are close and the margins of error overlap, one candidate may appear to be the winner, but statistically, it’s a tie. Confidence intervals provide the same type of context to the results of your Multivariate Testing combinations, improving the reliability of your data.
Multivariate Testing Strategies
In the following sections, we’ll discuss the process of creating a Multivariate Testing Campaign and understanding all of the resulting report data. But before getting started, there are a few things to keep in mind:
Be intentional. Multivariate Testing can provide you with an incredible amount of information regarding the habits and preferences of your subscribers, but you’ll need to make sure you’re testing the right variables. Instead of testing random variations of your email, test thought-out ideas that will teach you about your audience.
Be creative and take chances. Test different photos, merchandise, messages, calls to action, color schemes, or even different templates altogether. The results might surprise you.
Be thoughtful about how the best performer is determined. When testing a campaign’s subject line, from name, or send time, open rate will generally be the best indicator of success. Click rate is a more effective statistic when you’re performing tests based on the content within a campaign.
Creating a Multivariate Test
To start building a Multivariate Testing Campaign in your account, simply create a new campaign and select Multivariate Campaign.
After selecting which list or segment will receive the campaign, choose which variables will be tested, how the test will be divided between your subscribers, and how the winner will be determined. Each multivariate test allows for up to 8 variations of a single campaign—comprising a combination of different subject lines, from names, content, and send times.
After determining which variables you’d like to test, you’ll use the How should we split your recipients? slider to decide how the tests will be spread across your list of subscribers. To send the test to your entire list, slide the bar all the way to 100%. The variations of your test will be divided equally and sent to random segments of subscribers. Afterwards, you will be able to compare the results of all tests to determine how each performed.
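MailChimp performs this split for you behind the scenes, but conceptually an even, random division of subscribers looks something like this minimal sketch (the function name and seeding are ours, purely for illustration):

```python
import random

def split_recipients(subscribers, n_variations, seed=None):
    """Shuffle the subscriber list, then deal it into n roughly equal random segments."""
    rng = random.Random(seed)
    shuffled = list(subscribers)
    rng.shuffle(shuffled)
    # Dealing round-robin keeps every segment within one subscriber of the others.
    return [shuffled[i::n_variations] for i in range(n_variations)]

# Hypothetical list of 10,000 subscribers split across 4 variations.
segments = split_recipients(range(10_000), 4, seed=42)
print([len(s) for s in segments])  # four segments of 2,500 subscribers each
```

Because every subscriber is assigned at random, each segment is a comparable sample of the list as a whole—the property that makes the test results meaningful.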
If you’d prefer to test on a portion of your list, simply select a smaller percentage on the slider. You’ll then be asked to specify how long the test will run and how you’d like the winning combination of the campaign be determined—by open rate, click rate, total revenue, or manual selection.
Once the winning combination is determined by open rate, click rate, or total revenue, the best performer will be sent to the remainder of your list. If you’ve opted for manual selection, you can choose which version of the campaign is sent to the remainder of your list once all of the test sends have been completed. Manual selection can be beneficial if you’d prefer to pick a winning combination based on a factor that’s not included in the built-in options, or if you’d just like an opportunity to evaluate the statistics generated by each of your tests before determining a winner.
In the Setup step, name the multivariate test and determine the subject lines, from names, and send times for each applicable variation of the test.
If you choose to compare campaign content, you’ll be prompted to create the emails during the Content step of the Campaign Builder. Each version of the campaign can be created just as you would create any other campaign in MailChimp—with a basic or predesigned drag and drop template, a custom template, an imported template, and so on. You’ll write a brief description for each campaign you create, so you can differentiate between them as you progress through the test and review the report.
Multivariate Testing gives you the power to test anything in a campaign—from modified copy and image placement to completely different layouts, designs, or templates. The differences between each variation can be as subtle or drastic as you’d like, so it pays to be creative.
Multivariate Testing Reports
Multivariate Testing generates a lot of data, and the Reports page collects and organizes all of it. There, you can see how each test performed and apply what you’ve learned in future campaigns. In each Multivariate Testing Campaign report, not only will you find data regarding the performance of each combination of the campaign that’s been sent, but also higher-level statistics that isolate and detail the performance of each variable that’s been tested.
The Combination Results section presents a thorough overview of each variation and an update on the status of each campaign, including how many emails were sent, their open and click rates, and confidence intervals that will help you determine both the success of your test and how much value to place in the results.
If you have connected your account with our e-commerce features, the sales generated by each version of the campaign will also be present in the chart. You can choose to view either the current results or the results that had been tallied at the moment the winning combination was selected, and each campaign can be clicked to reveal the full report for that particular email, just as you’d see for any regular campaign sent through MailChimp.
Earlier in this guide, we noted that confidence intervals can be equated to the margin of error values that accompany election results, but let’s explore that idea a bit further. The examples below represent 2 different test combinations where content was being tested and click rates were compared in order to determine a winner.
In the first example, the winning content variation had a 13.3% click rate (with a confidence interval of +/- .3%) and the loser had a click rate of 8.2% (+/- .3%). In this test, we can be confident that the winner had, at worst, a 13.0% click rate, and the loser had, at best, an 8.5% click rate. Since even the winner’s worst case beats the loser’s best case, the winner is, without question, the statistical winner.
In the second example, the winning content variation had a 13.3% click rate (+/- .3%), and the variation that was declared the loser had a 13.1% click rate (+/- .3%). In this test, the winner had, at worst, a 13.0% click rate, while the loser had, at best, a 13.4% click rate. Because the two confidence intervals overlap, the ranking could plausibly be reversed, so we cannot definitively say that one variation outperformed the other.
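The overlap check described in these two examples takes only a few lines to express. Here’s a minimal sketch (the function is ours, not part of any MailChimp API), using the numbers from above:

```python
def intervals_overlap(rate_a, margin_a, rate_b, margin_b):
    """Return True when two confidence intervals share any ground (a statistical tie)."""
    low_a, high_a = rate_a - margin_a, rate_a + margin_a
    low_b, high_b = rate_b - margin_b, rate_b + margin_b
    return low_a <= high_b and low_b <= high_a

# First example: 13.3% +/- 0.3% vs. 8.2% +/- 0.3% -- no overlap, a clear winner.
print(intervals_overlap(13.3, 0.3, 8.2, 0.3))   # False

# Second example: 13.3% +/- 0.3% vs. 13.1% +/- 0.3% -- overlap, a statistical tie.
print(intervals_overlap(13.3, 0.3, 13.1, 0.3))  # True
```

When the function returns True, treat the result as a tie rather than a win, no matter which variation’s point estimate is higher.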
If, like the user in the second example, the results of your test are too close to determine a clear-cut winner, consider increasing the duration of the test, expanding your sample size, or testing more drastic, deliberate variations of the campaign’s content in the future. A slight adjustment to your testing strategy could help yield more informative, effective data.
Results Per Variable
The Results Per Variable section details how the different variations have performed against each other. Quickly analyze which subject line, from name, send time, or content resulted in the best average open or click rates. If you’ve integrated with our e-commerce features, you’ll find total revenue data here as well.
The Link Comparison tab in your Multivariate Testing Campaign reports will behave much like its counterpart in our standard A/B Testing Campaign reports. The performance of links across each content variation will be tracked here, and you can click any link in the Content click stats section to zoom to the location where that particular link appears in the campaign. In the Details section, you’ll see how each link performed in the content variations used in each combination.
Resources and Support
Thanks for taking the time to learn more about MailChimp Pro’s Multivariate Testing feature. As you explore, we hope that you’ll find new, exciting ways to interact with and utilize your MailChimp data. If you have any additional questions, visit our Knowledge Base or contact our support team.