How to Accurately Test the Success of Your Email Marketing

Daniel Groves, Business Growth Consultant

Thursday, July 14, 2022

Email marketing is a broad and sometimes complex subject, giving you a lot to unpack when you’re trying to optimize your campaign efforts. Getting your copy right, finding the perfect balance between memorable graphics and informative text, and avoiding common mistakes are all pressing challenges that email marketers have to think about to be successful.

Though crafting engaging emails is certainly important, it can all be for nothing if you’re not able to test their objective effectiveness, and how their results look against your targets.

If you’re pumping a lot of resources into your email marketing, but feel a little in-the-dark about what your campaigns are actually doing, then you’ve come to the right place!

In this guide, we’ll take a closer look at the value of email marketing testing, and how you can accurately test the success of your campaigns.

The importance of A/B email testing

A/B testing is by far the most effective and accurate way to measure the success of a given email marketing campaign. It’s also the fastest way to gain insights about your audience’s behavior, which you can then use to inform future campaigns. In fact, with the blindingly fast pace of modern marketing, many people look at A/B testing as the only way to optimize a campaign.

Despite the fact that split testing emails has proven to be the most effective way to measure the success of an email marketing campaign, it’s still not universally adopted, and companies of all sizes are neglecting this potential gold mine of insights and optimization.

Litmus studies from last year show that a massive 39% of companies don’t test their marketing emails at all, so there’s a good chance that adopting this practice will give you an edge over your competitors.

How to set up an email A/B test for accurate, actionable results

Though starting to A/B test your email campaigns is a great first step towards success, not all A/B tests are created equal. If you don’t plan your A/B test with due consideration to important factors such as the variable you’re going to be testing and the timeframe you’re going to be testing it in, then the results could wind up looking muddled and confusing.

Here are the essential steps for setting up an A/B test that delivers the most accurate and valuable results possible:

Step 1: Choose which variable to test

In order to isolate what works and what doesn’t, each A/B test should only focus on one variable at a time.

The variable you choose to test will dictate the kind of metrics you’ll be looking at to measure the campaign’s success, which will often depend on trends that suggest where your campaigns could be stronger.

For example, if you decide to test email subject lines, the effects will be seen in your open rate. If you decide to test the copy in the email body, then you’ll want to look at the click-through rate, and to some extent the conversion rate for users who make it to your landing page.

If you’re only just starting to optimize your campaigns, you may be wondering what to test first. That all depends on what your experience with email marketing has been so far. If you’ve noticed that your campaigns are bringing less traffic than expected to your landing pages, then start off by testing your subject lines. If you’re getting traffic, but no conversions, then test the body content.

Identifying your email marketing’s obvious weak spots and testing these elements can help you discover some quick wins. However, to make your email campaigns as effective as they can possibly be, you should aim to systematically test every variable and facet of your email marketing in the long run.
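The decision rule above (weak landing-page traffic points to subject lines, traffic without conversions points to body copy) can be sketched as a small helper. The function name, inputs, and the variable-to-metric mapping are illustrative assumptions, not something the article prescribes:

```python
# Which metric reveals the effect of each testable variable
# (mapping assumed for illustration, based on the guidance above).
METRIC_FOR_VARIABLE = {
    "subject_line": "open_rate",
    "body_copy": "click_through_rate",
    "cta": "conversion_rate",
}

def first_variable_to_test(traffic_low: bool, conversions_low: bool) -> str:
    """Pick a starting variable from your campaign's obvious weak spot."""
    if traffic_low:
        return "subject_line"  # emails aren't earning opens and clicks
    if conversions_low:
        return "body_copy"     # traffic arrives but doesn't convert
    return "cta"               # otherwise, refine further down the funnel

print(first_variable_to_test(True, False))   # subject_line
print(METRIC_FOR_VARIABLE["subject_line"])   # open_rate
```

The point of the mapping is the one made in the text: the variable you test fixes the metric you should watch, so decide both together before the test starts.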

Step 2: Select the right sample size

If you’ve come to email marketing from another marketing discipline where you were involved in testing, then you may find the idea of email testing a little daunting in comparison.

For example, with PPC social media ads, you’re able to test different versions of ads with massive sample sizes, promising huge potential for actionable insights and very little risk. Emails, however, are a different story. People unsubscribe from email lists in droves every day, due to emails coming in too often, with poor quality content, or simply because recipients have lost interest in the business and its products.

For this reason, email testing is notably riskier than testing ads or other marketing materials, and requires a more careful approach.

Your ideal sample size will depend on various things such as your brand equity and the behavior of your core customer base. We recommend taking a Pareto principle approach to your sample size, and using insights drawn from 20% of your audience to effectively target the remaining 80%.

If you’re working with an email list of 1000 subscribers, select 200 of them and split them in half, sending email A to 100, and email B to another 100. From there, you should be able to send the winning email to the remaining 800 subscribers, and see better results without repeating yourself, or running the risk of putting people off your brand messaging.
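The split described above can be expressed as a short script. This is a minimal sketch of the 20/80 approach; the function name, the fixed seed, and the random shuffle are assumptions for illustration, not part of the article:

```python
import random

def split_for_ab_test(subscribers, test_fraction=0.2, seed=42):
    """Split a mailing list into two test groups and a holdout,
    following the Pareto-style approach: test on ~20% of the list,
    then send the winning email to the remaining ~80%."""
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)  # randomize to avoid ordering bias
    test_size = int(len(pool) * test_fraction)
    test_pool, holdout = pool[:test_size], pool[test_size:]
    half = len(test_pool) // 2
    return test_pool[:half], test_pool[half:], holdout

# With a 1,000-subscriber list: 100 receive email A, 100 receive
# email B, and the remaining 800 later get the winning version.
a, b, rest = split_for_ab_test(range(1000))
print(len(a), len(b), len(rest))  # 100 100 800
```

Shuffling before splitting matters: if you take the first 200 addresses of a list sorted by signup date, the test groups won’t represent the audience you’ll send the winner to.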

Step 3: Determine the right time frame

Time is the arch-nemesis of any modern marketer. While you’re trying to drill down on every available data source and understand every nuance of your target market’s behavior, the organization you’re a part of will have product launches, limited-time promotions, and various other events in the calendar that will be pressuring you to pick up the pace.

Setting a testing time frame is affected by the exact same pressures. The longer you’re able to let an A/B test run, the more accurate and useful the insights gleaned from it will be. To find the sweet spot between a test that’s long enough to be useful and one that’s short enough not to disrupt the marketing calendar, it’s important to approach your test with an understanding of how long it takes to see a result from each variable.

Generally speaking, the further down your funnel a variable sits, the longer you’ll have to wait to test it accurately. Leading email marketing tool Mailchimp ran a study on half a million A/B tests which showed some interesting patterns in this area.

When testing subject lines, and looking at open rates, it can take as little as 2 hours to see results that will be 80%+ accurate. When testing elements of the email content and tracking clicks, the results will come in even faster, with observations being almost 90% accurate at the 2-hour mark. Testing either of these variables and tracking the effect on revenue takes a lot longer, and you’ll have to wait at least 12 hours to see activity that will give you 80% accuracy or more.
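Once the results are in, you still need to judge whether the gap between the two variants is real or just noise. One common check, which goes beyond what the Mailchimp study above covers and is included here purely as an assumed illustration, is a two-proportion z-test on the open rates:

```python
import math

def open_rate_z_test(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test on open rates (illustrative sketch).
    |z| above roughly 1.96 suggests a real difference at the 5% level."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    # Pooled open rate under the null hypothesis of no difference
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Example: variant A opened by 30 of 100 recipients, variant B by 18 of 100.
z = open_rate_z_test(30, 100, 18, 100)
print(round(z, 2))  # 1.99, just past the 1.96 threshold
```

With the 100-per-group sample from Step 2, only fairly large differences clear the threshold, which is another reason to resist cutting tests short.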

A/B testing ideas for your emails

Now that you know how to ensure accuracy in your email testing, here are some A/B testing ideas to help you start reaping valuable insights.

Days of the week/time of day

Take a moment to think about your ideal customer’s habits when interacting with their emails. There’ll likely be times of the day and days of the week when they’re more active in terms of reading and replying. There’ll also be rush hour periods when they tend to receive more emails, pushing earlier emails to the bottom of the inbox.

You can find the sweet spot for your target market by analyzing past successes from brands in the same sector, and using any trends you find to establish benchmarks for the scheduling of your own campaigns.

There’s a wealth of great case studies on email timing that can help you get off to a strong start. For example, leading sales engagement platform provider Sopro has a study on The State of Prospecting showing the effectiveness of B2B emails sent at different times of day, spanning the years 2017 to 2021.

Personalized subject lines

Though this isn’t the case in some B2B niches, personalized subject lines can often be a huge boon to your open rates and overall engagement.

Though certain audience segments may find it a little too personal, for many B2C brands, personalized subject lines represent a quick win that you don’t want to ignore. Try testing them against a generic subject line, and see what happens.

CTA button colors and shapes

This may sound like a minor detail, but the aesthetics of CTA buttons can have a surprisingly significant effect on clicks and conversions.

Though you should obviously work within your brand identity when tweaking your email content, split testing subtle changes in the shade and shape of your buttons across different audience segments could reveal some exceptionally useful insights.

Image vs text balance

As high-quality imagery has become more accessible and commonplace over the years, email marketers have leaned towards a more image-heavy approach to their email content. However, these kinds of emails don’t garner more engagement as a rule, and in some cases, won’t load effectively for everyone on your mailing list.

The often-quoted ideal ratio for any online content is 70% text to 30% images, but like anything, the correct approach for you will be determined by the demographics and habits of your unique target market and brand identity. Come up with a few different templates with different text-to-image ratios, and split test them to determine the structure that’s right for you.

Final thoughts

We hope you’ve found this guide to accurate split testing useful as you navigate the tumultuous world of email marketing. Though the many variables and nuances of effective A/B testing can be daunting, if you go in with a clear view of your KPIs and know how to measure what matters, the insights you gain will pay dividends!

Daniel Groves

Daniel Groves achieved a 1st class honours degree in Business Economics. Since graduating, Daniel has collaborated with a number of online publications with the aim of further developing his knowledge and sharing his experience with like-minded entrepreneurs, business owners and growth strategists.
