For this month’s Connect theme, a number of speakers are previewing the great breakout sessions they are preparing for the 2015 Nonprofit Technology Conference in Austin, TX March 4-6. Following is a preview of one of over 100 breakout sessions.
Running A/B tests on your email list is crucial if you want to get to know your supporters better and figure out what makes them engage. It can be a heavy lift to get your testing program off the ground, but it’s well worth the effort! To help make the process smoother, here are a few pitfalls to look out for when you’re getting started.
1. Testing something that your organization will never adopt
Picture this: You run a test and get significant results, which you excitedly report back to your team. But when you propose changes to your program based on the test, they’re met with resistance. Maybe the winning version doesn’t fit your organization’s branding, or maybe whoever’s making the call just doesn’t like it. Either way, it’s frustrating to realize that the work you put in won’t be put to good use.
The way around this is to make sure that your test copy is approved by all relevant parties before you start. Until you get buy-in that what you’re testing is a good idea, hold off on running your test!
2. Calling a test too early
Have you ever sent a subject line test early in the morning and chosen a winner, only to go back hours later and find that the one you chose is no longer in the lead?
Test results can be deceiving early on; that’s why it’s important to hold off on making a call until your results reach statistical significance. When waiting isn’t possible, here’s another trick: segment your early send to people on the East Coast only, where it’s already later in the day. That way, more people will have opened your email by the time you’re ready to send the winner to the full list later that morning or afternoon.
3. Not enough results!
You set up an A/B test on a fundraising email, and you’re excited to see which version brings in more money. But after sending the two variants to 10% of your audience each, the results are, well… lacking. One version gets 7 gifts while the other gets 6.
Obviously, you don’t yet have a winner on this test; the audience was far too small! Here’s how to avoid this in the future: Before you start your test, figure out whether you’ll get enough conversions to generate significant results. You can do that by looking at your average donation rate and then calculating how many conversions you’ll need for significance, based on your audience size. (Here’s an A/B testing calculator you can use.)
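If you’d rather script this check than rely on an online calculator, the back-of-envelope math is short. Here’s a rough sketch (not from the article) using the standard two-proportion sample-size formula; the baseline donation rate and the lift you hope to detect are hypothetical placeholders you’d swap for your own numbers:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Rough per-variant audience needed for a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    test_rate = base_rate * (1 + relative_lift)
    variance = base_rate * (1 - base_rate) + test_rate * (1 - test_rate)
    return ceil((z_alpha + z_power) ** 2 * variance / (base_rate - test_rate) ** 2)

# Hypothetical example: a 0.5% donation rate, hoping to detect a 20% relative lift
print(sample_size_per_variant(0.005, 0.20))  # tens of thousands of recipients per variant
```

If the number that comes back is bigger than 10% or 20% of your list, that’s your cue that a small test send won’t cut it.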
If you won’t get significance from an initial send to 10% or 20% of your list, try a straight 50/50 split instead: send one version to each half of your full list at once. Sure, you won’t be able to optimize this particular email, but you’ll get results you can actually learn from next time. If need be, you can even run the same test several times and add up the results until you reach significance.
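To check whether your results (pooled or not) have reached significance, you can run a standard pooled two-proportion z-test. A hedged sketch, not from the article; the gift counts are the hypothetical 7-vs-6 example from above, assuming each 10% send reached 5,000 people:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# A single small send: 7 gifts vs. 6 gifts, nowhere near p < 0.05
print(two_proportion_p_value(7, 5000, 6, 5000))

# The same counts summed across ten repeated sends: the p-value shrinks
print(two_proportion_p_value(70, 50000, 60, 50000))
```

Only call a winner once the pooled p-value drops below your significance threshold (commonly 0.05).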
4. Taking your results too seriously
Yep, I said it! It’s best not to take your test results too seriously. When you start a testing program, you’ll probably test into some things that aren’t best practice. For example, maybe you’ll get higher open rates when you change your sender line to a first name only. But does that mean you should always use just a first name in your sender line? Probably not. What wins a one-off test isn’t always best for your organization’s credibility over the long term. Besides, if you use the same offbeat tactic every time, you’ll lose the novelty effect, and it will eventually stop working. So implement results like this sparingly, rather than turning them into a new best practice for your team.
5. Forgetting about your results
Your testing program is only as good as your implementation—but applying your test results isn’t always easy. So how can you make sure that you and the rest of your team remember to act on your learnings? Here are a couple of quick tips:
First, always write into your test plan what you’ll do if the test wins, what you’ll do if it loses, and what you’ll do if there’s no clear winner at all. This will help you avoid the awkward “er, what now?” scenario when your favorite variant wins, but not significantly. Then, every month or so, review your results and flag any that have yet to be implemented. If a test came back significant but you haven’t made any changes yet, bring it up with your team at your next meeting. This will help everyone get in the habit of acting on the great things you’ve learned through your awesome new testing program.