A Bad Test Is Worse Than No Test at All

Submitted on Wed, 2/13/2013 - 9:12am

It's a simple fact that, due to budget and resource limitations, we have to draw a line somewhere between "ideal" email marketing practices and the batch-and-blast cure-all method. We know that we should strive for the ideal, but quite frankly we have other responsibilities, too.

This, sadly, is when we start leaning on "good enough" or "the best we can do." We look to get something out the door and move on. If we're really ambitious, we'll even run a partial test, swapping out an image or a line of copy, breathe a sigh of relief that we tested something, and move on to the next crisis.

This is the worst thing we could do. A bad test accomplishes nothing. At least if we don't test at all we know that we don't know anything, but a pseudo-fact can hurt us badly down the line.

How, then, do we even start with message testing, let alone full program optimization? Where do we begin?

The key to an effective test is strong planning. Swapping out an image or copy isn't a bad test in itself, but it has to be done strategically, with specific analysis and follow-up. Approach testing as you would any other project, identifying the pros and cons upfront.

To help illustrate, let's walk through setting up a simple test using the Test Planning Worksheet to organize our thoughts.

For this example, we're going to play the role of a development officer at a charity working with disadvantaged children. Ultimately, we want to know whether need-based or optimistic, inspiring messaging works better in our fundraising emails – the workhorse of our online fundraising activities. About 20,000 people have joined our email list to help our kids, and we've got a matching challenge offer in hand to give our donors extra giving power. Thankfully, we've also found two need-based and two inspirational images in our photo library.

The long-term impact and value of our test is pretty clear. We're looking to better motivate our donors and raise more revenue through our emails, and if we're successful we'll definitely want to incorporate the winning approach into other messages for testing (remember that we have to test repeatedly to be sure we're seeing what we think we're seeing).

The groundwork of our test is set; now we need to plot out how we want to run it.

This is where we really need to take stock of our resources. Do we have the ability to create two (or four) completely different email messages for this one campaign? Can our copywriter(s) or vendor create text for more than one email in the time we have? Will our coding resource be able to handle building and sending four complete messages in addition to everything else going on?

Most likely, the answer to at least one of those will be no, so we scale back our plans a bit. This is an untested message strategy (at least for us), so we don't even know if there will be a difference in impact. Until we have a better idea of the value of this approach, we don't want to commit too many of our already taxed resources, so we start simple.

Taking the initiative (because someone has to), we call our team together to evaluate the options available to us. Our email tool lets us do a quick A/B split on send (virtually every email tool today will do this) to randomize audience selection, but we need to determine what our organization can support in terms of content creation. Writing four different messages is out of the question if we want to stick to our planned deployment schedule, but our coder has a potential solution. Our email template has room for a photo in it. If we write one message, the coder can create two copies of the email, each using one of the photos we found.
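To make that split concrete, here is a minimal Python sketch of the kind of randomized A/B split on send that most email tools handle for you. Everything in it – the subscriber addresses, the ab_split helper, the fixed seed – is a hypothetical illustration, not the API of any particular tool.

    import random

    def ab_split(subscribers, seed=2013):
        """Shuffle a copy of the list and cut it into two equal random halves."""
        rng = random.Random(seed)   # fixed seed so the split is reproducible
        shuffled = subscribers[:]   # copy so the master list stays in order
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]

    # ~20,000 addresses on our list; group A gets a need-based photo,
    # group B gets an optimistic photo.
    subscribers = [f"donor{i}@example.org" for i in range(20000)]
    group_a, group_b = ab_split(subscribers)
    print(len(group_a), len(group_b))  # 10000 10000

Because assignment is random, any difference we later see between the two groups can be attributed to the photos rather than to who happened to receive which email.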

Now that we know what our options are, we can plan out what we want to accomplish with our test.

We're going to scale our overarching question back a bit to make it implementable with what we've got. Specifically, we're testing whether need-based images inspire giving from our audience better than optimistic imagery by creating two emails and running a random split test on send. One email will use one of our need-based photos, the other one of the optimistic photos. We'll chiefly gauge the impact of the test by total dollars raised from each message, but we'll also keep an eye on click-through rates, unsubscribe rates, and any general comments we get back from the audience.
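When the results come back, the tallying can be as simple as the sketch below: total dollars raised as the primary metric, with click-through and unsubscribe rates alongside. The dictionaries, field names, and every number in them are placeholders for illustration, not real campaign data.

    def summarize(variant):
        """Roll one variant's raw counts up into the metrics we care about."""
        sent = variant["sent"]
        return {
            "total_raised": sum(variant["gifts"]),               # primary metric
            "click_through_rate": variant["clicks"] / sent,      # secondary
            "unsubscribe_rate": variant["unsubscribes"] / sent,  # list damage check
        }

    # Placeholder counts for the two hypothetical sends.
    need_based = {"sent": 10000, "clicks": 420, "unsubscribes": 12,
                  "gifts": [50, 25, 100, 35]}
    optimistic = {"sent": 10000, "clicks": 390, "unsubscribes": 9,
                  "gifts": [25, 25, 75]}

    for name, variant in (("need-based", need_based), ("optimistic", optimistic)):
        print(name, summarize(variant))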

Remember that, though we want total revenue coming in, the images themselves are really only inspiring our readers to click on the email. Our email is an invitation to come to our donation form. If they choose not to donate, it might be a result of our donation form process, a misunderstanding of our offer, or any number of other factors. We should definitely file those away for separate testing later, but tackling them right now is likely going to be a little too much for us.

This test won't be conclusive, but it should give us baseline metrics to inform future testing. In each subsequent test, we'll try a slightly different spin – using different demographics in the photos, taking out the match, maybe even experimenting with demographic targeting – all bringing us ever closer to answering our true question: which message truly performs better?
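Part of why a single test isn't conclusive is simple noise. One common way to gauge whether a gap in click-through is real is a two-proportion z-test, sketched below; the Test Planning Worksheet doesn't prescribe a statistical method, so treat this as one reasonable option, and the counts as placeholders.

    import math

    def two_proportion_z(clicks_a, sent_a, clicks_b, sent_b):
        """z-score for the difference between two click-through rates."""
        p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
        pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        return (p_a - p_b) / se

    z = two_proportion_z(420, 10000, 390, 10000)
    print(f"z = {z:.2f}")  # about 1.08 here; |z| > 1.96 would suggest a real difference

With these placeholder numbers the gap sits well inside the noise, which is exactly why we keep retesting before declaring a winner.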

With each test, we grow our understanding of our audience just a little bit more. As we learn and find what does and doesn't work, we incrementally improve on our program. The tests themselves become almost another campaign, growing in complexity and value as we learn and grow with them.

This is true optimization, and this is where so many of us drop the ball. Just as important as planning out the test is planning out the follow-ups. Do we need to retest? Do we need additional resources to expand the testing? How do we translate what we're seeing into real value for our organization?

The test itself is only the start, a tactic used to move us closer to a greater end goal. If we fail to use what we find, then the test itself is just a waste of our valuable time.

Matt is an Email Marketing and Online Fundraising Strategist at Donordigital. Using his analytical and programmatic skills, he has consistently doubled or tripled expected revenue results for regional, national, and global non-profits while simultaneously generating dramatically higher constituent engagement through strategic email programs. In addition to his daily responsibilities, he has led weekly training webinars to help clients with their multi-channel integration strategies and implementation needs.

Attachment: test_planning_worksheet.pdf (84.14 KB)