Tag: testing

At nonprofits, we know how important it is to understand your audience. To listen deeply, and to understand the motivation of everyone in your ecosystem—from the constituents you serve to the volunteers who help make your work possible. Some call this human-centered design. We prefer to think of it as following the need.

The emphasis on listening has been in Empower Work’s DNA from the very beginning. Before launching the organization, founder Jaime-Alexis Fowler set out to validate a hunch: challenging work situations are universal, but access to resources is not.

We conducted quantitative surveys with 140+ responses; market research; and more than 100 interviews with career coaches, workforce trainers, HR professionals, executive coaches, diversity and inclusion specialists, labor and employment attorneys, managers, labor organizers, and most importantly, people who have faced tough work situations. We saw a tremendous need for confidential, accessible, and immediate support for work issues—and designed Empower Work based on the needs of the people we serve.

This practice of deep listening has continued as we’ve grown. What creates the best experience and drives the best outcomes is at the heart of our organizational decision-making. We strive to apply human-centered design principles to every element of our work, whether it’s choosing a preferred mode of communication (for our users it’s SMS!) or launching our first public service announcement (PSA) campaign.

Launching our first PSA campaign

In late 2018, MUNI and Clear Channel donated ad space on buses and bus shelters across San Francisco. It was an opportunity to capture attention during an optimal time: a work commute. We worked with our pro-bono partner Odysseus Arms to create a human-centered ad concept that would speak to the people we serve.

Knowing that the ads would be featured on a moving bus, the visual and accompanying copy needed to be concise, direct, and effective. The team at Odysseus Arms drafted a concept that reflected our community’s needs and could be processed quickly by busy commuters.

One of our consultants introduced us to PickFu, an instant polling service. Through PickFu, we were able to survey 50 respondents between the ages of 18 and 54 and ask whether, upon seeing our ad, they would text us if they were facing an issue at work. The results came in lightning fast—it took exactly 31 minutes.

Poll results screen capture

The response to our ad was largely positive. Many remarked they would welcome a fresh perspective or even just the opportunity to vent. People who said they wouldn’t use the resource generally fell into two categories: they already had people in their lives they could confide in (which is great!), or they weren’t clear on what type of counseling services we offer, and whether it was truly confidential and free (which it is!).

Since the imagery seemed to resonate, we kept it as-is. However, after reading through the reactions, we adjusted the copy. Rather than focus on usage (“Free, confidential peer counseling.”), we focused on hurdles (“Navigate tough work issues with free, confidential peer counseling.”).

Empower Work bus ad

As we test more concepts in the future—from ads to landing pages—we’ll continue using survey tools to ensure we’re listening to and resonating with the community we serve. One of the biggest advantages is that we can target poll respondents based on traits such as gender, age, income level, and even their primary mode of transportation—so the next time we test a bus ad, we can test it with a group wholly made up of bus riders.

We’ve received an overwhelming positive response from the bus and bus shelter ads. Many people told us that they had no idea a resource like this existed, and they’re so grateful to have found support. “I have spent years looking for actionable advice for my work woes,” one texter expressed after seeing our ad on a bus. “Have a wonderful evening and know that you have made my journey a little bit easier.”

Deep listening and empathy can go a long way.

Tips for creating human-centered ads and content

It’s not enough just to test.

Testing isn’t a goal in itself. Rather, it’s a three-step process:

  1. First, clearly define what you’re looking to learn. Once you’ve laid out the question you’re seeking to answer, determine the optimal means to test for it. For example, what’s the best survey question to pose? How much information should we provide to those surveyed? Who is our target audience for this test? What software or services do we need to employ?
  2. Only after you’ve completed the first step should you proceed to the testing itself. The more you integrate testing into your day-to-day operations, the better you’ll get at it, and the easier it will become.
  3. Once the test is complete, critically examine the results. This is the most important step in the process. It’s not enough to gather data just to have it; what good are analytics if they sit in some esoteric report that nobody looks at? Instead, discuss what learnings can be derived from the test. Put an action plan into place for how to implement changes, whether they be in your operation itself, your ads, or your website.

Not all feedback is created equal

When you hear negative (and even positive) feedback, ask yourself whether it’s coming from your core constituency. If not, you don’t have to weigh it as heavily as feedback that comes from, say, a client or previous donor. It also means that friends and family are usually not your best sounding board for testing. They may be too close to you or your organization to give an opinion that is aligned with your community. Don’t solicit feedback in an echo chamber.

Testing should be a help, not a hassle

Don’t think of testing as a speed bump that gets in your way. Feedback should clarify the process, not hinder it. If you find a testing method too difficult or burdensome to employ, look for an easier solution. With so many software platforms to choose from, it’s simpler than ever to incorporate testing and idea validation into your nonprofit. Remember, testing isn’t just another thing to do. It’s a compass that points you in the direction of what to do.

For this month’s Connect theme, a number of speakers are previewing the great breakout sessions they are preparing for the 2015 Nonprofit Technology Conference in Austin, TX March 4-6. Following is a preview of one of over 100 breakout sessions.

Running A/B tests on your email list is crucial if you want to get to know your supporters better and figure out what makes them engage. It can be a heavy lift to get your testing program off the ground, but it’s well worth the effort! To help make the process smoother, here are a few pitfalls to look out for when you’re getting started.

1. Testing something that your organization will never adopt

Picture this: You run a test and get significant results, which you excitedly report back to your team. But when you propose changes to your programs based on this test, they’re met with resistance. Maybe it’s because the test version doesn’t fit with your organization’s branding, or maybe whoever’s making the call just doesn’t like it. Either way, it’s frustrating to realize that the work you put in will not go to good use.

The way around this is to make sure that your test copy is approved by all relevant parties before you start. Until you get buy-in that what you’re testing is a good idea, hold off on running your test!

2. Calling a test too early

Have you ever sent a subject line test early in the morning and chosen a winner, only to go back hours later and find that the one you chose is no longer in the lead?

Test results can be deceiving early on; that’s why it’s important to hold off on making a call until you have significance. But when that’s not possible, here’s another trick: try segmenting your early send to people on the East Coast only—that way more people will see your email by the time you’re ready to send to the full list later that morning or afternoon.

3. Not enough results!

You set up an A/B test on a fundraising email, and you’re excited to see which version brings in the most money. But after sending the two variants to 10% of your audience each, the results are, well… lacking. One version gets 7 gifts while the other gets 6.

Obviously, you don’t yet have a winner on this test—it had far too small of an audience! Here’s how to avoid this in the future: Before you start your test, figure out if you’ll get enough conversions to generate significant results. You can do that by looking at what your average donation rate is, and then calculating how many conversions you’ll need for significance, based on your audience size. (Here’s an A/B testing calculator you can use.)
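To make that calculation concrete, here is a minimal sketch (not the calculator linked above) of estimating how many recipients each variant needs before a donation-rate test can reach significance; the baseline rate and the lift you hope to detect are assumptions you would supply yourself:

```python
# Rough per-variant sample-size estimate for comparing two donation rates,
# using the standard normal-approximation formula (95% confidence, 80% power).
from math import sqrt, ceil
from statistics import NormalDist

def required_per_variant(baseline_rate, expected_rate, alpha=0.05, power=0.8):
    """How many recipients each variant needs to detect baseline_rate vs. expected_rate."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for a two-sided 95% test
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (baseline_rate + expected_rate) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(baseline_rate * (1 - baseline_rate)
                                  + expected_rate * (1 - expected_rate)))
    return ceil(numerator ** 2 / (baseline_rate - expected_rate) ** 2)

# Example: hoping to lift a 0.5% donation rate to 0.75%
print(required_per_variant(0.005, 0.0075))  # roughly 15,600 recipients per variant
```

The exact formula matters less than doing some version of this arithmetic before the send.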

If you won’t get significance by doing an initial send to 10% or 20% of your list, then try splitting your test in half and sending everything at once. Sure, you won’t be able to optimize on this particular email, but you’ll get better results that you can learn from in the future. If need be, you can even run this test several different times and add up the results until you find significance.

4. Taking your results too seriously

Yep, I said it! It’s best not to take your test results too seriously. When you start a testing program, you’re probably going to test into some things that are not best practice. For example, maybe you’ll get higher open rates when you change your sender line to first name only. But does that mean you should always sign emails with just a first name? Probably not. What works on a one-off test sometimes isn’t the best for your organization’s credibility if applied over the long term. Besides, if you use this same offbeat practice every time, you’ll lose the novelty effect, and it will eventually stop working. So make sure to implement results like this sparingly, rather than creating a new best practice for your team.

5. Forgetting about your results

Your testing program is only as good as your implementation—but applying your test results isn’t always easy. So how can you make sure that you and the rest of your team remember to act on your learnings? Here are a couple of quick tips:

First, you should always write into your plan what you’ll do if the test wins, what you’ll do if it loses, and what you’ll do if there’s no clear winner at all. This will help you avoid the awkward “er, what now?” scenario when your favorite variant wins but not significantly. Then, every month or so, take a look at your results and figure out which results have yet to be implemented. If you got significant results on a test but haven’t implemented any changes yet, bring it up with your team again at your next meeting. This will help everyone to get in the habit of implementing the great things you’ve learned through your awesome new testing program.

Greenpeace’s Mobilisation Lab helps the organization transition into an era of people-powered campaigns. The right set of tools and an active social profile are helping Greenpeace better support its community with campaigns that are community-driven.

This case study was originally published along with a dozen others in our free e-book, Collected Voices: Data-Informed Nonprofits. You can download the e-book here.

NTEN: Tell us about how the MobLab fits into Greenpeace overall.

Michael Silberman (MS) and Wendee Parker (WP): We exist to help the global Greenpeace organization transition to a new era of people-powered campaigning, shifting from Greenpeace-centric to supporter-centric campaigns. We’re working with staff in nearly 50 countries to design campaigns that enable the full power and potential of over 25 million supporters and activists to help us build stronger campaigns that win bigger. Our team has an independent budget to focus 100% on building capacity, challenging norms, sharing knowledge, and introducing new practices and tactics.

NTEN: Who are the Arctic 30, and how and why did MobLab get involved?

MS / WP: In September 2013, Russian security agents illegally boarded the Arctic Sunrise in international waters, seizing the ship and detaining all those on board at gunpoint. The ship was towed to Murmansk, and all those on board were locked up in cold, filthy cells, some of them in solitary confinement. They were charged with piracy and then hooliganism, crimes that carried lengthy prison sentences, because they dared to peacefully take action against destructive Arctic oil drilling and the onslaught of climate change, protesting at state-owned Gazprom’s Arctic drill platform in the Barents Sea. After 71 days in detention, the last of the Arctic 30 were granted bail, but severe piracy charges are still pending.

Some tools the MobLab provided to supporters of the Arctic 30

We got involved because there was a critical need to ensure that we were doing everything possible as an organization to help free these activists and leverage the global media spotlight to grow the campaign to save the Arctic. We added capacity to test new messages and tactics, and enable a global strategy brainstorm across offices and teams. Understanding how to effectively spread the messages by mobilizing new and existing supporters who connect with this cause through digital channels: that’s what it’s all about.

NTEN: This has been a highly charged international incident. How have you baked principles of measurement and transparency into the campaign?

MS: We had to determine what could and should be measured. This campaign has been an opportunity to think about some of our limitations to measurement and tracking, and to have everyone really consider what’s working and what’s not.

WP: An informal group from several offices assembled for a week to take a look at our tools and platforms. It illuminated something many of us already knew: that consistency within digital engagement data was lacking. Trying to develop, implement, and execute a standard way to collect, track, and report on those digital efforts is an enormous challenge. The meetings gave us a good sense of our “universe”: both the great effort our colleagues were already making in these areas and the opportunities to improve towards a complete, holistic point of view.

NTEN: Aside from this campaign, are there other wins you can pinpoint in these areas?

MS: There are over 100 active Greenpeace social accounts online. We’re now seeing organizers include data analysis in their campaign planning. We at MobLab are still pushing, but it wouldn’t get completely lost if we weren’t. I’m also heartened by the fact that there’s a lot of independent testing happening. People are using Optimize.ly for A/B testing, for example, and then reporting the results to everyone else.

WP: The focus and culture have definitely shifted, but the job is not done. Success would be having digital analysis (starting with defining digital analytic goals, implementing digital tracking and analytic tools for ongoing reporting, testing, and optimization, and ending with a complete campaign wrap-up analysis) fully adopted as part of the overall campaign planning process.

NTEN: You mentioned Optimize.ly. Are there other tools that stand out as particularly helpful (or that you wish were more helpful)?

MS: We have issues with our bulk email tool, which doesn’t make A/B testing as easy as it could be. On the upside, we’re making good progress with Google Analytics and Optimize.ly. On social analytics, we’re using Radian6, Topsy Pro, and Facebook Insights.

WP: Greenpeace’s situation is so complex. In every office you may find a different setup for supporter data, a different set of digital engagement tools, etc. Even within offices, data can be fragmented among departments. I’m not sure there’s a “one size fits all” solution, but as we work towards a common framework and toolset, it lessens the challenges of complete supporter data integration: a place where all departments view the same data and can have shared goals and metrics.

NTEN: Where would you like to see your campaign leaders a year from today with regard to systems and culture?

MS: We always want to see the four essentials of a people-powered campaign. The end is not putting data at the center of our campaigns; the end is more engagement-oriented organizing. We put people at the center of our campaigns, but data is an enabling tool. If we can use data to more effectively move people along and support our journey more deeply, that’s a success point.

Policy Innovators in Education (PIE) Network

  • Membership of 45 organizations in 28 states and DC.
  • Seven full-time staff.
  • Moving from a long, text-based annual report to a visual, interactive report increased views by more than 400%.

This case study was originally published along with a dozen others in our free e-book, Collected Voices: Data-Informed Nonprofits. You can download the e-book here.

NTEN: Eric, tell us about your work.

Eric Eagon (EE): With a membership of 45 organizations in 28 states and DC, we connect state-based education advocates to one another and to our national policy and advocacy partners. We do this through a variety of in-person and virtual networking opportunities. We also support advocates with targeted decision support tools and a social media presence that amplifies their work.

I’m one of seven full-time staffers, and I came on board as a Senior Associate for Policy and Communications in September 2012.

NTEN: What’s one way you recently addressed a specific challenge related to data?

EE: We conduct an annual survey of our whole network. For our first few years, we used SurveyMonkey and then put all of the data into a massive report.

But it wasn’t getting much traction. When we looked at Google Analytics for the 2012 report, we saw just 82 views of the summary page and only one download. We had tons of information that could help our members collaborate with one another and plan better supports, but it wasn’t presented in an inviting, useful way.

NTEN: What did you do to fix or improve the situation?

EE: We made two major changes. We conducted phone interviews to supplement the online survey and capture more stories. We also created an interactive map to make it much simpler for our members to find what they need.

We launched the map at our conference this fall, creating interest among our 300+ attendees. We also led webinars, refer people back to it whenever possible, and now track Google Analytics on the map to see how people are using it.

NTEN: Wow! How did you make it happen?

EE: The Deputy Director and I handled most of the policy work and ran the annual survey. Then, with our summer Fellow and Communications Director, we conducted 50 phone interviews that each lasted 30+ minutes, plus additional time for participants to edit our notes.

That summer, we happened to begin contracting with new website developers, a shop called Punk Ave based in Philadelphia. They had created one map for us, and we asked if they could use the same format for a new one.

We entered all of the survey and interview data into the new map so that members can sort by year and policy issue. Bills show up in green if they passed, red if not, orange if they’re pending. Members can view summaries of bills, who worked on them, lessons learned, related resources, and contact info.

NTEN: How did you get buy-in from the rest of your team?

EE: There was some concern, especially because the work coincided with our conference, which is an all-hands-on-deck initiative. Did we have time and capacity to do this? Could it wait for next year?

But the previous report had only been downloaded once. Punk Ave could build a shell into which we could add more details over time, rather than providing all of the data up front. And our Fellow could handle the data entry once it was built. Ultimately, we decided this was a priority. We try to make sure that all of our work is driven by member demand and needs.

NTEN: What went well? Do you have data to prove it?

EE: The map is much more engaging than a 50-page report. In less than three months since the launch, we’ve had over 350 unique views, many return visits, and good anecdotal feedback from members.

The interviews went well because we’ve been very intentional about building and maintaining trust with people in our network. This map is not available to the general public. It’s password-protected for members only. People were candid because they trust that this is for the betterment of the education reform movement more broadly.

NTEN: What didn’t go so well? What do you still need to work on?

EE: We’ve made one minor change so far, tweaking the policy categories on the survey and map. We want to keep those as consistent as possible year to year.

We also need to streamline the lengthy interview process. We may need to begin with a quick conversation, then ask people to fill out the survey and capture most of the stories there.

NTEN: Do you have data that will help inform your next moves?

EE: We wanted to better understand our members’ policy priorities for 2014, so we sent personal emails to 50+ policy directors with a request to fill out another survey. We’ve seen a response rate of over 80% so far, and are integrating these responses into the existing policy map. As legislative sessions start in 2014, we also plan to make updates and even share resources in real time so that the map becomes more of a legislative tracking tool.

This is all in the name of not reinventing the wheel and sharing resources among our membership. It also helps us to reflect and plan.

NTEN: Any advice you’d offer to someone who wanted to tackle a big project like this?

EE: Make sure there’s demand from your members. And as you design it, put yourself in their shoes. We asked ourselves:

  • What goals do our state advocates have?
  • What tools do they currently use?
  • How do they get the information they need?
  • Can we share mock-ups and beta versions of the tool?

Overall, the way we conducted the 2013 survey was much more labor intensive, but yielded something much more useful.

Year in, year out, nonprofits are using every available online tactic and tool to make real change. In fact, nonprofits are often on the cutting edge of web technology, using new tools and tactics and always looking for new ways to build support for their movements and to cultivate and convert those supporters into donors. But not every new tactic works, and it’s critical that nonprofits have short feedback loops to figure out what’s working and where to allocate resources.

A critical component in this is a kick-ass testing strategy–a simple, quick, and free mechanism for obtaining quantitative feedback on what’s working. Testing removes your biases and guesswork to deliver a set of results which will improve online engagement and convert your audience from leads to dedicated action takers. Numbers don’t lie!

Real-time testing (10-10-80), long-term learning (A/B split), and multivariate testing (a combination of elements) are three types of testing you can use to generate tangible results. When employing any of these testing frameworks, it is of the utmost importance that your test proves to be statistically significant, meaning that the results reflect a pattern rather than chance.
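As a hedged illustration of the real-time (10-10-80) approach, here is a minimal sketch that splits an email list into two 10% test panels and an 80% holdout; the addresses and panel sizes are made up for the example, and your email tool likely handles this split for you:

```python
# Randomly partition a subscriber list: 10% gets variant A, 10% gets variant B,
# and the remaining 80% waits for whichever variant wins.
import random

def split_10_10_80(subscribers, seed=42):
    shuffled = subscribers[:]                  # copy so the original order is untouched
    random.Random(seed).shuffle(shuffled)      # fixed seed keeps the split reproducible
    cut = len(shuffled) // 10
    return {
        "variant_a": shuffled[:cut],
        "variant_b": shuffled[cut:2 * cut],
        "remainder": shuffled[2 * cut:],
    }

panels = split_10_10_80([f"supporter{i}@example.org" for i in range(20000)])
print({name: len(group) for name, group in panels.items()})
# {'variant_a': 2000, 'variant_b': 2000, 'remainder': 16000}
```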


Comic by xkcd.

What Should You Test?

Email Open and Click-through Rates

Email messaging is one of the most important avenues of connecting to supporters–you’ll want to make sure you get this right. A number of factors can affect your email open rates and subsequently, your email click through rates. Testing these factors will make your emails more effective.

Email Open Rates: You have just a small bit of real estate in your audience’s inbox, and your goal is to maximize the probability that any one recipient will open that email and ultimately become an action taker or donor. Factors that affect open rates include:

  1. Subject Line
  2. Timing of Delivery (day of the week, time of day)
  3. Sender Name/Email Address (personal name vs. donotreply@yourdomain.org)
  4. Preview Text (not all email clients display preview text but this can be a variable)

EXAMPLE: Here’s a test that Change.org conducted to maximize the open rate on an email being sent to users who might be interested in a petition started by another user.

It was decided that for this email, variations on the email subject line would be tested.


The test email was sent to sample audiences of 7,000 to evaluate open rates for the following subject lines:

  1. Freddie Mac Sold Us A Meth Lab
  2. I Accidentally Bought A Meth Lab
  3. My 2-year-old Lived In A Meth Lab

Which performed better?


The subject line “I Accidentally Bought A Meth Lab” had the highest open and click through rate! We confirmed the test was statistically significant. Now we can roll out this subject line to the rest of our list.
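The actual open counts behind that call aren’t reproduced here, so as a sketch with hypothetical numbers, here is how a simple two-proportion z-test can confirm that one panel’s open rate beats another’s by more than chance:

```python
# Two-sided p-value for the difference between two email open rates
# (counts below are invented for illustration, not Change.org's data).
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(opens_a, sent_a, opens_b, sent_b):
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_proportion_p_value(opens_a=1050, sent_a=7000, opens_b=910, sent_b=7000)
print(f"p = {p:.4f}")  # well under 0.05 here, so the difference is unlikely to be chance
```

A p-value under your chosen threshold (commonly 0.05) is what lets you roll the winner out to the rest of the list with confidence.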

Email Click-through Rates (CTR): Once a user decides to open an email, many factors can come into play that may increase (or decrease) that person’s likelihood to click on certain buttons or continue on to visit your website. Some of the biggest factors include:

  1. Layout (single vs. multiple columns, number of images, amount of text)
  2. Image (size, type)
  3. Button (color, size, placement, quantity)
  4. Personalization
  5. Messaging

You’ll want to test these factors individually to identify the best practices to get your audience a-clickin’!

Website Forms

Signup and donation forms are critical sections of your website, where people actively choose to be included in your network. Thus, web form completion rates are one of the most important metrics to track and test. Here are some variables to consider in your testing:

  1. Number of Fields (required, optional)
  2. Layout
  3. Button (color, placement, messaging)
  4. Image (size, placement, type)
  5. Copy and Messaging

Depending on the performance of your forms before testing, optimizing these variables through repeated testing could result in significant increases of form submissions–and donations!

Segments of Your Audience

Our third area to test is the different subsets of your audience. Each user is going to have a different background, interests, and reasons for engaging with your organization. It would be wonderful if we could send a personal message to each and every action taker, but since you may have thousands of users, we rely on segmentation to define broad groups of users and optimize engagement. A few possible user segments to test:

  1. New vs. Existing Users (welcome messaging, action alerts)
  2. Donor Activity
  3. Geography (personalize based on location)
  4. Acquisition Source

EXAMPLE: Consider an international aid organization that used long-term testing based on its acquisition sources to determine which interest group had the highest performance and return.

This organization ran four petitions on Change.org that focused on four different issues: drinking water, child mortality, pediatric AIDS, and famine.


They tested and analyzed the data and outcomes associated with the action takers from each issue-based petition. Over the long term, the organization found that those supporters who originally signed the “Drinking Water” and “Child Mortality” petitions proved to be the most active and engaged. This knowledge allowed the organization to better focus future recruitment efforts.

ABT: Always Be Testing

Testing is cheaper than getting it wrong! So be sure to always be testing to get the absolute best response from your online community. Once you’ve decided what to test, walk through this handy testing checklist to make sure your tests will return meaningful, actionable data:

  1. What am I testing?
  2. What is the goal of this test?
  3. Will what I’m testing (control vs. test) get me to my goal?
  4. How large are my test panels (or groups)?
  5. How many times/how long will I need to conduct the test?
  6. How will I analyze my results?

Finally, once you’ve completed a round of tests, be sure to learn from your data! It’s not enough to test; you need to apply your results to obtain the optimal results each and every time. Of course, testing should be an ongoing part of your online efforts. For more information about online testing, including a deeper dive on statistical significance, check out Change.org’s Online Testing Resource Guide. And remember, always be testing!

Major credit for this article due to Adelaide Belk and Matt Fender, along with the rest of the amazing Client Management team at Change.org, as well as our good friend and testing guru Brenna Holmes.

Supporters communicate with your nonprofit through multiple channels but engage differently with content depending on its delivery method. You must not only create a compelling message, but also optimize it so that it is received effectively across multiple channels frequently enough to resonate with your audience.

In “How to Write Successful Fundraising Letters,” Mal Warwick describes how he tested different lengths for the letter used in a direct mail campaign. He determined that the longer the letter, the more successfully the piece performed. Try this approach with email or social media, however, and your audience will tune you out. By taking your multi-channel messaging through the following simple, three-phase approach, you will effectively repurpose your content to reach its audience without adding more hours to your day.

Plan

First, you need to plan your messaging calendar, making sure to include all the channels through which you will broadcast your content. Prioritize your calendar based on the channels that are already the most successful for your organization, but don’t be afraid to explore new channels as part of a larger plan to see if you can drive new contact points with constituents.

For example, the nonprofit Paramount & State Theatre in Austin, TX, uses a multi-channel messaging approach during its fundraising campaigns. They typically rely on their tried-and-true channels such as direct mail, email, website home page, website donation pages, social media (Facebook, Twitter), organizational blogs, press releases, physical signage, and telemarketing. Yet they’ve recently added a new channel to their marketing mix – a custom tablet app. This application is certainly a new point of contact with constituents – and likely an area in which the organization will be testing its message.

Next, consider the timeframe for the campaign. Give yourself enough time – many campaigns fail because they are too short. If you believe conventional wisdom, which says that it takes three, seven or even more impressions before a message is remembered, then a longer campaign allows for the greater possibility of reaching your target audience and generating results.

At the end of this phase, you will want to have a complete calendar that shows all the content you will need to generate and a plan for when and how you will be delivering that content. It should reflect when the message is expected to arrive in front of your audience (i.e. direct mail arriving in mailboxes) in addition to when you are sending it out.

One national human services organization, Volunteers of America, follows this strategy by sending a direct mail piece at the beginning of the month asking members to renew. Before that mailing is expected to arrive, an email is sent informing the member that the renewal request will be arriving by mail soon. Following receipt of the first direct mail piece, another email is sent reminding members that the renewal should have been received. Social media may also be incorporated, featuring corresponding posts.

Create

Start small. The first piece of content you create should be a purpose statement. Summarize your message in a single sentence. While it may sound difficult, this will be a litmus test for everything else you generate, ensuring that each channel you use communicates the core purpose of the campaign. Your message should be digestible, repeatable, memorable, inspiring and actionable.

Then, go long. Your most copy-intensive channel, if you are using it, is usually direct mail. Start here to elaborate fully on your purpose statement. Whether you are planning one or a series of letters, this channel usually provides enough space to say everything you want about the topic. You’ll then be able to come back to this well and repurpose its content, applying the best practices of each medium.

TIP: Make an organizational rule that any content written and approved in direct mail is considered approved for use in other channels. This will allow you to move quickly on the remainder of your content generation.

Now rework your text and incorporate images and other assets to better fit each channel in your calendar. For email, stick to a single story or idea per message. For social media, reduce this even further to single facts or teasers delivered over a succession of posts.

One national health organization ran a social media campaign for its ‘awareness week’ by posting a “sharable” image featuring a factoid and URL each day. Facebook fans were encouraged to click, share, and comment. These interactions propelled the organization’s Facebook engagement rate (‘people talking about this’) by nearly 60% over the course of the campaign.

Tip: Send it again! One client I’ve worked with increases their total email open rate by more than 30% while maintaining the same unsubscribe rate by re-sending certain messages to those who did not open them the first time.

All the content you have planned should also be repeated on your website. Convio’s 2008 Wired Wealthy report tells us that the majority of major donors review the website before making a gift. The more recent 2013 Charity Dynamics & NTEN Nonprofit Donor Engagement Benchmark Report tells us that for all donors, visiting an organization’s website is the preferred method to get information on the charity they most support. The credibility of your campaign will be diminished in the eyes of a supporter who, after receiving a message calling for important action, visits your website and finds no mention of it anywhere else.

Finally, review all of your content to ensure it is consistent with your one-sentence purpose statement and that all the content contains a consistent and appropriate tone. A recent npEngage article argues that maintaining consistent brand tone maximizes the impact of integrated campaigns.

Execute

With your calendar and channel plan in place and your messaging generated, you now only need to execute accordingly. Another Convio study shows that donors who give through multiple channels (e.g., both online and through direct mail) give more than those who give only through a single channel. Not following through on all of your planned channels could cost your campaign money or reduce overall engagement.

Tip: Make sure your direct mail piece has a URL directing the reader to more information or an online donation form.

Once your message is heard (i.e., once you’ve received a donation or driven someone to take an advocacy action), there is still one more thing you should do. Prompt your supporter to take another action. Make it easy by employing eCards, tell-a-friend tools, or social share components. By displaying this type of call to action on your ‘Thank You’ pages or other post-action redirect pages, you can simply and effectively extend the reach of your message.

A clear and consistent message over an appropriate amount of time will better engage your supporters and make a lasting impression. Thinking about your communication plan holistically, leveraging the technology available to you (both new and traditional), and strategically tailoring content to fit your communication channels will produce a manageable and successful campaign for your organization. Good luck!

It’s a simple fact that, due to budget and resource limitations, we have to draw a line somewhere between the “ideal” email marketing practices and the batch-and-blast cure all method. We know that we should strive for the ideal, but quite frankly we have other responsibilities, too.

This, sadly, is when we start leaning on “good enough” or “the best we can do.” We look to get something out the door and move on. If we’re really ambitious, we’ll even try to do a partial test swapping out an image or a line of copy, breathe a sigh of relief that we tested something, and move on to the next crisis.

This is the worst thing we could do. A bad test accomplishes nothing. At least if we don’t test at all we know that we don’t know anything, but a pseudo-fact can hurt us badly down the line.

How, then, do we even start with message testing, let alone full program optimization? Where do we begin?

The key to an effective test is strong planning. Swapping out an image or copy isn’t a bad test in itself, but it has to be done strategically, with specific analysis and follow-ups. Approach testing as any other project you would undertake, identifying the pros and cons upfront.

To help illustrate, let’s walk through setting up a simple test using the Test Planning Worksheet to organize our thoughts.

For this example, we’re going to play the role of a development officer at a charity working with disadvantaged children. Ultimately, we want to know if need-based or optimistic, inspiring messaging works better in our fundraising emails – the workhorse of our online fundraising activities. About 20,000 people have joined our email list to help our kids, and we’ve got a matching challenge offer in hand to give our donors extra giving power. Thankfully, we’ve also found two need-based and two inspirational images in our photo library.

The long-term impact and value of our test is pretty clear. We’re looking to better motivate our donors to raise more revenue through our emails, and if successful we’ll definitely want to incorporate the winning approach into other messages for testing (remember that we have to test repeatedly to be sure we’re seeing what we think we’re seeing).

The groundwork of our test is set, now we need to plot out how we want to run the test.

This is where we really need to take stock of our resources. Do we have the ability to create two (or four) completely different email messages for this one campaign? Can our copywriter(s) or vendor create text for more than one email in the time we have? Will our coding resource be able to handle building and sending four complete messages in addition to everything else going on?

Most likely, the answer to at least one of those will be no, so we scale back our plans a bit. This is an untested message strategy (at least for us), so we don’t even know if there will be a difference in impact. Until we have a better idea of the value of this approach, we don’t want to commit too many of our already taxed resources, so we start simple.

Taking the initiative (because someone has to), we call our team together to evaluate the options available to us. Our email tool lets us do a quick A/B split on send (virtually every email tool today will do this) to randomize audience selection, but we need to determine what our organization can support in terms of content creation. Writing four different messages is out of the question if we want to stick to our planned deployment schedule, but our coder has a potential solution. Our email template has room for a photo in it. If we write one message, the coder can create two copies of the email, each using one of the photos we found.

Now that we know what our options are, we can plan out what we want to accomplish with our test.

We’re going to scale our overarching question back a bit to make it implementable with what we’ve got. Specifically, we’re testing whether need-based images inspire giving from our audience better than optimistic imagery by creating two emails and running a random split test on send. One email will have one of our need-based photos, the other will have one of the optimistic photos. We’ll chiefly be gauging the impact of the test based on total dollars raised from each message, but we’ll also be keeping in mind click-through rates, unsubscribe rates, and any general comments we get back from the audience.
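As a rough sketch of how those metrics could be tallied once the email tool exports per-recipient results (the field names below are hypothetical, not tied to any particular platform):

```python
# Summarize dollars raised, click-through rate, and unsubscribe rate per variant
# from a list of per-recipient result records.
from collections import defaultdict

def summarize_by_variant(rows):
    totals = defaultdict(lambda: {"sent": 0, "clicked": 0, "unsubscribed": 0, "dollars": 0.0})
    for row in rows:
        t = totals[row["variant"]]
        t["sent"] += 1
        t["clicked"] += row["clicked"]
        t["unsubscribed"] += row["unsubscribed"]
        t["dollars"] += row["gift"]
    return {
        variant: {
            "dollars_raised": t["dollars"],
            "click_through_rate": t["clicked"] / t["sent"],
            "unsubscribe_rate": t["unsubscribed"] / t["sent"],
        }
        for variant, t in totals.items()
    }

sample = [
    {"variant": "need_based", "clicked": 1, "unsubscribed": 0, "gift": 50.0},
    {"variant": "optimistic", "clicked": 0, "unsubscribed": 1, "gift": 0.0},
]
print(summarize_by_variant(sample))
```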

Remember that, though we want total revenue coming in, the images themselves are really only inspiring our readers to click on the email. Our email is an invitation to come to our donation form. If they choose not to donate, it might be a result of our donation form process, a misunderstanding of our offer, or any number of other factors. We should definitely file those away for separate testing later, but tackling them right now is likely going to be a little too much for us.

This test won’t be conclusive, but it should give us baseline metrics to inform future testing. In each subsequent test, we’ll try a slightly different spin – using different demographics in the photos, taking out the match, maybe even experiment with demographic targeting – all bringing us ever closer to finding out our true question: which message truly performs better?

With each test, we grow our understanding of our audience just a little bit more. As we learn and find what does and doesn’t work, we incrementally improve on our program. The tests themselves become almost another campaign, growing in complexity and value as we learn and grow with them.

This is true optimization, and this is where so many of us drop the ball. Just as important as planning out the test is planning out the follow-ups. Do we need to retest? Do we need to get additional resources to expand the testing? How do we interpret what we’re seeing into true value for our organization?

The test itself is only the start, a tactic used to move us closer to a greater end goal. If we fail to use what we find, then the test itself is just a waste of our valuable time.
