
Moving files and applications to the cloud has skyrocketed in popularity over the last few years. Often it’s a mix of intentional decisions and employees doing what is convenient for them personally. But what exactly is “the cloud” and what should nonprofits consider before migrating? Our thorough guide, written by Afua Bruce, explains the questions you should be asking when evaluating your options.

Thank you to our partner Microsoft for sponsoring these guides.


Download Cloud Computing for Nonprofits

Download Cloud Computing for Nonprofits: Executive Summary


At the heart of every nonprofit organization is its “mission” — some definition of impact that serves as the reason for the organization’s existence. Increasingly, our mission-driven work is playing out digitally as we educate through websites, advocate through social media, and otherwise engage online. As a result, our measurement and evaluation of mission success similarly rely more than ever on data from these digital tools.

For most organizations, the heart of their outreach is their website. Data collected from the website can be used to measure audience engagement, assess the success of key outreach campaigns, and learn more about the needs of constituents. To do this, the vast majority of nonprofit organizations turn to the free, flexible Google Analytics. The flexibility and ease of setting up Google Analytics, however, can just as easily result in problems.

Earlier this year, we conducted an original study of more than 1,500 nonprofit websites to check for visible trends and common mistakes in how they’ve implemented their analytics. Among the things we learned: As of today, nearly 1-in-10 nonprofit websites are double-counting their Google Analytics page views. The presence of double counting is a detriment to data reliability, and we’ve written about the downstream effects on measures of reach and engagement, and steps to diagnose and remediate.
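One way to check your own site: scan a page’s HTML for Universal Analytics property IDs that appear more than once. This is a rough heuristic sketch (the function name and regex are ours, and Tag Manager or gtag.js setups need a different check), not a full audit:

```python
import re
from collections import Counter

def duplicated_tracking_ids(html):
    """Return Universal Analytics property IDs (UA-XXXX-Y) that appear
    more than once in the page source. Repeats often mean the tracking
    snippet was installed twice, double-counting every pageview."""
    ids = Counter(re.findall(r"UA-\d{4,10}-\d{1,4}", html))
    return sorted(tid for tid, n in ids.items() if n > 1)
```

Running this across a site’s templates is a fast way to shortlist pages worth a manual look.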

The real problem revealed by this, though, is the dire state of data as a neglected resource throughout the sector. The fact that double-counting occurs so often in the first place suggests underinvestment in data strategy and management, while the fact that it persists and can even linger for years at a time speaks to underutilization of data as a strategic resource. Many organizations check the box (“Yes, we have data”), but aren’t making enough use of it to discover fundamental problems with data accuracy, much less harnessing data to continuously optimize content and outreach strategies.

Moreover, high-profile data lapses are heightening global concern about how carefully organizations collect and manage their data. The inattention highlighted by our new findings can result in potential security and legal risks — if so much double-counting can go undetected, to what extent are other faults, such as the unintentional collection of personal information, also being neglected?

Why is this happening?

To understand why this is a leadership problem, we first have to understand why it happens. Most often, double-counting page views begins during a website redesign or marketing campaign. Some change is being made, and a well-intentioned staff member adds (or “moves”) analytics tracking in the process without checking whether it is already configured elsewhere on the site.

In these cases, the team is tasked with building a website or adding a new capability, and while they may be told “turn on analytics,” the details of what’s expected and the resulting data are neither seen as a part of that process nor as a part of the budget (particularly when external vendors are involved). As a result, data becomes a casualty of the change: an accident of inattention, and of a process that treats analytics as an add-on rather than a key requirement.

Failing prevention, organizations still have a chance to correct the error. Each organization that began double-counting likely passed through a period where their data showed a sudden, dramatic, and wildly unnatural change in their bounce rate (the percentage of visitors who navigate away from the site after viewing only one page). Signs like this quickly lead many motivated analysts to discover the truth and correct the issue.
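A shift like that is easy to flag programmatically. A minimal sketch, assuming you export bounce rate as a daily series of fractions (the window and threshold here are illustrative):

```python
def flag_bounce_rate_shifts(daily_rates, window=14, threshold=0.5):
    """Return indexes of days whose bounce rate deviates from the
    trailing `window`-day average by more than `threshold` (relative).
    Double-counting typically cuts the bounce rate roughly in half
    overnight, which trips this check immediately."""
    flagged = []
    for i in range(window, len(daily_rates)):
        baseline = sum(daily_rates[i - window:i]) / window
        if baseline and abs(daily_rates[i] - baseline) / baseline > threshold:
            flagged.append(i)
    return flagged
```

Even a crude check like this, run monthly, would catch the overnight bounce-rate cliff described above.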

In many cases, however, the data is not sufficiently questioned, if it is reviewed at all. The person or team responsible for data assumes the shift is merely the result of changes to the website and accepts the false data as a new baseline.

As you can see, there are a number of things that have to go wrong for an organization to start double-counting and then to continue doing so. It’s actually a bit of bad luck to end up in this state, and many organizations with the same gaps in data strategy and operations end up avoiding this situation altogether through unrelated choices or vagaries of the development process.

It’s not enough to have data

One of the downsides of how easy it is to collect data is that it makes it just as easy to declare analytics “accomplished” without really understanding what you’ve collected or putting it to use.

Very few nonprofits can afford to hire dedicated analytics staff, and as a result, it becomes a “collective” responsibility to use data to better the organization. Because of this, it ranks relatively low among each employee’s priorities and quickly becomes an abundant but neglected resource—organizations have reams of data, but aren’t doing anything of value with it.

Yet when it’s made a high priority, nonprofit staff can use data to:

  • Support expert intuition and accelerate decisions about outreach and publishing based on past performance
  • Encourage collaborative discussions across teams to celebrate and learn from successful strategies
  • Correct the record about audiences and content, using data as a myth buster to free the team from misapprehensions
  • Settle internal disputes about strategy by providing a single source of truth about what is effective
  • Begin with automated marketing by using data to segment audiences and design experiences for each of them

There are so many things you can do with data, and pursuing one or more of those uses brings direct benefits to your organization, while indirectly contributing to a culture of data that improves the quality of work overall and helps prevent major issues like double-counted data.

Money and a mandate

As a sector, we need to confront the fact that thousands of organizations can suffer a data failure this significant without anyone noticing. It’s up to leadership and the funding community to help organizations improve and make mistakes like this increasingly rare.

Budget for data

The most direct thing you can do is direct funds toward increasing your capabilities in data — most commonly through technology that makes it easier to surface insights and staff time to extract and act upon those insights. Dashboards, training, upgrades to data that make it more relevant to your strategy — all come with costs, and without budget allocated towards them, they will forever be activities for “next year.”

Whether you are a philanthropist writing a grant or an executive writing a budget, you can make it easy for your team by adding a line item for analytics. Just as good, or even better, is to add more detailed outcomes expected of analytics to existing projects, such as a website redesign or campaign plan.

Encourage a culture of data

Most nonprofits have a central set of mission-driven “goals,” and many take the extra step of tying data to those goals. Where many fall short is exercising those metrics and creating a habit of internal measurement and evaluation, instead waiting until grant review time to pull together the numbers, often when it’s too late to act on them. Creating this expectation of nearly real-time analysis will help you catch problems with your data, not to mention optimizing your outreach.

The tone of this mandate is critical, however. Building a culture of data means creating a professional environment where people are encouraged to ask questions and ask for evidence, while simultaneously making it okay to say “I don’t know, and I need help finding out.” A lot of paralysis in analytics stems from fear—the fear of asking silly questions or being resented for them by people who fear the responsibility of answering them. Even when analyses end with no finding, the mindset and process of conducting them often reveal unexpected insights.

Recognize the challenges of working with data by celebrating the unanswerable and the discovery of problems. Make it safe to report failures in data, or they’ll never get fixed.

How funders can support data strategy & management

If you work for a grant-making organization, it’s possible you already connect grantees with valuable services to inform strategic planning or fundraising. Yet these essential areas rely on a foundation of reliable data. In addition, funders themselves depend on accurate data from nonprofits in order to make wise philanthropic investments.

Given the sector’s struggle in embracing ideal data practices, there’s a strategic opportunity for grant-makers to build capacity that empowers organizations to make necessary changes to their data practices and culture. When funders connect grantees with relevant technology, training, and consulting services, nonprofits can learn how to gather and use timely, actionable data to improve their effectiveness. As an example, one potentially game-changing application for many nonprofits is online fundraising from individual donors. Robust data systems and practices can optimize outreach and maximize giving by strengthening long-term relationship building with top donors. In addition, enhanced digital data helps nonprofits do other important things like tailoring their websites to meet the needs of constituents, strengthening their digital advocacy, and feeding data dashboards that empower executive decision-making.


When data is rendered relevant and actionable within organizations, it tends to get used. And when it’s used, there are strong incentives to ensure it stays accurate. The trick to preventing future issues like double counting, then, is for leaders to integrate data into ongoing decision processes. When this happens, organizations will have sufficiently robust internal systems, accessible expertise, and – perhaps most important – many people asking smart questions of their data on an ongoing basis. The latter is the surest way for unexpected data issues to be identified and addressed quickly in the future.

Many people may not make the connection between technology and food/agriculture, but when we merge the two together, we can transform the way we connect with, produce, and consume the foods that end up on our plates. And my team at the Bainum Family Foundation’s Food Security Initiative had a small taste of this potential when we created an online platform called the Food Learning Locator, which we just relaunched in March.

The Food Learning Locator originally went live in September 2017, shortly after I joined the team. The intention of the tool was to address a large information gap. No one (our foundation and our partners included) seemed to understand the full scope and types of food-related education and job training opportunities available in the Washington, D.C., metropolitan area. As we began to collect the data, we quickly realized a greater need. We weren’t the only ones unaware of so many opportunities around us. Our community was missing out on them as well. This spurred the idea to create an easy online tool to help people find and participate in food-education and job-training programs, many of which are offered by nonprofits and other community organizations that have limited resources and time to market their programs.

After the site’s beta launch, we learned from our partners that we could add even greater value by improving the Food Learning Locator’s UX and UI. I then took the lead on getting feedback on the site, and we used it as the backbone of an eight-month web development process to add to and improve the data structure, add an administrative portal for organizations on the map, and find ways to measure the use of the tool.

But what all was involved with each of those steps? Let me share our experience with you.

User research

Since this tool had the potential to benefit many audiences (i.e., community members, advocates, funders, and program providers), we encountered a major challenge — how to create a tool that addresses numerous needs and interests. For such a wide-reaching site, we realized we had to meet people where they were to get the necessary feedback.

We sought feedback from as many people as possible, hosting a focus group with community members and one-on-one feedback sessions with healthcare providers and program providers to accommodate everyone’s schedules. We also attended community events and conferences to market the site and ask for thoughts and suggestions. With this invaluable feedback, we were able to both identify and carry out the next steps for improving the Food Learning Locator. For example, we completely changed the UI from having food-education and job-training programs on the same map to splitting those categories into two maps and we added new data points to further improve user experience.

Updates to data structure

Our challenge was to create a tool that encompassed the key program offerings we knew about in 2017 but to also structure them in a way that allowed new data points and content areas to be added over time. The food-education and job-training space is constantly evolving and adapting, so developing an agile tool was key to its long-term success.

The web developers we worked with were incredibly helpful in structuring the back end of the site to allow us to easily add new data filters or content areas requested by community members (e.g., program cost and registration information). We also made sure to structure the front-end map and back-end Program Manager Portal so that it could be easily used as a template for other cities or regions that may eventually be interested in bringing this tool to their communities.

Development of the Program Manager Portal

With the data structure refined and the UI designed, we developed a Program Manager Portal for program providers to dynamically update, add, or remove organizational and programmatic data. The addition of the portal was a huge improvement over the beta version of the site, where update requests had to be submitted via SurveyMonkey and then manually changed by the Food Learning Locator team on a semiannual basis. This portal simplified the data-maintenance process, which was particularly helpful given how quickly the site has been expanding since its launch. Since February, we’ve added more than 10 new organizations to the Food Learning Locator across Washington, D.C., Maryland, and Virginia. Together, these organizations offer nearly 90 food education programs and more than 30 job training programs — all displayed and searchable on our now easy-to-use site.

The portal still has room for improvement but having it has helped both our team and program managers maintain accurate information, consequently creating a more reliable source of information for community members interested in the food space.

Data collection

Like most websites, we’re able to easily track site traffic and other metrics using Google Analytics. However, this tool doesn’t help us measure in-person user outcomes of the Food Learning Locator (e.g., the number of users attending programs they found via the map, or the number of new collaborations or partnerships among program providers).

We’re still in the early stages of determining the best method of tracking these connections, as they play a large role in evaluating the success of the tool. For now, we’re relying on qualitative feedback and testimonials from site users — whether gathered directly (online or in person) or through program providers and community advocates — to understand budding connections made through the tool. We’re looking into how we can track referrals among program providers, but that’s likely something we’ll implement down the road.

Key Lesson

So, technology and food/agriculture — not so separate after all now, right? While I myself am new to merging program needs and technological tools (at least on this scale), I see endless possibilities for so many fields to leverage online platforms to create real, lasting offline connections and impact. And my hope is that the Food Learning Locator will cultivate and strengthen our local food-interested community and will eventually expand beyond. With tech, the sky is truly the limit.

A special thank you to the web development team at Alley Interactive and the Food Learning Locator team — Andrew Curtis, Laura Hepp, Katie Jones, Ann Egan, and Morgan Maloney — for helping to develop this tool.

Nonprofits often operate with lean teams and long hours, and donors and community members always come first. After completing the required recordkeeping to report to funders, board members, and other stakeholders, the last thing many want to do is to take additional time to clean a CRM. Yet unreliable data leads to haphazard decision making, can harm relationships with funders, and hinder your ability to implement your mission.

Meanwhile, it seems every day there are newer, faster software products to help nonprofits clean their outdated, error-ridden data management systems. These programs are described as digital magic wands to relieve your organization of invalid email addresses, duplicate records, improper formatting, and more. Sometimes a nonprofit may have the budget to hire data managers or consultants who can audit CRM data, but a clean, well-functioning CRM system shouldn’t require a fundraising campaign.

The contact record is the foundation of any CRM, and it can become obsolete virtually overnight. Life changes like promotions, resignations, marriages, divorces, and relocations mean that maintaining data integrity must be a regular practice across your entire organization.

Get motivated for some spring cleaning and embrace an internal data hygiene project. With the guidance below, you can make your action plan.

Getting started

Data integrity must involve your entire team, not just a technical or administrative professional. No one person handles all organizational contacts. Start by creating an internal committee, ideally with a participant from each program area, including executive and administrative staff.

Step 1: Take Inventory

You may think your team already has a data management strategy, but if you’re reading this, it’s likely that it needs some tweaking. Plan on documenting a comprehensive data strategy that maps every step from collection or intake to reporting, and revisit it annually. While data strategy is unique for each organization, here are three questions your team must answer to begin cleaning:

    1. What data fields are necessary for our program staff? They could be name, title, organization affiliation, email, address or zip code, health status, conditions, birthdate, marital status, and others.
    2. What data fields are necessary for reporting to our stakeholders? Think program participation, level of participation, time of intake, race and ethnicity, education level, income, employment status, etc.
    3. What other data fields are we managing, and why?

Step 2: Stop the overflow

Once you’ve got a solid idea of what data should be in your CRM, it’s time to stop data that doesn’t need to be there from flowing in.

Consider minimizing points of entry. In our integration-obsessed world, we love the idea of linking systems to make processes simpler, but we’re often left with contact records that are hyper-managed in some systems and loosely managed, or not managed at all, in others. Frequent, high-volume entry, such as a ‘join our list’ sign-up at an event, creates a margin of error that multiplies with each additional system a contact touches while interacting with your organization.

Then, consider simplifying the overall number of data fields. Code tables and attributes can increase data entry speed, but they also add complexity. For example, different users might enter a phone type as “cell” or “mobile” because the system accepts both, and this redundancy makes running reports unnecessarily complicated.
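Redundancies like that can be normalized in bulk before reporting. A sketch, assuming contact records export as simple dictionaries; the mapping is illustrative and should mirror your own code tables:

```python
# Illustrative mapping from the variants users actually type
# to one canonical phone type per concept.
PHONE_TYPE_MAP = {
    "cell": "mobile",
    "mobile": "mobile",
    "cellphone": "mobile",
    "work": "work",
    "office": "work",
    "home": "home",
}

def normalize_phone_type(raw):
    """Collapse redundant phone-type codes so reports group correctly."""
    return PHONE_TYPE_MAP.get(raw.strip().lower(), "other")
```

The longer-term fix, of course, is to remove the redundant option from the code table so the variants can’t be entered in the first place.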

Step 3: Restructure data entry points

Alerts—either via email or upon login—are standard features in most CRM systems. You want your system alerts to deploy at the onset of data entry.

Create a “to review” check box in contact profiles. When it’s checked, an alert is sent in real time and timestamped, allowing the team to sort and prioritize records by age, leaving fewer questions about where the issues are.

In some cases, teams opt to create workflows within a CRM, mandating (or preventing) certain entries at the onset of data entry. Be mindful, though: workflows can disincentivize data management. When entering a contact record takes 10 minutes, the task becomes a struggle and will often go undone.

Create a “last review date” field in plain sight. By putting a new date in this field each time the contact is updated, you can easily sort through contacts that may need updating. You can also get a sense of what percentage of the CRM database is up to date and how often it’s being reviewed. Plan to update contact profiles at least once every six months.
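That field also enables a simple staleness report. A sketch, assuming contact records export with a last_review_date field (the six-month cutoff matches the cadence suggested above):

```python
from datetime import date, timedelta

def contacts_needing_review(contacts, today, max_age_days=183):
    """Return contacts whose last review is missing or older than
    roughly six months, oldest first, so the committee can work
    through the most out-of-date records before anything else."""
    cutoff = today - timedelta(days=max_age_days)
    stale = [
        c for c in contacts
        if c.get("last_review_date") is None or c["last_review_date"] < cutoff
    ]
    return sorted(stale, key=lambda c: c.get("last_review_date") or date.min)
```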

Step 4, Option 1: A systematic overhaul

Identify the biggest culprit of bad data within your CRM. Are there inconsistent naming conventions, misspellings, missing values, duplicate records, embedded values (multiple values in one field)? Perhaps the CRM itself is riddled with glitches or incorrect dependencies, such as a donor record tied to the wrong company. If you’re unsure, ask around: chances are your team has a good understanding of data issues based on their programmatic function.

Next, run a query on this culprit (or create a list) and have each member of the data cleaning committee set aside a block of time to clean a segment of it.
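If duplicates turn out to be the culprit, a first pass can be scripted before anyone cleans by hand. A sketch, assuming records export with name and email fields; the matching key is deliberately naive, and fuzzier matching is a separate project:

```python
from collections import defaultdict

def probable_duplicates(contacts):
    """Group records that share a normalized name + email, a cheap
    first-pass signal for duplicate contact records."""
    groups = defaultdict(list)
    for c in contacts:
        key = (
            c.get("name", "").strip().lower(),
            c.get("email", "").strip().lower(),
        )
        groups[key].append(c)
    return [records for records in groups.values() if len(records) > 1]
```

Each group this returns is a cleanup task small enough to hand to one committee member.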

Step 4, Option 2: Start small and steady

Rather than cleaning up all existing data at once, a smaller team may prefer to designate cleanup time at key points of entry where major interactions occur. To begin, decide which opportunities exist to update your database in the normal course of business. Some ideas our nonprofits have used include:

  • When sending mass emails (update once per month): When you plan to send out emails to a large group of people, set aside time in advance for a team member to manage the replies. Any email sent to more than 50 people is likely to garner a response saying that the contact either moved or has a new title or email address. Put those changes into the CRM as they come in.
  • When meeting with donors/stakeholders (update no more than twice per year): Whenever you’re setting up a new meeting with someone, take the opportunity to compare the information in their email signature with their information in your CRM. Even if you haven’t made any changes, update the “last review date” field so that you can pull the profile out of your cleanup pile.
  • When sending annual reports, following up after an event, or marking another milestone for your organization.

Regardless of your approach, complete Step 4 with an internal debrief. What was the most challenging part of the cleanup process? How might you proceed better next time? Do you think the team can manage two or three of these projects in a given fiscal year? Come to an agreement about goal setting in this area and make your progress known to the organization at large. Remember to report regularly on your CRM’s contact and account data, and make such reports part of regular meetings. Make sure you can see how many updates each staff member is responsible for, and assign those updates as action items.

Step 5: Innovate and train to sustain

Finally, consider investing in tools that provide real-time data. Today’s workforce is more transient than ever. Between people changing jobs and organizational changes, contact data is a fluid asset rather than a fixed one. Contact data expires as time passes, to the tune of about 32% per year (according to SiriusDecisions).
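At that rate, decay compounds fast. A back-of-the-envelope sketch of what 32% annual decay implies for an untended list:

```python
def contacts_still_accurate(total, years, annual_decay=0.32):
    """Estimate how many contact records remain accurate after `years`
    with no maintenance, assuming the ~32%/year decay figure cited
    above applies uniformly."""
    return round(total * (1 - annual_decay) ** years)
```

By this estimate, a 1,000-record list left untouched for two years keeps only about 460 accurate records.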

Invest in tools to bring new accounts to your attention: adding accounts to your system is as important as maintaining existing ones.

Then, train, train, and train some more! Anyone with access to your organization’s CRM should know how to use it independently and proficiently. We should never expect software to make up for a lack of strategy or communication, or to fix broken business processes. Mandate training for your team on data entry and on the constituent records they will work with most frequently.

Cleaning our CRMs can be far less overwhelming than we might expect. Someday soon you’ll be able to say that your CRM:

  • Works in conjunction with good data strategy
  • Can detect and remove inconsistencies
  • Does not rest on the shoulders of a single individual within the organization

This article was originally published by Mighty Citizen. An edited version is republished here with permission.

One of my first questions for any new nonprofit client is: How good is your data?

I’m not asking if they have data, but how good is it? As a marketer, I want to build communication strategies that are founded on research and real insights from everyday donors. I want to know what your donors care about, how they want to be communicated with, and what kind of content resonates with them. This information allows your nonprofit to better engage your donors and leads to long-term benefits like increased and repeat donations.

Without data, your communication strategies are likely to be built on gut feelings and opinions. The beauty of data is that it negates opinions. Data can guide us; it can be a beacon of truth in a sea of opinion.

If you’re like most nonprofits, you have data available, but it’s either 1) housed in many different tools, rendering it cumbersome and useless, or 2) limited to basic insights like “total lifetime giving amount.”

Sound familiar?

If so, let’s take a gander at how to collect useful donor data without being annoying. Why do I say “without being annoying?” Because most donors view survey questions as a nuisance, in the way of a task they’re trying to complete, like filling out your donation form.

Option 1: use your thank you page as a data collection point

On your website, your thank you page can be an unbelievably valuable asset if used correctly. After all, your donor feels great about the commitment they’ve just made and is more likely to continue giving—this time, in the form of information.

Often, nonprofits use their thank you pages to say nothing more than “thanks for the donation.” But what if you popped a couple of questions around donor motivation onto your thank you page? For example, you could ask “What motivated your donation today?” and offer up three to five optional answers. You could also offer a text field if they wanted to provide additional info.

Or you could ask “What’s the best way to communicate with you?” Or, if your marketing strategy benefits from it, you could ask their age, what part of town they’re in, or which of your programs they care about most.

There’s a plethora of questions you might ask to help you segment your donors for future campaigns. The thank you page is a great place to get answers.

Option 2: send a quick donor survey

Has your organization conducted a donor survey? Was it short? Probably not short enough.

The inclination is to create a donor survey that allows you to find out anything you could ever possibly want to know about your donors. (Favorite color, everyone?!) You must resist this inclination!

Instead, put a survey link in your donor’s email receipt. Remember, they’re feeling good about that donation they just made. They’re looking for a receipt in their inbox to confirm the donation went through successfully. And now, within that same email, you’re simply asking them to tell you how you might serve them best by answering a short 3-5 question survey that takes less than 2 minutes to complete. Emphasize how quick and easy this survey will be, and make sure it is!

Using this strategy, the survey is not a cumbersome initiative for your nonprofit. It simply rolls out on a continual basis whenever a donation is made. And with a seamless integration of your donor database, you can ask different questions each time the donor gives.
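Rotating questions is simple once answered questions are stored against the donor record. A sketch with hypothetical field names:

```python
def next_survey_questions(question_bank, answered_ids, limit=3):
    """Return the next few questions this donor hasn't answered yet,
    so each gift's receipt surveys something new."""
    pending = [q for q in question_bank if q["id"] not in answered_ids]
    return pending[:limit]
```

Over several gifts, a loyal donor answers your whole question bank without ever seeing a long survey.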

Option 3: ask them during the donation process

I’ll be honest, I don’t like this option as much. That’s why it’s down here at the bottom! This option requires that you include a couple of questions for donors to answer as they’re filling out your donation form (online and off).

I’m hesitant to recommend this to our clients because I never want donors to fill out more than they have to in the process of donating. The less friction in the donation experience, the better. We don’t want them having to think too much. But sometimes—depending on your donation form software and internal IT resources—this is the fastest and easiest route.

If you take this approach, ask questions that might enhance the donor’s experience, like “Which of our programs most interest you?” or “How best should we communicate with you?” This is not the time to ask age or income or any other question that might give potential donors pause. My recommendation is to ask no more than 2 of these questions in your donation form.

Test, and test again

I’m admittedly biased toward some of these options over others, but you should test all of them and see what gains the most traction with your particular donors.

Keep in mind that if you don’t receive a large quantity of online donations, you’ll want to test over a longer period of time to make sure you’ve got enough data to accurately compare your options.

Now, go get your data!

I hope you’re now primed and ready to learn valuable information about your donors so you can serve them better. Not only does this require some forethought, but you’ll likely also need to tap into your IT pro or an outside consultant to set your systems up properly so this doesn’t become a manual burden for your team. But once you’ve got the machine running, you’ll never look back!

I’ve found success with putting the analytics where people will see them. One of those places is the website itself.

Figure out what really matters

There’s no shortage of data to review. So, it’s important to move beyond vanity metrics and get to the heart of why we do what we do. This means turning to the goals and key performance indicators you have for your website (or defining them for the first time!).

For my work at Agaric, those goals are:

1. Secure well-matched projects by communicating the value we provide to potential clients.
Key performance indicator: feedback on design and content from target audiences.

2. Diversify and expand free software communities by sharing relevant knowledge.
Key performance indicator: pageviews of blog posts.

Each goal should be accompanied by at least one key performance indicator. This is data that tells you how successful you are being at reaching your goal.

In our case, our first goal of feedback is best measured qualitatively by asking our current clients—and those we like working with—what they think of the website. We conduct interviews to gather that feedback. For our second goal, pageviews give us a good picture of content relevance and a valuable data point to share with the team.

A different site might try to increase one-time donations, in which case the number of donations made during a campaign would be a helpful measure. Another group might focus on building a regular readership, making email list sign-ups the best indicator of success. Whatever it is, make sure you can link the analytics you track back to a goal you have for your site. There’s no point in measuring something you won’t take action on.

Know who needs to see the data and where they hang out

After identifying your key performance indicators, decide who on your team should review that data.

For our six-person worker-owned cooperative, that answer was easy: all of us. We all blog and we all have a vested interest in helping our free software communities thrive. We want to know which posts are resonating the most.

Once you know your audience, find out where they spend their time. In our case, it’s the website’s back-end content overview page. Our website admins go there to pull up a page we want to update and to see which posts are still in draft mode. So, we added a column for pageviews and made that column sortable.

Screenshot of website analytics


For the independent news site Portside, the same was true. In addition to showing pageviews on their content overview page, they also include them directly on each post (visible only to content editors).

Portside homepage screen


For the online educator portal Teachers with Guts, the organization wanted to track several data points on their members’ use of the platform. So, they have a report page built into the site showing information such as the number of downloads made, comments left, and pages bookmarked.

Data report from Teachers with Guts website

Other opportunities to share analytics include weekly email reports, a user dashboard upon logging in, or via mobile texting or apps. Don’t be shy about asking your team where they would most likely notice the data you’re sharing with them.

Meaningful, informed conversations

By showing key data in high-traffic areas, you foster an informed team. From there you can have the conversations you want and need. We now know which posts are getting the most reach and are evaluating why. As a result, our best practices have evolved to make our writing more relevant to readers.

You’ve seen articles with titles like “Top Ten Technology Trends For Nonprofits in 2019,” promoting artificial intelligence, automation, hyper-personalized content, and other buzzwords. Yet the reality is that we are putting the cart before the horse when it comes to our priorities.

What really brought things into focus for me was a recent workshop I did with my local chapter of the Association of Fundraising Professionals. We put on a one-day seminar on donor retention, using the free tools of the Fundraising Effectiveness Project. In the workshop, we looked to identify the ways that nonprofits are losing donors, and how to address that in real, practical, and data-driven ways.

The dozen or so people in the room were adept at articulating their mission and had a passion for data (they did sign up for a full day on donor retention), but they still described ongoing struggles with data management.

That’s when it hit me. We are being led toward solutions to problems that aren’t yet real problems for the vast majority of nonprofits. The root cause of our fundraising struggles isn’t a lack of tools that will automate things for us. The issue is low-quality data on our donors.

Bad data costs your organization a lot of money

Low-quality and mismanaged data cost organizations a lot of money. In the United States, bad data is estimated to cost $3 trillion per year. It costs an organization $1 to verify a record at entry, $10 to dedupe and clean data AFTER input, and $100 per bad record if nothing is done.

The average nonprofit uses between three and five different data systems to complete its daily workload. In practical terms, we typically see organizations using at least one (if not more!) digital fundraising tool whose data is loaded manually or haphazardly into their donor management system and then reconciled with their email marketing and accounting systems.
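To illustrate the reconciliation burden, here is a minimal Python sketch of matching a hypothetical fundraising export against a donor database by normalized email. Every field name and record below is invented for illustration; real systems have messier keys:

```python
# Hypothetical records from two systems: a digital fundraising tool and
# the donor management database. Email is the shared key, but one system
# has stray whitespace and capitalization.
fundraising = [
    {"email": "Ada@Example.org ", "name": "Ada Lovelace", "gift": 50},
    {"email": "grace@example.org", "name": "Grace Hopper", "gift": 100},
]
donor_db = [
    {"email": "ada@example.org", "name": "A. Lovelace"},
]

def normalize(email):
    """Trim whitespace and lowercase so the same person matches across systems."""
    return email.strip().lower()

known = {normalize(d["email"]) for d in donor_db}
new_donors = [r for r in fundraising if normalize(r["email"]) not in known]
print([r["name"] for r in new_donors])  # only records missing from the donor DB
```

Even this toy version shows why a $1 check at entry beats a $10 cleanup later: without the `normalize` step, Ada would be double-entered.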

Good intentions don’t generate good data

Research shows that the top reason a donor stops giving to an organization is inadequate communication. Retention rates are dropping across the industry: the organizational average is 46%, and new donor retention is nearly half that rate.
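Retention itself is simple to compute once your donor IDs are clean. A minimal Python sketch, with made-up donor IDs for illustration:

```python
def retention_rate(last_year_ids, this_year_ids):
    """Share of last year's donors who gave again this year."""
    last, this = set(last_year_ids), set(this_year_ids)
    return len(last & this) / len(last)

# Hypothetical donor IDs for two consecutive years.
donors_2017 = {"d01", "d02", "d03", "d04", "d05", "d06"}
donors_2018 = {"d02", "d04", "d05", "d07"}
print(f"{retention_rate(donors_2017, donors_2018):.0%}")  # 50%
```

The catch is the premise: if the same donor appears under two IDs because of bad data, this number is wrong before you ever act on it.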

A common suggestion is to address the problem through “personalizing content” and “building a culture of philanthropy” through donor-centered messaging. Yet these are tactics that only pay off once the root cause of bad data is addressed.

With the average nonprofit technology budget coming in at 5.7% of overall operational expenses, there will be pressure to make that technology perform more efficiently and effectively for the organization. Focusing on what will generate both immediate revenue and long-term growth is where we can begin to address the retention losses we are seeing industry-wide.

Data stewardship needs to come first

When I worked at a school in Chicago, we held a donor gratitude event. I printed name tags using the information from our donor management system and was walking around the party when I noticed a woman had used a marker to cross out the name on her tag and write her nickname. I immediately ran upstairs and updated that field on her record, so we’d always refer to her by her preferred name.

If we integrate process adjustments like this into our organization’s data management, we will begin shifting toward a model of data stewardship that has a major impact on our ability to communicate with and excite our donor base. The shift starts small, with everyday habits like updating a record the moment you learn something new.

Okay, NOW you can automate

Good data means we have done the hard work of understanding our relationship with our donors and other supporters. It takes time, analysis, and quick thinking to do this properly, and we need to provide more education to our community on data management best practices and the right tools to capture clean, accurate data in the first place.

Of course we should have conversations on how to then engage our donors around increasingly sophisticated content to engage and retain them. As NTEN’s 2018 Digital Outlook Report shows, organizations are beginning to invest more in donor-centered experiences for their website, video content, social media, and more. This should be encouraged and supported, but not at the expense of ensuring that fundraisers are able to do their jobs in the first place.

If we invest in solid data stewardship, we can begin to stop the attrition of donors investing in our missions and deploy strategies and tactics that speak to the experience donors want to have with our nonprofits. So before you click on that next article about why artificial intelligence will remake the world, go back to your database, run a new donor report, and check whether you have the right phone numbers first.

Donation form example

Because my spouse and I have different last names, I probably filled it in like this:

Donation form example 2

Donation form example 3
If your organization constantly struggles with low-quality data, whether it’s incorrect or incomplete, chances are that there is an underlying usability issue—or several.

That’s right: It’s not the fault of your website visitors, your colleagues, or your volunteers.

Wait, so does that mean it’s your fault?

Good news: you can do something about it!

Guidelines to improve user interfaces

There are 10 usability heuristics, first described by Jakob Nielsen, that are commonly used both as guiding principles and to identify opportunities for improvement. The examples I shared from the online donation form show what happens when the match between the system and the real world goes awry.

Here are the other nine:

  • Visibility of system status—Ever had a panicked donor call you because they accidentally submitted their online donation twice? This is probably because they clicked the “Submit” button, and there was no spinning wheel, progress bar, or anything to indicate that the form was submitted and processed.
  • User control and freedom—I once worked in a database where, once you entered something as one type of record, you couldn’t change the record type. You had to copy EVERYTHING to a new record and then delete the old one. This is the opposite of user control and freedom.
  • Consistency and standards—If you’re a PC user, it’s jarring to use a Mac because the program controls are in a different place, and vice versa. However, if you’re a Mac user using a Mac, you probably don’t even think about how to minimize a window.
  • Error prevention—Do you have phone numbers in your database that are missing area codes or are incomplete? Data validation is a form of error prevention.
  • Recognition rather than recall—You’re making a guest list for the ribbon-cutting on the new playground. Would you rather try to list the largest donors from memory or look at a donor report to create the guest list?
  • Flexibility and efficiency of use—When copying and pasting, do you prefer to use CTRL+C and CTRL+V, or to right click to see all your options? The first may be more efficient if you remember the keyboard shortcuts, but both work, allowing flexibility and efficiency.
  • Aesthetic and minimalist design—Ever visited a website and felt like you were in Times Square because you didn’t know where to look? Someone got carried away with all the cool visual things they could do and forgot what they really wanted the user to do.
  • Help users recognize, diagnose, and recover from errors—“Error code 4238” doesn’t tell me what’s wrong, how to fix it, or whether I can. “Tray 2 is empty” tells me that the copier is out of paper in Tray 2 and that I simply need to refill it.
  • Help and documentation—In an ideal world, software would be so intuitive that you wouldn’t need any help or documentation. But we don’t live in that world, so provide support for when people need it.
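Error prevention, in particular, is cheap to build in. Below is a hedged sketch, not any specific vendor’s validation, of normalizing US phone numbers at entry so area codes can’t silently go missing; the 10-digit US format rule is an assumption for illustration:

```python
import re

# Assumed US-centric rule: 10 digits, optional "+1" country code,
# with spaces, dots, dashes, or parentheses allowed as separators.
PHONE_RE = re.compile(r"^\+?1?[\s\-.]?\(?(\d{3})\)?[\s\-.]?(\d{3})[\s\-.]?(\d{4})$")

def normalize_phone(raw):
    """Return a normalized (area, exchange, line) tuple, or None if invalid."""
    m = PHONE_RE.match(raw.strip())
    return m.groups() if m else None

print(normalize_phone("(312) 555-0142"))  # ('312', '555', '0142')
print(normalize_phone("555-0142"))        # None: missing area code
```

Rejecting the second value at entry is the $1 fix; discovering thousands of area-code-less numbers in the database later is the $10 (or $100) one.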

Draw a complete picture of your data flow

Have you heard of service design? If not, here’s a good explanation from the team at Practical Service Design:

“Service design is a human-centered design approach that places equal value on the customer experience and the business process, aiming to create quality customer experiences, and seamless service delivery.”

Think about a particular process in which you start by collecting data and use it to do something, like make a decision or send a mailing. Identify all of the inputs and outputs, the start and end points. Who are all the people in between? Is there anybody on either end of the start and end points? Does the process need to be extended?

Map out the process. Then walk through it, talking to as many users as possible. Find out what parts of the picture are missing and fill in those blanks. Go through the parts of the process you don’t normally see. Sit with people and ask why they do things the way they do—the answer may surprise you. Often, when you sit with people and watch them perform a task, you begin to see why.

Then look for the common denominators. For example, if data is entered incorrectly or inconsistently, consider what may be missing:

  • Data validation, like requiring that a postal code have five digits if the country is the United States
  • Multiple-choice options when there should only be a limited set of answers, like a dropdown for states instead of a free text field
  • Templates, so everyone collects the same data each time
  • Form questions or field names that are commonly misunderstood. Consider rewording them in language that makes sense to whoever is entering the data.
  • Training or up-to-date documentation—Perhaps people are getting confused because your organization changed its program model and the how-to guides keep referencing data you no longer collect. Or perhaps the data being asked for is no longer relevant.
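The first two items above can be sketched in a few lines of Python. This is an illustrative stub, not a real form framework; the field names and the (deliberately truncated) state list are assumptions:

```python
import re

US_STATES = {"AL", "CA", "IL", "MA", "NY", "TX"}  # subset, for illustration only

def validate_entry(entry):
    """Return a list of problems with one form submission (empty = valid)."""
    problems = []
    if entry.get("country") == "US":
        # Data validation: US postal codes are exactly five digits.
        if not re.fullmatch(r"\d{5}", entry.get("postal_code", "")):
            problems.append("US postal code must be exactly five digits")
        # Limited-set answers: state must come from the dropdown list.
        if entry.get("state") not in US_STATES:
            problems.append("state must be chosen from the dropdown list")
    return problems

print(validate_entry({"country": "US", "state": "IL", "postal_code": "60601"}))  # []
print(validate_entry({"country": "US", "state": "Illinois", "postal_code": "606"}))
```

The second call fails both checks: a free-text “Illinois” and a three-digit postal code are exactly the kind of entries that otherwise surface months later as “bad data.”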

The more you can do to design for better quality data upfront, the less time you’ll spend trying to clean up, sort through, or interpret incorrect, incomplete, or inconsistent data.

If people put off entering data or using the database, find out why it’s frustrating. Maybe there are steps or redundancies that are no longer necessary now that your systems are integrated. Is there data you no longer need to collect, so the process can be streamlined? Maybe there are too many report options and only a few of them truly get used—how is a new staff member to tell which, without getting overwhelmed? Perhaps the order of the steps needs to be rearranged to better match the real (current) workflow—it’s frustrating because the system doesn’t match the real world.

This is just a start, but it hopefully can illustrate how to begin looking at your own data management processes and systems. You’ll be rewarded with better quality data and less frustrated colleagues, volunteers, program participants, and donors.

Increasingly, nonprofit organizations are employing cloud-based Software-as-a-Service (SaaS) applications such as G Suite, Office 365, and Salesforce to improve productivity, allow technical staff to focus on organizational improvements, and save on cost. According to NTEN’s State of the Nonprofit Cloud report, “Cloud services are a core part of nonprofit operations with 100% of survey respondents indicating they use at least two cloud services, up from 80% of survey participants in our last survey.”

SaaS has had a major impact on the nonprofit sector. For organizations of all sizes, SaaS provides a simple and effective way to scale growth, with straightforward onboarding and minimal maintenance. The latter is especially welcome at nonprofits, where teams are lean and freeing up the time spent maintaining productivity applications and databases has a lasting impact. But with all the benefits of SaaS, some important concerns around data protection often go overlooked.

Data loss is almost always caused by user error, whether accidental or malicious. A survey from Spanning found that accidental deletion is the leading cause of data loss from SaaS applications, responsible for 43 percent of incidents in the US and 41 percent in the UK, ahead of losses caused by malicious insiders and hackers.

Common scenarios for cloud-based data loss

Compounding the risk is the integration of key cloud applications, such as Gmail or Office 365’s Exchange Online, with applications like Salesforce (used for donor management or student lifecycle management), which can leave an organization further exposed.

For example, admin errors in importing or exporting data can overwrite critical data at compute speed—and when overwritten data syncs with other apps, errors spread exponentially. Nonprofit staff can also cause data loss through actions such as emptying a recycle bin full of “master” data, which cascade-deletes the related “detail” data. Staff and volunteers with access to SaaS systems are also a vector for ransomware attacks, which can result in financial hardship for nonprofits forced to choose between paying a significant price to unlock their data or losing access to it. Finally, malicious actors (cybercriminals or disgruntled employees with access to an organization’s email, collaboration apps, or CRM) can deliberately overwrite or delete vital data, leading to the cascades of data loss noted above.

Humans aren’t always to blame, however. Something as simple as a sync error that corrupts important data, such as donor outreach records, can have a palpable impact on a nonprofit. For example, a bad sync between Gmail and Salesforce can corrupt contact activity records, leading to donors receiving too many emails, feeling “spammed,” and stopping their donations.

Nonprofit sysadmins and business analysts play an important role in managing their organizations’ data, and the time spent recovering from SaaS data loss is a drain on limited resources. As such, organizations that use SaaS applications should adhere to the three pillars of data protection to keep operations running smoothly and their missions on track.

Three pillars of data protection for nonprofits


1. Automate backup and restoration

Automating SaaS data backup and restoration greatly reduces the number of manual steps needed to protect data, which in turn eliminates the risks that human error and inconsistent execution introduce. This approach also reduces audit and governance risk.
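As a rough illustration of the idea, and not how Spanning or any vendor actually works, here is a minimal Python sketch of an automated, verifiable snapshot: each backup is timestamped and paired with a checksum that an audit can re-verify later. The function name and record fields are invented:

```python
import json, hashlib, datetime, pathlib, tempfile

def snapshot(records, backup_dir):
    """Write a timestamped JSON snapshot plus a SHA-256 checksum for audits."""
    backup_dir = pathlib.Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%S")
    payload = json.dumps(records, sort_keys=True).encode()
    path = backup_dir / f"donors-{stamp}.json"
    path.write_bytes(payload)
    # Checksum lets an auditor verify the snapshot was not altered afterward.
    path.with_suffix(".sha256").write_text(hashlib.sha256(payload).hexdigest())
    return path

# Hypothetical donor records; a real job would pull these from the SaaS API
# on a schedule rather than holding them in memory.
records = [{"id": "d01", "name": "Ada"}]
with tempfile.TemporaryDirectory() as d:
    saved = snapshot(records, d)
    print(saved.name.startswith("donors-"), json.loads(saved.read_bytes()) == records)  # True True
```

The point is the shape, scheduled, hands-off, and verifiable, rather than any particular storage target.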


2. Layer your security

Implementing multiple layers of security is vital to protecting a nonprofit’s mission and operations. This not only helps secure critical data, but also contributes to overall compliance. For example, a SOC 2 report describes the controls a SaaS provider has in place to deliver on security, availability (uptime), data integrity, confidentiality, and the privacy of personal data. Ensuring that the SaaS vendors you use are SOC 2 Type II compliant gives you a window into the security measures protecting your data.


3. Insist on vendor reliability

As customer reference calls and reviews can attest, reliability goes beyond simple service uptime and accuracy: it helps ensure you’re selecting vendors you can trust. At the end of the day, this is one of the most important qualities a SaaS vendor can offer.

Integrating these three pillars into their policies and procedures has allowed organizations like the East Coast Migrant Head Start Project (ECMHSP) to scale up investments in cloud-based productivity applications, while meeting internal and regulatory requirements. Through its work with Spanning, ECMHSP has successfully met or exceeded recovery time objectives (RTO) in drills.

By taking an integrated approach to SaaS data threats and upholding all three pillars of SaaS data protection, nonprofits using a collaboration platform like G Suite or Office 365 along with a donor management or student lifecycle management application can maximize time and resources saved, safely and securely.