Tag: program evaluation

A few months ago, I asked, “Are You an Accidental Evaluator?” If you find yourself in this position, you may soon realize that the amount of data you need to collect and analyze is daunting. Luckily, there is a lot of technology available to support evaluation, but the process of determining what you need can be overwhelming, and potentially very costly if you choose wrong.

Why Technology in Evaluation?

“A good database and reporting tool can greatly ease the process of collecting, recording, and analyzing the output and outcome data an organization is tracking. . . Similarly, the lack of flexible tools can hinder an organization’s effectiveness and add unnecessary time to the evaluation process.” (Idealware, The Reality of Measuring Human Services Programs: Results of a Survey, p. 15)

[Cartoon: a man looks at a computer screen and thinks, "Monday, 9am: We just bought a new database on Friday--now I can show that my program has impact! Whoo-hooo!"]

Technology is a tool that can facilitate your organization’s evaluation process, but it is not a magic box. The technology needs to support your evaluation plan; your evaluation plan shouldn’t be dictated by your technology. For more information about evaluation planning, check out the resources I listed in my other piece.

How Can Technology Support Evaluation?

Technology can support evaluation in several key ways:

  • Facilitating data collection (e.g., via online forms)
  • Storing data (e.g., on people and activities)
  • Analyzing data (e.g., using Excel, SPSS, Stata, R, or GIS)
  • Reporting (e.g., dashboards, monitoring reports)

What Types of Software Can Support Evaluation?[1]

There are many types of software—covering a wide spectrum of quality and cost—that can support your evaluation needs. These may include:

  • Spreadsheets
  • Custom Databases (e.g., MS Access, FileMaker)
  • Constituent Relationship Management (CRM) Software (e.g., CiviCRM, Salesforce)
  • Case Management Systems (e.g., Efforts-to-Outcomes, Apricot)
  • Homelessness, Learning, Membership, or Legal Case Management Information Systems
  • Electronic Medical Record Software

How Do You Choose Technology for Evaluation?

What is your evaluation scope? That is, how complex are your evaluation needs? The more complex, the more data you will need to gather, monitor, analyze, and report, so your technology needs to be able to support that complexity. This goes back to the importance of evaluation planning. As described in more detail in a recent AEA365 blog post, at Jewish Family & Children’s Service of Boston (JF&CS), Rachel Albert, the Vice President of Learning and Impact, and I have developed a tool for determining the scope of a nonprofit’s internal evaluation needs.

What is your evaluation scale? How large are your evaluation needs, whether it be the number of clients served, programs, or staff members? For example, we serve 17,000 people a year across 42 programs, each with their own evaluation scope. Therefore, our evaluation scale is quite large, and so our system needs to be robust enough to handle that volume of data.

What resources do you have available? By resources, I mean two distinct things. First, I mean pure capital: money for licensing (at initial purchase and ongoing, depending on the contract), implementation (in particular if customization is needed), and support. Second, I mean people resources: in order for an evaluation technology to be successfully implemented in an organization, there needs to be someone in charge of supporting it, whether a staff member or contractor. Additionally, your technology support personnel needs will grow with the scope and scale of your evaluation needs. We have one-and-a-half full-time staff members devoted to database administration. I cannot stress enough the importance of being thoughtful and realistic about what resources you have available to implement a technology for evaluation before making a purchase.

What Features Should You Consider in a Technology for Evaluation?

Determining the scope and scale of your evaluation needs directly informs the features that you will need to consider, and of course the resources you have available will mediate your decision. Some questions you should ask yourself include:

Is the technology cloud-based or local (i.e., installed on individual machines)? If you have multiple people who need to access the system, or if you have staff members who need to be able to access the system from the field, you may need a cloud-based solution. A cloud-based solution can also reduce support costs.

Is the technology customizable? Some technology is acquired “as is”: you have to work within the constraints of the system, with little ability to customize. Other technology allows for customization (often at a cost), letting you tailor it to your organization’s particular needs. Keep in mind that software that must be customized before use, rather than working “out of the box,” can increase the cost significantly.

Does the technology use open standards? Some platforms that support evaluation are “closed systems”—meaning, you must rely on the developers to create new features. Other systems are based on open standards, meaning you can access other applications, often through a marketplace, to enhance your system.

Does the technology support different levels of users? Depending on the scope of your evaluation plan, you may need multiple staff members to be using the system, each with different access permissions.

Does the technology support constituent and/or non-constituent-level data? You may need to store data both about the people you serve and the activities in which they participate. For example, I worked in a nonprofit in which we needed to gather data not only about participants who took our trainings, but also on the training sessions themselves (e.g., when and where they were being held, the trainer, the status of the training planning, etc.).
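
To make the two levels of data concrete, here is a minimal sketch in Python of how constituent-level records (participants) and non-constituent-level records (training sessions) might be modeled and linked; the field names are illustrative assumptions, not the schema of any particular product.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List

@dataclass
class Participant:                 # constituent-level record
    participant_id: int
    name: str
    enrolled_on: date

@dataclass
class TrainingSession:             # non-constituent (activity-level) record
    session_id: int
    topic: str
    held_on: date
    location: str
    trainer: str
    status: str                    # e.g., "planned", "delivered", "cancelled"

@dataclass
class Attendance:                  # links the two levels for reporting
    participant_id: int
    session_id: int
    completed: bool

def completions_by_session(attendances: List[Attendance]) -> Dict[int, int]:
    """Count completed attendances per session for a simple monitoring report."""
    counts: Dict[int, int] = {}
    for a in attendances:
        if a.completed:
            counts[a.session_id] = counts.get(a.session_id, 0) + 1
    return counts
```

A system that supports only constituent-level data would capture the participants but not the sessions themselves, which is why this distinction is worth checking before you buy.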

Is your nonprofit required to use a federal-, state-, or grant-mandated database system? If you use an alternate system, will you be able to export your data so that it can be imported into the required system?

What additional security features are required or desired? For example, does your system need to be HIPAA-compliant? As another example, The National Network to End Domestic Violence has created a resource, focusing on security, to use when selecting a database for domestic violence and sexual assault programs.

Here are some additional considerations:

  • Does the technology need to support data collection tools?
  • Does the technology need to have analysis or reporting tools?
  • Does the technology need to support billing?
  • Does all or part of the technology need to be an approved electronic medical record?
  • Is the technology accessible on mobile devices?

Concluding Takeaways

If you are trying to choose a technology to support your evaluation needs, here are my key takeaways:

  1. Remember that technology should be used as a tool to support a thoughtful implementation of an evaluation plan, not a substitute for one.
  2. Consider the scale and scope of your evaluation needs.
  3. Consider your available resources.
  4. Then look at technology options!

Where Can You Get More Help?

If you are looking for more support in choosing a technology for evaluation, here are some helpful resources.

Idealware has several resources about technology and program evaluation, including the reports Software to support program evaluation and Nonprofit performance management: Using data to measure and improve programs.

TechSoup has a listing of databases and analytics software as well as support and how-to articles pertaining to databases and analytics.

[1] Adapted from: Christian, R., Quinn, L. S., Andrei, K., & Pope, E. The Reality of Measuring Human Services Programs: Results of a Survey. Portland, ME: Idealware, p. 15. Available from: http://www.idealware.org/reports/reality-measuring-human-service-programs-results-survey

Among the things that we, as nonprofit professionals, have to cram into our day, tracking and measurement generally sit pretty close to the top of the “I know it’s important but I never have time for it” pile. Don’t deny it. If only we had more time, we could implement all of the awesome strategies and tactics we learn by reading the NTEN Blog!

This is even truer for measuring the impact of our volunteer programs. At the risk of showing a persecution complex, volunteer programs are often seen as a “nice to have” when it comes to nonprofit resources and support. After all, volunteers are supposed to be free, right?

Ahhhh!

The truth is, volunteer programs require budget and resources just like any other nonprofit program does. They will give back at least ten times that amount in the long run, if given the necessary support. Thus the frustrating catch-22 of the nonprofit volunteer program: If we only had more money, we could invest in volunteer programs that would enable us to get more money!

Why is Measuring Volunteer Impact Important?

One way to break this vicious cycle is to prove to nonprofit leaders and funders that volunteers really are a keystone program in the nonprofit sector. And when we say “prove,” we mean lots and lots of pretty charts, graphs, and data visualizations, of course.

The need to invest more in volunteers is just another symptom of the larger nonprofit funding conundrum made popular by Dan Pallotta and the folks who launched the Overhead Myth: By penalizing nonprofits that make staff and operational support a priority, we’re essentially setting the entire sector up for failure. I mean, who would run a business like that?

Data, in my opinion, can help us overcome these gross (aka “disgusting”) misconceptions. For example, did you know that the lifetime value of a volunteer recruited (for free) via the VolunteerMatch network is over $3,000? Beat that ROI!

There’s more: Reimagining Service has a whole page of resources to help you make the case for volunteer management funding. From research on the estimated value of volunteer time, to connecting the dots between volunteer management and organizational effectiveness, there’s enough data here to keep any tech geek happy for hours.

Your Volunteer Impact Report

It’s often not enough, however, to quote other folks’ statistics. What about your experiences? How do volunteers impact your organizations, and how can we all learn from this to make sure our programs are better supported in the future?

Back in the spring of 2014, VolunteerMatch partnered with technology review firm Software Advice to find out what metrics, indicators, and data collection methods nonprofits are using to measure volunteers’ impact on their organizations’ outcomes.

We ended up getting over 2,700 responses from organizations of all shapes, sizes, and locations. Some of the data points presented in the first-ever Volunteer Impact Report provide unexpected lessons about the role volunteer programs play in organizational success, and how nonprofits are tracking and measuring this.

For example, when reporting on the benefits of measuring volunteer impact, many respondents mentioned increased recruitment and retention of volunteers, as well as improved program outcomes overall. Additionally, a full 17% of respondents also reported that their organizations obtained more funding because these impact numbers motivated funders to give!

Tying measurement directly to bottom-line fundraising results? Yes, please! Despite this encouraging statistic, however, only 55% of respondents said that their organizations measure volunteer impact at all. The key obstacles, not surprisingly, are lack of resources, time, and knowledge.

Don’t Let It End Here

So what have we learned? Measuring volunteer impact on your organization not only results in more successful volunteer programs and more successful nonprofits in general, it also helps organizations raise more money. And yet, a significant portion of nonprofits are not tracking the impact of volunteers.

How can we fix this?

First of all, if your organization is not currently tracking volunteer impact, take a look at the Volunteer Impact Report to see how others are doing it. There’s information in there about the frequency, strategies and tools other nonprofits are using.

Next, grab a couple of coworkers who also believe this is important and form an alliance. Convince your leadership and coworkers that tracking and reporting a few key volunteer program metrics will help everyone’s goals in the long run.

Finally, utilize the great resources from organizations like NTEN and VolunteerMatch to gain access to the skills, tools and advice you need to conduct your surveys, analyze your data and improve your programs.

How can nonprofit organizations know what is working, what is not, and what impact they are making? They can achieve this understanding by implementing a realistic and meaningful evaluation system.

There was a time when monitoring activities, such as the number of people served or hours delivered, was sufficient to fulfill evaluation requirements. The nonprofit landscape has changed, and with it has emerged increasing pressure for nonprofit organizations to measure impact beyond just counting the numbers.

Evaluation provides a road map for how to collect data that is useful and meaningful. It is a valuable tool to help nonprofits navigate how to best serve their clients and measure the degree to which they are achieving their goals. The key benefits of conducting evaluation are continuous program improvement and accountability. When an organization is motivated to conduct an evaluation because it wants to make program decisions based on data, it creates a culture of evaluative thinking.

Nonprofit organizations can conduct a program evaluation in many different ways. Regardless of the approach selected, the following common elements are important considerations.

Define Success

Success is often defined as having a positive impact or making a difference in some way. Each nonprofit organization is unique and therefore needs to identify the specific goals to reach within the context of the organization’s mission.

In other words, what results does your organization expect to achieve in terms of influencing change in knowledge, skills, behaviors, and/or attitudes?

A common method for understanding expected changes is to create a logic model; the W.K. Kellogg Foundation Logic Model Development Guide is a good resource on how to develop one. This tool summarizes and visually illustrates an organization’s program activities and their expected outcomes.

Collaboration is an essential ingredient in defining measurable outcomes that are a true reflection of what an organization wants to achieve. It is critical to have key stakeholders such as program, development, and/or marketing staff members participate in the evaluation design conversation. This inclusion ensures that everyone’s data needs are met and consensus is achieved regarding what elements are to be measured.

Determine the Process

Once an organization has defined success, the next step is to determine a realistic data collection process. Many methods can be used, including surveys, focus groups, and observations. How an organization chooses to collect data will be influenced by available resources. It is critical to understand what is feasible to collect, as one of the most common pitfalls in the evaluation process is attempting to collect more data than is realistic to gather and use.

Deciding how the data will be managed, stored, analyzed, and reported is essential to a successful program evaluation. Prior to beginning the evaluation process, it is critical for organizations to develop a database that allows for effective storage and management of the data and to select someone with the necessary skills to analyze and report the data collected. Organizations sometimes realize too late in the process that they lack staff with the time or ability to collect, manage, analyze, or report the data.

A key to successful program evaluation is to start small, be successful with one data collection method, and then expand these efforts to incorporate additional data collection tools.

Use the Data

A successful evaluation process is one in which the data are used to make program decisions. One way to accomplish this goal is to conduct data discussion meetings. In these meetings, staff review evaluation results and discuss how they can relate to program decisions.

Accountability is another reason to conduct a program evaluation. Those who invested in your program should be informed of the evaluation findings. A separate communications plan for disseminating findings to different groups is a simple and important step. For example, evaluation findings may be shared with the following groups and answer questions that are relevant to each group:

  • Staff members: What do the data mean for our organization and to our roles?
  • Board: How do the data inform the big picture and our strategic plan?
  • Donors: What 3-5 key findings can you report to donors via email or letter?
  • Participants: Thank participants for being a part of the evaluation process. How do the findings affect the program activities and the services they receive?

In summary, by collaboratively defining success, identifying a feasible data collection system, and using the evaluation findings, organizations increase their chances of improving their internal systems as well as making the impact they strive to achieve. Successful program evaluation empowers nonprofit organizations to improve decision-making and offer the best possible programs.

 

What if nonprofits and social enterprises had an affordable way to report real-time, large-scale data on their social impact? This question inspired Kopernik to create the “impact tracker technologies” catalog in online and print forms.

Organizations are under pressure to measure their performance and results. Many low-cost tools based on information and communication technology (ICT) already exist to help collect data on a large-scale, real-time basis. Yet, while both supply and demand for ICT-based tools exist, nonprofits and social enterprises often fail to take advantage of them.

One issue is access. There isn’t a central marketplace where organizations can find ICT-based tools and come to understand their pros and cons, as well as their applications to specific needs.

The other issue is technical language. “Free and open source” doesn’t mean no-cost, turn-key solutions ready for immediate deployment. Rather, it means that people with specific skills, such as IT programmers, can use open source tools to build something useful for organizations. However, most nonprofits and social enterprises do not have in-house programmers to help use such tools.

A user-friendly catalog showing options and recommendations

In addition to addressing these gaps, Kopernik’s impact tracker technologies catalog goes a step further by providing recommendations that help users make decisions in some categories of tools (e.g., digital data collection apps and SMS communication platforms). Beyond these targeted recommendations, the catalog displays all relevant research findings so that users can draw their own comparisons.

This catalog aims to show the options as neatly and simply as possible so that the catalog’s audience — small-to-medium organizations — can understand and take action. But such a simplification poses the risk of cutting out some of the nuances and complexities of individual tools. The result is a careful balancing of simplicity and complexity, rigor and practicality, subjectivity and objectivity.

This field of impact tracker technology is dynamic and fast-moving. New tools come onto the market on a regular basis. Existing tools frequently expand their features to cater to users’ needs and challenge their competitors. Given this dynamism, the online version of the catalog will be updated as regularly as possible.

The catalog groups a total of 39 ICT tools into four categories. These categories are described in turn below.

1. Digital data collection apps – no more paper-based surveys

The digital data collection apps are solutions to eliminate paper surveys in the field and reduce the time it takes to compile data. These apps work on smart phones and tablets, allowing for easy and robust data collection. They often allow users to develop digital questionnaires using a pre-programmed form builder, deploy these forms to mobile devices, collect data on devices, and sync forms with the cloud when connected to a data network. Some of the apps can also produce charts and maps from the collected data, generate PDF reports, and allow users to download aggregated data to conduct more complex analysis. Of the 12 tools featured in this category, our top recommendations include Magpi, Commcare, and iFormBuilder, which are user-friendly, affordable, and comprehensive in their features.
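
As a hedged illustration of what "download aggregated data to conduct more complex analysis" can look like in practice, the Python sketch below summarizes a hypothetical CSV export of synced survey forms; the file name and column names are assumptions for the example, not the export format of any specific app.

```python
import csv
from collections import Counter

# Hypothetical export: each row is one form submitted from the field.
# Assumed columns for illustration: "village", "water_filter_in_use" (yes/no).
with open("survey_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

totals: Counter = Counter()
in_use: Counter = Counter()
for row in rows:
    village = row["village"]
    totals[village] += 1
    if row["water_filter_in_use"].strip().lower() == "yes":
        in_use[village] += 1

# Adoption rate by village: the kind of quick cut a program team might want
# beyond an app's built-in charts.
for village in sorted(totals):
    rate = in_use[village] / totals[village]
    print(f"{village}: {in_use[village]}/{totals[village]} filters in use ({rate:.0%})")
```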

2. SMS communication platforms – keep in touch with your remote clients

The SMS communication category features tools that can efficiently manage large-scale communications with clients and beneficiaries through SMS so that organizations can reduce the number of phone calls and physical visits to project sites. Many of these platforms are cloud-based and can be accessed using any web browser straight from your computer, as well as via the platform’s dedicated Android apps where available. Our top recommendations include TextIt and Telerivet, which offer the most comprehensive sets of features that can be easily set up by users with limited IT knowledge.

3. Geospatial mapping tools – visual information at your fingertips

Geospatial mapping tools enable users to visually compile information from various sources in the form of a map. These maps are useful for tracking information, analyzing data, and presenting updates. They operate on web-based applications on which administrators build data forms to be filled out by individual users via their phones or tablets. Information can be sent through web browsers, mobile apps, email, or SMS. Once submitted, information will be automatically aggregated on a map. Organizations can use the produced maps both for internal and external communication purposes.

4. Remote sensors – additional eyes and ears in the field

The remote sensors category features low-power and low-maintenance remote sensors used to monitor and measure the use of cook stoves, water filters, and other devices, as well as to evaluate changes in environmental conditions. These sensors were developed to address the challenges of collecting unbiased and precise data on technology adoption and program interventions. Taking advantage of growing access to the Internet and the falling costs of IT components, many of the sensors can send data wirelessly with very minimal internet connectivity. This eliminates the need to physically go to the field and download data from the devices. Each featured sensor measures something particular, such as stove usage, air quality, or forest logging.

In a recent guest article, Michael D. Smith described the White House’s Social Innovation Fund (SIF), which has developed a new program evaluation guide based on best practices.

Recently, Transparency Talk conducted an online interview with Kelly Fitzsimmons of The Edna McConnell Clark Foundation (EMCF) to learn how the new framework provided by the Social Innovation Fund can be adapted for use in assessing foundation program impact.

1. What do you see as the value of program evaluations in your field?
I think there are some misconceptions about what you “do” with program evaluation. For us at the Edna McConnell Clark Foundation, one of the biggest positives is that evaluation can expand what you know about what “works” as well as about what doesn’t work. It’s our belief that program evaluations are a key driver of innovation.

Whether a study shows positive, mixed, or disappointing results, if carefully designed it almost always unearths information that can be used to innovate and improve how a program is delivered to boost quality and impact.  To get the most out of an evaluation, we believe it is critical during the planning stage for organizations and their evaluators to ask themselves not only what impacts they are looking to test, but also what they’d like to learn about the program’s implementation.  For example, answering questions such as: “How closely is the program run compared to the intended model?” or “To what extent does this or that program component contribute to impact?” can yield important insights into how well a program is implemented across different sites (or cities or regions) or reveal differences in impacts depending on the population served or environmental factors.

2. How does having evaluation plans, like the Social Innovation Fund’s Evaluation Plan Guidance, help nonprofits become more effective?
The Social Innovation Fund’s tool is a useful resource for organizations interested in building their evidence base and thinking about how to plan thoughtfully for evaluation. It offers practical takeaways that organizations should consider when thinking about evaluation, from structuring an evaluation plan to what elements should be considered in an evaluation, and even ways to assess the feasibility of undertaking one. A thoughtful evaluation plan can also inform an organization’s larger plans. For example, if an evaluation requires that X number of kids must participate in order for a program to be assessed, does your organization need to grow or adapt in order to meet that threshold? If so, how will the organization get there while maintaining program quality?

In essence, a strong, multi-year evaluation plan is much like a strong business plan—it helps you think about the resources you need, identify your interim and ultimate goals, and even decide what to do and how to communicate if your plan goes off-track.

3. How do EMCF and your grantees use the data you’ve collected from evaluations?
We like to approach evidence building from the premise that we’re seeking to understand how a program works, not just if a program works. From this perspective, whether the findings are positive, mixed, or null, evaluating programs over time can yield insights that inform practice, drive innovation, and ultimately ensure the best possible outcomes for youth and families.

For example, take Reading Partners, which connects students who are half a grade to 2 ½ grades behind in reading with trained volunteers who use a specialized curriculum. A recently released MDRC evaluation found these kids made greater gains in literacy—1.5 to two months—than their peers after an average of 28 hours of Reading Partners’ instruction. During the evaluation, MDRC was able to corroborate that local sites were implementing the program with a high degree of fidelity, including providing appropriate support and training to volunteer tutors. The data collected also indicated the program was effective across different subsets of students: across 2nd to 5th grades, varying baseline reading achievement levels, girls and boys, and even non-native English speakers. This knowledge is now helping Reading Partners think more strategically about how and where it expands to impact more kids.

We worked with Reading Partners as we do with other EMCF grantees, bringing in experts to help them develop high-quality evaluation plans, often connecting them to other experts, and also funding their evaluations. We help them identify key evaluation questions at the outset, work together to monitor progress toward evaluation goals, make revisions to their plans when circumstances change or new information arises, and communicate results when they become available. Evidence building is a continuous, dynamic process that informs how EMCF as well as our grantees set and reach our growth, learning, and impact goals.

We also use quality and impact data to help measure and track quarterly and annually the performance of each grantee and our entire portfolio, including whether our investment strategy is having its intended effect of aiding our grantees in meeting the yearly and end-of-investment milestones and evidence-building goals on which we have mutually agreed.

Regardless of your personal or organization’s viewpoint on the use of data, you, like many nonprofit professionals, may be asked to serve as an “accidental evaluator”: you may not be trained in evaluation, but you are asked to do it anyway as part of your job.

If you run a client database as part of operations, you might be asked to “pull a few numbers.” If you run your organization’s website, you might be asked to report on actions taken when people visit the site (e.g., did they donate? Volunteer?). If you are a program manager, you might be required to report on your program’s activities for a grant proposal or report. If you are front-line staff, you might be asked to provide client stories that reflect your organization’s impact.

Being an accidental evaluator is not an easy task, but it is an essential one, particularly for small nonprofits that may not have the resources to keep an experienced evaluator on staff. What can you do if you find yourself in this position? Here are some tips to get you started.

What is program evaluation?

The American Evaluation Association (AEA) commissioned a Task Force of highly-regarded evaluation professionals to answer exactly this question; their response was:

Evaluation is a systematic process to determine merit, worth, value or significance. … Programs and projects of all kinds aspire to make the world a better place. Program evaluation answers questions like: To what extent does the program achieve its goals? How can it be improved? Should it continue? Are the results worth what the program costs? Program evaluators gather and analyze data about what programs are doing and accomplishing to answer these kinds of questions. (American Evaluation Association, What is evaluation?)

Why should nonprofits evaluate their programs?

There are a lot of reasons why you might choose to evaluate your programs, but here are two pragmatic reasons: reporting and operations improvements.

Reporting

Results from the 2014 Nonprofit Finance Fund State of the Sector survey revealed that for 27% of respondents, all of their funders were asking for impact metrics in their reports; for 43%, half or more were. For many accidental evaluators, this is probably the main reason you are in the position in the first place: many funders, including federal, state, local, and private grantors, require statements of impact.

Operations


As nonprofit professionals, we should strive to provide the best programs possible in the most efficient manner possible, both to ensure the well-being of those we serve and to respect our fiduciary responsibility to all those who provide us funding. By using data and learning from your evaluation process, you can deliver high-quality programs efficiently.

Where should you start?

Here are five places where you can start to learn more about the evaluation planning process:

Before you embark on your evaluation planning, two general pieces of advice:

(1) Don’t run before you can walk! Many people try to jump to statements about their nonprofit’s impact or outcomes without first ensuring that their foundational data—whom they are serving and how—is in place. Said another way, you should be able to answer with confidence and accuracy (i.e., with complete data): “Are we serving whom we intended to serve?” and “What services, and how much of those services, did participants receive?”

(2) Start small and do that thing well. The Urban Institute recently did a great interview with three evaluation professionals—Isaac Castillo, Tony Fujs, and Daniel Tsin—and all had the same advice: start small and think long-term. Some great advice from Tony:

“My advice is always to start small. The typical mistake that I’ve seen many times is to start by trying to collect everything about everything. And then you have information about nothing—because you can’t process the data, or the cost of collecting the data was underestimated, or you don’t have good-quality data so you can’t say anything.”

Where can I get more help?

Find in-person and virtual communities of practice

The American Evaluation Association (AEA) is an international membership organization for evaluators. As its members span a variety of disciplines, settings, and areas of expertise, members can be part of Topical Interest Groups (TIGs), which allow for further community development around specific topics (for example, the Nonprofits and Foundation TIG, of which I am a co-chair). AEA has several welcoming communities of practice available for non-members, if you are not quite ready to join, such as AEA365, a curated daily blog with tips and tricks about evaluation, and the AEA EVALTALK listserv.

In-person communities of practice can also be an invaluable resource for understanding how to engage in evaluation in your nonprofit. To begin, check out the listing of AEA affiliates. If there is not one in your area, or if it is not active, start your own! For example, Crittenton Women’s Union, a Boston (MA) nonprofit, organizes an Outcomes Workgroup that meets quarterly and is open to local nonprofit professionals of all levels of evaluation expertise.

Consider hiring a consultant

An evaluation consultant with expertise in the nonprofit sector can provide you with guidance about how to plan an evaluation, based on your needs and resources available. One way to find a consultant is through AEA’s “Find an Evaluator” listings.

Check out online resources related to evaluation

There are many wonderful, and free, online evaluation resources, such as:

Outcomes management, performance management, evaluation, assessments… You may hear a lot of terms thrown around by the sector related to capturing your organization’s impact. At the heart of these conversations is the real question: are you achieving your mission? You are likely already collecting data related to your mission; a report that NTEN released earlier this year found that 75% of the nonprofits surveyed were collecting data to evaluate their programs. But are you set up to use that data effectively? Do you really understand how well you are helping your constituents/clients/community?

If you are new to the topics mentioned above, performance management is the discipline of making good decisions with data; it is the foundation for outcomes management. Outcomes management is the discipline of managing to specific social results: it helps you understand whether your programs and services are effectively contributing to your underlying mission.

Tackling Outcomes Management

Outcomes management isn’t just about tracking data using the right system (although that is important!): it’s a comprehensive approach to your organization’s culture and operations. This can seem like an overwhelming endeavor to undertake, but with a structured process, you can help your organization move down the path of outcomes thinking by developing an outcomes management strategy, structuring a technology system, and building an organizational culture with a strong outcomes management focus. Here are some suggestions to get you started:

  1. Develop your knowledge of outcomes management. This topic could be a person’s life work, but you can take bite-size steps. Look internally first, starting with your Theory of Change. How will you have the impact you describe in your mission? What information do you need to measure each step of the way toward your mission? Next, you can study best practices in outcomes management across the sector. There are a number of great resources for this effort, such as Leap of Reason by Mario Morino, The Nonprofit Outcomes Toolbox by Robert Penna, and The Stanford Social Innovation Review.
  2. Build systems and processes for outcomes management. Whether your data lives in spreadsheets or a customized CRM system, learning about outcomes will likely uncover ways you can better manage it. Beginning with your processes, you’ll want to identify what data you need to collect, who will collect it, and how it will be collected (a minimal sketch of such a plan follows this list). With documented procedures, you can assess your current systems and how they need to adapt to enable your outcomes strategies.
  3. Lead your organization with an outcomes focus. Successful organizations start their outcomes management journey because they want to see measurable change for their clients and constituents. Leaders of your organization must be committed to driving an outcomes mindset throughout the organization. Following that example, staff will need to help determine the data to track, input the data into your systems, and use those data to make changes in your programs. No matter the role, each team member plays a part in ensuring that your organization manages outcomes; that said, organizational leadership remains crucial.
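
As referenced in point 2 above, here is a minimal sketch of what a documented data-collection plan might look like, written in Python simply to keep it concrete; the indicators, sources, and system names are invented for illustration and are not a prescribed standard.

```python
# Illustrative data-collection plan for one program's outcomes.
collection_plan = [
    {
        "indicator": "Clients reporting improved job-readiness skills",
        "data_source": "Post-workshop survey (5-point scale)",
        "collected_by": "Workshop facilitator",
        "frequency": "End of each workshop cycle",
        "stored_in": "Program CRM, 'workshop_outcomes' record type",
    },
    {
        "indicator": "Clients placed in employment within 90 days",
        "data_source": "Case manager follow-up call",
        "collected_by": "Case management team",
        "frequency": "Quarterly",
        "stored_in": "Program CRM, 'placements' record type",
    },
]

# A plan like this can be reviewed against your current systems to see what
# they already capture and where they need to adapt.
for item in collection_plan:
    print(f"{item['indicator']}: {item['collected_by']}, {item['frequency']}")
```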

Learning from the Community

The path described above isn’t always straightforward or linear. You’ll likely tackle aspects of each step in parallel. In addition, you might adapt your approach as you see how other organizations track their results. For instance, several of our nonprofit clients contributed to a report by Idealware, featuring case studies of how nonprofits are continuing to evolve their approach to using data to improve their programs.

Both Teach for All and The International Youth Foundation are aggregating data from their partners across the world so that programs can compare results and learn how to improve. The Cara Program collects a wealth of data in order to understand how clients are overcoming poverty and homelessness. As Database Analyst Andrea Cote mentions in the report, “We definitely use the data to identify any areas we want to change. If we’re proposing program changes or have to shift our approach because of external or internal factors, we have a bunch of cases where we look to data to help support that. A lot of times the initial idea for change comes from someone observing something and thinking a change should be made. We’ll then work together to see how our existing data supports that.”

These case studies, among others in the report, provide real-life examples of organizations using data to make a bigger impact. Like your mission, outcomes management is a journey. You will learn, build these lessons into your processes, adopt the new and updated systems, and then enter this iteration cycle again. Each step will bring you closer to realizing your mission and making greater change.

Why use scissors to cut the grass when you can use a lawn mower? In the world of impact measurement with remote beneficiaries, mobile is the new lawn mower.

Four nonprofits are leading the way in leveraging mobile technology to hear directly from remote beneficiaries. Each has a very different mission:

But all have the shared goal of hearing directly from their beneficiaries about whether their work is working, and mobile enables them to do it.

Where there is no internet

In India, where these nonprofits work, only 137 million households have internet access, out of a population of 1.2 billion. So surveying your beneficiaries online is not an option.

Until recently, organizations used the crude tools of pen-and-paper surveys (challenging with a 65% female literacy rate) or in-person interviews (not reliable for sensitive questions as the respondent may just tell you what they think you want to hear). Neither of these is scalable, and both are error-prone since they require manual data entry.

Deploying pen-and-paper surveys with a low literacy population is a bit like mowing the lawn with scissors. It’s time-consuming and doesn’t quite get the job done.

Yet in the same country there are 929 million mobile subscribers and growing (soon approaching the total population size). Now we’re onto something.

How it works

We designed the Labor Link platform to help organizations leverage this new mobile connectivity to listen to beneficiaries. We made it free, anonymous, and voice-based (not SMS) – so it does not require literacy. It also works on basic feature phones, the medium most familiar to the masses.

Target respondents call a local phone number, place a missed call (let it ring and hang up, a common practice in India), and immediately get a call-back from our automated system. They then answer 10-12 multiple-choice questions, voice-recorded in Hindi, Tamil, or another local language, using their touchtone keypad.

We then analyze that data for the nonprofit, giving them real-time feedback from the field, and close the loop with respondents through voice-recorded messages that make them feel heard and share locally relevant educational content.
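
To make the call flow easier to picture, here is a hedged sketch of the missed-call-and-callback pattern described above. It illustrates the general technique, not the actual Labor Link implementation; the question IDs, audio file names, and the four telephony callables are all invented stand-ins.

```python
# Hypothetical sketch of an IVR missed-call survey flow.
QUESTIONS = [
    {"id": "q1", "prompt_audio": "q1_local_language.wav",
     "choices": {"1": "yes", "2": "no"}},
    {"id": "q2", "prompt_audio": "q2_local_language.wav",
     "choices": {"1": "yes", "2": "no", "3": "unsure"}},
]

def handle_missed_call(caller_number, dial_back, play_audio, read_keypress, save_response):
    """Call back a number that placed a missed call and run the survey.

    The four callables stand in for whatever telephony API a platform uses;
    they are parameters here so the sketch stays self-contained.
    """
    call = dial_back(caller_number)            # immediate, free call-back
    play_audio(call, "welcome.wav")            # greeting in the local language
    for question in QUESTIONS:
        play_audio(call, question["prompt_audio"])
        key = read_keypress(call)              # answer via touchtone keypad
        answer = question["choices"].get(key, "invalid")
        save_response(caller_number, question["id"], answer)   # anonymized downstream
    play_audio(call, "thank_you_and_findings.wav")             # close the loop
```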

How nonprofits are using it

Following a high-profile rape case in 2012 in New Delhi that mobilized advocates for women’s rights and safety, SAI, for example, wanted to measure workers’ understanding of their right to equal treatment in the workplace. The organization was running a training program on gender discrimination in garment factories. The project was funded by DFID, the UK international aid agency, as part of a broader effort called Responsible and Accountable Garment Sector (RAGS) to improve working conditions in garment factories.

We partnered with SAI to design a survey on women’s issues and delivered it at seven factories employing over 12,000 people. It asked questions like “Is rude language sometimes necessary to communicate urgency in the factory?” and “Are there some jobs at the factory that are only suitable for men or women, but not both?”

The response was overwhelming. Nearly 40% of targeted workers completed surveys, consistent with other survey campaigns we’ve conducted and much higher than the average 5-10% on customer surveys.

Other nonprofits we work with have had a similar experience. VisionSpring is using Labor Link to understand if the people reached by their eyeglass distribution campaigns could afford or had access to other eyeglasses, and the impact of their new glasses on productivity at work or school. GoodWeave is giving rug weavers an anonymous channel to report on sensitive issues like the presence of child labor in their workplace. And Fair Trade USA is measuring the social and economic ripple effect of Fair Trade in communities that grow tea, coffee, and fresh fruit and vegetables.

What we’re learning

Through these partnerships in India, we’ve learned a few things that maximize participation, data quality, and value across the board. To survey beneficiaries effectively via mobile, keep in mind these five things:

  1. Keep the survey short. Anything longer than 4 minutes on the phone and callers start to drop off, which affects data quality. That translates into about 10-12 multiple-choice questions.
  2. Offer simple incentives for participation. We typically use mobile credit because it’s a universal currency and can be administered virtually to the respondent’s phone.
  3. Close the loop with respondents. Think of the last survey you took. Do you know what happened to the data? Make sure your respondents are not answering questions into a black hole. Thank them, and let them know what you found and what you’re doing about it.
  4. Compare with other data sources for a 3D picture. What other data sources do you have access to – either your own data or public sources like UN or World Bank data – to provide additional context?
  5. Re-package the findings for partners. We find that maximum transparency is best. Share the results with respondents and other stakeholders, such as shop-keepers (for customer surveys) or employers (for workplace surveys). It builds trust for future information sharing.

Lastly, let’s share with the wider world. This is innovative stuff. We need a community of practice around really listening to beneficiaries and reporting back on what we find. Our nonprofit partners have found that donors are eager for such direct beneficiary feedback. In fact, some of this work in India is supported by a grant from USAID’s Development Innovation Ventures (DIV) unit.

Nonprofits are also using Labor Link mobile survey data to strengthen business relationships. “Labor Link enables farm management to stay intimately connected to their workers,” says Hannah Freeman, Director of Produce & Floral at Fair Trade USA. “In Fair Trade, we see it as an opportunity to strengthen communication and improve the operational efficiency of our key partners.”

Just as there are many ways to cut the lawn – from scissors to a lawn mower, or even a goat – there are many ways to capture impact data. But when you’re trying to reach a low literacy population that lacks internet access, mobile is the way to get it done in a way that’s reliable, affordable, and scalable.

Data Analysts for Social Good

  • Breaking down data silos.
  • You don’t have to be a data analyst, but you will need to know how to collect and understand data.
  • You don’t have to use the best tools right away. It’s alright to say “This is the best tool for now.”

Andrew Means launched Data Analysts for Social Good in his spare time to address a need: a better understanding of how to use data not just to maximize inputs, but to support organizations in functioning more efficiently and effectively.

This case study was originally published along with a dozen others in our free e-book, Collected Voices: Data-Informed Nonprofits. You can download the e-book here.

NTEN: Andrew, you’ve spoken with NTEN before about your experiences with data at the YMCA of Metro Chicago. Now you work at Groupon and spend a lot of your spare time launching Data Analysts for Social Good (DASG), which offers webinars, a LinkedIn group, and an annual conference. Why did you start DASG?

Andrew Means (AM): I saw no one talking about data well. Fundraising analysts, marketing analysts, program evaluation people…everyone was so siloed. We were all using the same skills, underlying tools and methods, but applying them to different parts of our organizations. Data shouldn’t be siloed to one team or one person who pulls lists. The real power of analytics and social science research is that you can address a number of questions using the same kinds of tools and skills. And most organizations don’t know where to begin. We have very little human capital around this in the nonprofit sector although this has grown immensely over the past couple of years. DataKind and others are doing phenomenal work connecting data scientists to nonprofits, but the long-term solution is to have the next generation of executive directors, nonprofit leaders, and people entering the sector really understand these tools from the get-go.

NTEN: How are you creating a data-informed culture as you grow DASG and prepare for your second annual Do Good Data conference?

AM: The hard thing about starting an organization is that you have no data to begin with, so you have to create your own. I’m enough of an analyst to know my data points are really weak. But I try to use data as much as possible to generate content. I put out a survey in the early stages of planning the second conference, asking potential attendees what they want to learn. Now, as I line up conference speakers, I can look at that survey to make sure I’m delivering.

Another example: Every two weeks or so I send an email out to my list. I track click-to-open rates to make sure I’m giving people what they want, and sending these at effective times of day on the best days of the week. I used to believe that I should send all emails at 5:00 a.m. so that they’d be in my subscribers’ inboxes first thing in the morning. But when I paid attention to the numbers, I started to see a bit of a jump in opens if I sent them in the early afternoon.
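
For readers who want to run the same kind of check on their own campaigns, here is a minimal sketch of comparing open rates by send hour; it assumes a hypothetical CSV export with columns named send_hour, sends, unique_opens, and unique_clicks, which is not any particular email tool's actual export schema.

```python
import csv
from collections import defaultdict

by_hour = defaultdict(lambda: {"sends": 0, "opens": 0, "clicks": 0})
with open("campaign_stats.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        hour = int(row["send_hour"])
        by_hour[hour]["sends"] += int(row["sends"])
        by_hour[hour]["opens"] += int(row["unique_opens"])
        by_hour[hour]["clicks"] += int(row["unique_clicks"])

# Open rate and click-to-open rate by send hour.
for hour in sorted(by_hour):
    stats = by_hour[hour]
    open_rate = stats["opens"] / stats["sends"] if stats["sends"] else 0.0
    click_to_open = stats["clicks"] / stats["opens"] if stats["opens"] else 0.0
    print(f"{hour:02d}:00  open rate {open_rate:.1%}  click-to-open {click_to_open:.1%}")
```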

I use a lot of free tools: MailChimp for email, Eventbrite for RSVPs, Google Analytics, and Google Forms. They’re fine for now. That’s something not enough people really consider. It’s OK to say, “I have what’s necessary. I don’t want to use it forever, but it works for now and I’m moving forward.” It’s worth dipping your toes in the water.

NTEN: What else should people keep in mind as they dip their toes in?

AM: We live in a world that makes it possible to measure so much, from apps that track what we eat, to Fitbits that track where we go. How do we allow these things to inform us but not control us? With that in mind, I ask myself: Is my community growing? How many people can I reach through social media? When are the best times of day to do that? Did this email outperform the list average? It’s not super formal; I’m letting the data inform me, but getting the email out is more important than succumbing to analysis paralysis.

NTEN: That said, you are looking to grow DASG strategically. How do you see yourself professionalizing this organization? Is that the goal?

AM: DASG started as a happy hour 18 months ago when I sent out a few tweets. I have been surprised by its success. It’s easy to get caught up just doing the work of running a growing organization; I forget to step back and look at, say, the Eventbrite data from the past year, which can help me analyze which webinars performed best. I want to standardize my email practices and create standard surveys for all webinars. I got a tremendous response when I surveyed the people who came to our first conference. So it’s about taking the time to collect the data but also to reflect on it. And for me, that’s about rhythms: taking the time weekly or monthly to reflect and plan.

NTEN: If you hired an employee, what rhythm would you want them to be in? What would you ask them to regularly report to you?

AM: Right now email is big. I’d definitely ask for regular reports on:

  • Revenue, since we have to make sure this is sustaining itself
  • Attendance at webinars and events
  • List growth for both email and LinkedIn

  • Where people on both the email list and LinkedIn are coming from geographically

In 2014, I’d love to do more events outside Chicago, so I need to see where we have the highest concentration of subscribers.

NTEN: Why is it so important to you to create spaces where people can come together and talk data with their peers?

AM: Everyone is talking about data, but not in ways that will benefit us in the long term. Of course there are some organizations I really respect. But too often, analytics are used to maximize our inputs, not our outcomes. We use data to raise more money, attract more donors, and send effective direct mail campaigns. I’m not seeing data applied as rigorously to help us think about actually being better organizations. We need to step back and think critically about what we exist to do.

What happens when nonprofits make a real commitment to collect healthy data about their programs and operations; manage it well; and make savvy, data-informed decisions? And what happens when you connect energized, smart, data-passionate nonprofit professionals for a year of learning and knowledge sharing?

In 2013, NTEN, Microsoft, and some of the brightest members of the nonprofit technology community set out to discover the answers. The 18 members of the Communities of Impact pilot program spent the year connecting through two in-person retreats, monthly calls with seasoned data practitioners from all sectors, and ongoing online discussions and resource sharing.

The best way we could find to capture the lessons, insights, and discoveries from this year of work is by compiling case studies from participants with resources and conversations that emerged during their work together. This is not a report, per se; it isn’t a guide or a handbook. Just as these participants plan to continue working on the ways their organizations collect and use data, we hope that this collection can serve you and your team in learning about what others are doing and where you may go next.


