October 9, 2014

The Edna McConnell Clark Foundation’s Kelly Fitzsimmons on a New Blueprint for Evaluation Plans

In a recent guest article, Michael D. Smith described the White House’s Social Innovation Fund (SIF), which has developed a new program evaluation guide based on best practices.

Recently, Transparency Talk conducted an online interview with Kelly Fitzsimmons of The Edna McConnell Clark Foundation (EMCF) to learn how the new framework provided by the Social Innovation Fund can be adapted for use in assessing foundation program impact.

1. What do you see as the value of program evaluations in your field?
I think there are some misconceptions about what you “do” with program evaluation. For us at the Edna McConnell Clark Foundation, one of the biggest positives is that evaluation can expand what you know about what “works” as well as about what doesn’t work. It’s our belief that program evaluations are a key driver of innovation.

Whether a study shows positive, mixed, or disappointing results, if carefully designed it almost always unearths information that can be used to innovate and improve how a program is delivered to boost quality and impact.  To get the most out of an evaluation, we believe it is critical during the planning stage for organizations and their evaluators to ask themselves not only what impacts they are looking to test, but also what they’d like to learn about the program’s implementation.  For example, answering questions such as: “How closely is the program run compared to the intended model?” or “To what extent does this or that program component contribute to impact?” can yield important insights into how well a program is implemented across different sites (or cities or regions) or reveal differences in impacts depending on the population served or environmental factors.

2. How does having evaluation plans, like the Social Innovation Fund’s Evaluation Plan Guidance, help nonprofits become more effective?
The Social Innovation Fund’s tool is a useful resource for organizations interested in building their evidence base and planning thoughtfully for evaluation. It offers practical takeaways, from how to structure an evaluation plan to which elements an evaluation should include, and even how to assess the feasibility of undertaking one. A thoughtful evaluation plan can also inform an organization’s larger plans. For example, if an evaluation requires that *X* number of kids participate in order for a program to be assessed, does your organization need to grow or adapt to meet that threshold? If so, how will the organization get there while maintaining program quality?

In essence, a strong, multi-year evaluation plan is much like a strong business plan—it helps you think about the resources you need, identify your interim and ultimate goals, and even decide what to do and how to communicate if your plan goes off-track.

3. How do EMCF and your grantees use the data you’ve collected from evaluations?
We like to approach evidence building from the premise that we’re seeking to understand *how* a program works, not just *if* a program works. From this perspective, whether the findings are positive, mixed or null, evaluating programs over time can yield insights that inform practice, drive innovation and ultimately ensure the best possible outcomes for youth and families.

For example, take Reading Partners, which connects students who are half a grade to 2½ grades behind in reading with trained volunteers who use a specialized curriculum. A recently released MDRC evaluation found that these students made gains in literacy 1.5 to two months greater than their peers’ after an average of 28 hours of Reading Partners’ instruction. During the evaluation, MDRC was able to corroborate that local sites were implementing the program with a high degree of fidelity, including providing appropriate support and training to volunteer tutors. The data collected also indicated the program was effective across different subsets of students: those in 2nd through 5th grades, those at varying baseline reading achievement levels, girls and boys, and even non-native English speakers. This knowledge is now helping Reading Partners think more strategically about how and where it expands to impact more kids.

We worked with Reading Partners as we do with other EMCF grantees, bringing in experts to help them develop high-quality evaluation plans, often connecting them to other experts, and also funding their evaluations. We help them identify key evaluation questions at the outset, work together to monitor progress toward evaluation goals, make revisions to their plans when circumstances change or new information arises, and communicate results when they become available. Evidence building is a continuous, dynamic process that informs how EMCF as well as our grantees set and reach our growth, learning, and impact goals.

We also use quality and impact data to measure and track, quarterly and annually, the performance of each grantee and of our entire portfolio, including whether our investment strategy is having its intended effect of helping our grantees meet the yearly and end-of-investment milestones and evidence-building goals on which we have mutually agreed.

Kelly Fitzsimmons
Kelly Fitzsimmons is Vice President, Chief Program and Strategy Officer of the Edna McConnell Clark Foundation (EMCF). In this capacity, she leads the strategic planning and management of the Foundation’s grantmaking program. She is the keeper and developer of EMCF’s theory of investment, continually refining it and ensuring it informs all Foundation initiatives. She also oversees EMCF’s evaluation activities and its operational learning and knowledge development. You can follow the Edna McConnell Clark Foundation on Twitter @emclarkfdn.