Evaluating our evaluation. I hear you, it sounds pretty meta. In creating the newly launched series of worksheets supporting data-informed nonprofits, that’s exactly what we did. And in evaluating our evaluation process, we found that it was the people and not the data that needed to be addressed further.
In 2013, NTEN partnered with Idealware to create Getting Started With Data-Driven Decision Making: A Workbook. The workbook has since been downloaded and used by diverse organizations in the U.S. and beyond. We frequently use the workbook and the processes it outlines in NTEN Labs, our one-day, hands-on workshops. We even use the workbook internally here at NTEN. Needless to say, we think it is pretty useful. But we knew there was an opportunity to expand: to provide some additional pieces to guide the full process of creating a plan, adopting the plan, and really using it. This summer, we partnered with Idealware again to create additional resources. Our process illustrated some key evaluation components that we often forget but that are critical to our success, including the following big steps.
This project began like many new research or analysis projects: we all felt pretty sure we already knew what was missing and what was needed. Sound familiar? Checking those assumptions at the meeting door and allowing yourself and your team to have an open conversation ensures you aren't rushing toward a solution that doesn't quite fit the problem. For us, this meant separating our own observations and opinions from the larger dataset. As I mentioned above, we've used the existing workbook in various events and workshops. We turned to the event and post-event surveys to look for additional questions or topics that were consistently raised. We also tapped the guest speakers involved in the events to gather their feedback about what more they would have wanted to cover and to understand the questions participants raised.
Reviewing the workbook in practice, much like evaluating your data in context, confirmed that a few of our assumptions and inklings were correct and revealed that we had overlooked a few pieces. Participants in our trainings consistently made it through the full workbook but left with questions that didn't touch on data; their unanswered questions focused on people, adoption, and communication. Just like any technology project, it's the pieces of the plan that aren't technical that often have the biggest influence on our outcomes.
Once we were clear about the topics that needed to be addressed, we wanted to start brainstorming and drafting the processes and workflows that were needed. We couldn't help ourselves (we are human, too!) and did try out a few ideas. We quickly hit a roadblock, though, because the worksheets we were drafting only seemed to work for the hypothetical examples we threw out to fit the molds. This proved another opportunity to check our assumptions and convenient examples at the meeting room door! Before we got any further, we identified some very real examples of evaluation work we are doing, or want to do, at NTEN. Those examples then became the tests against which we measured each worksheet and process.
We also created about twice as many options as we ultimately finalized. Having a longer list of worksheets to consider helped us test each one against the real examples and stop development of the worksheets that didn't serve us as users or that required too much context and facilitation to complete. We wanted these resources to be valuable for people to use in their organizations, not materials that required us to be in the room to explain them.
Lastly, our testing wasn't complete until we went back to some of those participants and speakers from previous trainings and shared the worksheets in development. We asked for general feedback and thoughts as well as pointed questions about relevance, value, and clarity. This was an important step for us because we knew that our own examples were limited; we wanted the worksheets to be flexible enough to work for evaluation efforts in other organizations, and we wanted to ensure we hadn't gotten lost in our own process, creating worksheets with language and workflows that only made sense to us.
Focus on Adoption
All of this testing and assumption-dropping wasn't simply to create a beautiful product; it was to produce a valuable resource. We wanted to expand on the workbook to provide even more materials that people in diverse organizations would actually use. Focusing on adoption became a filter for many decisions. It meant that, when deciding among eight or more potential worksheets, we reviewed what was covered in the workbook already, what was available through other nonprofit or online resources, and even what would be the most straightforward to download and add to an organization's staff meeting agenda. This focus on adoption also meant that, as we tested the worksheets against our internal evaluation scenarios, we brought in staff who weren't already part of the project meetings or worksheet development: staff who would care about the actual use of the worksheets for our own evaluation work but hadn't been through all of the conversations about creating the templates. The questions they raised helped us identify where we weren't clear or were totally off the mark in how we addressed a topic.
This focus on adoption also meant putting the worksheets to work right away. As I type, staff and community members are gathered in Austin, TX, for an NTEN Lab. They are using the workbook as well as trying out some of these new worksheets. We are excited to share them with you today, too. You can download the workbook and new resources using the links below. We look forward to continuing this process and feedback loop, all the way back to understanding needs, so we can test and create adoptable, valuable resources for you. Please let us know what you think!