September 30, 2013

What is “Good Enough” for Data?

We’re not always clear about what we mean when we use the word “data.” Often, we use it in a strictly technical sense, to refer to the discrete bits of information that populate a database. And that is entirely appropriate for purposes of discussing the proper collection and organization of information for specific types of analyses.

But there’s a larger and more fundamental sense in which the term “data” can be used that places “technical” data in its proper context. In this larger sense of the term, we all use data all the time. Because, fundamentally, data is all relevant information that’s used in the service of making decisions, large or small. Unless you make decisions through coin tosses, every decision made is (to some degree) informed by the data you have at your disposal.

In itself, this is not a particularly revelatory insight. But it changes, in a very important way, how we think about data in the more technical sense. If we’re all using data all the time, then the relevant question to ask when doing philanthropic work is not “do we need data (of the type that resides in databases)?” but rather “are the data we currently have good enough for the decisions we need to make?”

Thinking of data in this broader sense allows us to conceive of it as existing on a continuum, from raw, immediate sensory input on one end, to highly processed and organized databases on the other. How much data you need, and what kind, depend upon the situation.

Not long ago, I took a hike in a wooded area of northern New Jersey. The day was warm, the air fresh, and the scenery quite pleasant. Suddenly, upon rounding a bend, I was startled to see a young black bear about 100 feet ahead.

At that moment, I needed to make a decision. I drew primarily upon two sources of relevant information in determining what to do. The first was instinct, which told me to freeze (raw, sensory data). The second was everything I could possibly recall having learned about bears at any point in my life (processed, organized data). Unfortunately, at that moment, I could recall next to nothing.

So, in that situation, I relied primarily upon my eyes and my gut in deciding what to do. I slowly backed away in the direction I had come, watching closely for signs that the bear might be taking an interest in me. Then I turned and walked at a deliberate pace back towards the beginning of the trail, checking over my shoulder every few steps. The bear continued to mind his own business, and I made it back without incident.

In that instance, my natural, instinctive fear of large beasts was all the information I needed to make a decision. I didn’t need a database to help me figure out what to do. Yes, it might have helped if I had also known something about the psychology of bears, but that actually bears out a further point I want to make about the data continuum (sorry – no pun intended). Simply put, the more complicated the problem, the more likely it is that the knowledge most immediately available to you will be insufficient in helping make an effective decision.

Now, let’s talk about philanthropy. Philanthropy works on a wide range of difficult problems. Some of these problems may be relatively simple compared to others, but generally, any work that involves interventions in human affairs is, by definition, complicated.

Over the past decade, philanthropy has come a long way in recognizing the complexity of the work it’s trying to do. It is now de rigueur to have a “theory of change” in place before developing and implementing an intervention strategy. This is an acknowledgement of the fact that our default mental models of how things work are insufficient to the task of designing effective interventions. Our inherent understandings of the types of situations we are working in are partial and, in some cases, just wrong.

For solving complicated problems, we can’t rely solely on guesswork if we want to achieve our desired outcomes. We need data that is commensurate with our information needs.

At the Foundation Center, we’ve collected, cleaned, organized, and analyzed information about foundation grantmaking for more than 50 years. We see “data” everywhere we look – grants, knowledge products, news stories, Twitter feeds, demographics, health and social well-being indicators, international aid flows, etc. – and this type of information will continue to be critically important both to grantseekers and to foundations seeking to optimize their effectiveness. One of our tools, for example, weaves together no fewer than 17 separate streams of data along with state-of-the-art visualization tools to give funders working on issues of water access, sanitation, and hygiene a dynamic, interactive platform for finding and sharing knowledge.

But in the larger sense of data as “all relevant information that we use in the service of making decisions,” we also need to “go beyond the grant” and collect, clean, organize, and analyze information about philanthropic outcomes and the knowledge that foundations and nonprofits have gained through doing their work. We have to do this because we need data that is commensurate with the complexity of the problems philanthropy is working on. Anything less is just not good enough.

Larry McGill
Larry McGill is the Vice President of Research at the Foundation Center.