February 4, 2013


You wish. (via flickr)


Gathering and organizing data is one thing; using data effectively is another. Thanks to new technology, as Bill Gates writes in his 2013 annual letter, people can gather and organize data with more speed and accuracy than ever before. That's great news, particularly for the social sector around the world, which badly needs more and better feedback loops to improve its performance. But gathering and organizing data is just one half of the loop. The other half, actually using data to inform program design and decision-making, can be a generational challenge to infuse into the culture of an organization.

Part of the challenge is internal, and part is external. On the internal side, BRAC established its Research and Evaluation Division (RED) in 1975 to gather and organize data for decision-making. Through RED, BRAC is often its own fiercest critic. Almost 40 years later and over 100,000 employees strong, BRAC itself is still a work in progress as we continually adapt and learn from our mistakes. (See, for instance, "Scaling up without losing your edge.")

In October of last year, former BRAC senior manager for education Farzana Kashfi gave a TEDx talk at Columbia University in which she offered a snapshot of the state of internal data-driven decision-making inside BRAC, in the context of its work with adolescent girls. BRAC's adolescent girls programs are scaling up in seven countries, most recently Sierra Leone, with about 275,000 participants worldwide as of June 2012. My colleague Scott MacMillan recently blogged about the results, which have impressed even previous skeptics of the "girl effect." The fact that we can convincingly demonstrate the success (or failure) of such interventions is largely because our East Africa research team happens to be good at gathering and organizing data.

Externally, funders in the social sector have just as much work to do on data-driven decision-making, if not more. As Edward Carr puts it,

Like most everyone else in the field, I agree with the premise that better measurement (thought very broadly, to include methods and data across the quantitative to qualitative spectrum) can create a learning environment from which we might make better decisions about aid and development. But none of this matters if all of the institutional pressures run against hearing bad news.

So I applaud Bill Gates’ call for more and better measurement and the great progress that’s occurred on most Millennium Development Goals. His letter makes the important point that we need to have better metrics and tools for measuring performance. But development organizations need to remain vigilantly open to bad news. BRAC has spent decades building what Gates calls in his letter “an environment where problems can be discussed openly so you can effectively evaluate what’s working and what’s not,” and we’re still working on it. Learning from failure is in vogue among development practitioners as well as business strategists, with Harvard Business Review devoting an entire issue to its virtues in April 2011. Investing in better measurement is a huge first step in the right direction. It’s that second step of integrating failure into the work itself that looms just beyond.
