January 25, 2013

Reading Time: 2 minutes

It’s the second time around for The Global Journal‘s ranking of the Top 100 NGOs worldwide. Needless to say, this has generated equal parts pride and enthusiasm in the BRAC family of organizations. We are gratified by the recognition, and at the same time know we’re part of a larger family of civil society groups looking for solutions to the world’s most pressing problems.

But not everyone is happy with the rankings.

As aid blogger Dave Algoso notes in his criticism of last year’s edition, most of the organizations included are quite good. So it’s great for BRAC to be recognized as a member of a thriving social sector, but the problem Dave and many others have is that no one outside The Global Journal‘s editorial staff really knows how the rankings are determined.

The editors cite three broad areas on which they judged NGOs: impact, innovation, and sustainability. They write (emphasis added) that “despite our best efforts to ensure the ranking is based on concrete information fed through a rigorous, objective process, there is no science in the measuring.”

The Global Journal isn’t the only organization that ranks NGOs. For instance, last year GuideStar’s Philanthropedia used a slightly more transparent process (surveying 72 industry experts, and publishing their identities and a description of what they were asked to do) to rank international microfinance nonprofits.

One of the many challenges in ranking NGOs is that even NGOs themselves have lengthy, esoteric debates about how to measure performance for their own purposes. BRAC’s Research and Evaluation Division has been at it since 1975 and is still figuring out how best to do it. Esther Duflo, Abhijit Banerjee, Dean Karlan and other “Randomistas” are at the forefront of an entire sub-sector of the NGO industry devoted to measuring progress using randomized controlled trials. But almost all of the studies produced by BRAC or the Randomistas are essentially project-based rather than measurements of an NGO’s overall performance.

A conversation over how to measure NGO performance and how to rank NGOs is worth having, even if there is ultimately no clear answer. NGOs have never had metrics like share prices or stock market capitalization as a concrete (though still flawed) measure of their value as discrete organizations. Conversations over how to measure the value of an NGO to society may indeed help drive efficiency and boost impact for the sector. It’s particularly useful when such conversations go beyond the dreaded overhead/administrative costs question.

To that end, rankings like The Global Journal‘s Top 100 NGOs, flawed as they may be, can help engage new voices in those conversations — forcing us insiders to put more thought into how we communicate what we do and why our organizations are worth something to the world.
