Evaluation and taking lessons forward

At a recent workshop in Seattle, Dean Karlan, founder and director of the non-profit research organization Innovations for Poverty Action (IPA), clarified a common misconception about their work: he discourages looking backward to see whether a donor’s money made a difference. Instead, he recommends building evaluation into the planning process for future projects, which sharpens the overall project and delivers more useful, credible results. “We need to understand whether programs are really effective or not so that we know as donors what to support, and as practitioners what we should be doing. Rigorous evaluations help us do just that. We also must learn why programs work, rather than merely evaluate whether something works. It is the why that helps us know what lessons to take from one country to another, from one year to the next.”[1]

IPA’s research spans a variety of fields, including microfinance, education, health, agriculture, charitable giving, political participation, and social capital. Attendees at the Global Washington workshop heard from Dean Karlan, Jake Appel, Tania Alfonso, Anna York, and Mary Kay Gugerty about methodologies for assessing program impacts.

IPA promotes rigorous experimentation, in the form of randomized controlled trials, to evaluate whether an intervention has an impact and, if so, why it worked. The team presented what makes a good evaluation, including the detailed assessments required and the time involved in collecting quality data. The heart of a randomized trial is creating two statistically equivalent groups, one that receives the intervention and one that does not, in order to answer the question: where would the studied population have been without the intervention? As IPA notes in its website FAQs: “If we want to know how effective a program is, we need to have a comparison group. Without a comparison, we can’t really say anything about what would have happened without the program. And the only way of having a fair comparison group is with random assignment.”[2] Using case studies, attendees shared questions and opinions about the value of randomized controlled trial evaluation, and also learned about its costs and benefits. The workshop gave attendees a great opportunity to ask questions of Dean Karlan and Jake Appel regarding their new book, More Than Good Intentions, in which they argue that accounting for irrational behavior and testing programs through controlled experiments can help fix the problem of failed humanitarian interventions for poverty alleviation.
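The logic of random assignment described above can be sketched in a few lines of code. This is a hypothetical illustration of the general technique, not IPA's actual methodology or software; the participant data and outcome values are invented for the example.

```python
import random
import statistics

def randomly_assign(participants, seed=None):
    """Split participants at random into equally sized treatment and control groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment group, control group)

def estimated_impact(treatment_outcomes, control_outcomes):
    """Difference in mean outcomes between the two groups.

    Because assignment was random, the control group's mean outcome estimates
    what the treatment group's outcome would have been without the program,
    so this difference estimates the program's impact.
    """
    return statistics.mean(treatment_outcomes) - statistics.mean(control_outcomes)

# Hypothetical example: 100 participants, half assigned to the program at random.
participants = list(range(100))
treatment, control = randomly_assign(participants, seed=42)
```

The key point the sketch captures is that the comparison group is produced by randomization itself, rather than chosen after the fact, which is what makes the comparison fair.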

Overall, attendees were able to reflect collaboratively on stronger strategies for evaluating interventions. One participant learned enough to recommend that her organization take a step back and revisit its evaluation plan: rather than conduct a small randomized trial, the organization decided to focus on telling “success stories” in the coming year. To learn more, read IPA’s blog at http://www.poverty-action.org/blog or write to Innovations for Poverty Action (IPA) at contact@poverty-action.org.

[1] Dorie Baker, “Q&A: Seeking ‘real world’ solutions to global poverty,” Yale Office of Communications, March 22, 2011, http://opac.yale.edu/news/article.aspx?id=8357

[2] IPA, “Frequently Asked Questions,” http://www.poverty-action.org/about/faqs