Nonprofits are increasingly expected to use data and evidence to demonstrate that they are producing positive results for the clients they serve. A growing number of national foundations are assessing the performance of their grantmaking using dashboards, scorecards, results frameworks, and more formal evaluations. But how can nonprofits—and their funders—navigate the maze of resources and recommendations about which data and research tools to use to drive results? Are all of these methods appropriate or valuable for every nonprofit?
In a new brief, I discuss two frameworks for using data and evidence: performance measurement and evaluation. Although the terms are sometimes used interchangeably, the two approaches have distinct goals.
- Performance measurement tells what a program did and how well it did it.
- Evaluation tells what effect the program had on the people, families, or communities it serves—that is, whether the program is producing results or having an impact.
Both methods have benefits, but they help organizations improve their programs in different ways and should be applied appropriately. Performance measurement involves collecting real-time data that a nonprofit can use to detect and respond quickly to challenges that emerge in program implementation and service delivery—referred to as “continuous improvement.”
Evaluation, on the other hand, provides evidence about how well a program achieved its aims after some time has passed. Evaluation results can be used to help decide whether a program might be successfully scaled up to serve larger numbers of people or expanded to other locations or different populations. Evaluation results might also tell nonprofits and funders whether a program should be substantially changed or phased out.
Many people, when they hear the word “evaluation,” think of randomized controlled trials (RCTs), but RCTs are just one type. Other approaches include comparison studies, planning studies, and implementation studies. All of these can yield valuable information, but each answers different questions.
To show how these different approaches can work together, the brief introduces the “performance measurement-evaluation continuum.” Even before a program is launched, nonprofits can use planning studies (a type of formative evaluation) to answer questions like, “Is this program appropriate for the identified goals and population?” Once the program begins, staff can start collecting performance data to track actual implementation and service utilization. As the program continues, periodic formative evaluations (such as implementation studies) can assess, “Did services get delivered as intended?” Once the program has become well established, summative evaluations like RCTs may be used to answer the question, “What would have happened to our clients had our program not been in place?”
While the continuum provides a useful framework, nonprofits need help and resources to adopt evidence-based approaches and become results driven. Recognizing this, the World Bank Group and Urban Institute started Measure4Change in 2014. Measure4Change works with nonprofits in the Washington region to help them build their capacity to measure program effectiveness. This year, Measure4Change began working with its second cohort of nonprofits, providing them with one-on-one technical assistance and grant support. It is our hope that these and other efforts will expand the use of performance measurement and evaluation methods to improve results for nonprofits and the communities they serve.