Evaluating complex community initiatives
Susan J. Popkin

Earlier this month, two colleagues and I participated in a panel on Data, Measurement, and Evaluation at the 2012 UNCA Neighborhood Revitalization Conference, a meeting of practitioners working on community-based initiatives like Promise and Choice Neighborhoods. Our session was in the late afternoon, competing with sessions featuring speakers from two noteworthy program sites (the DC Promise Neighborhood and Cincinnati STRIVE). I fully expected our room to be empty, or to be attended by only a few hardy folks responsible for internal evaluation.

Instead, our room was packed with representatives from community organizations eager to learn how to show that their good work really matters, reflecting the fact that federal and philanthropic funders increasingly require the programs they fund to demonstrate “impact” on key outcomes. The world of indicator measurement and evaluation is new territory for many of these organizations, and they face real challenges in meeting their funders’ requirements. The groups implementing comprehensive community initiatives are typically social service agencies, neighborhood organizations, and schools. While some are data savvy, many are new to the idea of tracking complex indicators over time rather than simply reporting the number of clients served or basic outcomes (e.g., job placements). They now have to ensure that their staff are comfortable defining measures and using state-of-the-art database tracking systems like Salesforce, ETO, and In Focus.
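
To make that shift concrete, here is a minimal sketch in Python of the difference between reporting a service count and tracking an indicator over time. The field and indicator names are hypothetical illustrations, not the actual data model of Salesforce, ETO, In Focus, or any particular program.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class IndicatorObservation:
    client_id: str     # hypothetical identifier assigned by the lead agency
    indicator: str     # e.g., "employed" or "reads_to_child_weekly"
    value: float       # the observed value at this point in time
    observed_on: date  # when the observation was recorded


def clients_served(observations: list[IndicatorObservation]) -> int:
    """The traditional report: how many distinct clients were touched."""
    return len({obs.client_id for obs in observations})


def change_over_time(observations: list[IndicatorObservation],
                     indicator: str) -> dict[str, float]:
    """The question funders now ask: for each client, how much did this
    indicator move between the first and most recent observation?"""
    by_client: dict[str, list[IndicatorObservation]] = {}
    for obs in observations:
        if obs.indicator == indicator:
            by_client.setdefault(obs.client_id, []).append(obs)
    changes = {}
    for client, series in by_client.items():
        series.sort(key=lambda o: o.observed_on)
        changes[client] = series[-1].value - series[0].value
    return changes
```

The second function is the kind of question funders now press on: not how many people a program touched, but whether a measured indicator actually moved for them over time.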

The challenge is even greater because these comprehensive community initiatives involve coordinating multiple service providers. Partner agencies providing key services (e.g., early childhood programming) also have to be willing and able to enter their data into the shared system regularly. And the lead agency has to negotiate data agreements with all of its partners, as well as with the government entities that can provide key administrative records. Beyond tracking service indicators, these agencies are challenged to gather very specific information: for example, Promise and Choice Neighborhoods require implementation sites to monitor indicators such as obesity, the number of children who get five servings of fruits and vegetables a day, how often parents read to their children, and perceptions of school and neighborhood safety.
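
As a rough illustration only, a shared tracking system might enforce a common record format that every partner agency submits against. The indicator names below echo the Promise and Choice examples above, but the schema itself is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shared vocabulary: every partner reports against the same
# indicator names so the lead agency can aggregate across providers.
SHARED_INDICATORS = {
    "fruit_veg_servings_per_day",          # target: five servings a day
    "parent_reads_to_child_days_per_week",
    "perceived_school_safety",             # e.g., on a 1-5 scale
    "perceived_neighborhood_safety",
}


@dataclass
class PartnerSubmission:
    partner_agency: str  # e.g., the early childhood provider
    child_id: str        # identifier covered by the data-sharing agreement
    indicator: str
    value: float
    reported_on: date


def validate(submission: PartnerSubmission) -> None:
    """Reject records that don't use the shared vocabulary, so data
    combined across partners stay comparable."""
    if submission.indicator not in SHARED_INDICATORS:
        raise ValueError(f"Unknown indicator: {submission.indicator}")
```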

Beyond tracking indicators, funders are asking sites to rigorously evaluate their performance and outcomes. Evaluating complex, comprehensive community initiatives and demonstrating that program activities actually create significant change is very challenging. These costly, constantly evolving projects do not lend themselves to random assignment designs, so a useful and rigorous evaluation requires multiple methods and strategies. At the Urban Institute, we are using this approach in our evaluation of the HOST initiative, our work with local Promise sites, and our baseline evaluation of Choice. Over the coming year, we hope to develop several applied evaluation strategies that we can share with the many local agencies seeking to document and evaluate the impact of their hard work.

