The blog of the Urban Institute
July 3, 2013

Evaluating place-based programs


Random assignment studies are the gold standard for judging whether an intervention works, but that doesn’t mean they’re always the best tool for the job. When it comes to evaluating place-based programs—those that aim for comprehensive community-wide changes—random assignment is typically the wrong approach to take.

In random assignment studies, people are randomly divided into two groups. One group receives the intervention being tested—whether it’s a new drug or a job training program—and the other group does not. Then researchers study how the two groups’ outcomes differ. Whether in clinical studies or social science research, an experiment that randomly assigns treatment is the preferred approach because, given a few assumptions, it is unbiased: it gets the answer right on average. Without random assignment, giving the drug to the healthiest patients or the job training to those with the lowest current earnings can bias us toward finding large effects where, in truth, the effects are small or nonexistent. But the assumptions are not always justified.
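The selection problem described above can be made concrete with a small simulation. This is an illustrative sketch, not data from any actual program: all numbers (a $1,000 true training effect, earnings around $30,000 with transitory noise) are hypothetical. It shows how enrolling the lowest earners and comparing their earnings before and after the program inflates the estimated effect through regression to the mean, while random assignment recovers the true effect on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 1_000  # hypothetical true earnings gain from job training

ability = rng.normal(30_000, 5_000, n)      # persistent earning ability
year0 = ability + rng.normal(0, 5_000, n)   # observed pre-program earnings
noise1 = rng.normal(0, 5_000, n)            # next year's transitory shock

# Non-random assignment: train the lowest 20% of earners, then judge the
# program by their before/after change. Their year-0 earnings were partly
# bad luck, so earnings rebound even without training (mean reversion).
low = year0 < np.quantile(year0, 0.2)
year1_low = ability[low] + noise1[low] + true_effect
naive_estimate = (year1_low - year0[low]).mean()  # far above the true effect

# Random assignment: treated and control groups differ only by the program,
# so the difference in year-1 means is an unbiased estimate.
treat = rng.random(n) < 0.5
year1 = ability + noise1 + true_effect * treat
rct_estimate = year1[treat].mean() - year1[~treat].mean()

print(f"true effect:        {true_effect}")
print(f"naive before/after: {naive_estimate:.0f}")
print(f"randomized:         {rct_estimate:.0f}")
```

With these assumed parameters the naive before/after comparison overstates the effect several times over, while the randomized contrast lands close to $1,000.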

Place-based programs, such as Promise Neighborhoods from the Department of Education and Choice Neighborhoods from the Department of Housing and Urban Development, aim to produce change by affecting the whole community, not just the individuals touched by a funded program. Part of that approach is to saturate the target, providing services to a large portion of the population so that even those not directly affected probably know someone who is, and social networks transmit the effect across the whole community. These spillovers mean that the statistical framework underlying random assignment does not apply.
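To see why spillovers break the usual framework, consider a toy simulation—again with purely hypothetical numbers (a direct effect of 1.0, a spillover effect of 0.8 transmitted through social contacts). When untreated people also benefit from knowing treated people, the treated-versus-control contrast captures only the direct effect and misses the gain from saturating the whole community.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
direct, spill = 1.0, 0.8  # assumed direct and spillover effects

# Randomly assign half the community to treatment.
treat = rng.random(n) < 0.5

# Each person has 10 random social contacts; the spillover each person
# receives scales with the treated share among their contacts.
contacts = rng.integers(0, n, size=(n, 10))
share_treated = treat[contacts].mean(axis=1)

noise = rng.normal(0, 1, n)
y = direct * treat + spill * share_treated + noise

# Treated and control both have ~50% treated contacts, so the spillover
# cancels out of the contrast and only the direct effect is estimated.
rct_contrast = y[treat].mean() - y[~treat].mean()

# True effect of treating everyone versus no one includes the spillover.
full_saturation = direct + spill

print(f"treated-vs-control contrast:   {rct_contrast:.2f}")
print(f"effect of saturating everyone: {full_saturation:.2f}")
```

Under these assumptions the individual-level experiment reports roughly 1.0 even though the community-wide effect of full saturation is 1.8—the spillover is invisible to the design.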

We could randomly assign communities to get a program or not, but place-based programs are not a simple prescription formulated the same way everywhere. These programs are grown organically in the communities where they are implemented and draw different interventions from a broad menu of services. Each intervention is tailored to conditions on the ground. They are also continually improved using data in an ongoing development effort, as described by Sue Popkin. Treatments may also adapt to individual circumstances, with constant feedback from outcome data. Another common element is a form of case management where services are coordinated across domains, so that individuals do not fall through the cracks.

How should we evaluate place-based programs?

Spillover effects on people not receiving services, plus continual improvement of services and place-specific designs, make a simple random assignment design the wrong choice. But there are methods that can credibly evaluate place-based interventions. The crucial part is defining exactly what intervention is being examined, and then using data from other communities to estimate the counterfactual outcome: What would have happened without that intervention?
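One standard way to build that counterfactual from other communities is a difference-in-differences comparison. The sketch below uses invented graduation rates for a treated community and a set of comparison communities; the key (and debatable) assumption is that, absent the program, the treated community would have followed the same trend as the comparisons.

```python
# Hypothetical graduation rates (percent), before and after the program year.
treated_pre, treated_post = 62.0, 71.0
comparison_pre, comparison_post = 60.0, 64.0  # average of similar communities

# Counterfactual: apply the comparison communities' trend (+4 points)
# to the treated community's starting point.
counterfactual_post = treated_pre + (comparison_post - comparison_pre)

# The estimated program effect is the gap between what happened and
# what the trend says would have happened.
estimated_effect = treated_post - counterfactual_post

print(f"counterfactual outcome: {counterfactual_post:.1f}")
print(f"estimated effect:       {estimated_effect:.1f} points")
```

Here the treated community gained 9 points but the comparison trend accounts for 4 of them, leaving an estimated effect of 5 points—valid only if the parallel-trends assumption holds and the comparison communities did not mount interventions of their own, which is exactly the difficulty raised below.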

It is hard to define what treatments might happen in the absence of an intervention. A neighborhood that does not receive federal dollars to implement a specific place-based program can choose to enact its own intervention. Is the right alternative no intervention at all, or whatever intervention grows in the absence of the specific treatment? There are no sugar pills given out in social experiments to prevent individuals or communities from designing their own treatment regimen. The absence of a placebo is even trickier without random assignment, but that just means we need to collect very good data on what is being done everywhere we look.


As an organization, the Urban Institute does not take positions on issues. Experts are independent and empowered to share their evidence-based views and recommendations shaped by research.