Listening to NPR’s Morning Edition on Tuesday, my attention was captured by two stories about how researchers can best answer questions about what works, whether the goal is curing a terrifying disease or achieving a public policy objective.
Both stories explained the value of randomized controlled trials, in which people are randomly divided into two groups: one receives a “treatment” (a drug, a therapy, or some form of public service or benefit), while the other does not. This approach is extremely powerful because it compares outcomes for the treatment group with outcomes for comparable people who weren’t treated. In fact, randomized controlled trials are often called the gold standard for evidence about what works.
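The logic of random assignment can be sketched in a few lines of Python. This is a toy illustration, not a real study: the population, the outcome scores, and the assumed 5-point treatment effect are all made up. The point is that because assignment is random, the two groups are comparable, so a simple difference in average outcomes estimates the treatment's effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: 1,000 people with a baseline outcome score.
population = [random.gauss(50, 10) for _ in range(1000)]

# Randomly assign each person to the treatment group (True) or control (False).
assignments = [random.random() < 0.5 for _ in population]

# Assume, purely for this sketch, that the treatment adds about 5 points.
outcomes = [score + 5 if treated else score
            for score, treated in zip(population, assignments)]

treated = [o for o, t in zip(outcomes, assignments) if t]
control = [o for o, t in zip(outcomes, assignments) if not t]

# Random assignment makes the groups comparable, so the difference
# in group means is an unbiased estimate of the treatment effect.
effect = statistics.mean(treated) - statistics.mean(control)
print(round(effect, 1))  # close to the assumed true effect of 5
```

The estimate isn't exactly 5 because of sampling noise, which is why real trials also report uncertainty (confidence intervals or p-values) alongside the estimated effect.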
But sometimes it’s unethical or impractical to apply this method in its purest form. And both Morning Edition stories featured creative researchers exploring effective alternatives.
In West Africa, many consider it unethical to deny experimental drugs to members of a control group. Nancy Kass at Johns Hopkins University argues that researchers have to think outside the box, possibly administering different doses of a drug to the treatment and control groups, or giving the drug to everybody at a particular hospital and comparing their outcomes with those of people like them in other hospitals where the drug isn’t available.
In the realm of public policy, Anna Aizer at Brown University came up with a clever alternative to a randomized controlled trial to find out whether welfare improves outcomes for poor children. During the Depression, states cut back drastically on programs that gave cash benefits to mothers. Some mothers received benefits while others didn’t, but the families were otherwise quite similar. Aizer tracked outcomes for the children from both groups and found that those whose mothers received cash benefits lived a year longer on average, probably because they stayed in school longer and earned more.
At the Urban Institute, we believe in the power of good research to strengthen public policy, help find solutions, and elevate the debate about what works. Randomized controlled trials are a very important tool in our tool kit.
But some programs can’t be effectively evaluated this way. In particular, complex place-based interventions (like the new Promise Neighborhoods Initiative) aren’t good candidates for randomized controlled trials. So researchers have to come up with creative alternatives like Aizer’s welfare study, or apply other tools that produce the reliable evidence policymakers need to answer questions about what works. For example:
- microsimulation models (like the Urban-Brookings tax policy model) can forecast outcomes under a wide range of “what if” scenarios;
- administrative data from public agencies can be systematically linked and analyzed to answer questions about program design and implementation; and
- sometimes, fully diagnosing a complex problem, designing an innovative solution, or understanding exactly how a program should be implemented requires more nuanced, qualitative information gathered through in-person observation, in-depth interviews, or focus groups.
Instead of relying on a single tool, policymakers and practitioners should draw from a portfolio of tools to effectively advance evidence-based policy. Using the wrong tool may produce misleading information or fail to answer the questions that really matter. Applying the right tool to the policy question at hand can inform public debate, help decisionmakers allocate scarce resources more effectively, and improve outcomes for people and communities.
Photo: An Ebola worker dons protective gear in Freetown, Sierra Leone. (AP Photo/Michael Duff)