A recent three-part series in the Washington Post on the “I Have A Dream” program raises the question: What does it mean for a program to “work”? This question is critical in the current age of budgetary pressures to fund only effective and evidence-based programs. Yet, it seems to me that we often use two very different standards to discuss whether something works.
In 1988, two philanthropists promised to pay for the college educations of 59 fifth-graders at Seat Pleasant Elementary School in Prince George’s County if the students graduated from high school. The hope was that this promise of college support, together with other assistance the program provided along the way, would motivate all 59 students to graduate from high school and go on to college. In the Washington Post series, Paul Schwartzman tracked what happened to each student. Some of the “Dreamers,” as they were called, were very successful—alumni included a lawyer, a doctor, a track star, and a cellist. Others were somewhat successful, attending some college and obtaining middle-class jobs. Some had more tragic outcomes—one was killed and one was incarcerated.
This focus on individual students’ lives leads readers to judge the program’s success by whether it worked for each participant. By this standard, the program did not work for the students who failed to graduate from high school or for those with tragic life outcomes. The same is true of medicine: even effective medications do not work for every patient. For the patient who dies of heart failure after taking Lipitor, the drug did not “work.” This is one common meaning of whether a program works—for a particular individual.
So what should we conclude about the “I Have A Dream” program? The highest hopes—college attendance and graduation for all participants—were not realized. Does that mean the program was a failure? No. At the program level, rather than the individual level, whether a program “works” is a different question: Did the program improve outcomes, on average, beyond what would have happened in its absence? No program is a silver bullet, and none can guarantee a good outcome. Rather, much as with an effective cholesterol-lowering medication, an effective program is expected to be a helpful ingredient in the lives of participants.
How about the Dreamers? The Washington Post series finds that “at least 49 of the 59 Dreamers—83 percent—graduated from high school or got their GEDs…far surpassing Prince George’s overall rate in 1995. Almost half the students enrolled in college.” This suggests that, at the program level, the program was likely effective. Of course, knowing only what happened to participants does not tell us whether the program improved their outcomes. We also need some basis for judging what would have happened in the program’s absence—what evaluation researchers call “the counterfactual”—but that’s a topic for another day.
My point here is that focusing on individuals can obscure a program’s effectiveness by setting the bar far too high. Effective programs are those that produce incremental improvement over the alternatives.
If we are to make evidence-based policy decisions about which programs to fund, we need to experiment with programs and study their outcomes. We need to understand how and when a program works. Tracking participants’ outcomes, as the Post did, provides critical information—but we must not take individual successes and failures as a measure of whether a program “works” in general. Doing so may lead us to miss the value of effective programs—or worse, to judge them as failures.