Programs that help families facing food insecurity can benefit from evaluations to understand how they are helping their communities and how they can improve.
However, evaluations can be challenging, and the gold standard for program evaluation—the randomized control trial (RCT)—can be a high bar for practitioners to meet.
What is a randomized control trial?
RCTs randomly assign similar people to either the treatment group, which receives services, or the control group, which does not, but whose outcomes are still measured. Well-designed RCTs can successfully tease out the causal effects of interventions, or what changes can be attributed to programs.
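The mechanics of random assignment can be sketched in a few lines. This is a minimal illustration, not part of any study described here; the function name and household labels are hypothetical, and real trials add stratification, consent tracking, and compliance monitoring.

```python
import random

def randomly_assign(participants, seed=0):
    """Hypothetical helper: split a participant list into treatment
    and control groups at random (illustration only)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = participants[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    treatment = shuffled[:midpoint]  # receives services
    control = shuffled[midpoint:]    # no services, but outcomes are still measured
    return treatment, control

households = [f"household_{i}" for i in range(100)]
treatment, control = randomly_assign(households)
```

Because assignment is random, the two groups are similar on average before the program starts, so later differences in outcomes can be attributed to the program itself.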
Some food banks have been able to implement RCTs to evaluate the effectiveness of their programs.
Foodshare implemented an RCT to evaluate its Freshplace program, which bundles a client-choice fresh-food pantry, case management services, and referrals to community services to help clients set and achieve goals. Subsequent programs and services are then tailored to each participant's needs.
The randomized evaluation found that households participating in Freshplace had significant improvements in food security, self-sufficiency, diet quality, and self-efficacy. Freshplace households showed a four-point increase in self-sufficiency scores (which measure levels of education, employment, income, health, housing, child care, and stress, with a total of 100 points) compared with similar households who didn’t participate in the program.
These early results suggest case management services paired with charitable food programs may positively impact families’ independence and food security status.
Katie Martin, executive director of Foodshare's Institute for Hunger Research & Solutions, was the primary researcher for the RCT. She notes that the program has evolved over time to include standardized trainings, case files, and program materials and is adapted to local community needs.
Based on the findings at Freshplace, Foodshare and Urban Alliance, another nonprofit organization, created the More Than Food framework to train other food pantries on this holistic approach. As decades of charitable food provision have shown, it takes more than food to address food insecurity.
The More Than Food framework pairs clients with coaches to work on health, food security, and stability goals. The program draws on research from motivational interviewing, behavioral economics, and social cognitive theory. Foodshare is scaling the program in additional food pantries and continues to evaluate it using the same outcome measures as the original Freshplace study, but without the rigorous control group.
Evidence from a food pantry in El Paso, Texas, using the More Than Food framework found the same significant improvements in food security, self-sufficiency, and diet quality as the Freshplace study.
Results indicate the program is effective and provides support for scaling the program in additional food pantries. Currently, several other food pantries are using the More Than Food framework and are part of a continuous evaluation with Foodshare.
Randomizing may not always be the right choice for evaluation
Despite the strength of the evidence they provide, RCTs may not always be feasible for field implementation because of budget and program constraints.
RCTs require the following:
- intensive resources, of both time and money
- fidelity to a program model with limited flexibility in implementation
- large samples of participants and nonparticipants
- monitoring of compliance and randomization
These and other challenges can make this evaluation design impractical for many human services programs. But there is a range of other approaches for estimating program impacts.
These include:

- quasi-experimental designs, which lack intentional random assignment to treatment and control groups but use statistical techniques to construct a comparison group
- pre-post analyses of program data, which measure changes within a group that receives an intervention, without a control group
- literature reviews, which systematically summarize existing evidence in a field and can show the effects of similar programs without undertaking a direct evaluation
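The difference between a pre-post analysis and a quasi-experimental comparison can be made concrete with a short sketch. The function names and scores below are hypothetical, invented for illustration; real quasi-experimental work uses techniques like matching or regression to construct the comparison group.

```python
from statistics import mean

def pre_post_change(pre_scores, post_scores):
    """Average within-group change: a pre-post analysis with no control group."""
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

def difference_in_differences(treat_pre, treat_post, comp_pre, comp_post):
    """Quasi-experimental estimate: the treated group's change minus the
    comparison group's change over the same period."""
    return pre_post_change(treat_pre, treat_post) - pre_post_change(comp_pre, comp_post)

# Hypothetical food-security scores (higher = more food secure)
treat_pre, treat_post = [42, 55, 38, 60, 47], [50, 58, 45, 66, 52]
comp_pre, comp_post = [44, 53, 40, 58, 49], [46, 54, 42, 59, 51]

print(pre_post_change(treat_pre, treat_post))
print(difference_in_differences(treat_pre, treat_post, comp_pre, comp_post))
```

The pre-post figure alone cannot distinguish program effects from broader trends; subtracting the comparison group's change, as in the second function, is one way quasi-experimental designs try to isolate the program's contribution.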
Other food banks we spoke with hadn’t evaluated their programs using RCTs, but they expressed the importance of evaluating programs to understand their effects and to communicate these to potential participants, community partners, and donors.
How other food banks have collected evidence
Cincinnati COOKS!, a culinary workforce development program, provides a 10-week sector-focused program in basic culinary skills paired with job search assistance, plus an additional 8-week fine dining and line cooking course that acts as a stackable credential. An analysis of program data found that more than 80 percent of participants graduated from the 10-week program and were placed into jobs.
Research on sector-focused programs like Cincinnati COOKS! shows gains in earnings, job quality, and the likelihood of securing employment. Job search assistance can increase employment and earnings, and career pathways programs and stackable credentials may increase the credentials participants attain.
This body of evidence, paired with an analysis of program data, suggests the Cincinnati COOKS! model may result in positive outcomes for program participants, although more rigorous evaluations using comparison groups could confirm this.
This approach and other nonrandomized evaluation strategies can help practitioners who are building, refining, and iterating on programs gain insight into effectiveness, adjust program models to better meet the needs of the families they serve, and look beyond their own program data.
Evaluating social service programs is a complex process that balances the needs of pure evaluation with the needs of communities and the capacity of programs and agencies. Food banks across the nation are navigating this gray area with varied approaches, from implementing RCT evaluations to collecting and analyzing program data in context with broader research literature.
Practitioners in these spaces can adopt a wide range of strategies, and if the gold standard of randomized evaluation does not meet their needs, they can build evidence on their program’s effectiveness and improve it in other ways.