Evidence Navigator Newsletter

  Issue #1 - December 2021

    Child welfare agencies provide vital services to children and families in complex circumstances, and these services must be guided by the best available evidence. Research plays a key role in pinpointing the most effective child welfare practices and ensuring the best possible outcomes for children and families. Launched in 2016, CWEST is committed to rigorously evaluating interventions that support the child welfare population.

    In our first newsletter to the field, we are delighted to share highlights of our collaborative work.

    In this newsletter

    • Evaluating HUD’s Family Unification Program
    • What child welfare staff say about participating in program evaluations
    • Equipping the field: CWEST’s Child Welfare Evidence-Building Academy
    • Why the RCT design produces the strongest evidence

    Evaluating How Housing Helps Families in the Child Welfare System

    The Family Unification Program (FUP) provides permanent Housing Choice Vouchers (HCV) to families served by child welfare agencies. Created by the US Department of Housing and Urban Development (HUD) in 1990 and operating in 44 states, the District of Columbia, and Puerto Rico, FUP’s goals are to prevent children’s placement in foster care and increase family reunification. The program also provides vouchers to young people who are transitioning out of foster care and facing the risk of homelessness. CWEST’s evaluation of FUP, led by Devlin Hanson, focuses on FUP for families.

    CWEST’s evaluation will inform the child welfare field by providing evidence on whether FUP achieves its goals for families. It will also explore ways for HUD to improve housing and quality of life for these families. Our findings will help child welfare agencies make their programs work better for FUP participants.

    We want to understand how having a voucher affects a family’s involvement with child welfare agencies. Some studies have shown that the vouchers reduce removals and increase family reunification, but the evidence is limited. Our project asks the following questions:

    • Does having an FUP voucher reduce the likelihood that a child will be removed from their home and placed in foster care?
    • Does having an FUP voucher increase the likelihood that a child in foster care will be reunified with their family? Does FUP help children reunify with their families faster?
    • Does having an FUP voucher reduce the number of new child maltreatment reports?

    Seven public housing authorities and their six public child welfare agency partners agreed to participate in the evaluation. These agencies received FUP voucher awards from HUD in November 2018 or April 2020.  Research activities will continue over the next few years in the following six locations:

    • Bucks County, Pennsylvania
    • Chicago, Illinois
    • Orange County, California
    • Phoenix, Arizona
    • Santa Clara County, California
    • Seattle/King County, Washington

    [Map of CWEST evaluation sites]

    The evaluation includes two studies that examine the program’s impact and its implementation.

    The impact study uses a randomized controlled trial (RCT) to understand FUP’s effects on family reunification and the prevention of removal for families referred to the program. More families are eligible for FUP than there are vouchers available. The RCT distributes the vouchers so that each eligible family has the same chance of getting one, while also creating the comparison groups needed to assess the program’s true impact. Eligible families are assigned to either the treatment group or the control group. The treatment group is referred to the FUP program, while the control group receives non-FUP services provided by the agencies. CWEST will compare the child welfare outcomes of these two groups using child welfare administrative and program data.
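    For readers curious what this kind of lottery-style random assignment looks like in practice, here is a minimal illustrative sketch in Python. It is not CWEST’s actual randomization tool; the function name, family IDs, and voucher count are hypothetical.

        # Illustrative sketch only; not the actual CWEST randomization tool.
        # A lottery-style assignment: every eligible family has the same
        # chance of being referred to FUP (treatment) or receiving the
        # agencies' usual non-FUP services (control).
        import random

        def assign_families(family_ids, n_vouchers, seed=None):
            """Randomly split eligible families into treatment and control groups."""
            rng = random.Random(seed)
            shuffled = list(family_ids)        # copy so the referral list is untouched
            rng.shuffle(shuffled)              # gives every family the same chance
            treatment = shuffled[:n_vouchers]  # referred to FUP
            control = shuffled[n_vouchers:]    # receive non-FUP services
            return treatment, control

        # Hypothetical example: 75 eligible families and 25 available vouchers
        families = ["family_{}".format(i) for i in range(1, 76)]
        treatment, control = assign_families(families, n_vouchers=25, seed=42)
        print(len(treatment), len(control))    # prints: 25 50

    Fixing the random seed simply makes the illustration reproducible; in a real evaluation the research team carries out and documents the assignment.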

    The implementation study explores how FUP is carried out at the participating sites. The information gathered will document how the sites differ, helping to explain any differences in the program outcomes measured in the impact study. The study includes interviews and focus groups with staff at the participating sites to learn about the successes, challenges, and supports needed to implement FUP, as well as interviews with parents to capture their experiences participating in the program. Bringing together data on public child welfare referrals with data on the housing search and rental process will show how successfully families are able to move into housing and how long they spend in transition.

    In their own words: What FUP staff in Orange County, California, say about participating in a program evaluation

    The staff at agencies we work with on evaluations often ask us: What are the benefits and challenges of participating in a research study? We turned the question around and asked agency staff about their experiences with our program evaluation. Below is some of what they told us:

    Q: Why are you participating in the FUP evaluation? “We wanted to take the opportunity to be a model county for the rest of the counties in California. To say, you know, we’re willing to take that risk and take on this project to see what can come out of it. It’s really benefited us and it’s also good to learn from the other participants as well.” (Mario Murillo, Orange County Social Services Agency)

    Q: What do you hope to learn from the evaluation? “We have been having meetings with other housing authorities that participate in the study. We want to learn what other housing authorities have problems [with], how do they handle eligibility, how do they work with the programs to help [make] the program more successful. When we administer the program, we want to see if they are experiencing what we also see on our part. Especially during the pandemic, how did they work out issuing all the vouchers out to this group. We may also use [the evaluation results] to apply for more funding.” (Priscilla Le, Orange County Housing Authority)

    “Hopefully we can see what the result [is]. How many are taking advantage of this program and are they still in housing or another program. It’s helping us [plan] for the next program we encounter.” (Eddy Surya, Orange County Social Services Agency)

    Q: What kind of burden does the evaluation impose on your agency? How does your agency handle that burden? “The burden is mostly [having to] track the data … we tracked how many are active and how many are in the program, ... we know how many pending vouchers are there and those that are expired, but just tracking the individual is different. I track the process of the 50 [people in the study]. So I’m looking at their data … every two weeks. I rely on data entered into the system from staff [and] review a lot of the notes and data. It’s something additional [that I do] … at first it was harder but now it’s become a routine.” (Priscilla Le, Orange County Housing Authority)

    “The randomization tool. How people were getting picked and people weren’t. The only feedback we got [from the workers] was you know we filled out all this paperwork to be told [the families are] not eligible. But we told them, you know, if you don’t do it, [the families] are not going to have the opportunity. Because if you don’t do it, you’re not going to get anything. If you do it, at least you have a 50 percent chance of obtaining a voucher for your client. It wasn’t a big issue; it was only a little push back. But we changed the forms, so a lot of information is auto-populated, things are easier to read, and [we] collaborated with [the Urban] team. So towards the end it was easier because we already fixed the bugs.” (Mario Murillo, Orange County Social Services Agency)

    Q: What should agencies consider when joining an evaluation or RCT? “We have to be open minded and review before we reject or deny [anything] and think of the long-term benefits. I know this is a lot of work, but we looked at the long-term benefits for our clients.” (Eddy Surya, Orange County Social Services Agency)

    For those who are thinking about participating in an evaluation, these insights suggest that being part of one does place some burden on staff. However, staff also recognize that their support and participation are important to helping the families they serve. Over time, agency staff adjust to the evaluation’s requirements and the burden becomes lighter.

    Equipping the child welfare field to produce rigorous evidence of what works

    In summer and fall 2020, CWEST held its first virtual Child Welfare Evidence-Building Academy, an interactive program of trainings for child welfare agency staff, practitioners, and evaluators. In our first cohort, we hosted about 150 participants from 50 child welfare teams hailing from 26 states and two territories. Participants attended 15 sessions where they learned how to:

    • identify appropriate evaluation designs;
    • apply basic scientific principles and tools needed to produce evidence; and
    • build greater capacity to conduct and support rigorous evaluation.

    The Academy’s trainings walked participants step by step through the essentials of evaluation in child welfare settings. Academy experts were on hand to answer questions, engage in problem solving, and help participants create a strong evaluation plan. Some participants were just starting out on the road to evaluation, while others were adjusting an evaluation already underway.

    Check out the content from the Evidence-Building Academy (EBA) trainings here.

    The Benefits of Using a Randomized Controlled Trial (RCT) Design

    Child welfare agencies often raise a common concern: “It seems unfair to deny this effective program to so many families.” However, the RCT design brings tremendous value to program evaluation for two reasons.

    First, RCTs are the fairest way to distribute limited resources. For example, if 75 families are eligible for 25 available slots, an RCT offers a random process for selecting 25 eligible families. Other service delivery approaches, like filling slots on a first-come, first-served basis, favor those who get to the front of the line, either on their own or through a strong advocate.

    Second, researchers view RCTs as the “gold standard” for learning whether a program is, in fact, effective. The process of “randomizing,” or flipping a coin, to select participants means that eligible families assigned to the treatment group are, on average, the same as eligible families assigned to the control group. When the groups are large, this equalizing effect is even stronger. Opening the program to families in the treatment group and following both groups over time allows researchers to see whether the treatment group fares better than the control group. Because the groups start out comparable, any difference in outcomes between the two groups can be attributed to the program.
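    The equalizing effect can be illustrated with a small, hypothetical simulation in Python. The risk scores, group sizes, and results below are invented for illustration and are not CWEST data; the point is simply that as the groups get larger, the average difference between the treatment and control groups on a characteristic measured before the program shrinks toward zero.

        # Illustrative simulation with made-up numbers, not CWEST findings.
        # It shows why randomization balances the groups: the larger the
        # groups, the smaller the average gap on a pre-existing trait.
        import random
        import statistics

        def average_gap(n_families, trials=2000, seed=1):
            """Average treatment-control gap on a pre-existing risk score."""
            rng = random.Random(seed)
            gaps = []
            for _ in range(trials):
                scores = [rng.random() for _ in range(n_families)]  # baseline trait
                rng.shuffle(scores)                                  # the "coin flip"
                half = n_families // 2
                treatment, control = scores[:half], scores[half:]
                gaps.append(abs(statistics.mean(treatment) - statistics.mean(control)))
            return statistics.mean(gaps)

        # The gap shrinks as the groups grow, so with large groups any
        # remaining difference in later outcomes points to the program itself.
        for n in (20, 200, 2000):
            print(n, round(average_gap(n), 4))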

    Study designs that compare participants with those who choose not to participate, or designs that do not have a comparison group at all, make it difficult to tell whether a program is effective. These designs can’t rule out the possibility that families would have improved on their own without the program—an important detail for child welfare administrators facing funding decisions.

    Child welfare administrators make hard financial choices about which programs have the best chance of helping families. A robust RCT design produces evidence that is clear and convincing and that can help administrators select programs that meet the needs of the families they serve.