With today’s release of the 2019 National Assessment of Educational Progress (NAEP) scores, education wonks and policymakers will be parsing the data for the latest student achievement trends. One analysis they can likely skip: assessing improvement for students who receive free and reduced-price lunch (FRPL).
Understanding the academic performance of our nation’s low-income students, for whom free and reduced-price lunch receipt has historically served as a proxy, is critically important. But a student’s FRPL status, as reported in recent administrations of NAEP, is a weak indicator of student poverty, inconsistent both across time and across states.
Over the past several years, high-poverty schools and districts have adopted the community eligibility provision (CEP), which allows schools with a large share of students whose families use social safety net programs, such as the Supplemental Nutrition Assistance Program and Temporary Assistance for Needy Families, to provide free lunch to all their students, without any paper applications.
In most states, matching students against the rolls of safety net programs, a process called direct certification, identifies only students who are eligible for free lunch, not reduced-price lunch. Thus, the share of students identified via direct certification tends to be substantially lower than the free and reduced-price lunch share identified with paper forms.
States vary in how they report free lunch receipt in CEP schools. For NAEP, some states report all students in CEP schools as receiving free lunch, resulting in a likely overcount of student need. Other states only report those who were identified as low-income via direct certification (PDF), resulting in a likely undercount.
Because of this inconsistency, comparing the performance of free lunch students in two states that use different reporting methods can yield misleading conclusions. It can even be difficult to measure the performance of FRPL-eligible students in a single state over time, as schools increasingly adopt CEP and thus either reduce or increase the share of students they report as eligible for FRPL.
The paradox of rising scores
West Virginia is an example of the second scenario, where aggregate trends in FRPL-eligible student achievement can prove misleading. More than half of West Virginian students are enrolled in schools that rely on CEP, and it appears that all students enrolled in those schools were reported as eligible for free lunch on NAEP. As a result, the share of FRPL-eligible students reported in the NAEP data has climbed substantially in the past few years. In 2011, 46 percent of students who took the 8th grade math assessment in West Virginia were reported as FRPL eligible. In 2017, 75 percent were reported as FRPL eligible. Although child poverty is an important and growing issue in West Virginia (PDF), it is unlikely that child poverty actually increased by more than 50 percent in just six years; it is much more likely that this jump reflects overreporting based on CEP data.
With this change in reporting comes a dramatic change in outcomes for FRPL-eligible and ineligible students in West Virginia. Although 8th grade math scores for the state were flat from 2011 to 2017, both FRPL-eligible and non-FRPL subgroups posted increases in their NAEP performance, with scores rising 4 and 7 points, respectively.
Why did this happen? First, it’s important to know that a student’s family income is associated with academic achievement. Students who are newly identified as FRPL eligible because they attend CEP schools, but were not FRPL eligible before, are likely higher-income students who have, on average, higher test scores than those students who are eligible for FRPL based on their family income or participation in social safety net programs. Moving these students from the non-FRPL to the FRPL group artificially boosts the overall performance of FRPL students.
However, because these newly FRPL students are attending high-poverty CEP schools, they likely have lower family incomes and lower average achievement scores than the typical non-FRPL student. Because these students are moved out of the non-FRPL pool, the average score of the non-FRPL group also goes up.
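The arithmetic behind this reclassification effect can be seen in a toy simulation. The scores and group sizes below are purely illustrative, not actual NAEP figures; the point is that moving a middle-scoring group from the non-FRPL pool to the FRPL pool raises both subgroup averages while the overall average is unchanged.

```python
# Toy illustration (hypothetical scores) of how reclassifying students
# can raise both subgroup averages while the overall average stays flat.

# Three stylized groups of students:
frpl = [260] * 40       # originally FRPL eligible (lower income)
cep_new = [275] * 10    # reclassified as FRPL when their schools adopt CEP
non_frpl = [290] * 50   # non-FRPL students outside CEP schools

def mean(scores):
    return sum(scores) / len(scores)

# Before CEP: the middle group is counted as non-FRPL.
before_frpl = mean(frpl)                  # 260.0
before_non = mean(cep_new + non_frpl)     # 287.5

# After CEP: the middle group is counted as FRPL.
after_frpl = mean(frpl + cep_new)         # 263.0 -- FRPL average rises
after_non = mean(non_frpl)                # 290.0 -- non-FRPL average rises

# The statewide average is the same either way.
overall = mean(frpl + cep_new + non_frpl) # 276.5
```

Both subgroup averages rise (260 to 263 and 287.5 to 290) even though no individual student’s score changed, mirroring the flat-overall, rising-subgroups pattern in West Virginia’s data.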
An interim fix for FRPL
The Urban Institute has been using individual-level data—including FRPL status—to demographically adjust state NAEP data. This is important because comparing raw scores can be misleading; Massachusetts serves a different student population than Mississippi, and our adjustment controls for basic demographic differences between the two states, including FRPL status. Because of the changes to FRPL reporting, we opted for a new approach in the 2019 update.
For students enrolled in CEP schools, we assume that we don’t have any data about FRPL status. Instead, we use other contextual data on the students, the school, and the district to statistically predict the likelihood that each student is eligible for FRPL. This has the effect of “leveling out” inconsistent reporting: CEP schools that report all students as FRPL eligible end up with fewer predicted FRPL students than they reported, while those that report only directly certified students tend to end up with more.
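A minimal sketch of this kind of prediction is shown below. The coefficients and feature names are entirely hypothetical, not the Urban Institute’s actual model; the sketch just illustrates the general approach of scoring each student’s likelihood of FRPL eligibility from contextual indicators via a logistic model.

```python
import math

# Hypothetical logistic-model coefficients (for illustration only) relating
# contextual indicators to the probability of FRPL eligibility.
COEFS = {
    "intercept": -1.0,
    "school_direct_cert_share": 3.0,  # school's directly certified share
    "single_parent": 0.6,             # 1 if student lives with one parent
    "books_under_25": 0.8,            # 1 if fewer than 25 books at home
}

def predict_frpl_probability(student):
    """Return the predicted probability that a student is FRPL eligible."""
    z = COEFS["intercept"]
    for name, coef in COEFS.items():
        if name != "intercept":
            z += coef * student.get(name, 0)
    return 1 / (1 + math.exp(-z))  # logistic link maps z to (0, 1)

# A student with several need indicators in a high-direct-certification school
high_need = {"school_direct_cert_share": 0.6, "single_parent": 1, "books_under_25": 1}
# A student with few need indicators in a low-direct-certification school
low_need = {"school_direct_cert_share": 0.2, "single_parent": 0, "books_under_25": 0}

print(round(predict_frpl_probability(high_need), 2))  # ≈ 0.90
print(round(predict_frpl_probability(low_need), 2))   # ≈ 0.40
```

Averaging such predicted probabilities within a school yields a predicted FRPL count that does not depend on whether the school reported everyone or only directly certified students, which is what “levels out” the inconsistent reporting.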
This statistical fix isn’t perfect, but it provides a more nuanced estimate of the number of FRPL-eligible students at CEP schools. Using predicted FRPL eligibility changes how states rank when scores are demographically adjusted for only a student’s FRPL status. On the 8th grade math assessment, 42 states change at least one rank place, and 11 states change more than three rank places. Because West Virginia has such a large share of CEP schools, it has the largest drop when we use our new FRPL measure: 15 places, from 31 to 46.
Student survey data provide critical context for FRPL reporting
CEP implementation produces many benefits for students, including increased test scores (PDF), reduced disciplinary incidents, and healthier body mass index (PDF). But it has also weakened FRPL as a proxy for student poverty. Fortunately, the NAEP also administers a 10-minute survey to students, which provides useful background data on each student’s socioeconomic status, such as how many books they have at home, whether they live with a single parent, and whether they have access to technology such as a laptop or smartphone. We use much of this information in our statistical adjustment.
The NAEP student survey is essential for building and testing new socioeconomic status indicators. But it is optional, and states are increasingly opting out. Although every state but Alaska gave the survey in 2013, seven states opted out of the student survey in 2017.
If we’re going to accurately measure the progress of our most vulnerable students, we need to be able to consistently identify them. Inconsistent FRPL reporting on NAEP, combined with the loss of student survey data in states, endangers our ability to track the performance of low-income students.