Urban Wire How to Make Gainful Employment More Inclusive
Erica Blom, Robert Kelchen, Carina Chien, Kristin Blagg

Gainful employment (GE) regulations, implemented by the Obama administration in 2015, were intended to hold for-profit and nondegree programs accountable by assessing whether graduates’ earnings were high enough to justify the debt they had taken on. Repealed under the Trump administration, these rules could be reinstated by the Biden administration. If so, policymakers should consider “rolling up” programs so that as many programs as possible are covered.

GE regulations, as designed in 2015, assessed programs using two debt-to-earnings ratios for their graduates. The first was the ratio of annual student loan payments to total annual income, and the second was the ratio of annual student loan payments to discretionary income (the amount above 150 percent of the federal poverty level). If the first ratio exceeded 12 percent and the second exceeded 30 percent, the program could risk loss of Title IV eligibility (that is, it would no longer have access to federal grants and loans).
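The two-ratio test described above can be sketched in code. This is an illustrative simplification, not the regulatory formula: the function names and the example inputs are hypothetical, and the actual rules involved additional pass and "zone" thresholds not shown here.

```python
def ge_debt_to_earnings(annual_loan_payment, annual_income, poverty_line):
    """Compute the two 2015 GE debt-to-earnings ratios (illustrative).

    Discretionary income is the amount of annual income above
    150 percent of the federal poverty level.
    """
    discretionary = annual_income - 1.5 * poverty_line
    annual_ratio = annual_loan_payment / annual_income
    discretionary_ratio = (annual_loan_payment / discretionary
                           if discretionary > 0 else float("inf"))
    return annual_ratio, discretionary_ratio


def at_risk(annual_ratio, discretionary_ratio):
    # A program risked losing Title IV eligibility if the annual ratio
    # exceeded 12 percent AND the discretionary ratio exceeded 30 percent.
    return annual_ratio > 0.12 and discretionary_ratio > 0.30
```

For example, a graduate paying $4,000 a year on $30,000 of income, against a hypothetical $12,880 poverty line, would have an annual ratio above 12 percent and a discretionary ratio above 30 percent, putting the program at risk.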

There are different ways to define a “program.” Programs, or majors, are organized by Classification of Instructional Programs (CIP) codes. There are 47 broad categories that are given two-digit codes, such as 26 (biological and biomedical sciences), which contain four-digit subcategories (such as 26.08, genetics). Within these are specific six-digit programs (such as 26.0804, animal genetics). In the 2018–19 award year, credentials were awarded in 38 two-digit codes, 368 four-digit codes, and 1,423 six-digit codes. Depending on your goal, two-, four-, or six-digit CIP codes might be appropriate.
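Because the CIP hierarchy is encoded directly in the code string, moving between levels is just a matter of taking prefixes. A minimal sketch (the function name is ours):

```python
def cip_levels(cip6):
    """Split a six-digit CIP code such as '26.0804' (animal genetics)
    into its two-digit ('26', biological and biomedical sciences),
    four-digit ('26.08', genetics), and six-digit levels."""
    return cip6[:2], cip6[:5], cip6
```

So `cip_levels("26.0804")` returns `("26", "26.08", "26.0804")`, the three levels of aggregation discussed above.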

GE regulations used six-digit codes, the most specific definition of a program. For privacy and statistical stability, programs with fewer than 30 graduates (pooled across two years) were not required to report data. This excluded a considerable share of programs and students. As the figures below show, roughly 73 percent of GE-qualifying programs and 23 percent of students in those programs would have been excluded from measurement in the 2017–18 and 2018–19 graduate cohorts.

Bar chart showing the share of programs and students excluded from gainful employment regulations

Which types of programs are often too small to reach the 30-person minimum? Among the seven most common two-digit programs offering undergraduate certificates, more than half of computer science, engineering, and business programs are too small to be counted under the six-digit GE system, with 64, 64, and 63 percent of programs, respectively, excluded. Thirty-nine percent of health professions; 40 percent of precision production; 40 percent of culinary, entertainment, and personal services; and 46 percent of mechanic and repair programs are excluded.

One alternative is to report GE data at the four-digit level (as the College Scorecard does) or at the two-digit level. As the above charts show, this would increase coverage significantly. However, this approach would likely obscure data on six-digit programs that were already large enough to be assessed under the old GE rules. For example, the poor performance of an animal genetics program with 35 graduates, which previously would have failed GE, might be masked by the performance of the other genetics programs in its four-digit code.

A different approach is to iteratively “roll up” programs within a given credential level to four-digit or two-digit CIP codes, or even to the institution, until the threshold of 30 students is met. The concern, however, is potentially combining programs with very different outcomes. For example, a failing program could be reported along with a successful program, and either the failing program would get unfairly saved or the successful program unfairly put at risk of closing.
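The roll-up described above can be sketched as follows. This is our own minimal illustration, not the regulatory procedure: it assumes a single credential level at a single institution, uses graduate counts keyed by six-digit CIP code, and omits the final roll-up to the institution as a whole.

```python
from collections import defaultdict


def roll_up(grad_counts, threshold=30):
    """Illustrative iterative roll-up within one credential level.

    `grad_counts` maps six-digit CIP codes (e.g., '26.0804') to graduate
    counts. Programs at or above the threshold are reported as-is;
    smaller ones are pooled at the four-digit level, then the two-digit
    level. Anything still too small would roll up to the institution
    (not shown). Returns (reported, residual).
    """
    reported, pooled = {}, {}
    for code, n in grad_counts.items():
        (reported if n >= threshold else pooled)[code] = n

    # Pool the remaining small programs at coarser CIP prefixes:
    # width 5 is the four-digit level ('26.08'), width 2 the two-digit ('26').
    for width in (5, 2):
        merged = defaultdict(int)
        for code, n in pooled.items():
            merged[code[:width]] += n
        pooled = {}
        for code, n in merged.items():
            (reported if n >= threshold else pooled)[code] = n

    return reported, pooled
```

With counts like `{"26.0804": 35, "26.0801": 12, "26.0802": 20, "51.0801": 5}`, animal genetics is reported on its own, the two small genetics programs are pooled and reported at the four-digit code 26.08, and the lone small health program remains in the residual awaiting institution-level pooling.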

How often does this happen? By definition, we don’t see GE outcomes for programs with fewer than 30 graduates, so we can’t implement exactly the roll-up described above. But we can roll up the programs we do see to the four-digit CIP code and see what happens.

First, the vast majority (76 percent) of six-digit programs in the GE data are the only six-digit program with data within their four-digit program. (In our genetics example, that would mean that either animal genetics is the only six-digit program within 26.08, genetics or that all other genetics programs fall below the 30-graduate threshold, so we don’t see outcomes for those programs in our data.) If we examine the small number of four-digit programs containing multiple six-digit programs, the majority have the same outcome: all passing or all failing. Only 10 percent of this small number of four-digit programs with multiple six-digit programs have mixed outcomes, where at least one program fails and at least one passes. The table below provides more detail.

Table showing that programs with four-digit Classification of Instructional Programs codes containing six-digit programs mostly have programs that are all passing or all failing

The number of instances of disparate outcomes within a four-digit CIP code is much lower than would be expected by chance. The overall failing rate is roughly 9 percent among six-digit programs. If two six-digit codes were chosen randomly from our data, the probability of a mixed outcome would be 17 percent, more than twice the 8 percent actually seen. Similarly, if three were chosen, the probability of a mixed outcome would be 25 percent, compared with 14 percent, and if four were chosen, the probability would be 32 percent, compared with 24 percent.
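The chance benchmarks above follow from treating each program's pass/fail outcome as an independent draw: a group of k programs has a mixed outcome unless all pass or all fail. A quick sketch of that calculation:

```python
def mixed_outcome_prob(fail_rate, k):
    """Probability that k independently drawn programs are neither
    all passing nor all failing, given the overall failing rate."""
    return 1 - (1 - fail_rate) ** k - fail_rate ** k
```

With the roughly 9 percent failing rate in the GE data, this gives about 16 percent for two programs, 25 percent for three, and 31 percent for four, matching (after rounding) the chance benchmarks cited above.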

However, the chance of mixed outcomes is clearly not zero, so there is a trade-off: Is including more programs worth the cost that some successful programs may be grouped together with some less successful programs? We believe it is.

First, these smaller programs would not be included at all if GE regulations were revived without alteration, and more data and accountability are better than less. Second, passing programs are more likely to pull a failing program up than vice versa (because there are far more passing than failing programs), so the risk of a program being unfairly deemed failing is low. Finally, any program deemed failing is given a chance to improve before sanctions take effect; college administrators are likely aware of which of their six-digit programs are less effective, and they can take appropriate action.

A final benefit of this roll-up strategy is that it minimizes “gaming” by program size—administrators capping programs at 29 students or dividing programs into smaller subprograms to avoid scrutiny. For GE to work, it needs to cover all eligible programs.

