A Better Measure of Inflation Doesn’t Mean a Better Measure of Poverty
The Trump administration is proposing to change the way the US officially measures poverty by using a different method to adjust the federal poverty level for inflation over time. The administration’s preferred approach would lead to slightly lower rates of inflation from year to year, slowing the growth of the poverty level. The change would result in fewer families being eligible for a variety of public assistance programs, and those still eligible would receive fewer benefits unless program rules change.
Although the official poverty measure has many shortcomings, how it is adjusted for inflation is far from the most important. Using an arguably better measure of inflation will not address the challenges of measuring how the nature of poverty, and the size and composition of the population living in it, change across generations. And changing the inflation index without also reassessing the level of resources people need to escape poverty could exacerbate the burdens and challenges confronting vulnerable families.
Could we do a better job measuring inflation?
If prices are going up, people need more money to buy the things they usually buy, which is why adjusting for inflation when tracking people’s well-being is important. Currently, the government uses the Consumer Price Index for All Urban Consumers (CPI-U) to measure inflation. Every month, the government collects prices for 211 categories of goods and services in 38 geographic areas to track how prices are changing. Because people’s buying habits change over time, the government adjusts the mix of items every two years.
The administration argues that the CPI-U overstates inflation because it does not adequately account for shifts in what people buy as relative prices of goods and services change. The chained CPI takes those shifts into account, reflecting a slower inflation rate over time.
To illustrate, if you buy five apples and five pears and they each cost $1, you will spend $10. If the price of pears goes up to $2, then you would need $15 to buy your basket of five apples and five pears, so the CPI-U inflation would be 50 percent. But if pears cost twice as much as apples, you might opt to buy eight apples and three pears—getting more fruit but in a different proportion—and be just as happy as you were with five and five. And eight apples and three pears only cost $14, so the chained CPI inflation would be 40 percent.
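The fruit arithmetic above can be checked with a short script. The prices and quantities are the hypothetical ones from the example, not real CPI data, and the calculation is a simplified sketch of the fixed-basket versus substitution logic, not the Bureau of Labor Statistics’ actual index formulas:

```python
# Fixed-basket (CPI-U-style) vs. substitution (chained-CPI-style) inflation,
# using the hypothetical apples-and-pears example.

base_prices = {"apples": 1.00, "pears": 1.00}
new_prices = {"apples": 1.00, "pears": 2.00}
base_basket = {"apples": 5, "pears": 5}  # the basket you originally bought
new_basket = {"apples": 8, "pears": 3}   # substituted basket, assumed equally satisfying


def cost(basket, prices):
    """Total cost of a basket of goods at the given prices."""
    return sum(qty * prices[item] for item, qty in basket.items())


base_cost = cost(base_basket, base_prices)  # $10

# Fixed basket: price the ORIGINAL basket at the new prices ($15).
fixed_basket_inflation = cost(base_basket, new_prices) / base_cost - 1

# Substitution: price the basket the consumer actually shifts to ($14).
substitution_inflation = cost(new_basket, new_prices) / base_cost - 1

print(f"Fixed-basket inflation: {fixed_basket_inflation:.0%}")  # 50%
print(f"Substitution inflation: {substitution_inflation:.0%}")  # 40%
```

The substitution measure comes out lower because it credits the consumer with switching toward the relatively cheaper good, which is the core intuition behind the chained CPI.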
Many economists believe that the chained CPI provides a better measure of inflation than the traditional CPI-U. Tax legislation enacted by Congress and signed into law by the Trump administration requires the US Department of the Treasury to use the chained CPI to adjust tax brackets for inflation.
But proposals to index Social Security benefits using the chained CPI failed during the Obama administration, partly because the change would have reduced benefits for seniors and others who rely on Social Security. Critics of the chained CPI argue that not everyone can easily change the mix of items they buy in response to changes in relative prices and that using the chained CPI to adjust program eligibility and benefit levels would disproportionately harm vulnerable populations.
What difference would it make?
Moving from the traditional CPI-U to the chained CPI would reduce measured inflation by 0.2 to 0.3 percentage points per year. So in any one year, the difference might appear limited, but over time, the difference would grow.
Between August 2016 and August 2017, the CPI-U and chained CPI increased by 1.94 and 1.69 percent, respectively—a difference of 0.25 percentage points. But between August 2002 and August 2017, the CPI-U increased by 35.9 percent while the chained CPI increased by 31.6 percent, widening the gap between the two inflation indices to 4.3 percentage points.
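The compounding at work here can be sketched with illustrative rates. The figures below assume steady annual inflation of 2.0 percent versus 1.75 percent (a 0.25-point gap, in line with the one-year difference above); they are not the actual CPI-U or chained CPI series, which vary year to year:

```python
# How a small annual gap in measured inflation compounds over 15 years.
# Illustrative steady rates only, not actual CPI-U or chained CPI data.

years = 15
cpi_u_rate = 0.0200    # assumed annual inflation under the fixed-basket index
chained_rate = 0.0175  # assumed annual inflation under the chained index

cpi_u_growth = (1 + cpi_u_rate) ** years - 1
chained_growth = (1 + chained_rate) ** years - 1

print(f"Fixed-basket index after {years} years: +{cpi_u_growth:.1%}")
print(f"Chained index after {years} years:      +{chained_growth:.1%}")
print(f"Cumulative gap: {(cpi_u_growth - chained_growth) * 100:.1f} points")
```

Even under these stylized assumptions, a quarter-point annual difference compounds to a gap of several percentage points over 15 years, which is why the choice of index matters far more for a poverty threshold carried forward over decades than for any single year.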
This means that over time, according to official measurements using the chained CPI, we would see a noticeably lower poverty rate without an actual change in household circumstances, and programs that use the federal poverty level as the basis for eligibility would serve fewer people.
Although not all public assistance programs use the federal poverty level to determine eligibility and benefits, some major ones do, such as the Supplemental Nutrition Assistance Program and certain parts of Medicaid. As the poverty level decreases relative to what it would have been using the CPI-U, fewer people would qualify for benefits, and those who would qualify would receive smaller benefits.
Is there a better way to measure poverty?
Adjusting the poverty level for inflation using the chained CPI would not improve our measure of poverty. The federal poverty level is based on food consumption patterns in the 1950s and the cost of what we considered an acceptable diet in the 1960s. Because families spent about one-third of their incomes on food in the 1950s, the poverty level was set at three times the cost of the US Department of Agriculture’s “economy food plan” (now called the “thrifty food plan”). Today, we basically use that poverty level established more than half a century ago and adjust it for inflation.
Basing the federal poverty level on food consumption patterns today, when the typical family spends only about 10 percent of its income on food, would set the poverty level at 10 times the cost of the thrifty food plan, which would result in a poverty level that exceeds the median income of US households. If we wouldn’t draw the same line today, then how we adjust for inflation is not the primary issue.
The supplemental poverty measure (SPM) represents a better way to measure poverty. It sets the poverty level based on approximately how much the lowest-income third of consumers spends not just on food but also on clothing, shelter, and utilities as reported in the Consumer Expenditure Survey. In any given year, the SPM provides a yardstick for identifying households whose resources are so limited that they cannot fully participate in society and the economy.
The SPM thresholds are reestablished every year and are not tied to a specific level of basic need set at one point in time. Indeed, it is difficult to argue that what constitutes being poor in one time period in a given society can be relevant across all time periods and places. In 1940, more than 45 percent of US homes lacked complete plumbing facilities. Back then, having indoor plumbing was a good indicator that a family wasn’t poor—but today, not so much.
We don’t have to establish a new federal poverty level every year, and we may even want to keep a standard in place for several years to see whether and how the absolute well-being of the lowest-income households is improving. But it makes little sense to use the same standard for decades. In 1973, a US government interagency subcommittee recommended that the poverty threshold be updated every 10 years.
And that brings us back to inflation. If we appropriately update the federal poverty threshold every decade or so to reflect the level of resources families need to be able to participate in society and the economy, then the choice of an inflation factor carries far fewer consequences. Applying a better measure of inflation to an increasingly outdated standard of need will only lead to a worse measure of poverty.