The blog of the Urban Institute
November 1, 2019

‘Nonresponse Bias’ Could Explain the Census Bureau’s Finding on the Citizenship Question


Yesterday, the US Census Bureau released selected findings from a national census test, conducted in July 2019, to examine the impact of including a citizenship question on “self-response” rates—that is, the rate at which people actively return completed census forms.

The Bureau found that adding a citizenship question to the census would have virtually no impact on self-response. Below are the results, according to the Census Bureau:

Figure: Self-Response Rates for Samples with and without the Citizenship Question: July 2019 National Census Test Results

So is this the end of the story surrounding the controversial citizenship question and its potential impacts? I don’t think so.

The Bureau’s results certainly raised my eyebrows, given all the previous research and testimony by Census Bureau researchers showing that adding a citizenship question to the census would detrimentally affect self-response.

Even recent, independent, nongovernmental research suggests that there would be a participation chill if a citizenship question were included. So which research is correct, and which is wrong? All of the research is valid, but you need to be careful interpreting the results.

Here is a plausible explanation that reconciles the findings of the July census test (showing no effect of the citizenship question) with those of prior research (showing a negative effect). The reasoning is simple and draws on past Census Bureau research experience.

What explains the Census Bureau’s finding?

Nonresponse bias—the difference between those who participate in a survey and those who do not—has consistently muddled Census Bureau research efforts for decades in their quality assessments of the decennial censuses. Their assessments involve conducting a large sample survey just after the decennial census and comparing the results of the sampled households with the results from corresponding census returns.

But a problem inevitably crops up: not everyone in the subsequent quality assessment survey participates. That creates a nonresponse bias, because virtually nothing is known about those who fail to respond to both the census and the survey. The same applies to the July 2019 census test: although the test focused on the self-responders, we know nothing about the nonresponders.

Nonresponse bias could explain why the July 2019 test did not detect a meaningful difference in participation between households receiving forms with and without the citizenship question. The people most likely to be affected by the citizenship question may well have opted out of the census test altogether. After all, the nation had just been exposed to at least a year of media coverage and political dialogue about threatening uses of citizenship data (e.g., immigration enforcement, gerrymandering).

Words and actions can affect the decision to participate in government surveys

The impact of this exposure cannot be expected to vanish simply because SCOTUS rendered a decision. Words and actions matter. They can instill fear in population groups such as immigrants and the Latinx community. And it’s plausible that much of the impact on these populations originated in the early months of the push to add a citizenship question.

The populations most affected by the citizenship question may well have chosen not to participate in any census efforts before the census test launched in June 2019, leaving only participants who would have responded regardless of the inclusion of the citizenship question. So nonparticipation would be roughly the same between the samples with and without the citizenship question on their forms.
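The mechanism described above can be made concrete with a small simulation. The numbers below are purely illustrative assumptions, not Census Bureau figures: suppose some share of households is "chilled" and opts out of any census contact before the test begins, while everyone else self-responds at the same baseline rate. The two experimental arms then produce nearly identical observed self-response rates, even though the chilled group represents a real loss of participation.

```python
import random

random.seed(42)

# Hypothetical, illustrative parameters (not Census Bureau figures):
CHILLED_SHARE = 0.15      # share of households opting out of any census contact
BASELINE_RESPONSE = 0.60  # self-response rate among everyone else
N = 100_000               # households per experimental arm

def observed_rate(with_citizenship_question: bool) -> float:
    """Simulate one arm of the test and return its observed self-response rate.

    Note that the argument is never used: in this scenario, the chilled
    households have already opted out before randomization, so the presence
    of the question has no effect on who responds within the test.
    """
    responses = 0
    for _ in range(N):
        if random.random() < CHILLED_SHARE:
            continue  # chilled households never respond, in either arm
        if random.random() < BASELINE_RESPONSE:
            responses += 1
    return responses / N

rate_with = observed_rate(True)
rate_without = observed_rate(False)
print(f"with question:    {rate_with:.3f}")
print(f"without question: {rate_without:.3f}")
```

Under these assumed numbers, both arms come out near 0.85 × 0.60 = 0.51: the experiment correctly finds no difference between the forms, yet the 15 percent of households who opted out entirely are invisible to it.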

One might point out, as the Census Bureau does, that the self-response rates for the July 2019 test are similar to those from the Rhode Island census test conducted before the citizenship issue fully emerged. But the self-response targets for both tests were in the 50 percent range, while the actual 2020 Census plan relies on a 60.5 percent self-response rate. So there is a sizeable gap between test and decennial census self-response that could accommodate a large group of people opting out altogether.

This would explain the similarity in self-response between forms with and without the citizenship question. And it would support the validity of the sizeable body of research conducted before the test, which suggests a chilling effect on census participation because of the citizenship question.

At a minimum, we should accept the statistical results of the Census Bureau's well-designed self-response experiment in the July 2019 census test. But we should also recognize that generalizing these findings to the full US population is risky.

A well-known phenomenon, nonresponse bias, explains why we might still expect a genuine negative effect on the upcoming decennial census: the people most likely to be affected by the past year's citizenship question debate may have already decided not to participate in any census activities, the test included.

We should not breathe a sigh of relief following the Supreme Court's decision; it's too late for that. Our efforts to bolster participation and community trust to achieve a fair and accurate census should continue unabated. We all deserve to be counted.

A crowd gathers outside the US Supreme Court as the court hears oral arguments in the case Department of Commerce v. New York on April 23, 2019, in Washington, DC. The case highlights a proposal by the Trump administration to include a question about US citizenship in the 2020 census. (Photo by Win McNamee/Getty Images)


As an organization, the Urban Institute does not take positions on issues. Experts are independent and empowered to share their evidence-based views and recommendations shaped by research.