A year after the White House released a blueprint for an AI Bill of Rights, President Biden has issued a new executive order to guide government and its partners in the private sector and academia as they develop, assess, and implement equitable uses of AI.
Automated systems reflect societal biases and structural inequities, and AI applications in the public sector have been shown to discriminate based on race, ethnicity, and gender, compounding preexisting inequities. However, AI applications also present a significant opportunity to close equity gaps across domains such as health care, housing, criminal justice, and public benefit administration.
With this new guidance and an additional memorandum from the Office of Management and Budget, government and community-based organizations have an opportunity to collaborate with public policy researchers to build an evidence base that highlights the current opportunities and risks associated with AI. To promote better social outcomes, the connection between these potential benefits and risks must be more clearly documented and incorporated into a virtuous cycle of learning, which policymakers can use to implement more equitable and responsible applications of these tools.
Responding to the equity mandate by balancing the study of AI’s harms and benefits
A robust response to the federal government’s request for the development and implementation of equitable AI requires a balanced analysis of the harms and benefits resulting from these technologies. To do so, research organizations and policymakers can develop a research framework that aims to disrupt structural racism through rigorous measurement and policy paths for action. In addition to building on the extensive literature on ethical AI and algorithmic bias, equitable AI can pursue participatory quantitative methods that ensure communities of color are at the forefront of designing and assessing the benefits and risks of new automated systems.
Researchers, designers, and regulators can assess the distribution of AI harms and benefits by considering the following three components of equity:
- Procedural equity: How can we design, develop, and implement automated systems to help ensure that marginalized communities receive fair access to services?
- Distributional equity: How can resource allocation and procurement distribute benefits associated with new automated systems to those systemically and historically excluded from accessing them?
- Structural equity: How can new guidance and standards transform the implementation of AI-related policies and the design and redesign of automated systems in a way that incentivizes actors to make measurable progress toward equitable outcomes?
Advancing AI and automated systems to benefit underserved communities
The Urban Institute has already begun to demonstrate how centering equity in the evaluation and implementation of AI and automated systems can have a positive impact. Urban researchers examined the disparate effects that automated valuation models (AVMs) have on home appraisals in majority-Black neighborhoods. Through a regression analysis, the researchers found that AVM accuracy is worse for majority-Black neighborhoods, which can reduce wealth accumulation for Black homeowners and understate the risk on their balance sheets. More recently, Urban researchers released a follow-up report that reinforced their findings of racial differences in AVM accuracy by incorporating satellite imagery data and machine learning models. These publications have informed guidance on rooting out racial and ethnic bias in home valuations and highlighted future areas for studying how AI may shape racial disparities in the mortgage market.
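The kind of comparison described above can be sketched in a few lines. The example below is purely illustrative: it uses synthetic data (not Urban's appraisal records), an invented `majority_black` tract indicator, and a single made-up control, simply to show the shape of a regression that tests whether AVM error magnitudes differ by neighborhood composition.

```python
import numpy as np

# Synthetic stand-in for appraisal records; all values are simulated.
rng = np.random.default_rng(0)
n = 5_000
majority_black = rng.binomial(1, 0.3, n)                  # hypothetical tract indicator
home_value = rng.lognormal(mean=12.5, sigma=0.4, size=n)  # hypothetical control variable

# Simulate absolute AVM error (as a percent of value) with a built-in
# 2-point gap for majority-Black tracts, mirroring the direction of the finding.
abs_error_pct = (
    5.0 + 2.0 * majority_black
    + 0.5 * (np.log(home_value) - 12.5)
    + rng.normal(0, 2.0, n)
)

# OLS: abs_error_pct ~ intercept + majority_black + log(home_value)
X = np.column_stack([np.ones(n), majority_black, np.log(home_value)])
beta, *_ = np.linalg.lstsq(X, abs_error_pct, rcond=None)

# beta[1] estimates the average error-magnitude gap for majority-Black
# tracts, holding the (synthetic) value control fixed.
print(f"estimated majority-Black error gap: {beta[1]:.2f} percentage points")
```

A real analysis would of course add many more controls, cluster standard errors by geography, and test alternative error definitions; this sketch only shows where a disparity estimate like "AVM accuracy is worse for majority-Black neighborhoods" comes from.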
As rapid advances in AI continue, automated systems will increasingly touch every corner of society, and it’s crucial to ensure they do so in ways that promote equity. We see preliminary research and policy opportunities in the following areas of interest, among others:
- studying racial inequities in Medicaid programs’ risk assessment or adjustment algorithms to generate more racially equitable health outcomes
- investigating how insurers, hospitals, and physicians can use AI to guide data-driven decisionmaking about patient care and diagnosis and assist in triaging limited resources
- utilizing text analysis and natural language processing models to identify racial bias in the child welfare system, without prompting overreliance by human decisionmakers on opaque scores (i.e., automation bias)
- opening up the black box on tenant screening data collection processes and consequent landlord behavior, which can disproportionately exclude renters of color from safe and affordable housing
- recalibrating pretrial risk assessment tools that might exhibit racial bias against defendants of color
Building a collaborative evidence base for future AI research
Urban’s Racial Equity Analytics Lab is working across these and other areas of research with our new body of work on equitable AI and automated systems. Housed in the Office of Race and Equity Research, our team draws on a blend of policy expertise, research and data science capacity, and community-engaged approaches to study how we can minimize the harms and maximize the benefits of automated systems. We will continue to apply what we learn to drive our own use of AI to fill data and research gaps across these same domains.
We look forward to collaborating with key stakeholders in federal and state government, nonprofit research, and community-based organizations to ensure the outsize benefits and large-scale risks of automated systems are distributed equitably. Please reach out to Rita Ko ([email protected]) to share your interest in partnering with the Urban Institute to evaluate and implement more equitable and impactful algorithmic systems.