Three Ways AI Can Discriminate in Hiring and Three Ways Forward
Jenny R. Yang

In 2012, college engineering student Kyle Behm applied for a number of hourly jobs at retail stores. Behm had worked in similar positions, but the jobs he applied for required personality assessments. Behm had been diagnosed with bipolar disorder, so questions about whether he experienced mood changes led many of the retailers to reject him even though he was well qualified.

Behm’s story illustrates the risks posed by a new generation of tools powered by artificial intelligence (AI) that are transforming the lives of America’s workers, with profound implications for civil rights.

Last week, I testified before the House Committee on Education and Labor Subcommittee on Civil Rights and Human Services to discuss how technology is changing work and how policymakers can address the new civil rights challenges raised by algorithmic hiring tools, worker surveillance, and tech-enabled business models that disrupt traditional employer-employee relationships.

Many new tech-driven hiring systems use AI to more quickly filter through increasing numbers of online applicants. Employers are using chatbots, résumé-screening tools, online assessments, web games, and video interviews to automate various stages of the hiring process.

Some employers aim to hire more quickly, assess “cultural fit,” or reduce turnover. Others aim to make better job-related decisions and hire more diverse candidates, expanding the applicant pool by measuring abilities rather than relying on traditional proxies for talent, such as graduation from an elite university, employee referrals, or recruiting from competitors. AI may be able to help employers identify workers who have been excluded from traditional pathways to success but have the skills necessary to succeed.

But AI systems are built to replicate human decisionmaking, and the bias in those systems is often the human behavior they emulate. When employers seek simply to automate and replicate their past hiring decisions, rather than hire based on a rigorous analysis of job-related criteria, they can perpetuate historical bias. Discriminatory criteria can be baked into algorithmic models and then rapidly scaled.

Bias may enter AI-powered systems in at least three ways:

1. Biased data. Data used to train algorithms may introduce bias. Amazon’s effort to build a résumé-screening tool highlights this challenge. Amazon’s model—trained on 10 years of résumés submitted primarily by men—learned to penalize women applicants.

2. Biased variables. Variables considered by algorithms often contain bias, and models may learn to use proxies for protected characteristics. For example, zip codes can be a proxy for race. Selecting biased variables can reflect developers’ blind spots—an acute concern considering the lack of diversity in the field. (A toy sketch after this list shows how a proxy can reproduce bias even when the protected characteristic is excluded.)

3. Biased decisions. Humans may misuse models’ predictions and place undue weight on them, leading to discriminatory decisions.
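To make the proxy problem concrete, consider a toy sketch in Python. Everything in it is synthetic and assumed for illustration: a “zip segment” feature that correlates with a protected group, and historical hiring labels biased against that group. A screen built to replicate past outcomes never sees the protected characteristic, yet it reproduces the disparity through the proxy.

```python
import random

random.seed(0)

# Synthetic applicant pool. The protected characteristic (group) is
# never shown to the screen; the zip segment is the proxy.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected characteristic
    in_high_zip = (random.random() < 0.9) if group == "A" else (random.random() < 0.1)
    skill = random.random()
    # Biased historical labels: equally skilled group-B applicants
    # were hired only half as often as group-A applicants.
    hired = skill > 0.5 and (group == "A" or random.random() < 0.5)
    applicants.append((group, in_high_zip, skill, hired))

def hire_rate(rows):
    return sum(hired for *_, hired in rows) / len(rows)

# A naive screen "trained" to replicate past outcomes: pass applicants
# from whichever zip segment had the higher historical hire rate.
high_zip = [a for a in applicants if a[1]]
low_zip = [a for a in applicants if not a[1]]
favored_segment = hire_rate(high_zip) > hire_rate(low_zip)

def screen_passes(applicant):
    return applicant[1] == favored_segment

# The protected characteristic never entered the screen,
# yet pass rates diverge sharply by group.
for g in ("A", "B"):
    rows = [a for a in applicants if a[0] == g]
    rate = sum(screen_passes(a) for a in rows) / len(rows)
    print(f"group {g}: pass rate {rate:.0%}")
```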

Compounding these problems, many systems operate as a “black box,” meaning vendors of algorithmic systems do not disclose how inputs lead to decisions. Systems may rely on inaccurate or biased data and may not be designed to enable anyone to understand or explain a particular hiring decision. Because technology conveys a sense of objectivity and scientific rigor, employers may not question automated, essentially unreviewable decisions.

The answer to these concerns is not to simply return to human decisionmaking. Subjective decisionmaking practices have long perpetuated discrimination while being very difficult to challenge.

Used appropriately, technology can serve as a tool to support data-driven efforts to measure how bias operates at different stages of the employment process. Technology can help employers learn when and how bias occurs—whether in the recruitment phase, résumé review, the interview process, or deciding pay and promotions. Algorithms can play a powerful role in improving decisionmaking by identifying job-related criteria and behaviors, as well as patterns of hidden bias.
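One long-standing example of such a measurement is the “four-fifths rule” from the 1978 Uniform Guidelines on Employee Selection Procedures, which treats a group selection rate below 80 percent of the highest group’s rate as evidence of adverse impact. The sketch below applies that check to hypothetical screening results; the counts and group labels are illustrative assumptions.

```python
# A minimal four-fifths (80 percent) rule check. A selection rate for
# any group below 80 percent of the highest group's rate is treated
# as evidence of adverse impact under the 1978 Uniform Guidelines.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applied)."""
    return {g: sel / app for g, (sel, app) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top >= threshold, rate / top) for g, rate in rates.items()}

# Hypothetical results of an automated résumé screen.
outcomes = {"men": (48, 100), "women": (27, 100)}
for group, (passes, ratio) in four_fifths_check(outcomes).items():
    flag = "ok" if passes else "possible adverse impact"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

The same check can be run at each stage of a hiring funnel (recruitment, résumé review, interviews, pay and promotion decisions) to locate where disparities arise.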

To harness AI’s potential, we need to ensure robust safeguards to address the new risks of AI systems. I share three strategies to chart a way forward:

1. Ensure a third-party audit of the development and use of AI tools.

A third-party auditing system would promote accountability by employers and vendors while having flexibility to evolve with technology and protect intellectual property. The government has an important role in creating an auditing framework and core requirements for retention and documentation of technical details, including disclosing training data for review during an investigation.

Independent auditors could follow established principles in the computer science and test validation fields, informed by workers, civil rights principles, and the public. This would promote meaningful transparency and external review while enabling standards to adapt with technological advances.

2. Adopt a workers’ bill of rights.

A workers’ bill of rights for algorithmic decisions would ensure workers understand how decisions about them are made and would provide a process to challenge biased or inaccurate decisions. These four areas are an important starting place:

1) Notice and consent: Workers should have the right to know and consent to the information collected to screen and evaluate them and to understand how personal information is stored, sold, or otherwise used. Employees need to understand how they will be evaluated so they can determine whether they need to seek reasonable accommodation for a disability under the Americans with Disabilities Act or otherwise have reason to believe the automated screen may be inaccurate.

2) Right to an explanation: To address concerns about fairness and accuracy, employers should explain the information considered for an applicant and the rationale for a decision in terms that a reasonable worker could understand.

3) Process for redress: Workers should have the right to view the data collected on them and have an opportunity to correct errors through an accessible process with human review and redress for harms.

4) Accountability: Employers and vendors have a responsibility to ensure systems are auditable by third parties, including in litigation or a government investigation. This includes retaining records on data used to train algorithms, as well as documentation of decisions made by algorithmic systems (a sketch of such a record follows this list).
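As a rough illustration of what such record retention might look like in practice, the sketch below defines a minimal decision record in Python. The fields are assumptions for illustration, not a prescribed standard; the point is that each automated decision carries enough documentation to be reconstructed and reviewed later.

```python
# A sketch of the kind of decision record an employer or vendor might
# retain to make an algorithmic screen auditable. All fields are
# illustrative assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningRecord:
    applicant_id: str
    model_version: str       # which model produced the decision
    training_data_ref: str   # pointer to the retained training-data snapshot
    inputs: dict             # features the model actually saw
    score: float
    decision: str            # e.g., "advance" or "reject"
    explanation: str         # plain-language rationale (supports right 2)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical example record.
record = ScreeningRecord(
    applicant_id="12345",
    model_version="resume-screen-v7",
    training_data_ref="training-snapshot-2019-10",
    inputs={"years_experience": 4, "certifications": 2},
    score=0.62,
    decision="advance",
    explanation="Met minimum experience and certification criteria.",
)
print(record)
```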

3. Update existing federal guidelines.

An update to the 1978 Uniform Guidelines on Employee Selection Procedures would provide valuable guidance on the validation standards for algorithmic screens. A revision could incorporate the latest scientific understanding into unified government principles.

To ensure a future that advances equal opportunity, it is essential that we have robust interdisciplinary engagement and public participation in the creation of safeguards that create meaningful accountability. The Urban Institute has been facilitating cross-sector dialogue, including through Urban’s May 2019 Knowledge Lab on Artificial Intelligence and Employment Equity and an October 2019 convening Urban hosted in collaboration with Upturn, the Leadership Conference on Civil and Human Rights, and the Lawyers’ Committee for Civil Rights Under Law.

By bringing together lawyers, employers, tech developers, computer and data scientists, and industrial and organizational psychologists and other social scientists, we are exploring strategies for ensuring fairness and equity in the use of hiring algorithms and AI.

