As businesses begin to rehire after months of extraordinary job loss, artificial intelligence (AI)-driven hiring screens are becoming increasingly attractive to employers as an alternative to in-person interactions during the COVID-19 pandemic. To winnow down a flood of online job applications efficiently, major employers are using predictive hiring tools that screen and rank resumes, assess candidates through online games, and conduct video interviews that analyze applicants’ facial expressions.
Although many vendors claim their tools reduce subjectivity and discrimination, these systems may base decisions on inaccurate or biased data, reproducing inequity by encoding discrimination from past hiring decisions. During economic downturns, workers face intense competition for fewer jobs along with a heightened risk of discrimination. As employers increasingly adopt hiring assessment technology, they have a critical responsibility to ensure the systems they use don't deepen inequalities.
Ensuring equity in hiring could not be more urgent
Tens of thousands of workers nationwide walked off the job last Monday as the Strike for Black Lives called for fundamental changes to address inequality and systemic racism in solidarity with the Black Lives Matter movement. As the pandemic exacerbates inequality, it has highlighted the impact of widespread occupational segregation, as Black, Latinx, Asian American, and Native American people; women; and people with disabilities have faced higher unemployment rates. And the coronavirus has disproportionately affected communities of color, where workers are overrepresented in low-wage jobs deemed essential. As businesses make statements of solidarity in support of Black Lives Matter and racial justice, they have an important opportunity to invest in substantive changes to tackle the bias in their employment practices.
The complexities of hiring assessment technologies
A vast body of research shows that many long-standing hiring processes are biased. Widely used practices, such as subjective resume review and unstructured interviews, allow stereotypes and inaccurate assumptions to influence hiring decisions. And despite decades of research showing that resumes with African American, Latinx, or Asian-sounding names have been 24 to 36 percent less likely to receive interview requests than otherwise identical resumes with white-sounding names, such practices remain the foundation of many employers' hiring processes.
Further, Black and white Americans report stark differences in their experiences with discrimination. Seventy-two percent of Black people working in science, technology, engineering, and mathematics say a major reason Black and Latinx workers are underrepresented in these jobs is that they face discrimination in recruiting, hiring, and promotions; only 27 percent of white people say the same.
Hiring assessment technology could help expand the applicant pool by measuring abilities directly rather than relying on proxies for talent, such as a college degree, employee referrals, or recruitment from competitors, all of which may exclude qualified workers from historically underrepresented groups. By moving away from traditional criteria, employers could hire from a more diverse pool of high-performing candidates. Yet simply disrupting the current system with technology will not advance equity. Hiring assessment systems reflect the choices of their developers, who may not detect bias in the data, a particularly acute concern given the lack of diversity in the AI field. And even the most sophisticated tech companies struggle to ensure their AI systems are not discriminatory. Two years ago, Amazon abandoned an AI screening program after the system taught itself to prefer male candidates over female candidates, based on the company's past hiring patterns.
To realize the promise of new technology, we must ensure systems are carefully designed to prevent bias and to document and explain their decisions so that their reliability and validity can be evaluated. Without adequate safeguards, algorithmic assessments can perpetuate patterns of systemic discrimination already present in the workforce.
Shared principles can help reduce hiring inequities
Today, civil rights leaders released an important set of Civil Rights Principles to guide tech developers, employers, and policymakers in the development, use, and auditing of hiring assessment technologies. The principles build on a cross-sector convening hosted by the Urban Institute in collaboration with Upturn, the Leadership Conference on Civil and Human Rights, and the Lawyers’ Committee for Civil Rights Under Law in October 2019. They recognize that to prevent discrimination and advance equal opportunity, hiring assessment technologies must be explainable, job-related, and audited. Among the principles:
- Applicants should be notified about how they will be assessed and provided an explanation so they may seek redress under existing civil rights protections or request reasonable accommodations for a disability if needed.
- Hiring assessments should measure job-related traits and skills. Organizations should study and identify job-related criteria and be able to describe what an assessment is measuring and why.
- Hiring assessments should be regularly and thoroughly audited for discrimination and job-relatedness. Although self-testing is encouraged, assessments should also be audited by independent third parties to ensure greater accountability and impartiality (one simple check such an audit might include is sketched below).
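To illustrate what one such audit check might look like in practice, here is a minimal sketch in Python of the "four-fifths rule" from the US Equal Employment Opportunity Commission's Uniform Guidelines, a common first-pass indicator of adverse impact. The group labels and applicant counts are hypothetical, and this single ratio is only an illustration, not a substitute for a full audit of discrimination and job-relatedness.

```python
# A minimal sketch of one statistical check an adverse impact audit might
# include: the "four-fifths rule" from the EEOC's Uniform Guidelines on
# Employee Selection Procedures. Group labels and counts are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Share of applicants selected within each group."""
    return {group: selected / applied
            for group, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]],
                      threshold: float = 0.8) -> dict[str, dict]:
    """Flag any group whose selection rate falls below 80 percent of the
    highest group's rate, a first-pass indicator of adverse impact."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: {"selection_rate": round(rate, 3),
                    "impact_ratio": round(rate / top_rate, 3),
                    "flagged": rate / top_rate < threshold}
            for group, rate in rates.items()}

# Hypothetical counts: (applicants selected, applicants assessed)
results = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
for group, stats in results.items():
    print(group, stats)
# group_a passes (impact ratio 1.0); group_b is flagged (0.625 < 0.8)
```

A real audit would go well beyond this ratio, examining statistical significance, intersectional subgroups, and whether any criteria driving the disparity are actually job-related.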
Finally, the principles provide that meaningful oversight and accountability structures are essential to making equity foundational to hiring practices rather than a back-end consideration aimed only at avoiding compliance issues.
As a former chair of the US Equal Employment Opportunity Commission, I have seen that without clear accountability structures, the complexity and opacity of these systems can mask discrimination by making it difficult, if not impossible, to understand the reason for a selection decision, a concern I discussed in my testimony and letter to Congress. To provide a starting place for advancing equity in hiring, I have collaborated with the civil rights community in developing these principles to guide the development and use of hiring assessment technologies and inform policy approaches.
An important next step will be to operationalize these principles by applying existing legal standards alongside new technical and legal frameworks that address the novel issues hiring technology raises. This effort will require robust interdisciplinary collaboration among computer and data scientists, tech developers, civil rights lawyers, employers, industrial and organizational psychologists, and other social scientists to translate principles of nondiscrimination into concrete actions. The goal is a future that aims not merely to minimize discrimination but to maximize equity.
Christopher Livingston and Michaela Morrissey contributed to this post.