Bias in algorithms has become a hot topic: IBM, Microsoft, Amazon, and others have recently endured criticism for bias in their facial recognition artificial intelligence (AI) services, and Google came under fire last year for its gender-biased machine translations. Meanwhile, the Trump administration recently launched a nationwide AI initiative.
Governments in particular have reason to be worried as they turn to data and algorithms to make public policy decisions. State courthouses, for example, came under scrutiny for allowing judges to use a risk-assessment algorithm called COMPAS to make decisions about bail, sentencing, and parole for defendants.
In response, tech companies and universities have created tools to mitigate bias—from the What-If tool at Google, to Reductions for Fair Machine Learning at Microsoft, to the AI Fairness 360 toolkit at IBM, to Fairness Flow at Facebook, to the Fairness Tool at Accenture, to Aequitas at the University of Chicago, and so on. And researchers here at the Urban Institute and elsewhere have discussed AI bias issues and potential solutions.
The need for public participation is another part of the conversation surrounding AI bias. As the AI Now Institute recently wrote, key actors need to democratize the ethics of AI as much as possible so that the conversation is accessible to the general public, not just technical experts. To apply AI to ethical decisionmaking, the public needs to be involved.
But even if we democratize the conversation around AI, algorithms shouldn’t shoulder all the blame. Indeed, some argue that AI is making these biased decisions more transparent, making discrimination easier to identify. And biased decisionmaking still exists without AI. Government officials could still rely on biased data to drive policy decisions or be otherwise biased in their decisionmaking.
To understand bias in policy decisions, we need to understand bias in data
The cause of biased decisionmaking is often biased data. An infamous example is predictive policing algorithms powered by police arrest data. Low-income, minority communities subject to higher rates of police patrols are more likely to have higher arrest rates. These biased data are fed into an algorithm, directing more police to low-income, minority neighborhoods, generating additional arrests and feeding more biased data into the algorithm in a destructive feedback loop. The algorithms may or may not be biased, but the data certainly are. Data scientists call this the “garbage in, garbage out” problem.
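The feedback loop described above can be made concrete with a toy simulation. This is an illustrative sketch, not any real department's model: two neighborhoods have identical true crime rates, but patrols are allocated by observed arrest shares, and arrests occur only where police are looking. All numbers are invented.

```python
def patrol_shares(initial_share, rounds):
    """Return neighborhood A's patrol share over successive rounds."""
    true_crime = [0.5, 0.5]   # identical underlying crime rates
    share_a = initial_share   # biased starting patrol allocation
    history = []
    for _ in range(rounds):
        # arrests are proportional to patrol presence times crime in each area
        arrests = [share_a * true_crime[0], (1 - share_a) * true_crime[1]]
        # next round's patrols are allocated by observed arrest shares
        share_a = arrests[0] / (arrests[0] + arrests[1])
        history.append(share_a)
    return history

# Even though both neighborhoods have the same crime rate, an initial
# 60/40 patrol bias is reproduced round after round -- the arrest data
# never self-correct toward the true, equal crime rates.
print(patrol_shares(0.6, 5))
```

The point of the sketch is that the "garbage" never washes out: because the data reflect where police patrolled rather than where crime occurred, the initial bias persists indefinitely.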
In response to these issues, groups like the Future of Privacy Foundation recommend that cities such as Seattle “develop or obtain tools for evaluating the representativeness of the city’s open data” as a key step to ensuring equity and fairness within its open data program. Outside of open data, we believe this principle applies to any data that drive decisionmaking in any part of government.
When datasets have information on the attributes we care about, we can typically directly measure the bias in the data. For example, if we have data on gender, pay, title, and years of experience, we can get a pretty good idea of the gender bias in pay within a government agency.
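A minimal sketch of that kind of direct measurement might look like the following. The records, field names, and pay figures are made-up assumptions for illustration, not real agency data; a serious analysis would also control for years of experience and other factors.

```python
from statistics import mean

# Hypothetical personnel records with the attributes we care about
records = [
    {"gender": "F", "pay": 62000, "title": "Analyst", "years": 5},
    {"gender": "M", "pay": 68000, "title": "Analyst", "years": 5},
    {"gender": "F", "pay": 81000, "title": "Manager", "years": 10},
    {"gender": "M", "pay": 90000, "title": "Manager", "years": 10},
]

def pay_gap_by_title(rows):
    """Average male-minus-female pay gap within each job title."""
    gaps = {}
    for title in {r["title"] for r in rows}:
        men = [r["pay"] for r in rows if r["title"] == title and r["gender"] == "M"]
        women = [r["pay"] for r in rows if r["title"] == title and r["gender"] == "F"]
        gaps[title] = mean(men) - mean(women)
    return gaps

print(pay_gap_by_title(records))  # e.g. a $6,000 gap for analysts, $9,000 for managers
```

Because gender is recorded directly, the bias can be computed from the data themselves; the harder case, discussed next, is when it is not.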
But what about when we don’t have that information? Many datasets, for example, have geographic identifiers, such as points on a map, but no data on gender, income, race, or other sensitive information with which we can measure bias. In cases like these, it’s difficult to determine the bias in the data without the help of expert analysts—exactly the situation the AI Now Institute recommends we avoid.
In a new working paper, we propose a prototype bias assessment tool that could empower advocates, community groups, and government decisionmakers to quickly analyze bias in geospatial point data. This tool, which we’ve built but have not yet released publicly, would allow users to upload a dataset, click a button, and view a report on bias in the dataset relative to underlying census data. And because it runs on the latest serverless cloud technology, it will likely cost us less than $1,000 a year to host and maintain.
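The core comparison such a tool performs can be sketched simply: for each census geography, compare the dataset's share of points with that geography's share of the underlying population. This is our rough illustration of the idea, not the prototype's actual code; the tract IDs and counts are invented, and the real tool works against actual census data.

```python
def representation_ratios(point_counts, tract_populations):
    """Ratio of each tract's share of points to its share of population.
    1.0 means proportional representation; below 1.0 means underrepresented."""
    total_points = sum(point_counts.values())
    total_pop = sum(tract_populations.values())
    return {
        tract: (point_counts.get(tract, 0) / total_points) / (pop / total_pop)
        for tract, pop in tract_populations.items()
    }

points = {"tract_A": 80, "tract_B": 20}          # e.g. uploaded service locations
population = {"tract_A": 5000, "tract_B": 5000}  # census population per tract
print(representation_ratios(points, population))
# tract_A is overrepresented, tract_B underrepresented
```

A report built on ratios like these lets a non-expert see at a glance which communities a dataset over- or undercounts.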
We plan to release the full tool later this year to help officials reduce the bias in their decisionmaking and to support community members and advocates in holding their decisionmakers accountable. We hope the tool will lead to improved data collection processes, especially from underrepresented groups, and increased awareness of the limitations of datasets used in downstream analyses. We plan to improve the tool so it can more accurately reflect and measure bias and help decisionmakers understand how to mitigate it.
Data and algorithmic transparency are critical to balancing the rights of citizens with the potential for public good arising from the government’s increased use of data and algorithms. We recognize that our tool is just one of a collection of bias measurement tools needed to democratize the conversation around algorithmic decisionmaking, and we look forward to more development in this space.
We hope this toolkit will improve and grow to empower the public and decisionmakers to better understand how bias in data and algorithms shapes their everyday lives.