
Vivienne Ming, a theoretical neuroscientist and cofounder of Socos Labs in Berkeley, California, defines artificial intelligence (AI) as “any autonomous and artificial system that can make a decision under uncertainty and make expert human judgements cheaper, faster, and increasingly, in some domains, better than a human can.”
Using that definition, AI powers many of today’s popular technological services: ride-sharing, email communication, facial recognition, mobile banking, music recommendations, and social media.
AI has already been widely applied across business, social, and government sectors. But if it’s not applied carefully, AI can lead to distorted results or decisions and potentially exclude historically marginalized or underrepresented populations.
On a recent episode of the Urban Institute’s podcast, Critical Value, Ming discusses three approaches to minimize the risk of AI supporting problematic or biased outcomes.
1. Conduct regular audits
If an AI system is trained on biased data, it can learn and reproduce bias that originated in discriminatory human decisions and practices.
Ming argues that public or private entities should audit both how their AI systems operate and the data those systems use to make decisions. Audits can help gauge, for example, the level of bias in a dataset or how much value AI adds to a process. They could also promote transparency in organizations’ operations and decisionmaking.
“This is something that I would love for the industry to take on…as a standard,” said Ming. “Auditing became a norm in the financial world as part of simple good governance and board work. You can’t have investment if you don’t know what’s going on in a company.”
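The podcast doesn’t prescribe a specific audit methodology, but one common starting point is to compare a system’s decision rates across groups. Below is a minimal Python sketch, using entirely hypothetical records and decisions, that computes per-group selection rates and the demographic parity difference, one standard fairness metric an audit might track.

```python
from collections import defaultdict

# Hypothetical audit records: each row is (group, decision), where
# decision is 1 if the system approved the case and 0 otherwise.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

# Tally approvals and totals per group.
approved = defaultdict(int)
total = defaultdict(int)
for group, decision in records:
    total[group] += 1
    approved[group] += decision

# Selection rate per group: the share of cases the system approved.
rates = {group: approved[group] / total[group] for group in total}
print("Selection rates:", rates)

# Demographic parity difference: the gap between the highest and
# lowest group selection rates. Zero means equal treatment on this
# metric; larger gaps flag the system for closer human review.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity difference: {gap:.2f}")
```

A real audit would go well beyond a single metric, but even this simple comparison surfaces the kind of disparity a governance process can then investigate.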
2. Involve strong regulatory institutions
According to Ming, AI can unlock new realms of scientific research and tackle challenging social issues. But to reach its greatest potential, AI must meet existing standards and any new standards we establish. Strong regulatory institutions can design frameworks of new technical standards, ethical guidelines, and public policies that maximize the benefits of AI.
“We could look at the WHO, CDC, or other federal agencies as the different models of the kind of institutions I am talking about,” she explained.
“I love this idea of having empowered institutions like NGOs coming in and doing what any good institutional regulatory body should do, which is be an expert that loves its industry.”
Ming also advocated for “funding or government contracts towards groups using data in the right way.”
3. Empower and educate people
Bridging the AI education gap may help society deal with AI’s impacts as they come. People trained in a range of AI-related skills, including machine learning, programming, distributed computing, and data science and engineering, can provide insight into data analysis, management, and regulation.
People trained in AI can also promote equity in design and implementation and look out for potential discrimination.
Ming also offered an example of how people trained in AI could form new organizations and partnerships. “There is this idea of ‘data trusts,’ which are legal entities that represent the interest of their members,” she explained. “You would contribute your data to a data trust collectively with other groups that share your interest in how data is managed. Then, groups like mine could go build a system. All you need is the infrastructure to run it.”
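The mechanics of a data trust are legal as much as technical, but a toy sketch can make the access-control idea concrete. The Python below is purely illustrative, with hypothetical names and policies: members pool records, and the trust releases them only for purposes the members collectively approved.

```python
from dataclasses import dataclass, field

@dataclass
class DataTrust:
    """Toy data trust: members contribute data under a shared policy,
    and a trustee gates access to approved purposes only."""
    approved_purposes: set          # uses the members agreed to
    _records: list = field(default_factory=list)

    def contribute(self, member_id: str, data: dict) -> None:
        """A member deposits data into the trust."""
        self._records.append({"member": member_id, **data})

    def request_access(self, requester: str, purpose: str) -> list:
        """Release data only for purposes the members approved,
        stripping member identifiers before release."""
        if purpose not in self.approved_purposes:
            raise PermissionError(
                f"{requester!r} denied: {purpose!r} is not an approved purpose"
            )
        return [{k: v for k, v in r.items() if k != "member"}
                for r in self._records]

# Members pool data; a research group can use it only as agreed.
trust = DataTrust(approved_purposes={"public-health research"})
trust.contribute("m-001", {"steps_per_day": 7400})
trust.contribute("m-002", {"steps_per_day": 10200})
print(trust.request_access("research-lab", "public-health research"))
```

In practice the “infrastructure” Ming mentions would layer contracts, governance, and security on top of something like this gatekeeping logic.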
AI can support better decisions and greater efficiency across society, but we need to prepare and equip institutions and people to maximize those benefits and ensure they are shared equitably.