Bias in AI decision-making has become increasingly problematic in recent years as companies expand the use of AI systems across their operations. On one hand, AI can help reduce the impact of human biases in decision-making. On the other, AI can make the bias problem worse.
AI systems learn to make decisions based on the data and algorithms humans put into them. Often, AI systems inherit human biases because they are trained on data containing human decisions. Evidence suggests that "AI models can embed human and societal biases and deploy them at scale."
Research has found that AI bias appears in algorithms in several ways. AI natural language processing models have been found to contain gender, age, race, and sexual orientation stereotypes. One tech company stopped using a hiring algorithm when it found that the algorithm favored applicants based on words commonly found on men's resumes. A state using a criminal justice algorithm found that the algorithm "mislabeled African-American defendants as 'high risk' at nearly twice the rate it mislabeled white defendants."
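A disparity like the one in the criminal justice example is typically quantified by comparing false positive rates across groups, i.e., how often truly low-risk individuals are flagged as high risk. The following is a minimal illustrative sketch on made-up data; the group labels and numbers are hypothetical and not drawn from the study described above.

```python
def false_positive_rate(labels, predictions):
    """Fraction of truly low-risk cases (label 0) flagged as high risk (prediction 1)."""
    negatives = [p for l, p in zip(labels, predictions) if l == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Toy data: label 1 = actually reoffended, 0 = did not;
# prediction 1 = flagged high risk, 0 = flagged low risk.
labels_a = [0, 0, 0, 0, 1, 1]
preds_a  = [1, 1, 0, 0, 1, 0]   # group A: 2 of 4 low-risk cases mislabeled

labels_b = [0, 0, 0, 0, 1, 1]
preds_b  = [1, 0, 0, 0, 1, 1]   # group B: 1 of 4 low-risk cases mislabeled

fpr_a = false_positive_rate(labels_a, preds_a)  # 0.5
fpr_b = false_positive_rate(labels_b, preds_b)  # 0.25
print(fpr_a / fpr_b)  # group A mislabeled at twice group B's rate
```

Auditing a deployed model with this kind of group-wise metric is one concrete way to detect the pattern the research describes before it causes harm at scale.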
One cause of bias issues in AI may be lack of diversity. Data from the Bureau of Labor Statistics shows that the individuals who write AI programs are still "largely white males," and other studies have shown that "only 12% of leading machine learning researchers are women."
What can business and policy leaders do to minimize bias in AI going forward? Among other measures, here are six steps companies should consider:
- Stay up-to-date on the fast-moving AI field and be aware of those situations in which AI can help correct bias and those in which AI can exacerbate bias
- Establish responsible processes and practices to mitigate bias in AI systems
- Engage in fact-based conversations around potential human biases
- Consider how humans and machines can work together to mitigate bias
- Invest more and make more data available for bias research
- Focus on diversity in the AI field