Bias in AI can result from assumptions made in the machine learning process, or from data that is imbalanced or incomplete and does not truly represent the relevant population. Examples of such skewed data include datasets that are mislabeled or misrepresent reality, systematic errors in data collection, or valuable data that is excluded entirely, all of which produce biased outputs.
The implications of bias in AI are widespread and can affect recruiting processes, credit referencing, and insurance decisions, to name a few. For example, if an employer's AI recruiting tool is trained on historical data about the company's past and current employees, most of whom are male, the system may incorrectly "learn" that the ratio of preferable candidates should match that history, resulting in a biased outcome. As AI plays a larger role in decision making across industries, the risks of bias grow with the sheer scale of data that machines can process.
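The recruiting example can be sketched in a few lines of code. This is a hypothetical illustration (the records, groups, and counts are invented): a naive model that scores candidates by their group's historical hire rate simply reproduces the skew in its training data.

```python
# Hypothetical historical hiring records: (gender, hired) pairs.
# The counts are invented purely to illustrate an imbalanced dataset.
records = (
    [("male", True)] * 80 + [("male", False)] * 10 +
    [("female", True)] * 5 + [("female", False)] * 5
)

def historical_hire_rate(group):
    """Fraction of past candidates in `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A model that treats the historical rate as a candidate's "score"
# inherits the imbalance rather than assessing individual merit.
print(f"male:   {historical_hire_rate('male'):.2f}")    # ~0.89
print(f"female: {historical_hire_rate('female'):.2f}")  # 0.50
```

The point is not that any real system is this crude, but that any model optimized to match historical outcomes will, absent correction, reproduce the imbalance in those outcomes.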
The risks associated with bias in AI can result in statutory, contractual, and common law liability. Laws that prohibit discrimination, like the Fair Housing Act in the United States and the Equality Act in the United Kingdom, provide examples of how biased AI could lead to liability for organizations.
There are steps that can be taken, if not to completely remove bias in AI, then at least to mitigate it. Examples include choosing a suitable AI provider, performing regular audits of algorithms, and employing a diverse programming or control team with antibias training and a culture of transparency.
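One form a regular audit can take is checking the model's decisions for disparities between groups. Below is a minimal sketch of one such check, demographic parity difference (the gap in positive-outcome rates across groups). The group names, decision data, and 0.1 threshold are illustrative assumptions, not a legal or regulatory standard.

```python
def demographic_parity_diff(outcomes):
    """Gap between the highest and lowest positive-decision rates.

    outcomes: dict mapping group name -> list of 0/1 model decisions.
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample of recent automated decisions.
decisions = {
    "group_a": [1, 1, 1, 0, 1],  # 80% positive
    "group_b": [1, 0, 0, 0, 0],  # 20% positive
}

gap = demographic_parity_diff(decisions)
if gap > 0.1:  # tolerance agreed in advance by the audit team
    print(f"Audit flag: parity gap {gap:.2f} exceeds threshold")
```

A real audit program would track several complementary metrics over time, since no single statistic captures every form of bias, but even a simple scheduled check like this surfaces disparities before they accumulate into liability.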
As we look to the future, the consequences of bias in AI are likely to grow: the continued focus on diversity and inclusion will probably drive new or updated legislation that biased AI systems may fall foul of. There is also the possibility of AI-specific legislation that places obligations on companies regarding their use of AI.