The growing use of artificial intelligence (AI), especially in sensitive areas like recruitment, criminal justice and healthcare, has stirred a debate about bias and fairness. Yet human decision-making in these and other domains can also be flawed, shaped by individual and societal biases that are often unconscious. Will AI's decisions be less biased than human ones? Or will AI make these problems worse? CMS Wire brought this interesting information to us in their article, "Dealing With AI Biases, Part 2: Inherited Biases From Data."

The simplest response to AI bias is to acknowledge the bias and use the trained algorithms judiciously. Not using a biased AI in populations where its bias marginalizes certain groups could limit its potential damage, but this also limits the benefits it could bring.

The underlying data, rather than the algorithm itself, are most often the main source of the issue. Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities. Bias can also be introduced into the data through how they are collected or selected for use.
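To make this concrete, here is a minimal sketch (with entirely hypothetical hiring data) of how a model trained on biased historical decisions simply reproduces that bias. The model here just memorizes each group's historical selection rate, which is what any statistical learner will approximate when the data themselves are skewed:

```python
# Minimal sketch with hypothetical data: a model trained on biased
# historical hiring decisions inherits and reproduces that bias.
from collections import defaultdict

# Hypothetical past decisions as (group, hired) pairs with a skewed history:
# group A was hired 60% of the time, group B only 20%.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 20 + [("B", 0)] * 80

def train_base_rates(data):
    """'Train' by memorizing each group's historical hire rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in data:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

rates = train_base_rates(history)
print(rates)  # {'A': 0.6, 'B': 0.2}

# Disparate-impact ratio: well below the common "four-fifths" (0.8)
# rule of thumb, so the learned model is measurably biased.
print(rates["B"] / rates["A"])  # ~0.33
```

The point of the sketch is that nothing in the algorithm is "unfair"; the skew comes entirely from the decisions recorded in the training data.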

Most organizations have little insight into how AI systems reach their decisions and, as a result, into how those results are being applied across the many fields where AI and machine learning are used. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms. It is used to describe an AI model, its expected impact and its potential biases. Why is this important? Because explainability becomes critical when the results can have an impact on data security or safety.
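One simple form of explainability is per-feature attribution: for a transparent linear scoring model, each feature's contribution to a prediction can be listed directly. The sketch below uses hypothetical weights and feature names for illustration; it is not any particular product's method:

```python
# Minimal sketch (hypothetical weights and features): a linear scoring
# model whose predictions can be explained feature by feature.
weights = {"years_experience": 0.5, "certifications": 0.3, "referral": 1.2}

def explain(features):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = explain({"years_experience": 4, "certifications": 2, "referral": 1})
print(round(score, 2))  # 3.8

# List contributions largest-first, so a reviewer can see what drove
# the score -- and spot a suspect feature (e.g. one acting as a proxy
# for a protected attribute).
for name, contrib in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {contrib:+.1f}")
```

An audit of such a model might flag, for example, that "referral" carries an outsized weight and could encode network effects that disadvantage some groups.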

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.