Bias in technology exists. No one is arguing that. However, there are steps we can take to minimize it, and that starts with understanding how the technology works.
Slate recently reported on this topic in its article, “Facebook Apologizes After Its AI Mislabels Video of Black Men as ‘Primates’.” Facebook has disabled its topic recommendation feature after the artificial intelligence (AI)-powered tool mislabeled a video of Black men as “primates.”
Human biases are well-documented, from implicit association tests that demonstrate biases we may not even be aware of, to field experiments that demonstrate how much these biases can affect outcomes. Over the past few years, society has started to wrestle with just how much these human biases can make their way into AI systems — with harmful results.
The problem is also not new. Back in 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination: the computer program it used to select applicants for interviews was biased against women and applicants with non-European names.
Thirty years later, algorithms have grown considerably more complex, but we continue to face the same challenge. Using an algorithm didn’t cure biased human decision-making. But simply returning to human decision-makers would not solve the problem either.
The growing use of AI in recruitment, criminal justice and healthcare has stirred the debate about bias and fairness. Yet human decision-making in these and other domains can also be flawed, shaped by individual and societal biases that are often unconscious.
At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.
Unfortunately, most organizations have little insight into how their AI systems reach the decisions they do, or into how those results are then applied across the many fields where AI and machine learning are used. Explainable AI allows users to understand and trust the results and output created by machine learning algorithms.
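One practical way to make a model’s behavior more transparent is to examine which inputs most influence its predictions. The snippet below is a minimal, illustrative sketch using scikit-learn’s permutation importance on a placeholder dataset; it is one common explainability technique, not a description of Data Harmony’s methods, and the dataset and model shown are assumptions chosen only for demonstration.

```python
# Minimal sketch of one explainability technique (permutation importance).
# Dataset and model are illustrative placeholders, not a real deployment.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops indicate features the model relies on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features, so a reviewer can sanity-check
# whether the model's decisions rest on sensible inputs.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Surfacing this kind of information is what lets a user ask whether a model is leaning on legitimate signals or on proxies for protected attributes.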
Data Harmony is Access Innovations’ AI suite of tools that leverages explainable AI for efficient, innovative and precise semantic discovery of new and emerging concepts.
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.