Artificial intelligence (AI) systems can unfairly penalize certain segments of the population—especially women and minorities. Researchers and tech companies are figuring out how to address this issue. This interesting topic came to us from Law.com in their article, “Implementing Artificial Intelligence Requires Diverse Sets of Data to Avoid Biases.”

This bias has materialized in AI systems being less accurate at identifying the faces of dark-skinned women, offering women lower credit-card limits than their husbands, and making assumptions in the legal system about repeat offenders based on race and gender.

Algorithms have grown considerably more complex, but we continue to face the same challenge. AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas. It is important to understand what data sets go into AI systems and how those systems can be configured to reduce unconscious bias.
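One concrete way to start understanding a data set's effect is to compare a model's accuracy across demographic groups. The sketch below is a minimal, illustrative audit in Python; the group names and records are invented for demonstration and do not come from any real system.

```python
# Minimal sketch of a fairness audit: compare accuracy across
# demographic groups to surface the kind of disparity described above.
# The groups and records here are hypothetical examples.

def accuracy_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
rates = accuracy_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates)  # per-group accuracy
print(gap)    # a large gap between groups is a red flag worth investigating
```

A check like this does not prove a system is fair, but a large accuracy gap between groups is a signal to examine how each group is represented in the training data.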

Melody K. Smith

Sponsored by Access Innovations, the world leader in thesaurus, ontology, and taxonomy creation and metadata application.