Bias in technology is a conversation that continues to develop and evolve. Fortunately, the conversation is also sparking change. The Mercury News brought this interesting information to our attention in their article, “Google AI researcher’s exit sparks ethics, bias concerns.”
Bias in artificial intelligence (AI) technology is the underlying prejudice in the data used to create AI algorithms, which can ultimately result in discrimination and other social harms. AI has sparked a technological revolution, and while it has yet to take over the world, bias remains a pressing concern, and an ethical one.
Human biases are well documented, from implicit association tests that reveal biases we may not even be aware of, to field experiments that demonstrate how much those biases can affect outcomes. Even as society has begun to wrestle with how readily human biases make their way into AI systems and cause harm, many companies are moving to deploy AI across their operations. It has never been more important to be aware of those risks and to work to reduce them.
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.