Regardless of how much data artificial intelligence (AI) has available, there will always be unforeseeable situations in real-world deployments. Unfortunately, socially situated learning remains an open challenge for AI because AI systems must learn how to interact with people to seek out the information they lack. These insights come from the Proceedings of the National Academy of Sciences of the United States of America (PNAS) article, “Socially situated artificial intelligence enables learning from human interaction.”

AI is becoming seamlessly integrated into our everyday lives, augmenting our knowledge and helping us avoid traffic, find friends, choose the perfect movie, and even cook a healthier meal. It also has a significant impact on many aspects of society and industry, ranging from scientific discovery, healthcare, and medical diagnostics to smart cities, transportation, and sustainability. Training AI for these real-world situations has traditionally meant inundating the technology with data. When it comes to social situations, however, the approach may have to be different.

Deep learning involves training AI systems on examples of how humans have made decisions in similar situations. Those examples implicitly carry the biases and ethical values of the humans involved. Humans, however, are not all the same.
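To make that concrete, here is a minimal sketch (not from the PNAS article) of how a model trained on logged human decisions simply imitates them, biases included. All data, feature names, and labels below are hypothetical, for illustration only.

```python
# Minimal sketch: a model trained on logged human decisions inherits
# whatever patterns -- including biases -- those decisions contain.
from sklearn.linear_model import LogisticRegression

# Each row is a past situation; each label is the decision a human made.
# Hypothetical feature order: [years_experience, interview_score]
past_situations = [
    [1, 60], [2, 65], [5, 80], [7, 85],
    [3, 70], [8, 90], [4, 75], [6, 82],
]
human_decisions = [0, 0, 1, 1, 0, 1, 1, 1]  # 1 = approved, 0 = rejected

model = LogisticRegression()
model.fit(past_situations, human_decisions)

# The model now imitates the human decision-makers: if their past
# judgments were skewed, its predictions will be skewed the same way.
print(model.predict([[4, 78]]))
```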

The real challenge comes from organizations that have little insight into how AI systems make their decisions and how to act on the results of those decisions. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms.
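As one illustrative technique (our example, not something the article prescribes), a model’s built-in feature importances can give users a first answer to “why did the system decide this?” A short Python sketch, again with hypothetical data and feature names:

```python
# Minimal sketch of one explainability technique: reporting which input
# features most influenced a trained model, rather than leaving its
# output a black box. Data and feature names are hypothetical.
from sklearn.tree import DecisionTreeClassifier

feature_names = ["age", "income", "credit_history_length"]
X = [
    [25, 40000, 2], [40, 85000, 15], [35, 60000, 8],
    [50, 30000, 20], [28, 52000, 4], [45, 95000, 18],
]
y = [0, 1, 1, 0, 0, 1]  # hypothetical loan decisions

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Print each feature's share of influence on the model's decisions.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```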

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.