No matter how much data artificial intelligence (AI) is given, real-world deployments will always present unseen situations. Socially situated learning remains an open challenge for AI because systems must learn to interact with people in order to seek out the information they lack. These findings come from the Proceedings of the National Academy of Sciences of the United States of America (PNAS) article, “Socially situated artificial intelligence enables learning from human interaction.”

AI is becoming seamlessly integrated into our everyday lives, augmenting our knowledge, helping us avoid traffic, find friends, choose the perfect movie and even cook a healthier meal. It also has a significant impact on many aspects of society and industry, ranging from scientific discovery, healthcare and medical diagnostics to smart cities, transport and sustainability. Training AI for these real-world situations has traditionally been done by flooding the technology with data. When it comes to social situations, the approach may have to be different.

Deep learning involves training AI systems on examples of how humans have made decisions in similar situations. Those examples implicitly carry the biases and ethical values of the humans who produced them, so an AI system’s decisions come to reflect those biases. Humans, however, are not all the same.
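To make this concrete, here is a minimal sketch in Python of how bias baked into human-labeled examples is absorbed by a model trained on them. Everything here is invented for illustration: the synthetic data, the “skill” and “group” features, and the bias term in the labeling rule.

```python
# Minimal sketch (synthetic data, hypothetical feature names) of how a model
# trained on human decisions inherits the labelers' bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two applicant features: a genuinely relevant "skill" score and an
# irrelevant "group" attribute (0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Hypothetical biased human labelers: approval depends mostly on skill,
# but group-1 applicants are systematically penalized.
human_decision = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, human_decision)

# The learned weight on "group" comes out clearly negative: the model has
# absorbed the labelers' bias even though group says nothing about skill.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```

Running this prints a strongly negative weight on the irrelevant group attribute; the model has simply learned the bias present in its training examples.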

The real challenge comes from organizations that have little understanding of how AI systems reach their decisions, and consequently of how those results are being applied across the fields that use AI and machine learning. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms.
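As one deliberately simple illustration, the sketch below uses permutation importance, a common model-agnostic explainability technique available in scikit-learn, to report which input features a trained model actually relies on. The model, data, and feature names are all invented for the example; this is not how any particular product works.

```python
# Minimal sketch of one common explainability technique: permutation
# importance scores each feature by how much shuffling it hurts the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))      # three hypothetical features
y = X[:, 0] + 0.5 * X[:, 1] > 0     # feature_2 is pure noise by construction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)

# Report which features the model actually relies on.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=1)
for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Shuffling a feature the model depends on degrades its accuracy, so high scores flag the inputs driving the predictions, giving users a first handle on why the model behaves the way it does.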

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.