MIT researchers are working to improve the interpretability of the features used in machine learning models, so that decision makers will be more comfortable acting on the models' outputs. They have developed a taxonomy to help developers craft features that are easy for their target audience to understand. This interesting information came to us from Science Daily in its article, “Building explainability into the components of machine-learning models.”

Explanations that help users understand and trust machine learning models often center on how much certain features contribute to a model's predictions. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction.
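To give a concrete sense of what such a feature-level explanation looks like in practice, here is a minimal sketch (our own illustration, not code from the article or the MIT research) that trains a classifier on invented patient data and uses permutation importance to estimate how much each feature, such as heart rate, contributes to the model's predictions. All feature names and data here are hypothetical.

```python
# Sketch: feature-level explanation via permutation importance.
# Data and feature names are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Invented patient features: resting heart rate, age, cholesterol.
heart_rate = rng.normal(75, 12, n)
age = rng.normal(55, 10, n)
cholesterol = rng.normal(200, 30, n)
X = np.column_stack([heart_rate, age, cholesterol])

# Synthetic risk label driven mostly by heart rate and age.
y = ((0.05 * heart_rate + 0.04 * age + rng.normal(0, 1, n)) > 6.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade test accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["heart_rate", "age", "cholesterol"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

A physician reading the printed scores would see, in rough terms, which features the model leans on most; permutation importance is just one of several ways to surface that kind of explanation.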

Most organizations have little insight into how artificial intelligence (AI) and machine learning systems make their decisions, so they are not prepared to apply the results with confidence. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms. It is used to describe an AI model, its expected impact, and its potential biases. Why is this important? Accountability becomes critical when the results can affect data security or safety.

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.