Machine learning models are growing in complexity, and researchers are working to help the people who have trouble understanding the decisions those models make. This interesting subject came to us from RT Insights in their article, “MIT Researchers Create Explanation Taxonomy to ML Models.”
In finance, healthcare, and logistics, businesses are attempting to implement artificial intelligence (AI) in their decision-making processes, but they are finding that decision makers often reject or doubt AI systems because they do not understand what factors the AI used to reach a particular observation or decision.
Researchers at MIT have been working on a solution to this issue by building a taxonomy inclusive of all the different types of people who interact with a machine learning model. The taxonomy covers not only how best to explain and interpret different features, but also how to transform hard-to-understand features into formats that non-technical users can grasp more easily.
The real challenge is that most organizations have little knowledge of how AI systems make decisions. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms.
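As a rough illustration of what this can look like in practice, the sketch below takes a trained model's raw feature importances and restates the top factors in plain language. This is a minimal, hypothetical example using scikit-learn, not the MIT taxonomy described in the article; the dataset and wording are assumptions chosen for illustration.

```python
# Illustrative sketch only: turn a model's numeric feature importances into
# short plain-language statements a non-technical decision maker can read.
# This is not the MIT researchers' method; it simply shows the general idea
# of translating hard-to-understand model internals into readable text.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Pair each feature with its learned importance and report the top three
# as sentences rather than bare numbers.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)

for name, weight in ranked[:3]:
    print(f"The model relied heavily on '{name}' "
          f"(roughly {weight:.0%} of its overall decision weight).")
```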
Melody K. Smith
Sponsored by Access Innovations, changing search to found.