Machine learning models are growing in complexity, and the people who rely on them often have trouble understanding how they reach their decisions. This interesting subject came to us from RT Insights in their article, “MIT Researchers Create Explanation Taxonomy to ML Models.”
In finance, healthcare, and logistics, businesses are attempting to embed artificial intelligence (AI) in their decision-making processes, but are finding that decision makers often reject or doubt AI systems. When they cannot understand which factors the AI used to reach a particular observation or decision, decision makers cannot or will not rely on its results.
Researchers at MIT have been working on a solution to this issue by building a taxonomy that is inclusive of all the types of people who interact with a machine learning model. The taxonomy covers not only how best to explain and interpret different features, but also how to transform hard-to-understand features into formats that are easier for non-technical users to understand.
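To give a flavor of what that kind of feature transformation can look like, here is a minimal sketch, not taken from the MIT work: it assumes a model trained on standardized inputs and uses scikit-learn's StandardScaler to map a model-ready value back into its original units, so a reviewer sees an age in years rather than a z-score. The feature names and values are hypothetical.

```python
# Hypothetical example: make a standardized feature human-readable again.
import numpy as np
from sklearn.preprocessing import StandardScaler

feature_names = ["age_years", "annual_income_usd"]
raw_training_data = np.array([[34, 52_000], [51, 88_000], [67, 41_000]])

scaler = StandardScaler().fit(raw_training_data)
model_ready = scaler.transform(raw_training_data)

# A single row as the model sees it: hard to interpret on its own.
print(dict(zip(feature_names, model_ready[2])))

# The same row mapped back to original units: easier to explain to a
# non-technical decision maker.
readable = scaler.inverse_transform(model_ready[2].reshape(1, -1))[0]
print(dict(zip(feature_names, readable)))
```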
The real challenge is that most organizations have little insight into how AI systems make decisions. Explainable AI allows users to comprehend and trust the results and output produced by machine learning algorithms.
Melody K. Smith
Sponsored by Access Innovations, changing search to found.