Since machine learning models are imperfect, users need to know when to trust a model's predictions in high-stakes situations. MarkTechPost brought this interesting news to our attention in their article, "MIT researchers have developed a new technique that can enable a machine learning model to quantify how confident it is in its predictions."
Robust machine learning models are helping humans solve complex problems, from cancer screening to autonomous vehicle navigation. It has never been more valuable to implement machine learning thoughtfully and to be discerning about where and how it is deployed.
Deep learning models have made impressive progress in vision, language, and other modalities, particularly with the rise of large-scale pre-training. Such models are most accurate when applied to test data drawn from the same distribution as their training set. In practice, however, the data that models confront in real-world settings rarely matches the training distribution. Models may also be poorly suited for applications where predictive performance is only part of the equation. To be reliable in deployment, models must accommodate shifts in the data distribution and make useful decisions across a broad array of scenarios.
One method for enhancing a model's dependability is uncertainty quantification. An uncertainty-quantification model generates a score along with each prediction that indicates how confident the model is that the prediction is correct.
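To make the idea concrete, the sketch below shows one common way a classifier can attach a confidence score to each prediction, using the normalized entropy of its softmax probabilities. This is an illustrative example under assumed names and values, not the MIT researchers' specific technique.

```python
# Minimal sketch of uncertainty quantification (illustrative, not the MIT method):
# a classifier returns both a predicted label and a confidence score derived
# from the entropy of its softmax probabilities.
import numpy as np

def predict_with_confidence(logits: np.ndarray) -> tuple[int, float]:
    """Return (predicted class, confidence in [0, 1]) for one example."""
    # Softmax converts raw logits into a probability distribution.
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()

    predicted_class = int(probs.argmax())

    # Normalized entropy: 0 means the model is certain, 1 means it is
    # maximally uncertain. Confidence is 1 minus that value.
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    max_entropy = np.log(len(probs))
    confidence = 1.0 - entropy / max_entropy
    return predicted_class, float(confidence)

# Hypothetical logits for a 3-class problem.
label, score = predict_with_confidence(np.array([2.0, 0.1, -1.3]))
print(f"prediction: class {label}, confidence: {score:.2f}")
```

A downstream system could use such a score to route low-confidence predictions to a human reviewer rather than acting on them automatically.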
It does not help that most organizations have little insight into how artificial intelligence (AI) systems make decisions. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms.
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.