Global executives are deploying and scaling artificial intelligence (AI) faster than ever before. In fact, 94% of business leaders agree that AI is critical to success over the next five years. However, some organizations are finding that the benefits of AI arrive more slowly than expected. Computer Weekly brought this interesting information to us in their article, “Why some businesses are failing at AI.”
AI can perform more quickly, accurately, reliably, and impartially than humans on a wide range of problems, from detecting cancer to deciding who receives an interview for a job. But AI systems have also suffered numerous failures. And the increasing ubiquity of AI means that a failure can affect not just individuals but millions of people.
Almost half of the business leaders in a recent survey said the main difficulty they faced was integrating AI into the organization’s daily operations and workflows. As organizations attempted to scale up their AI projects over time, the key impediments they cited were managing AI-related risks (50%), a lack of executive buy-in (50%), and a lack of maintenance or ongoing support (50%).
I would say that the lack of knowledge regarding how AI systems make decisions should be at the top of that list. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms; it describes an AI model, its expected impact, and its potential biases.
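To make that idea concrete, here is a minimal sketch of one common explainability technique, permutation feature importance, which ranks input features by how much shuffling each one degrades a trained model's accuracy. The dataset, model, and parameter choices below are illustrative assumptions, not anything drawn from the article.

```python
# A minimal sketch of one explainable-AI technique: permutation feature
# importance. The dataset and model here are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small public dataset and train an ordinary classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features that most influence the model's decisions.
for name, importance in sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)[:5]:
    print(f"{name}: {importance:.3f}")
```

Output like this gives a non-specialist a starting point for asking why a model behaves the way it does, which is exactly the kind of transparency that builds trust in AI-driven decisions.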
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.