Machines are getting smarter every year, but artificial intelligence (AI) has yet to reach the pinnacle of performance that many once expected. Is this a technology failure or a case of skewed expectations? This interesting topic came to us from TechRepublic in their article, “Deep learning isn’t living up to the hype, but still shows promise.”
While AI still has a long way to go before anything like human-level intelligence is achieved, AI algorithms can excel at specific tasks. Technology giants like Google, Amazon and Meta are banking on that fact.
Sadly, most organizations have little visibility into how AI systems make their decisions or how the results are applied.
Explainable AI helps users understand and trust the results produced by machine learning algorithms. The term describes the methods used to characterize an AI model, its expected impact, and its potential biases. Why is this important? Because the results can have an impact on data security or safety.
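To make the idea concrete, here is a minimal sketch of one simple form of explainability: for a linear scoring model, each feature's signed contribution to a decision can be read off directly, so a user can see exactly why the model scored an input the way it did. The feature names and weights below are invented for illustration; real explainable-AI tooling handles far more complex models.

```python
# Minimal sketch of explainability for a linear scoring model.
# Each feature's contribution is just weight * value, so the
# "explanation" is the full breakdown of the score.

def explain_decision(weights, features):
    """Return the total score and each feature's signed contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    return total, contributions

# Hypothetical loan-risk model: positive contributions raise the risk score.
weights = {"missed_payments": 2.0,
           "income_thousands": -0.05,
           "account_age_years": -0.3}
applicant = {"missed_payments": 3,
             "income_thousands": 55,
             "account_age_years": 4}

score, parts = explain_decision(weights, applicant)

# Print contributions from most to least influential.
for name, part in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {part:+.2f}")
print(f"total risk score: {score:+.2f}")
```

Here the breakdown shows that missed payments dominate the risk score, which is the kind of insight a user would need in order to trust, or challenge, the model's output.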
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.