COVID-19 has been the catalyst for many changes and, unfortunately, a surge in fraudulent digital activity is one of them. This topic came to us from Lombard Odier in their article, “Tackling cyber fraud using both human and artificial intelligence.”
Hackers have shifted their focus to people working remotely online, exploiting the fear created by COVID-19 to target vulnerable people and health services in order to conduct espionage or steal money and sensitive data.
Ransomware incidents continue to increase, with the health sector reporting the second-highest number of attacks.
Despite the advanced security technologies available today, including nascent artificial intelligence (AI) applications that can take matters out of human hands, reports show that over ninety percent of cybercrime results from human error.
Most organizations have little visibility into how AI systems make the decisions they do, or how the results are being applied. Explainable AI allows users to comprehend and trust the output of machine learning algorithms; the term describes methods that make an AI model, its expected impact, and its potential biases understandable. Why is this important? Because explainability becomes critical when the results can affect data security or safety.
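To make this concrete, here is a minimal sketch of what explainability can look like in practice, assuming scikit-learn is available. The fraud-related feature names are illustrative assumptions, not fields from any real system, and a shallow decision tree is just one example of an inherently interpretable model; it is not the specific approach described in the article.

```python
# A minimal sketch of model explainability, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for transaction data; these feature names are
# hypothetical, chosen only to make the explanation readable.
feature_names = ["amount", "hour_of_day", "failed_logins", "new_device"]
X, y = make_classification(n_samples=1000, n_features=4,
                           n_informative=3, n_redundant=1,
                           random_state=42)

# A shallow decision tree is inherently interpretable: every prediction
# can be traced back to explicit threshold rules.
model = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X, y)

# Two common forms of explanation: global feature importances, and the
# human-readable rules the model actually applies to each case.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
print(export_text(model, feature_names=feature_names))
```

Output like this lets an analyst see which signals drive a fraud flag and audit the exact decision path, which is precisely the kind of transparency that matters when results affect security or safety.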
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.