In deep learning training, a deep neural network learns to analyze a predetermined set of data and make predictions about it. The process involves a great deal of trial and error before the network can accurately draw conclusions aligned with the desired outcome. Training data sets for deep learning models can contain billions of samples, curated by crawling the Internet, and trust is an implicit part of that arrangement. This important information came to us from IEEE Spectrum in their article, “Protecting AI Models from ‘Data Poisoning’.”

That trust is now threatened by a new kind of cyberattack called data poisoning, in which data trawled and assembled for deep learning training is intentionally compromised with false patterns. A team of computer scientists has demonstrated two such data poisoning attacks; so far, there is no evidence that either has been carried out in the wild. The attacks are expected to target text-based machine learning models trained on data crawled from the Internet.
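To make the idea concrete, here is a minimal sketch (not from the article, and far simpler than a real web-scale attack) of label-flipping poisoning. An attacker who controls part of a crawled training set flips labels to plant a false pattern; a toy 1-nearest-neighbor classifier trained on the tampered data then misclassifies clean test points. All names and numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "crawled" training set: two well-separated 1-D clusters.
X_train = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y_train = np.concatenate([np.zeros(100, int), np.ones(100, int)])

# Clean held-out test set from the same distributions.
X_test = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])
y_test = np.concatenate([np.zeros(50, int), np.ones(50, int)])

def predict_1nn(X, y, x):
    # 1-nearest-neighbor: copy the label of the closest training sample.
    return y[np.argmin(np.abs(X - x))]

def accuracy(X, y):
    preds = np.array([predict_1nn(X, y, x) for x in X_test])
    return (preds == y_test).mean()

acc_clean = accuracy(X_train, y_train)

# Poisoning: the attacker flips the labels of 40% of class-1 training
# samples, planting a false pattern in the crawled data.
y_pois = y_train.copy()
flip = rng.choice(np.where(y_train == 1)[0], size=40, replace=False)
y_pois[flip] = 0

acc_pois = accuracy(X_train, y_pois)
print(f"clean: {acc_clean:.2f}, poisoned: {acc_pois:.2f}")
```

The model itself is unchanged; only the training labels were tampered with, yet accuracy on clean data drops, which is exactly what makes poisoning hard to spot after the fact.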

Data poisoning can render machine learning models inaccurate, introducing faulty biases and leading to bad decision-making. With no easy fixes available, security professionals must focus on prevention and detection.
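As a sketch of what detection can look like, the snippet below applies a generic label-noise filter (an illustrative technique, not a method named in the article): flag any training sample whose label disagrees with the majority of its nearest neighbors. The dataset and poisoning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D training set: two labeled clusters.
X = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])

# Simulate poisoning: flip 20 of the 200 labels at random.
flipped = rng.choice(200, size=20, replace=False)
y_pois = y.copy()
y_pois[flipped] ^= 1

def suspicious(X, y, k=5):
    # Flag samples whose label disagrees with the majority
    # of their k nearest neighbors.
    flags = set()
    for i in range(len(X)):
        d = np.abs(X - X[i])
        d[i] = np.inf                  # exclude the sample itself
        nbrs = np.argsort(d)[:k]
        if (y[nbrs] == y[i]).mean() < 0.5:
            flags.add(i)
    return flags

flags = suspicious(X, y_pois)
print(f"flagged {len(flags)} samples, {len(flags & set(flipped))} truly poisoned")
```

Filters like this catch crude label flipping in well-separated data, but subtler poisoning that mimics legitimate patterns is much harder to detect, which is why prevention, such as verifying data provenance before training, matters as much as after-the-fact screening.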

Melody K. Smith

Data Harmony is an award-winning semantic suite that leverages explainable AI.

Sponsored by Access Innovations, changing search to found.