Deep learning training is the process by which a deep neural network learns from a predetermined set of data to make predictions. It proceeds by trial and error, and it doesn’t end until the network can draw conclusions accurately. Constructing training data sets for deep learning models involves billions of data samples, curated by crawling the Internet, so trust is an implicit part of the arrangement. This important information came to us from IEEE Spectrum in their article, “Protecting AI Models from ‘Data Poisoning’.”
That trust is being threatened by a new kind of cyberattack called data poisoning, in which crawled data assembled for deep learning training has been intentionally compromised with false patterns. A team of computer scientists has demonstrated two types of data poisoning attacks. Thus far, there is no evidence that these attacks have been carried out in the wild, but they are expected to target text-based machine learning models trained on data from the Internet.
Data poisoning can render machine learning models inaccurate, possibly introducing faulty biases and leading to bad decision-making. With no easy fixes available, security professionals must focus on prevention and detection.
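To see why poisoned labels are so damaging, here is a minimal, hypothetical sketch. It is not one of the demonstrated attacks, which target web-scale text data; it simply trains a toy nearest-centroid classifier on synthetic one-dimensional data, once with clean labels and once with attacker-flipped labels, to show how accuracy collapses.

```python
# Minimal illustration of label-flipping data poisoning.
# All data and the classifier here are invented for demonstration.

def train_centroids(samples, labels):
    """Compute the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(centroids, samples, labels):
    correct = sum(predict(centroids, x) == y
                  for x, y in zip(samples, labels))
    return correct / len(samples)

# Synthetic training data: class 0 clusters near 0, class 1 near 10.
train_x = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
clean_y = [0, 0, 0, 1, 1, 1]

# Poisoned copy: an attacker has flipped several labels.
poisoned_y = [1, 1, 0, 1, 0, 0]

# Held-out test points with their true labels.
test_x = [0.5, 1.5, 8.5, 9.5]
test_y = [0, 0, 1, 1]

clean_acc = accuracy(train_centroids(train_x, clean_y), test_x, test_y)
poisoned_acc = accuracy(train_centroids(train_x, poisoned_y), test_x, test_y)
print(f"clean accuracy: {clean_acc:.2f}")      # clean accuracy: 1.00
print(f"poisoned accuracy: {poisoned_acc:.2f}")  # poisoned accuracy: 0.00
```

The model trained on clean labels classifies every test point correctly, while the same model trained on the poisoned labels gets every test point wrong. At web scale an attacker needs to corrupt only a small fraction of crawled data, which is what makes detection so difficult.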
Melody K. Smith
Sponsored by Access Innovations, changing search to found.