Machine learning adoption has grown rapidly over the past decade, driven in part by the rise of cloud computing, which has made high performance computing more accessible. As vendors integrate machine learning into products across industries, security experts warn of adversarial attacks designed to abuse the technology. This alarming information came to us from CSO in their article, “How data poisoning attacks corrupt machine learning models.”
Machine learning exists everywhere. Most social networking platforms, online video platforms, search engines and other services have some sort of recommendation system based on machine learning. Every time you make an online purchase and see the familiar “because you purchased xx item, you might like this item” message – that is machine learning at work. The queries users type into Google Search are fed back into the search engine’s machine learning models to produce better and more accurate results.
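To make the recommendation idea concrete, here is a minimal sketch of the co-purchase logic behind a “because you purchased…” message. The `baskets` data and item names are purely hypothetical, and production recommender systems are far more sophisticated than this counting approach:

```python
from collections import Counter, defaultdict

# Hypothetical purchase histories; item names are illustrative only.
baskets = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"tripod", "camera_bag"},
    {"camera", "camera_bag", "lens"},
]

# Count how often each pair of items is bought together.
co_counts = defaultdict(Counter)
for basket in baskets:
    for item in basket:
        for other in basket - {item}:
            co_counts[item][other] += 1

def because_you_purchased(item, k=2):
    """Return the k items most often co-purchased with `item`."""
    return [other for other, _ in co_counts[item].most_common(k)]

print(because_you_purchased("camera"))  # e.g. ['sd_card', 'tripod']
```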
Poisoning the data that feeds those predictions and recommendations is a serious security problem. Data poisoning or model poisoning attacks involve polluting a machine learning model’s training data. Data poisoning is considered an integrity attack because tampering with the training data impairs the model’s ability to output correct predictions. For example, an attacker who slips mislabeled messages into a spam filter’s training set can teach the model to wave real spam through.
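As a toy illustration of the effect, the sketch below simulates a label-flipping poisoning attack on a synthetic scikit-learn classification task. The dataset, model choice and flip fractions are assumptions for demonstration only; real attacks are typically subtler, but the pattern is the same – test accuracy degrades as more training labels are corrupted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification task standing in for any trained model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def flip_labels(y, fraction, rng):
    """Simulate a poisoning attack by flipping a fraction of training labels."""
    y_poisoned = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the binary labels
    return y_poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, flip_labels(y_train, fraction, rng))
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{fraction:.0%} of training labels flipped -> test accuracy {acc:.3f}")
```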
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.