Machine learning has become an important component of many applications we use today, and adding machine learning capabilities to applications is becoming increasingly easy. Many machine learning libraries and online services don’t even require a thorough knowledge of machine learning to use them. This topic came to us from Pete Warden’s Blog in his post, “How Should you Protect your Machine Learning Models and IP?”
Even easy-to-use machine learning systems come with their own challenges. Among them is the threat of adversarial attacks, which differ from the security threats programmers are used to dealing with. Machine learning models deployed behind a cloud API present their own challenges, but they are easier to protect in many ways because the model file itself is never directly accessible to users.
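To make that contrast concrete, here is a minimal sketch of a prediction endpoint that keeps the model file on the server. Flask, the `model.joblib` file name, and the request format are illustrative assumptions, not details from the original post; the point is simply that clients receive predictions while the architecture and weights stay server-side.

```python
# Minimal sketch of serving predictions behind an API so the model file
# itself never leaves the server. Flask and the model path are
# illustrative assumptions, not details from the original post.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical trained model file

@app.route("/predict", methods=["POST"])
def predict():
    # The client sends raw features and receives only the prediction;
    # the model's architecture and weights stay on the server.
    features = request.get_json()["features"]
    prediction = model.predict([features]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run()
```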
Another important factor in machine learning security is model visibility. When you use a machine learning model that is published online, you’re using a “white box” model: its architecture and parameters are visible to everyone, including attackers. Direct access to the model makes it far easier for an attacker to craft adversarial examples, as the sketch below illustrates.
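One classic white-box technique is the fast gradient sign method (FGSM): with access to the model’s gradients, an attacker nudges an input in the direction that most increases the loss. The following is a minimal sketch in PyTorch; the model, input, and epsilon value are illustrative assumptions rather than details from the post.

```python
# Minimal sketch of the Fast Gradient Sign Method (FGSM), one classic way
# an attacker with white-box access can craft adversarial examples.
# The model, labels, and epsilon value are illustrative assumptions.
import torch

def fgsm_attack(model, x, true_label, epsilon=0.03):
    """Perturb input x so the model's loss on the true label increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), true_label)
    loss.backward()
    # White-box access lets the attacker read the input gradient directly;
    # stepping in its sign direction is the FGSM perturbation.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range
```

An attacker querying only a cloud API has no access to `x.grad`, which is exactly why hiding the model file raises the bar.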
Because so many people and organizations have only a limited understanding of how machine learning works, it has never been more important to implement explainable AI, which allows users to comprehend and trust the results that machine learning algorithms produce.
Melody K. Smith
Sponsored by Access Innovations, the intelligence and the technology behind world-class explainable AI solutions.