
Communications of the ACM

ACM TechNews

How to Steal the Mind of an AI

Machine-Learning Models Vulnerable to Reverse Engineering


Machine-learning (ML) models can be reverse engineered, and basic safeguards do little to mitigate such attacks, according to a paper presented in August at the 25th USENIX Security Symposium by researchers from the Swiss Federal Institute of Technology in Lausanne, Cornell University, and the University of North Carolina at Chapel Hill. The investigators exploited the fact that such models accept arbitrary input queries and may return predictions together with percentage scores indicating the confidence that the prediction is correct. The researchers say they demonstrated "simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes, including logistic regression, neural networks, and decision trees."
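For logistic regression, the simplest of the model classes named above, extraction can work by equation solving: each confidence score the model returns yields one linear equation in its hidden weights, so a handful of well-chosen queries pins the weights down exactly. The sketch below is a minimal illustration against a simulated prediction API; the query function and its secret parameters are stand-ins for a real service, not any vendor's actual interface.

    import numpy as np

    # Stand-in for a black-box prediction API: a logistic-regression model
    # whose weights the attacker cannot see but can query freely.
    rng = np.random.default_rng(0)
    secret_w = rng.normal(size=5)
    secret_b = 0.7

    def query(x):
        # Returns the confidence (probability of the positive class), as
        # many prediction services do alongside the predicted label.
        return 1.0 / (1.0 + np.exp(-(secret_w @ x + secret_b)))

    # Equation-solving extraction: logit(query(x)) = w . x + b is linear in
    # the d + 1 unknowns (w, b), so d + 1 independent queries determine them.
    d = secret_w.size
    X = rng.normal(size=(d + 1, d))                  # probe inputs
    logits = np.log([query(x) / (1.0 - query(x)) for x in X])

    A = np.hstack([X, np.ones((d + 1, 1))])          # columns for w and b
    solution = np.linalg.solve(A, logits)
    stolen_w, stolen_b = solution[:-1], solution[-1]

    # The recovered parameters match the hidden ones to machine precision.
    print(np.allclose(stolen_w, secret_w), np.isclose(stolen_b, secret_b))

For neural networks and decision trees the researchers use different query strategies, which the article does not detail; the linear-algebra shortcut above applies only where the confidence score is a simple function of a weighted sum of the inputs.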

The team successfully tested their attack on BigML and Amazon Machine Learning. Cornell Tech professor Ari Juels says attack mitigation may be possible, but he suspects "solutions that don't degrade functionality will need to involve greater model complexity."

Although many ML models are open sourced to encourage users to improve the code and to run the models on the developers' cloud infrastructure, other models depend on remaining confidential. The researchers note reverse engineering an ML model can also violate privacy, for example by making it easier to identify images of the people used to train a facial-recognition system.

From The Register

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
