
Communications of the ACM

ACM TechNews

Machines Learning Evolves, and Hackers Stand to Gain


Adversarial machine learning lies at the intersection of machine learning and computer security.

Credit: GCN

Experts say the growing pervasiveness and maturity of machine learning makes it an increasingly attractive candidate for cybersecurity applications.

However, adversarial machine learning (AML) is still too poorly understood to guarantee that machine learning models' predictions are trustworthy.

"While the problem that the attacker must solve is theoretically hard, it is becoming clear that it is possible to find practical attacks against most practical systems," says University of Maryland at College Park professor Tudor A. Dumitras. He notes hackers now know how to bypass machine learning-enabled detectors, taint machine learning systems' training data and manipulate their outputs, and invert models to exfiltrate private user information.
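One of the attack classes Dumitras mentions, bypassing a machine learning-enabled detector, can be illustrated with a gradient-based evasion attack. The sketch below is a toy example, not any specific system: the "detector" is an invented logistic-regression model with made-up weights, and the perturbation follows the general fast-gradient-sign pattern of nudging the input against the model's gradient.

```python
import numpy as np

# Hypothetical evasion attack against a toy logistic-regression "detector".
# Weights and inputs are invented for illustration only.

rng = np.random.default_rng(0)

w = rng.normal(size=8)   # hypothetical detector weights
b = 0.1
x = rng.normal(size=8)   # an input the attacker wants to slip past the detector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Detector's estimated probability that x is malicious.
    return sigmoid(w @ x + b)

# For a linear model, the gradient of the score w.r.t. the input is just w;
# the attacker shifts each feature a small step eps against that gradient.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))
```

Because every feature moves against the gradient, the adversarial input's "malicious" score is strictly lower than the original's, even though the perturbation of each feature is bounded by eps.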

Meanwhile, Dumitras says known AML defenses are few, applicable only to specific attacks, and become ineffective when hackers change strategy.

Nicolas Papernot, a security researcher at Pennsylvania State University, says federal agencies and law enforcement can proactively benchmark their machine learning algorithms' vulnerabilities as a first step toward mapping their attack surface.
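The kind of benchmarking Papernot describes can be sketched, under simplifying assumptions, as measuring how a model's accuracy degrades as the attacker's perturbation budget grows. Everything below is invented for illustration: the data are synthetic, the classifier is a linear rule, and the worst-case L-infinity perturbation of size eps shifts a linear model's score by at most eps times the L1 norm of its weights.

```python
import numpy as np

# Hypothetical robustness benchmark: accuracy of a toy linear classifier
# under worst-case bounded input perturbations. Synthetic data throughout.

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = (X @ w > 0).astype(int)        # labels generated by the linear rule itself

def accuracy_under_attack(eps):
    # For a linear model, an L-inf perturbation of size eps can shift the
    # score by at most eps * ||w||_1, so a point stays correctly classified
    # only if its margin exceeds that budget.
    margin = (2 * y - 1) * (X @ w)  # positive = correct, by this margin
    return float(np.mean(margin - eps * np.abs(w).sum() > 0))

for eps in (0.0, 0.05, 0.1, 0.2):
    print(f"eps={eps:.2f}  robust accuracy={accuracy_under_attack(eps):.2f}")
```

Plotting robust accuracy against the perturbation budget gives a simple first map of the attack surface: the faster the curve falls, the more fragile the model.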

From Government Computer News

Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA
