
Communications of the ACM

ACM TechNews

Noise Warfare


Computer scientists take lessons from Sun Tzu's The Art of War and teach machine-learning algorithms to know their enemies.

Harvard University researchers have developed noise-robust classifiers hardened against the worst case of added data that disrupts or skews information an algorithm has already learned.

Credit: Harvard SEAS

Researchers at Harvard University say they have developed noise-robust classifiers prepared for the worst case of noise: added data that disrupts or skews information an algorithm has already learned.

The team notes these algorithms carry a guaranteed level of performance across a range of noise conditions and also perform well in practice.
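
A worst-case guarantee of this kind is usually stated over the single noise model on which the classifier performs worst, not its average behavior. The Python sketch below is a minimal, hypothetical illustration of measuring that quantity; the two stand-in noise models and the generic predict function are assumptions for the example, not the Harvard team's published method.

    import numpy as np

    def gaussian_noise(x, rng):
        # Stand-in noise model 1: small additive Gaussian perturbation.
        return x + rng.normal(0.0, 0.1, size=x.shape)

    def salt_and_pepper(x, rng):
        # Stand-in noise model 2: flip roughly 5% of entries to 0 or 1.
        mask = rng.random(x.shape) < 0.05
        return np.where(mask, rng.integers(0, 2, size=x.shape).astype(x.dtype), x)

    def worst_case_accuracy(predict, xs, ys, noise_models, seed=0):
        # Score the classifier on its *worst* noise model -- the quantity
        # a noise-robustness guarantee is stated over.
        rng = np.random.default_rng(seed)
        worst = 1.0
        for noise in noise_models:
            hits = sum(predict(noise(x, rng)) == y for x, y in zip(xs, ys))
            worst = min(worst, hits / len(xs))
        return worst

Reporting the minimum rather than the mean is what turns "performs well on several kinds of noise" into a guarantee that holds for each of them.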

The researchers want to use this new technology to defend against cyberattacks on deep neural networks, which are vital for computer vision, speech recognition, and robotics.

"Since people started to get really enthusiastic about the possibilities of deep learning, there has been a race to the bottom to find ways to fool the machine-learning algorithms," says Harvard professor Yaron Singer.

He notes the most effective way to fool a machine-learning algorithm is to introduce noise specifically tailored to whatever classifier is in use, and this "adversarial noise" could wreak havoc on systems that rely on neural networks.
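
One widely known example of noise tailored to a specific classifier is the fast gradient sign method (Goodfellow et al., 2015), which nudges each input value in the direction that most increases the model's loss. The PyTorch sketch below illustrates that general idea; it is a generic example of adversarial noise, not the particular attack or defense studied at Harvard, and it assumes the model returns class logits.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps=0.03):
        # Make the input a leaf tensor so gradients flow to it.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)  # how wrong is the model now?
        loss.backward()                      # gradient of loss w.r.t. input
        # Tiny step along the sign of the gradient: the worst-case
        # direction for this classifier, usually imperceptible to a human.
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

Because the perturbation is computed from the classifier's own gradients, the same image would need a different perturbation to fool a different model, which is what makes such noise "specifically tailored."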

From Harvard University

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
