
Communications of the ACM

ACM TechNews

Preventing AI From Divulging Its Own Secrets


Hackers can reverse-engineer the inner workings of neural networks using differential power analysis.

Researchers have begun developing ways to shield the power signatures of artificial intelligence systems from prying eyes.

Credit: Erik Vrielink/IEEE Spectrum

North Carolina State University (NC State) researchers have demonstrated the first countermeasure for shielding artificial intelligence from differential power analysis attacks.

In such attacks, hackers exploit the power signature of a chip running a neural network to reverse-engineer the network's inner workings.

The attack requires an adversary with physical access to the device, either to measure its power consumption directly or to analyze the electromagnetic radiation it emits. By repeatedly having the neural network run specific computations on known input data, the attacker can eventually match power patterns to the network's secret weight values.
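To illustrate the flavor of such an attack, here is a toy correlation-based power analysis in Python. It is a sketch, not the actual attack studied by the researchers: it assumes a hypothetical device whose power draw leaks the Hamming weight of the multiplier's product register, plus measurement noise, and shows how repeated runs on known inputs single out one secret 8-bit weight.

```python
# Toy correlation power analysis against a single secret weight.
# Assumed leakage model: power draw proportional to the Hamming weight
# of the product register, plus Gaussian noise. All names hypothetical.
import numpy as np

rng = np.random.default_rng(0)

SECRET_WEIGHT = 173   # the 8-bit weight the attacker wants to recover
N_TRACES = 2000       # known-input runs the attacker records
NOISE_STD = 2.0       # measurement noise on each power sample


def hamming_weight(v):
    """Number of set bits in the product register: a standard model
    of data-dependent power consumption."""
    return bin(int(v)).count("1")


# Step 1: run the device on known inputs, recording one power sample per run.
inputs = rng.integers(0, 256, size=N_TRACES)
leakage = np.array([hamming_weight(SECRET_WEIGHT * x) for x in inputs])
traces = leakage + rng.normal(0.0, NOISE_STD, size=N_TRACES)

# Step 2: for each candidate weight, predict the leakage and correlate
# the prediction with the measured traces; the true weight fits best.
correlations = np.zeros(256)
for guess in range(1, 256):  # guess 0 predicts constant leakage, so skip it
    predicted = np.array([hamming_weight(guess * x) for x in inputs])
    correlations[guess] = np.corrcoef(predicted, traces)[0, 1]

recovered = int(np.argmax(correlations))
print(f"recovered weight: {recovered}  (true weight: {SECRET_WEIGHT})")
```

With a few thousand noisy traces, only the correct guess predicts the measured leakage across all inputs, so its correlation stands out from every other candidate.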

The countermeasure is adapted from a masking technique. "We use the secure multi-party computations and randomize all intermediate computations to mitigate the attack," explains NC State's Aydin Aysu.
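The sketch below is a minimal illustration of the masking idea, not the researchers' actual design: each secret weight of a hypothetical fixed-point dot-product layer is split into two random additive shares, so every intermediate value the hardware touches is randomized, and the shares recombine only in the final output.

```python
# Minimal sketch of arithmetic masking for a dot-product layer.
# Assumed setup: 16-bit fixed-point words, two shares per weight.
import numpy as np

rng = np.random.default_rng(1)
MOD = 2**16  # hypothetical word size of the fixed-point arithmetic


def mask_weights(weights):
    """Split each weight w into additive shares (r, w - r) with fresh
    randomness; each share alone reveals nothing about w."""
    r = rng.integers(0, MOD, size=weights.shape)
    return r, (weights - r) % MOD


def masked_dot(x, share_a, share_b):
    """Compute w . x on the shares; no intermediate value depends on
    the secret weights alone, so its power signature is uninformative."""
    ya = (share_a * x) % MOD   # partial products over random share a
    yb = (share_b * x) % MOD   # partial products over random share b
    return int(ya.sum() + yb.sum()) % MOD  # shares recombine only here


weights = np.array([3, 141, 59, 26])   # the layer's secret weights
x = np.array([1, 2, 3, 4])             # a known input vector
a, b = mask_weights(weights)           # fresh shares drawn per inference
assert masked_dot(x, a, b) == int(weights @ x) % MOD
print("masked result matches the unmasked dot product")
```

Fresh shares must be drawn for every inference; reusing the same randomness would let the statistical averaging at the heart of differential power analysis strip the mask away.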

From IEEE Spectrum
View Full Article

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA


 

