
Communications of the ACM

ACM TechNews

Imaginary Numbers Protect AI from Very Real Threats


[Illustration: abstract technical image. Credit: Getty Images]

Computer engineers at Duke University have shown that numbers with both real and imaginary components can be critical in securing artificial intelligence algorithms against threats while preserving efficiency. Including just two complex-valued layers in a network, amid hundreds if not thousands of training iterations, offers sufficient protection. For example, using complex numbers with imaginary components gives a neural network being trained on a set of images additional flexibility for adjusting its internal parameters.
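As a toy illustration of the idea (not the Duke team's implementation), a dense layer with complex-valued weights can be dropped into an otherwise real-valued network; NumPy handles complex arithmetic directly, and the activations can be projected back to real values before the next real-valued layer. All names and shapes here are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_dense(x, w, b):
    """Dense layer with complex-valued weights and bias.

    The input may be real; the matrix product promotes it to
    complex, giving the layer the extra parameter flexibility
    the article describes.
    """
    return x @ w + b

# Hypothetical tiny network: real inputs -> one complex-valued layer.
x = rng.normal(size=(4, 8))                                  # batch of 4 real inputs
w = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))   # complex weights
b = np.zeros(3, dtype=complex)

z = complex_dense(x, w, b)   # complex activations
y = np.abs(z)                # project back to real values for a real-valued head
```

Taking the magnitude is just one common way to return to real values; a real network might instead split real and imaginary parts into separate channels.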

The researchers describe their work in "Improving Gradient Regularization Using Complex-Valued Neural Networks," published in the Proceedings of the 38th International Conference on Machine Learning.

"The complex-valued neural networks have the potential for a more 'terraced' or 'plateaued' landscape to explore," says Duke's Eric Yeats. "And elevation change lets the neural network conceive more complex things, which means it can identify more objects with more precision." As a result, neural networks that apply gradient regularization with complex numbers can arrive at solutions just as quickly as those without the extra security.
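Gradient regularization itself can be sketched in a few lines. The idea is to penalize the norm of the loss gradient with respect to the *input*, which flattens the loss surface around each data point and makes the model harder to fool with small adversarial perturbations. The toy logistic model, the finite-difference gradient, and the penalty weight below are all illustrative assumptions, not the method from the paper:

```python
import numpy as np

def model(x, w):
    # Tiny "network": sigmoid of a linear map.
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def loss(x, w, y):
    # Binary cross-entropy over the batch.
    p = model(x, w)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def input_gradient(x, w, y, eps=1e-5):
    # Finite-difference gradient of the loss w.r.t. the input x
    # (a real framework would use autodiff instead).
    g = np.zeros_like(x)
    for i in np.ndindex(*x.shape):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (loss(xp, w, y) - loss(xm, w, y)) / (2 * eps)
    return g

def regularized_loss(x, w, y, lam=0.1):
    # Gradient regularization: add the squared norm of the input
    # gradient, penalizing sharp changes of the loss around the data.
    g = input_gradient(x, w, y)
    return loss(x, w, y) + lam * np.sum(g ** 2)

# Usage on random data (illustrative only).
rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))
w = rng.normal(size=3)
y = (rng.random(5) > 0.5).astype(float)
base = loss(x, w, y)
reg = regularized_loss(x, w, y)
```

Because the penalty term is a squared norm, the regularized loss is never smaller than the plain loss; the paper's contribution is that interleaving complex-valued layers makes this kind of regularization cheaper to optimize.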

From Duke University
View Full Article

 

Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA


 
