
Communications of the ACM

ACM News

Breaking the Scaling Limits of Analog Computing


Using the new technique, the larger an optical neural network becomes, the lower the error in its computations.

Massachusetts Institute of Technology researchers have developed a technique that greatly reduces the error in an optical neural network.

Credit: MIT News

As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.

An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because its computations are performed with light instead of electrical signals, it can run many times faster while consuming less energy.

However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.

Even with error-correction techniques, some amount of error is unavoidable because of fundamental properties of the devices that make up an optical neural network. A network large enough to be deployed in the real world would be far too imprecise to be effective.
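To make the accumulation argument concrete, here is a minimal Python sketch, not the researchers' technique, in which each layer of an analog network is modeled as an ideal orthogonal transform (a stand-in for a lossless photonic mesh) and the imperfect hardware applies that transform plus a small random perturbation. The network width, perturbation size, and Gaussian error model are illustrative assumptions; the point is only that the relative output error grows quickly with depth.

```python
# Illustrative sketch of error accumulation in a deep analog network.
# Assumptions: orthogonal ideal layers, additive Gaussian component error.
import numpy as np

rng = np.random.default_rng(0)

def relative_output_error(depth, width=64, imperfection=1e-3, trials=20):
    """Mean relative output error after `depth` imperfect analog layers."""
    errors = []
    for _ in range(trials):
        # Ideal layers: random orthogonal matrices (stand-ins for photonic meshes).
        layers = [np.linalg.qr(rng.standard_normal((width, width)))[0]
                  for _ in range(depth)]
        x_ideal = x_noisy = rng.standard_normal(width)
        for W in layers:
            noise = imperfection * rng.standard_normal((width, width))
            x_ideal = W @ x_ideal              # perfect digital reference
            x_noisy = (W + noise) @ x_noisy    # analog layer with component error
        errors.append(np.linalg.norm(x_noisy - x_ideal) / np.linalg.norm(x_ideal))
    return float(np.mean(errors))

for depth in (1, 4, 16, 64):
    print(f"depth {depth:3d}: relative error ~ {relative_output_error(depth):.4f}")
```

Under these assumptions, the per-layer error is tiny, yet the deeper (larger) the simulated network, the worse the final result; the technique reported by the MIT team is designed to break exactly this trend.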

From MIT News
View Full Article

 


 

