
Communications of the ACM

ACM TechNews

Neuromorphic Computing Finds New Life in Machine Learning


A brain approximated in electronics.



Researchers at the Salk Institute trained a standard recurrent neural network and then transferred those parameters to a spiking neural network.

Their goal is to work around the fact that spiking neurons currently cannot be trained via gradient descent, the basis of conventional machine learning.

The new research focuses on a form of "transfer learning": developing parameters in one network and moving them to another, in order to sidestep the shortcomings of spiking neurons.
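The idea can be sketched in plain NumPy. This is a hypothetical illustration, not the Salk team's actual code: weights stand in for the result of training a conventional rate-based network, and the same weights are then reused unchanged to drive a simple leaky integrate-and-fire (LIF) simulation.

```python
import numpy as np

# Hypothetical sketch of weight transfer: "trained" weights from a
# conventional rate-based network are reused in a spiking simulation.

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(3, 4))    # stand-in for trained weights

def rate_forward(x, W):
    """Conventional network: ReLU firing rates."""
    return np.maximum(0.0, x @ W)

def lif_forward(x, W, steps=200, tau=20.0, v_th=1.0):
    """Spiking network using the *same* weights.

    The weighted input is treated as a constant injected current;
    mean spike counts approximate the rate network's activations.
    """
    v = np.zeros(W.shape[1])
    spikes = np.zeros(W.shape[1])
    current = np.maximum(0.0, x @ W)      # constant drive
    for _ in range(steps):
        v += (current - v) / tau          # leaky integration
        fired = v >= v_th
        spikes += fired
        v[fired] = 0.0                    # reset after a spike
    return spikes / steps                 # mean firing rate per step

x = np.array([1.0, 0.5, -0.3])
print(rate_forward(x, W))
print(lif_forward(x, W))
```

Neurons whose constant drive never reaches the threshold stay silent, while strongly driven neurons fire at higher rates, so the spiking outputs roughly track the rate network's activations without any gradient-based training of the spiking network itself.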

Separately, researchers at the U.S. Defense Advanced Research Projects Agency (DARPA) have developed a Python-based programming package called BindsNET, which can perform a kind of transfer learning similar to that of the Salk Institute project.

The DARPA researchers used BindsNET to simulate the construction of shallow artificial neural networks made up of spiking neurons.

Both the Salk Institute and DARPA projects show that there is real energy and ingenuity at work in the spiking-neuron realm of neuromorphic computing.

From ZDNet

 

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA


 
