
Communications of the ACM

ACM TechNews

New Learning Procedure For Neural Networks


Robert Gütig has programmed neural networks and presented them with different sensory stimuli (colored boxes), which are reflected in the input activity (dot raster).

A new learning procedure for neural networks teaches model neurons to differentiate between stimuli by adjusting their activity to the frequency of the cues.

Credit: Robert Gütig/MPI f. Experimental Medicine

Max Planck Institute researcher Robert Gütig has used a computer model to develop a learning procedure for neural networks in which model neurons learn to differentiate between stimuli by adjusting their activity to the frequency of the cues.

The model is based on a synaptic learning rule in which individual neurons can increase or decrease their activity in response to a simple learning signal.

Gütig says he has employed this rule to establish an "'aggregate-label' learning procedure...built on the concept of setting the connections between cells in such a way that the resulting neural activity over a certain period is proportional to the number of cues."

Gütig's model also performs well when there is a delay between the cue and the event or outcome, by interpreting the average neural activity within a network as a learning signal. He says this "self-supervised" learning conforms to a principle differing from the Hebbian theory often applied in artificial neural networks.

"It is not necessary for the neural activity to be temporally aligned," Gütig says. "The total number of spikes in a given period is the deciding factor for synaptic change."
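The spike-count idea described above can be sketched as a toy simulation. This is a minimal, illustrative caricature, not Gütig's published model: it assumes a leaky integrate-and-fire neuron and a simple sign-based weight update, and all names and constants here are the author's own assumptions. The neuron's synapses are nudged up when it fires too few spikes over a trial and down when it fires too many, so only the aggregate count, not spike timing, drives learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_count(weights, inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire caricature: count threshold crossings."""
    v, count = 0.0, 0
    for x in inputs:              # x = vector of input spikes at one time step
        v = leak * v + weights @ x
        if v >= threshold:
            count += 1
            v = 0.0               # reset the membrane potential after a spike
    return count

def aggregate_label_step(weights, inputs, target, lr=0.002):
    """Potentiate active synapses when the neuron fires too few spikes,
    depress them when it fires too many; only the total count matters,
    not when the spikes occur."""
    err = target - spike_count(weights, inputs)
    if err != 0:
        activity = inputs.sum(axis=0)   # total spikes per input channel
        weights = weights + lr * np.sign(err) * activity
    return weights

# Toy demo: 20 input channels, 100 time steps, a target of 3 output spikes
# (as if the stimulus contained three cues at unknown times).
T, N, target = 100, 20, 3
inputs = (rng.random((T, N)) < 0.1).astype(float)
w = rng.normal(0.0, 0.05, N)
best_w, best_err = w, abs(target - spike_count(w, inputs))
for _ in range(1000):
    w = aggregate_label_step(w, inputs, target)
    err = abs(target - spike_count(w, inputs))
    if err < best_err:
        best_w, best_err = w, err
```

Because only the aggregate count enters the error term, the update needs no information about cue timing, which is the point of the rule as described in the article.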

One possible application of Gütig's work is the development of speech-recognition programs.

From Max Planck Gesellschaft

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
