Max Planck Institute researcher Robert Gütig has used a computer model to develop a learning procedure for neural networks in which model neurons learn to distinguish between stimuli by adjusting their activity to the frequency of the cues.
The model is based on a synaptic learning rule in which individual neurons can increase or decrease their activity in response to a simple learning signal.
Gütig says he has employed this rule to establish an "aggregate-label" learning procedure that is "built on the concept of setting the connections between cells in such a way that the resulting neural activity over a certain period is proportional to the number of cues."
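A rough way to picture that rule in code is to nudge a neuron's synaptic weights until its output spike count over a trial matches the number of cues. The sketch below is a deliberately simplified illustration, not the rule from Gütig's paper: the toy leaky integrate-and-fire neuron, the error-times-activity update, and all parameter values are assumptions made for clarity.

```python
import numpy as np

def count_output_spikes(weights, spike_trains, threshold=1.0, tau=10.0, dt=1.0):
    """Simulate a toy leaky integrate-and-fire neuron and return the
    number of output spikes it fires over the whole trial."""
    v, n_spikes = 0.0, 0
    decay = np.exp(-dt / tau)
    for t in range(spike_trains.shape[1]):
        v = v * decay + weights @ spike_trains[:, t]   # leak, then integrate inputs
        if v >= threshold:                             # spike and reset
            n_spikes += 1
            v = 0.0
    return n_spikes

def aggregate_label_step(weights, spike_trains, n_cues, lr=0.01):
    """One aggregate-label update (illustrative): move the trial's output
    spike count toward the number of cues, crediting each synapse by
    how active it was during the trial."""
    error = n_cues - count_output_spikes(weights, spike_trains)
    eligibility = spike_trains.sum(axis=1)             # spikes per input synapse
    return weights + lr * error * eligibility

# Example trial: 100 input synapses, 500 time steps, 3 embedded cues.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=100)
spike_trains = (rng.random((100, 500)) < 0.02).astype(float)
weights = aggregate_label_step(weights, spike_trains, n_cues=3)
```

Because the rule compares only the aggregate spike count against the cue count, it never needs to know when within the trial each cue actually occurred.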
Gütig's model also performs well when there is a delay between the cue and the event or outcome, because it interprets the average neural activity within a network as a learning signal. He says this "self-supervised" learning follows a principle that differs from the Hebbian theory often applied in artificial neural networks.
"It is not necessary for the neural activity to be temporally aligned," Gutig says. "The total number of spikes in a given period is the deciding factor for synaptic change."
One possible application of Gütig's work is the development of speech-recognition programs.
From Max-Planck-Gesellschaft
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA