Researchers at the University of Chicago have found that adapting a well-known brain mechanism can dramatically improve the ability of artificial neural networks to learn multiple tasks while avoiding "catastrophic forgetting," a persistent challenge in artificial intelligence (AI) research.
The project serves as an example of how neuroscience research can inform new computer science strategies, and how AI technology can help scientists better understand the human brain.
The new algorithm, called "context-dependent gating," enables single artificial neural networks to learn and perform hundreds of tasks with only minimal loss of accuracy.
The University of Chicago's Nicolas Masse said, "With this method, a fairly medium-sized network can be carved up a whole bunch of ways to be able to learn many different tasks if done properly."
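The core idea behind context-dependent gating is that each task is assigned a fixed, mostly non-overlapping subset of hidden units, and the rest are silenced while that task is being learned or performed. The sketch below is a minimal NumPy illustration of that gating step only, not the authors' implementation: the function names (make_task_gates, forward), the single hidden layer, and the keep_prob value are illustrative assumptions, and the published method pairs gating with additional stabilization techniques not shown here.

```python
import numpy as np

def make_task_gates(num_tasks, hidden_size, keep_prob=0.2, seed=0):
    """For each task, fix a random binary mask that keeps roughly
    keep_prob of the hidden units active and silences the rest.
    (keep_prob=0.2 is an illustrative choice, not a confirmed value.)"""
    rng = np.random.default_rng(seed)
    return (rng.random((num_tasks, hidden_size)) < keep_prob).astype(np.float32)

def forward(x, W_in, W_out, gates, task_id):
    """One hidden layer with context-dependent gating: the task's mask
    multiplies the hidden activity, so different tasks mostly use
    non-overlapping sub-networks of the same shared weights."""
    h = np.maximum(0.0, x @ W_in)   # ReLU hidden layer
    h = h * gates[task_id]          # gate: silence units not assigned to this task
    return h @ W_out

# Toy usage: 20 tasks share one network with 100 hidden units.
gates = make_task_gates(num_tasks=20, hidden_size=100)
rng = np.random.default_rng(1)
W_in = rng.normal(scale=0.1, size=(10, 100))
W_out = rng.normal(scale=0.1, size=(100, 5))
y = forward(rng.normal(size=(1, 10)), W_in, W_out, gates, task_id=3)
```

Because each task's gradient updates touch mostly units the other tasks never use, learning a new task overwrites far less of what earlier tasks depend on, which is the sense in which the network is "carved up" into many sub-networks.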
From University of Chicago
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA