Researchers at Google's DeepMind unit have developed an artificial intelligence (AI) system that teaches itself to recognize a range of visual and audio concepts by watching short video clips.
For example, the new system can understand the concept of lawn mowing even though it has never learned the words that describe what it is hearing or seeing.
"We want to build machines that continuously learn about their environment in an autonomous manner," says University of California, Berkeley researcher Pulkit Agrawal.
He notes the DeepMind project brings the field one step closer to the goal of creating AI that can teach itself by watching and listening to the world around it.
Instead of relying on human-labeled datasets, the new algorithm learns to recognize images and sounds by matching what it sees with what it hears.
The results suggest similar algorithms might be able to learn by analyzing huge unlabeled datasets such as YouTube's millions of online videos.
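The matching idea described above can be sketched as a toy correspondence learner: a model is shown video/audio feature pairs and trained only to predict whether they came from the same clip, with no human labels. Everything in this sketch is an assumption for illustration, not DeepMind's actual system: the synthetic "visual" and "audio" features, the bilinear scoring model, and the plain gradient-descent loop are stand-ins (the real work uses deep convolutional networks trained on video frames and audio spectrograms).

```python
import numpy as np

# Toy sketch of self-supervised audio-visual correspondence learning.
# The only supervision is whether a (visual, audio) pair is aligned
# (came from the same underlying "concept") or mismatched.
rng = np.random.default_rng(0)

d, dv, da = 8, 16, 16  # latent concept dim, visual/audio feature dims
A = rng.normal(0, 1 / np.sqrt(d), (dv, d))  # fixed synthetic "visual encoder"
B = rng.normal(0, 1 / np.sqrt(d), (da, d))  # fixed synthetic "audio encoder"

def make_batch(n):
    """Half the pairs share a latent concept (label 1), half do not (label 0)."""
    z = rng.normal(size=(n, d))
    z_other = rng.normal(size=(n, d))
    y = (np.arange(n) % 2 == 0).astype(float)      # alternate aligned/mismatched
    v = z @ A.T                                     # visual features
    a = np.where(y[:, None] == 1, z, z_other) @ B.T # audio features
    return v, a, y

def loss_and_grad(W, v, a, y):
    """Binary cross-entropy for the bilinear correspondence score v^T W a."""
    scores = np.einsum('ni,ij,nj->n', v, W, a)
    p = 1 / (1 + np.exp(-scores))                   # P(pair is aligned)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = np.einsum('n,ni,nj->ij', p - y, v, a) / len(y)
    return loss, grad

W = np.zeros((dv, da))                 # correspondence model to be learned
v, a, y = make_batch(512)
initial_loss, _ = loss_and_grad(W, v, a, y)
for _ in range(400):                   # plain full-batch gradient descent
    loss, grad = loss_and_grad(W, v, a, y)
    W -= 0.01 * grad
final_loss, _ = loss_and_grad(W, v, a, y)
```

Because the training signal is just "do these two streams go together?", no human-labeled dataset is needed, which is what makes scaling to large unlabeled video collections plausible.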
From New Scientist
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA