A Microsoft research team won the ImageNet Large Scale Visual Recognition Challenge in December 2015 with a new approach to deep learning.
The researchers designed a "deep residual network," a neural network that spans 152 layers of mathematical operations, compared to six or seven for typical designs.
The researchers note the neural network is better at recognizing images because it can examine more features. Its success suggests that in the years to come, companies such as Microsoft will be able to use graphics processing units and other specialized chips to significantly improve image recognition and other artificial intelligence services, including speech recognition and natural language understanding.
The neural network is designed to skip certain layers when it does not need them, but use them when it does. "When you do this kind of skipping, you're able to preserve the strength of the signal much further, and this is turning out to have a tremendous, beneficial impact on accuracy," says Peter Lee, Microsoft's head of research.
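The skip mechanism Lee describes can be sketched in a few lines of code. The snippet below is a minimal, illustrative residual block written in PyTorch, not Microsoft's actual implementation; the class name, channel count, and layer choices are assumptions made for the example.

```python
# Illustrative sketch of a residual ("skip connection") block, assuming
# the basic two-convolution design; not Microsoft's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions with batch normalization.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The shortcut: add the input back to the block's output, so the
        # layers only learn a residual correction. If their weights are
        # near zero, the signal passes through effectively unchanged.
        return F.relu(out + x)

# Stacking many such blocks lets the signal (and its gradient) flow
# through the identity shortcuts, which is what makes very deep
# networks, such as a 152-layer model, trainable in practice.
net = nn.Sequential(*[ResidualBlock(64) for _ in range(10)])
```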
Microsoft also designed a system that can help build these networks.
From Wired
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA