Google recently released an academic paper describing a template for creating a single machine-learning model capable of handling multiple tasks. The new template, called MultiModel, was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition, and object detection.
Although Google acknowledges the system's results do not show significant improvements over existing approaches, the results do illustrate that training a machine-learning system on a variety of tasks could help boost its overall performance. The researchers say their study also could serve as a framework for developing future machine-learning systems that are more broadly applicable, and potentially more accurate, than the narrow solutions generally available today. In addition, they say similar techniques could help reduce the amount of training data needed to create a viable machine-learning algorithm.
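The core idea the abstract describes, one model whose shared parameters are trained on several tasks at once, can be illustrated with a toy sketch. This is not Google's MultiModel architecture; it is a minimal, hypothetical multi-task setup in NumPy in which two tasks (a regression and a binary classification on synthetic data) share one encoder layer, so gradients from both tasks update the shared weights.

```python
import numpy as np

# Hypothetical toy multi-task model: a shared ReLU encoder (W1) feeds two
# task-specific heads (wa for regression, wb for classification). Both task
# losses backpropagate into the shared encoder. This only illustrates the
# general multi-task idea mentioned in the abstract, not MultiModel itself.
rng = np.random.default_rng(0)
n, d, h = 200, 8, 16
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y_a = X @ true_w + 0.1 * rng.normal(size=n)   # task A: noisy regression target
y_b = (X @ true_w > 0).astype(float)          # task B: binary label

W1 = rng.normal(size=(d, h)) * 0.1            # shared encoder weights
wa = np.zeros(h)                              # task-A head
wb = np.zeros(h)                              # task-B head
lr = 0.01

def forward(X):
    H = np.maximum(X @ W1, 0.0)               # shared ReLU features
    return H, H @ wa, 1.0 / (1.0 + np.exp(-(H @ wb)))

def total_loss(pa, pb):
    mse = np.mean((pa - y_a) ** 2)            # task-A loss
    bce = -np.mean(y_b * np.log(pb + 1e-9)
                   + (1 - y_b) * np.log(1 - pb + 1e-9))  # task-B loss
    return mse + bce

_, pa, pb = forward(X)
initial = total_loss(pa, pb)

for _ in range(500):
    H, pa, pb = forward(X)
    ga = 2.0 * (pa - y_a) / n                 # dL/d(H @ wa)
    gb = (pb - y_b) / n                       # dL/d(logits), sigmoid + BCE
    grad_wa = H.T @ ga
    grad_wb = H.T @ gb
    dH = np.outer(ga, wa) + np.outer(gb, wb)  # both tasks push the encoder
    dH[H <= 0.0] = 0.0                        # ReLU gradient mask
    grad_W1 = X.T @ dH
    W1 -= lr * grad_W1
    wa -= lr * grad_wa
    wb -= lr * grad_wb

_, pa, pb = forward(X)
final = total_loss(pa, pb)
print(f"loss: {initial:.3f} -> {final:.3f}")
```

Because the encoder is shared, each task acts as a form of regularization for the other, which is one intuition behind the abstract's claim that multi-task training can reduce the data needed per task.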
From VentureBeat
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA