
Communications of the ACM

ACM TechNews

Expect Deeper and Cheaper Machine Learning


[Illustration: machine learning. Credit: Edmon de Haro]

Machine-learning technologies are undergoing a transformation, appearing in products and systems that experts predict will become less expensive and more focused on deep-learning calculations. "Everybody is doing deep learning today," says Stanford University professor William Dally.

One popular approach is the application-specific integrated circuit (ASIC), the route Google has taken with its Tensor Processing Unit. Field-programmable gate arrays (FPGAs) are another option, with the benefit that they can be reconfigured as computing requirements change. The most common technique, however, relies on graphics-processing units (GPUs), which execute many mathematical operations in parallel.
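
All three kinds of hardware accelerate essentially the same workload: dense matrix arithmetic that parallelizes well. As a rough sketch (illustrative NumPy code, not from the article; the layer sizes and variable names are assumptions), the core of a deep-learning computation is a layer like this:

    import numpy as np

    # One dense layer's forward pass: y = relu(x @ W + b).
    # The matrix multiply is the operation that GPUs, TPUs, and FPGAs
    # spread across thousands of parallel arithmetic units.
    batch, n_in, n_out = 64, 1024, 1024
    x = np.random.randn(batch, n_in).astype(np.float32)  # input activations
    W = np.random.randn(n_in, n_out).astype(np.float32)  # learned weights
    b = np.zeros(n_out, dtype=np.float32)                # learned biases

    y = np.maximum(x @ W + b, 0.0)  # ReLU nonlinearity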

Dally identifies three distinct application areas for deep-learning hardware. The first is "training in the data center," in which the many connections between artificial neurons are adjusted so a neural network can perform its assigned task. The second is "inference at the data center," the continuous operation of cloud-based neural networks that have already been trained. The third is "inference in embedded devices" such as smartphones, tablets, and cameras, which will likely be handled by low-power ASICs as smartphone apps are increasingly augmented with deep-learning software.
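
To make the training/inference distinction concrete, the sketch below (plain NumPy; the tiny one-layer model, names, and learning rate are illustrative assumptions, not from the article) separates the two: train_step adjusts the connection weights by gradient descent, while forward alone is what inference-serving hardware runs.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 1)) * 0.1  # the "links" that training adjusts

    def forward(x, W):
        # Inference: run an already-trained network unchanged.
        # This is the workload served in the cloud or on embedded devices.
        return x @ W

    def train_step(x, target, W, lr=0.05):
        # Training: nudge the weights so the network performs its task.
        pred = forward(x, W)
        grad = 2 * x.T @ (pred - target) / len(x)  # gradient of MSE loss
        return W - lr * grad                       # gradient-descent update

    x = rng.standard_normal((32, 4))
    target = x @ np.array([[1.0], [-2.0], [0.5], [3.0]])  # synthetic task
    for _ in range(500):
        W = train_step(x, target, W)

Training is iterative and compute-hungry; inference is a single forward pass per input, which is one reason it can be offloaded to low-power ASICs in embedded devices.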

Dally, the 2010 recipient of the ACM/IEEE Eckert-Mauchly Award, notes that software advances can quickly make hardware obsolete. "The algorithms are changing at an enormous rate," he says. "Everybody who is building these things is trying to cover their bets."

From IEEE Spectrum

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA