The success of machine learning for a wide range of applications has come with serious costs. The largest deep neural networks can have hundreds of billions of parameters that must be tuned on mammoth datasets. This computationally intensive training process can cost millions of dollars and consume enormous amounts of energy, with a correspondingly large carbon footprint. Inference, the subsequent application of a trained model to new data, is less demanding for each use, but for widely deployed applications, the cumulative energy use can be even greater than that of training.
"Typically there will be more energy spent on inference than there is on training," said David Patterson, Professor Emeritus at the University of California, Berkeley, and a Distinguished Engineer at Google, who in 2017 shared ACM's A.M. Turing Award. Patterson and his colleagues recently posted a comprehensive analysis of carbon emissions from some large deep-learning applications, finding that energy invested to refine training can be more than compensated by reduced inference costs for improved models.