
Communications of the ACM

ACM TechNews

A Foolproof Way to Shrink Deep Learning Models


The technique works by retraining the smaller, pruned model at its faster, initial learning rate.

Researchers have proposed a technique for shrinking deep learning models that they say is simpler and produces more accurate results than state-of-the-art methods.

Credit: Alex Renda

Massachusetts Institute of Technology (MIT) researchers have proposed a technique for compressing deep learning models: prune away the model's weakest connections, then retrain the smaller model at its faster, initial learning rate.
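For readers who want the mechanics, the following is a minimal sketch of the pruning step, written in PyTorch; it is not the researchers' code, and the function names and the default pruning fraction are illustrative assumptions. It removes the weakest connections by zeroing the smallest-magnitude weights across the network.

import torch

def magnitude_prune_masks(model, prune_fraction=0.9):
    """Return binary masks that zero out the smallest-magnitude weights."""
    # Gather all weight magnitudes (skipping biases and other 1-D parameters).
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    k = max(1, int(prune_fraction * all_weights.numel()))
    threshold = all_weights.kthvalue(k).values
    # 1 keeps a weight; 0 prunes it.
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() > 1}

def apply_masks(model, masks):
    """Zero the pruned weights in place so the smaller model stays pruned."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])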

The technique's groundwork was partly laid by the AutoML for Model Compression (AMC) algorithm from MIT's Song Han, which automatically removes redundant neurons and connections and retrains the model to restore its initial accuracy.

MIT's Jonathan Frankle and Michael Carbin determined that, after pruning, the model's learning rate could simply be rewound to its early, faster value, with no need to tune any other parameters.
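A sketch of that rewinding step, under the same assumptions as above and reusing apply_masks from the earlier sketch: rather than fine-tuning the pruned model at a small final learning rate, retraining simply replays the original learning-rate schedule from the beginning. The train_one_epoch callback, the SGD settings, and the step-decay milestones stand in for whatever the original training run used; they are not the paper's exact values.

import torch

def retrain_with_lr_rewind(model, masks, train_one_epoch, epochs=90):
    """Retrain a pruned model by replaying the original learning-rate schedule."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    # Rewinding: restart the schedule at its fast initial rate, exactly as
    # in the original run, instead of fine-tuning at a small constant rate.
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[30, 60],
                                                 gamma=0.1)
    for _ in range(epochs):
        train_one_epoch(model, opt)      # one pass over the training data
        apply_masks(model, masks)        # keep pruned weights at zero
        sched.step()
    return model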

Although accuracy declines as more of the model is pruned away, Frankle and Carbin found that their method outperformed both AMC and Frankle's earlier weight-rewinding technique, regardless of the amount of compression.

Frankle said the pruning algorithm "is clear, generic, and drop-dead simple."

From MIT News
View Full Article

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA


 

