Massachusetts Institute of Technology (MIT) researchers have proposed a technique for compressing deep learning models: prune a model's weakest connections, then retrain the smaller model at its faster, initial rate of learning.
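The pruning step typically means removing the smallest-magnitude weights. Below is a minimal sketch of that step, assuming PyTorch (the summary does not name a framework) and a toy network standing in for the large classifiers actually studied; the 80% pruning fraction is an illustrative choice, not a figure from the research:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for the large image classifiers used in the research.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Globally remove the 80% of weights with the smallest magnitudes --
# the model's "weakest connections" -- across all linear layers.
parameters_to_prune = [
    (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.8,  # fraction of weights pruned; illustrative only
)
```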
The technique's groundwork was partly laid by the AutoML for Model Compression (AMC) algorithm from MIT's Song Han, which automatically removes redundant neurons and connections and retrains the model to restore its initial accuracy.
MIT's Jonathan Frankle and Michael Carbin determined that the pruned model's learning rate could simply be rewound to its early, faster value, without tinkering with any other parameters.
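In code, the difference from standard fine-tuning is only the learning-rate schedule: instead of continuing at the small final learning rate, retraining replays the schedule from an early epoch onward while keeping the surviving weights untouched. A sketch under stated assumptions: PyTorch, a conventional step-decay schedule, and hypothetical helpers `pruned_model` and `train_one_epoch` standing in for a real training loop:

```python
import torch

def lr_at_epoch(epoch, total_epochs=90, base_lr=0.1):
    # Assumed step schedule: decay 10x at 1/3 and 2/3 of training.
    if epoch < total_epochs // 3:
        return base_lr
    if epoch < 2 * total_epochs // 3:
        return base_lr / 10
    return base_lr / 100

def retrain_with_lr_rewinding(pruned_model, train_one_epoch,
                              rewind_epoch=5, total_epochs=90):
    """Retrain a pruned model, replaying the learning-rate schedule
    from `rewind_epoch` onward rather than fine-tuning at the final,
    small rate. The surviving weights are not reset."""
    optimizer = torch.optim.SGD(pruned_model.parameters(),
                                lr=lr_at_epoch(rewind_epoch),
                                momentum=0.9)
    for epoch in range(rewind_epoch, total_epochs):
        for group in optimizer.param_groups:
            group["lr"] = lr_at_epoch(epoch)  # the rewound schedule
        train_one_epoch(pruned_model, optimizer)
```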
Although greater shrinkage comes at the cost of some model accuracy, Frankle and Carbin found that their method outperformed both AMC and Frankle's earlier work on weight-rewinding techniques at every level of compression they compared.
Frankle said the pruning algorithm "is clear, generic, and drop-dead simple."
From MIT News
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA