
Communications of the ACM

ACM TechNews

Cutting 'Edge': A Tunable Neural Network Framework Towards Compact, Efficient Models


The prototype chip fabricated in 40 nm technology.

Researchers from Tokyo Institute of Technology have addressed the large resource requirements that state-of-the-art convolutional neural networks impose on the low-power edge devices of Internet of Things networks.

Credit: Tokyo Institute of Technology

A sparse convolutional neural network (CNN) framework and training algorithms developed by researchers at Japan's Tokyo Institute of Technology (Tokyo Tech) enable CNN models to be deployed seamlessly on low-power edge devices.
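
A sparse CNN of this kind is typically obtained by pruning low-importance weights so the model fits the tight memory and power budgets of edge hardware. The sketch below shows minimal magnitude-based pruning with PyTorch's pruning utilities; the layer sizes and the 50% sparsity target are illustrative assumptions, not details of the Tokyo Tech training algorithm.

```python
# Minimal sketch: magnitude pruning of a small CNN with PyTorch's pruning utilities.
# The model and the 50% sparsity target are illustrative assumptions only.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)

# Zero out the smallest-magnitude weights in each convolution layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.1%}")
```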

The 40-nanometer sparse CNN chip yields high accuracy and efficiency through a Cartesian-product multiply and accumulate (MAC) array and pipelined activation aligners that spatially shift activations onto a regular Cartesian MAC array.

Tokyo Tech's Kota Ando said, "Regular and dense computations on a parallel computational array are more efficient than irregular or sparse ones. With our novel architecture employing MAC array and activation aligners, we were able to achieve dense computing of sparse convolution."
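
A rough NumPy sketch of that idea follows: keep only the nonzero weights, gather ("align") the matching activations, and run a fully dense multiply-accumulate over the compressed operands. This is a conceptual illustration assuming simple unstructured sparsity, not the chip's actual Cartesian-product MAC array or activation-aligner pipeline.

```python
# Conceptual sketch: dense multiply-accumulate over aligned sparse operands.
# Not the chip's datapath; an illustration of skipping pruned (zero) weights.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=64)
weights[rng.random(64) < 0.75] = 0.0       # 75% of weights pruned to zero
activations = rng.normal(size=64)

# "Alignment": record where the surviving weights live, then gather the
# matching activations so the compute array sees only dense data.
nz = np.flatnonzero(weights)
dense_w = weights[nz]
dense_a = activations[nz]

sparse_result = dense_w @ dense_a          # dense MAC over aligned operands
reference = weights @ activations          # naive computation including zeros
assert np.isclose(sparse_result, reference)
print(f"MACs needed: {nz.size} instead of {weights.size}")
```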

From Tokyo Institute of Technology News (Japan)

Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA


 
