
Communications of the ACM

ACM TechNews

Tiny Four-Bit Computers Are Now All You Need to Train AI


[Image: diagram of a neural network. IBM researchers propose reducing the number of bits needed to represent data from the current industry standard of 16 to just four. Credit: MIT Technology Review]

IBM researchers have proposed reducing the number of bits used to represent data when training AI models from the current industry standard of 16 to just four.

They said this could increase the speed of training deep learning models, and reduce its energy costs, by more than sevenfold.

It also would allow smartphones and other small devices to run artificial intelligence models.

In the proposed four-bit scheme, the neural network's activations and weights would be rescaled for every round of training to minimize the loss of precision.
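The article does not spell out IBM's exact rescaling procedure, but the idea resembles standard per-tensor quantization: before each round, a scale factor is recomputed from the tensor's current range so that its values fit into the 16 levels a 4-bit integer can hold. The sketch below is a minimal Python illustration of that idea; the function names (quantize_int4, dequantize) and the symmetric integer format are assumptions for illustration, not IBM's published algorithm.

```python
import numpy as np

def quantize_int4(tensor):
    """Quantize a tensor to signed 4-bit integers (-8..7) using a
    per-tensor scale recomputed from the tensor's current values.
    (Hypothetical sketch; not IBM's actual scheme.)"""
    scale = np.max(np.abs(tensor)) / 7.0
    if scale == 0.0:
        return np.zeros_like(tensor), 1.0
    codes = np.clip(np.round(tensor / scale), -8, 7)
    return codes, scale

def dequantize(codes, scale):
    """Map 4-bit integer codes back to approximate real values."""
    return codes * scale

# One "round" of rescaling: because the weights change during training,
# the scale is recomputed each time to limit the precision lost.
weights = np.random.randn(4, 4).astype(np.float32)
codes, scale = quantize_int4(weights)
approx = dequantize(codes, scale)
print("max abs error:", np.abs(weights - approx).max())
```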

To address the challenge of representing the intermediate values that arise during training, the researchers scaled these numbers logarithmically.
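Logarithmic scaling devotes the few available bits to orders of magnitude rather than evenly spaced steps, which suits intermediate values (such as gradients) whose sizes span a wide range. The sketch below shows one generic way to do this, keeping a sign and a small window of exponents; the radix, window placement, and function name are illustrative assumptions rather than the format used in the research.

```python
import numpy as np

def log_quantize(x, n_bits=4, radix=2.0):
    """Round magnitudes to the nearest power of `radix`, keeping the sign
    and a window of 2**(n_bits-1) exponents anchored at the largest value.
    (Illustrative only; the actual 4-bit format may differ.)"""
    sign = np.sign(x)
    mag = np.where(x == 0, np.finfo(np.float32).tiny, np.abs(x))
    exp = np.round(np.log(mag) / np.log(radix))
    max_exp = np.max(exp)
    exp = np.clip(exp, max_exp - (2 ** (n_bits - 1) - 1), max_exp)
    return sign * radix ** exp

# Intermediate values often span several orders of magnitude; a log scale
# preserves their relative sizes better than a uniform 4-bit grid would.
grads = np.float32([3e-4, -2e-3, 5e-2, -0.4, 1.7])
print(log_quantize(grads))
```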

They ran several simulations of four-bit training for deep learning models in computer vision, speech, and natural language processing and saw a limited loss of accuracy in overall performance compared with 16-bit deep learning.

Stanford University's Boris Murmann said, "This advancement opens the door for training in resource-constrained environments."


From: MIT Technology Review
View Full Article - May Require Paid Subscription

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA


 
