Massachusetts Institute of Technology researchers at this week's International Solid-State Circuits Conference introduced a new chip that implements neural networks.
The researchers say the chip is 10 times more efficient than a mobile graphics processing unit (GPU), so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, instead of uploading data to the Internet for processing.
The Eyeriss chip has 168 cores, and its efficiency is rooted in its ability to minimize the frequency with which cores must exchange data with distant memory banks. Many of the cores in a GPU share a single, large memory bank, but each of the Eyeriss cores has its own memory.
The chip also features a circuit that compresses data prior to sending it to individual cores. Each core also can communicate directly with its immediate neighbors, so if data sharing is necessary, they do not have to route it via main memory.
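The article does not specify which compression scheme the chip uses, but run-length encoding of zeros is a common choice for sparse neural-network data, so a hypothetical sketch of on-chip compression before data reaches a core might look like this (all names and the sample data are illustrative):

```python
# Illustrative sketch of compressing data before sending it to a core.
# Run-length encoding of zero runs suits neural-network activations,
# which are often sparse; this is an assumed scheme, not Eyeriss's codec.

def rle_compress(values):
    """Encode a list as (zero_run_length, nonzero_value) pairs."""
    out, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append((run, 0))  # trailing zeros, marked with sentinel value 0
    return out

def rle_decompress(pairs):
    """Invert rle_compress, restoring the original list."""
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        if v != 0:
            out.append(v)
    return out

activations = [0, 0, 3, 0, 0, 0, 7, 0]
packed = rle_compress(activations)
assert rle_decompress(packed) == activations
print(packed)  # [(2, 3), (3, 7), (1, 0)]
```

The payoff is that the compressed form shrinks with sparsity, so fewer bits cross the chip's interconnect for the same data.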
In addition, the Eyeriss has special-purpose circuitry that allocates tasks across cores and can be reconfigured for different types of neural networks. This circuitry automatically distributes both the data manipulated by the nodes the chip is simulating and the data describing the nodes themselves across cores, in order to maximize the amount of work each core can perform before retrieving more data from main memory.
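The memory-traffic idea behind the design (per-core local storage plus direct neighbor transfers, so main memory is touched as rarely as possible) can be sketched in a toy model. Every name and number here is illustrative, not an Eyeriss internal:

```python
# Toy model of the access-minimizing idea: each core caches the data it
# reuses in its own local memory, and cores can hand data directly to a
# neighbor, so repeated accesses avoid the shared main memory.

class Core:
    def __init__(self):
        self.local = {}  # per-core local memory (scratchpad)

class ChipModel:
    def __init__(self, num_cores):
        self.cores = [Core() for _ in range(num_cores)]
        self.main_memory_reads = 0  # count of costly off-core accesses

    def load(self, core_id, key, main_memory):
        """Return a value, fetching from main memory only on a local miss."""
        core = self.cores[core_id]
        if key not in core.local:
            self.main_memory_reads += 1
            core.local[key] = main_memory[key]
        return core.local[key]

    def share_with_neighbor(self, src_id, dst_id, key):
        """Pass data core-to-core, bypassing main memory entirely."""
        self.cores[dst_id].local[key] = self.cores[src_id].local[key]

# One core reusing a weight 100 times costs a single main-memory read,
# and a neighbor that receives it directly costs none at all.
chip = ChipModel(num_cores=2)
weights = {"w0": 0.5}
for _ in range(100):
    chip.load(0, "w0", weights)
chip.share_with_neighbor(0, 1, "w0")
chip.load(1, "w0", weights)
print(chip.main_memory_reads)  # 1
```

In a GPU-style model where every access goes to the shared memory bank, the same workload would cost 101 reads; the locality is where the claimed efficiency comes from.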
From MIT News
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA