
Communications of the ACM

ACM TechNews

To Speed Up AI, Mix Memory and Processing


A new chip design called deep in-memory architecture.

Computer engineer Naresh Shanbhag of the University of Illinois at Urbana-Champaign believes it is time for chips to switch from the von Neumann architecture to a design better suited to today's data-intensive tasks.

Credit: Sujan Gonugondla

The University of Illinois' Naresh Shanbhag is pushing for a new computer architecture that blends computing and memory so devices can be smarter without consuming more energy.

One group pursuing this architecture, led by Stanford University's Subhasish Mitra, is layering carbon-nanotube integrated circuits atop resistive random-access memory (RRAM). The researchers demonstrated a system that could efficiently classify the language of a sentence.

Meanwhile, Shanbhag's group and others are sticking with existing materials, using the analog control circuits that surround arrays of memory cells in new ways. Instead of sending data to the processor, they program these circuits to run simple artificial intelligence algorithms in a "deep in-memory architecture."
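The idea can be pictured with a toy model: rather than streaming every stored value out to the processor, circuitry at the edge of a memory array accumulates an approximate result, and only that result leaves the array. The sketch below is purely illustrative (the dot product, noise model, and array size are assumptions, not details from the article); it contrasts the conventional read-then-compute flow with an analog-style in-memory accumulation.

```python
import numpy as np

# Illustrative only: a toy "deep in-memory" style dot product (the specifics are
# assumptions, not Shanbhag's actual circuit). In the conventional flow every
# operand is read out to the processor; in the in-memory flow the analog
# periphery of a memory subarray accumulates the result, and only that single
# value crosses the memory boundary.

def von_neumann_dot(weights, pixels):
    """Conventional flow: fetch each operand, then multiply-accumulate in the CPU."""
    total = 0.0
    for w, x in zip(weights, pixels):   # each pair models a separate memory read
        total += w * x
    return total

def in_memory_dot(weights, pixels, noise_std=0.01):
    """Analog-style flow: summation happens next to the memory cells and is
    approximate, so the analog non-ideality is modeled as small Gaussian noise."""
    analog_sum = float(np.dot(weights, pixels))      # accumulation at the array periphery
    analog_sum += np.random.normal(0.0, noise_std)   # analog noise
    return analog_sum                                # one value leaves the array

rng = np.random.default_rng(0)
w = rng.normal(size=256)   # e.g., a face-detection template
x = rng.normal(size=256)   # e.g., pixel values held in static RAM
print(von_neumann_dot(w, x), in_memory_dot(w, x))
```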

Shanbhag believes that processing at the edges of memory subarrays is sufficiently "deep" to boost energy efficiency and speed without sacrificing storage capacity. His group achieved a 10-fold improvement in energy efficiency and a fivefold improvement in speed when using analog circuits to detect faces in images stored in static RAM.
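Read together, and assuming the two gains apply to the same workload (an assumption, not a claim from the article), those figures imply roughly a 50-fold improvement in energy-delay product, a common figure of merit for such designs:

```python
# Back-of-the-envelope reading of the reported gains. Illustrative assumption:
# the 10x energy and 5x speed improvements apply to the same face-detection task.
baseline_energy, baseline_time = 1.0, 1.0   # normalized von Neumann baseline
dima_energy = baseline_energy / 10          # 10-fold energy-efficiency gain
dima_time = baseline_time / 5               # fivefold speed gain

edp_baseline = baseline_energy * baseline_time
edp_dima = dima_energy * dima_time
print(f"Energy-delay product improvement: {edp_baseline / edp_dima:.0f}x")  # ~50x
```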

From IEEE Spectrum
View Full Article

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
