

ACM TechNews

Scale-Out Processors: Bridging the Efficiency Gap Between Servers and Emerging Cloud Workloads


Server room. Credit: iStockPhoto.com

École Polytechnique Fédérale de Lausanne (EPFL) professor Babak Falsafi recently presented "Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware," which received the best paper award at ASPLOS 2012.

"While we have been studying and tuning conventional server workloads (such as transaction processing and decision support) on hardware for over a decade, we really wanted to see how emerging scale-out workloads in modern data centers behave,” Falsafi says. "To our surprise, we found that much of a modern server processor's hardware resources, including the cores, caches, and off-chip connectivity, are overprovisioned when running scale-out workloads leading to huge inefficiencies."

Efficiently executing scale-out workloads requires optimizing the instruction-fetch path for up to a few megabytes of program instructions, reducing the core complexity while increasing core counts, and shrinking the capacity of on-die caches to reduce area and power overheads, says EPFL Ph.D. student Mike Ferdman.

The research was partially funded by the EuroCloud Server project.

"Our goal is a 10-fold increase in overall server power efficiency through mobile processors and [three-dimensional] memory stacking," says EuroCloud Server project coordinator Emre Ozer.


From HiPEAC 

Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA


 
