
Communications of the ACM

ACM TechNews

Future Challenges of Large-Scale Computing


The Wellcome Trust Sanger Institute computing cluster.

Credit: Genome Research Ltd.

NVIDIA chief scientist Bill Dally says in an interview that similar processor requirements in high-performance computing, Web servers, and big data will lead to a convergence on heterogeneous multicore processors in which each socket will feature a small number of cores optimized for latency and many more cores optimized for throughput.
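
The latency/throughput split Dally describes already shows up in today's CPU-plus-GPU nodes. As a rough, hypothetical illustration (not Dally's design; the kernel and names are invented for the example), the CUDA sketch below keeps serial, latency-sensitive control work on a conventional host core and offloads the bulk, throughput-oriented work to the GPU's many simpler cores:

```cuda
// Illustrative only: the host CPU core plays the "latency-optimized" role
// (setup, control flow, checking results), while the GPU's many cores supply
// throughput for the data-parallel part.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one element per thread
    if (i < n) x[i] *= a;                           // bulk, throughput-bound work
}

int main() {
    const int n = 1 << 20;
    float *x;
    cudaMallocManaged(&x, n * sizeof(float));       // unified memory keeps the sketch short
    for (int i = 0; i < n; ++i) x[i] = 1.0f;        // latency core: serial setup

    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);    // offload to the throughput cores
    cudaDeviceSynchronize();                        // wait for the bulk work to finish

    printf("x[0] = %f\n", x[0]);                    // latency core: inspect the result
    cudaFree(x);
    return 0;
}
```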

Dally predicts that three-dimensional chip-stacking technology will be essential for extending the capacity of high-bandwidth on-package memory.

With budget austerity likely to cut U.S. government investment in exascale computing, Dally expects industry to keep moving the field forward on its own, although at a much slower pace. He is also hopeful that the challenge of sustaining an exaflop on a real application within a 20 MW power budget will be met, thanks to numerous emerging circuit, architecture, and software technologies that could improve the energy efficiency of one or more parts of the system.
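
For context, the 20 MW figure fixes the energy budget per operation; using only the numbers quoted above:

```latex
\frac{10^{18}\ \text{flop/s}}{2\times 10^{7}\ \text{W}}
  = 5\times 10^{10}\ \text{flop/s per watt} \approx 50\ \text{GFLOPS/W},
\qquad\text{equivalently}\qquad
\frac{2\times 10^{7}\ \text{J/s}}{10^{18}\ \text{flop/s}} = 20\ \text{pJ per flop}.
```

In other words, each floating-point operation, including its share of data movement, must cost on the order of 20 picojoules on average.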

Dally sees energy efficiency and programmability as the two biggest challenges on the path to exascale. He notes that research projects are underway to devise more productive programming systems, along with tools for automated mapping and tuning.
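
As a hypothetical example of the simplest form such automated tuning can take (the kernel and the search space below are invented for illustration), the sketch times one kernel under several launch configurations and keeps the fastest; production autotuners explore far richer spaces of tilings, data layouts, and placements in the same spirit:

```cuda
// Minimal empirical autotuning sketch: try several block sizes for a kernel,
// time each with CUDA events, and keep the fastest configuration.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(float *y, const float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];
}

int main() {
    const int n = 1 << 24;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));              // contents left uninitialized:
    cudaMalloc(&y, n * sizeof(float));              // only timing matters here

    saxpy<<<(n + 255) / 256, 256>>>(y, x, 2.0f, n); // warm-up launch
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    int candidates[] = {64, 128, 256, 512, 1024};   // the "search space"
    int best_block = 0;
    float best_ms = 1e30f;

    for (int b : candidates) {
        cudaEventRecord(start);
        saxpy<<<(n + b - 1) / b, b>>>(y, x, 2.0f, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        if (ms < best_ms) { best_ms = ms; best_block = b; }
    }
    printf("fastest block size: %d (%.3f ms)\n", best_block, best_ms);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```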

From HPC Wire
View Full Article

 

Abstracts Copyright © 2013 Information Inc., Bethesda, Maryland, USA


 
