The Oak Ridge Leadership Computing Facility published a report in which researchers documented that graphics processing unit (GPU)-equipped supercomputers ran a range of science applications 1.4 to 6.1 times faster than comparable CPU-only systems, indicating that the technology is delivering solid results across a broad set of workloads.
The 11 simulation programs, namely S3D, Denovo, LAMMPS, WL-LSMS, CAM-SE, NAMD, Chroma, QMCPACK, SPECFEM-3D, GTC, and CP2K, are used by tens of thousands of researchers around the world. The report was written by researchers from Oak Ridge National Laboratory, the National Center for Supercomputing Applications, and the Swiss National Supercomputing Center (CSCS). The researchers ran the programs on CSCS's Monte Rosa, which has two AMD Interlagos central processing units (CPUs) per node, and on TitanDev, which consists of hybrid nodes that each contain one Nvidia Fermi GPU and one Interlagos CPU. Of the 11 programs, the researchers found that only Chroma fully exploited the performance advantage of GPU-based processing.
Another factor to consider when comparing application performance is power usage, since GPU accelerators draw roughly twice as much power as high-end x86-based systems.
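As a back-of-the-envelope illustration of that trade-off (a sketch using the article's rough 2x power figure, not numbers from the report itself): because energy equals power times time, a GPU run that draws twice the power must finish more than twice as fast to use less total energy.

```python
def energy_ratio(speedup: float, power_ratio: float = 2.0) -> float:
    """Energy used by the GPU run relative to a CPU-only run.

    Energy = power * time. The GPU run takes 1/speedup of the time
    at power_ratio times the power, so the ratio is power_ratio / speedup.
    A value below 1.0 means the GPU run uses less total energy.
    """
    return power_ratio / speedup

# At the low end of the reported speedup range (1.4x), the GPU run
# actually consumes more energy than the CPU-only run; at the high
# end (6.1x), it consumes only about a third as much.
low = energy_ratio(1.4)    # > 1.0: more energy than CPU-only
high = energy_ratio(6.1)   # < 1.0: roughly a third of the energy
```

Under this simple model, the break-even point is a 2x speedup, which is why applications at the lower end of the reported range may not save energy despite running faster.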
From HPC Wire
Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA