The yardstick used to identify the world’s best-performing supercomputers on the Top 500 list–the Linpack Benchmark–no longer reflects "real-world usage" and ought to be replaced with a new metric.
So says Allan Snavely, associate director of the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. He proposes, instead, a "data motion capacity metric" that measures a supercomputer's overall ability to help researchers solve real-world problems. Instead of comparing supercomputers by their fastest calculation speed–typically measured in the number of floating point operations per second (FLOP/S)–he suggests a measurement that weighs DRAM, flash memory, and disk capacity according to access time as measured in computer cycles.
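Snavely's white paper spelling out the metric has not yet been published, so the exact formula is unknown; the Python snippet below is only a rough sketch of the general idea of weighting each tier of the memory and storage hierarchy by how quickly it can be reached. The tiers, capacities, access latencies, and the inverse-of-access-time weighting are all illustrative assumptions, not figures from his proposal.

```python
# Hypothetical illustration of a capacity-weighted-by-access-time score.
# All numbers below are placeholders for an imaginary machine, not values
# from Snavely's proposal.

# (capacity in bytes, typical access time in CPU cycles) per storage tier
tiers = {
    "DRAM":  (64 * 2**30,      200),         # ~64 GiB, ~200 cycles per access
    "flash": (1 * 2**40,       50_000),      # ~1 TiB,  ~50K cycles per access
    "disk":  (10 * 2**40,      20_000_000),  # ~10 TiB, ~20M cycles per access
}

# Weight each tier's capacity by the inverse of its access time, so fast,
# nearby memory counts for more than slow, distant storage.
data_motion_score = sum(capacity / cycles for capacity, cycles in tiers.values())

print(f"illustrative data-motion score: {data_motion_score:,.0f} bytes per cycle")
```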
"Processors have become increasingly faster, but the performance of real applications haven't kept pace due to the slower data-fetching and data-movement aspects of the machines," he explains. "In fact, disks are in a sense getting slower all the time as their capacity goes up but access times stay the same."
Snavely contends it’s not really helpful to have rankings that "only reflect the features that have been improved to the point where they are no longer the bottleneck. That metric is saying a given supercomputer is the best around, but it’s not really."
Horst D. Simon, deputy laboratory director at the Lawrence Berkeley National Laboratory, says he wholeheartedly agrees with Snavely’s efforts and that "as we move forward, it is clear that FLOP/S is not the right way to measure the performance of high-end machines."
Simon looks forward to reading a white paper on Snavely’s proposal when it is written, "as, I’m sure, will the rest of the supercomputing community. Then," he adds, "the next step will be for Allan to get his proposal accepted, which will take time. First, he will need to get buy-in from the community, then he will need to have his benchmark implemented on at least a dozen very different architectures, report on what has been found, and then involve the community in discussions. It's easy to propose a benchmark, but actually getting it accepted takes a lot of work over multiple years."
Meanwhile, the SDSC is about to roll out its new Gordon supercomputer designed more for data-intensive supercomputing than speed. Snavely says its capabilities for accessing data on disk are unprecedented but admits that it will "be a ways down on the Top 500 list–maybe around #40–but that's only because of the skewed metric that list uses."
To generate interest in his proposal, Snavely will speak at the SC11 Supercomputing Conference in Seattle on Nov. 15, at approximately the same time as the Gordon rollout.
"Then, early next year, I hope to publish a white paper along with a benchmark and a simple metric, a test that people can run on their supercomputer to measure its data-moving capacity," he says. "I’m hoping people will start sending us their data which we can compile on a new Web site.
"What I would argue," he says, "is that, if it weren’t for the Linpack Benchmark–which reflects very little of the real world and is a very extreme and tailored benchmark for measuring only the FLOP/S rate–we would probably have a much more interesting and useful list of supercomputers by capability."
Paul Hyman was editor-in-chief of several hi-tech publications at CMP Media, including Electronic Buyers’ News.