The future of supercomputing holds several significant software challenges, writes the Numerical Algorithms Group's Andrew Jones. The first is the rapidly increasing degree of concurrency required. A complex hierarchy of parallelism, ranging from vector-like parallelism at the local level, through multithreading, to multilevel, massively parallel processing across numerous nodes, presents its own unique challenges, Jones says.
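As an illustration of that hierarchy, the sketch below combines all three levels in C: MPI ranks across nodes, OpenMP threads within a node, and a simple inner loop the compiler can vectorize. It is a minimal example under stated assumptions, not code from the article; the problem size N and the sum-of-squares kernel are arbitrary choices for illustration.

/* Minimal sketch of the three-level parallelism hierarchy:
 * MPI across nodes, OpenMP threads within a node, SIMD in the loop. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1048576  /* global problem size: an assumption for illustration */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Distributed-memory level: each MPI rank owns one slice of the
     * global array (any remainder is ignored for brevity). */
    int chunk = N / nranks;
    double *x = malloc(chunk * sizeof *x);
    double local_sum = 0.0;

    /* Shared-memory level: OpenMP threads split the slice, and the
     * simple loop body exposes vector-like parallelism at the
     * innermost level via the simd clause. */
    #pragma omp parallel for simd reduction(+:local_sum)
    for (int i = 0; i < chunk; i++) {
        x[i] = 0.5 * (rank * chunk + i);
        local_sum += x[i] * x[i];
    }

    /* Top of the hierarchy: combine partial results across all nodes. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of squares = %f\n", global_sum);

    free(x);
    MPI_Finalize();
    return 0;
}

Each level must be programmed with a different model (message passing, threading, vectorization), which is exactly the layered complexity Jones points to.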
Additionally, supercomputing will have to handle a new wave of verification, validation, and resilience issues. Although petaflop and exascale computing hold much promise, Jones says experts question whether some current applications will still be usable. They argue that some legacy applications are coded in ways that make evolution impractical, and that refactoring the code and developing new algorithms would be harder than starting from scratch.
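One common response to the resilience problem at this scale is application-level checkpoint/restart, sketched below in C. The sketch is illustrative only and not drawn from the article: the file name checkpoint.bin, the state layout, and the step counts are all assumptions.

/* Sketch of application-level checkpoint/restart: periodically save
 * state so a job can resume after a node failure instead of
 * recomputing from step zero. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NSTEPS 1000
#define CHECKPOINT_EVERY 100

/* Try to resume from a prior checkpoint; return the step to start at. */
static int restore(double *state, size_t n) {
    FILE *f = fopen("checkpoint.bin", "rb");
    int step = 0;
    if (f) {
        if (fread(&step, sizeof step, 1, f) != 1 ||
            fread(state, sizeof *state, n, f) != n) {
            memset(state, 0, n * sizeof *state);  /* corrupt: start over */
            step = 0;
        }
        fclose(f);
    }
    return step;
}

static void checkpoint(const double *state, size_t n, int step) {
    FILE *f = fopen("checkpoint.bin.tmp", "wb");
    if (!f) return;
    fwrite(&step, sizeof step, 1, f);
    fwrite(state, sizeof *state, n, f);
    fclose(f);
    /* Atomic rename: a crash mid-write never clobbers the last good copy. */
    rename("checkpoint.bin.tmp", "checkpoint.bin");
}

int main(void) {
    size_t n = 1 << 20;
    double *state = calloc(n, sizeof *state);
    int start = restore(state, n);

    for (int step = start; step < NSTEPS; step++) {
        for (size_t i = 0; i < n; i++)      /* stand-in for real physics */
            state[i] += 1.0;
        if ((step + 1) % CHECKPOINT_EVERY == 0)
            checkpoint(state, n, step + 1);
    }
    free(state);
    return 0;
}

At exascale node counts, failures become routine rather than exceptional, which is why resilience machinery like this moves from an afterthought to a first-class design concern.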
However, Jones notes that disposing of old code also throws away extremely valuable scientific knowledge. Ultimately, he says, two classes of applications may emerge: programs that will never be able to exploit future high-end supercomputers but remain in use while their successors develop comparable scientific maturity, and programs that can operate in the petascale and exascale arena. Developing and sustaining both will require a well-balanced approach among researchers, developers, and funding agencies, who must continue to invest in scaling, optimization, algorithm evolution, and scientific advancement in existing code while diverting sufficient resources to the development of new code.
From ZDNet UK
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA