
Communications of the ACM

ACM Careers

PPPL Physicists to Lead a DOE Exascale Computing Project


[Image: supercomputer, illustration. Credit: iStockPhoto.com]

Scientists at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) will take part in a national initiative to develop the next generation of supercomputers. Known as the Exascale Computing Project (ECP), the initiative will include a focus on exascale-related software, applications, and workforce training.

Once developed, exascale computers will perform a billion billion operations per second, a rate 50 to 100 times faster than the most powerful U.S. computers now in use. The fastest computers today operate at the petascale and can perform a million billion operations per second. Exascale machines in the United States are expected to be ready in 2023.
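The prefixes here translate into powers of ten: "peta" is 10^15 and "exa" is 10^18, so an ideal exascale machine is 1,000 times a 1-petaflop machine; the article's "50 to 100 times faster" figure compares against today's fastest U.S. systems, which already run at tens of petaflops. A quick back-of-the-envelope check (the implied 10–20 petaflop baseline is an inference, not a figure from the article):

```python
# Scale comparison: "exa" = 10**18, "peta" = 10**15 operations per second.
EXASCALE = 10**18   # a billion billion (10**9 * 10**9) operations per second
PETASCALE = 10**15  # a million billion (10**6 * 10**9) operations per second

# An exascale machine is 1,000x a 1-petaflop machine.
assert EXASCALE // PETASCALE == 1000

# "50 to 100 times faster" implies today's fastest US systems run at
# roughly 10 to 20 petaflops (an inference from the stated ratios).
baseline_at_50x = EXASCALE // (50 * PETASCALE)    # 20 petaflops
baseline_at_100x = EXASCALE // (100 * PETASCALE)  # 10 petaflops
```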

The PPPL-led multi-institutional project, titled "High-Fidelity Whole Device Modeling of Magnetically Confined Fusion Plasmas," was selected during the ECP's first round of application development funding, which distributed $39.8 million. The overall project will receive $2.5 million a year for four years to be distributed among the partner institutions, including Argonne, Lawrence Livermore, and Oak Ridge national laboratories, together with Rutgers University, the University of California, Los Angeles, and the University of Colorado, Boulder. PPPL itself will receive $800,000 per year; the project it leads was one of 15 selected for full funding, and the only one dedicated to fusion energy. Seven additional projects received seed funding.

The application efforts will help guide DOE's development of a U.S. exascale ecosystem as part of President Obama's National Strategic Computing Initiative (NSCI). DOE, the Department of Defense, and the National Science Foundation have been designated as NSCI lead agencies, and ECP is the primary DOE contribution to the initiative.

The ECP's multi-year mission is to maximize the benefits of high performance computing for U.S. economic competitiveness, national security, and scientific discovery. In addition to applications, the DOE project addresses hardware, software, platforms, and workforce development needs critical to the effective development and deployment of future exascale systems. The ECP is supported jointly by DOE's Office of Science and the National Nuclear Security Administration within DOE.

PPPL has been involved with high-performance computing for years. PPPL scientists created the XGC code, which models the behavior of plasma in the boundary region where the plasma's ions and electrons interact with each other and with neutral particles produced by the tokamak, a doughnut-shaped fusion facility. The high-performance code is maintained and updated by PPPL scientist C.S. Chang and his team.

XGC runs on Titan, the fastest computer in the United States, at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility at Oak Ridge National Laboratory. The calculations needed to model the behavior of the plasma edge are so complex that the code uses 90 percent of the computer's processing capabilities. Titan performs at the petascale, completing a million billion calculations each second, and the DOE was primarily interested in proposals by institutions that possess petascale-ready codes that can be upgraded for exascale computers.

The PPPL proposal lays out a four-year plan to combine XGC with GENE, a computer code that simulates the behavior of the plasma core. GENE is maintained by Frank Jenko, a professor at the University of California, Los Angeles. Combining the codes would give physicists a far better sense of how the core plasma interacts with the edge plasma at a fundamental kinetic level, giving a comprehensive view of the entire plasma volume.

Leading the overall PPPL proposal is Amitava Bhattacharjee, head of the Theory Department at PPPL. Co-principal investigators are PPPL's Chang and Andrew Siegel, a computational scientist at the University of Chicago.

The multi-institutional effort will develop a full-scale computer simulation of fusion plasma. Unlike current simulations, which model only part of the hot, charged gas, the proposed simulations will display the physics of an entire plasma all at once. The completed model will integrate the XGC and GENE codes and will be designed to run on exascale computers.
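The coupling described above might be pictured, very loosely, as alternating core and edge solves that exchange data at their shared interface each step. The sketch below is purely illustrative: the function names, the scalar "states," and the placeholder update rules are all hypothetical and do not reflect the actual GENE or XGC interfaces or physics.

```python
# Illustrative-only sketch of coupling a core solver (GENE-like) with an
# edge solver (XGC-like) through a shared boundary. All names and data
# structures here are hypothetical stand-ins for the real codes.

def advance_core(core_state, edge_boundary, dt):
    """Advance the core plasma one step, using edge data at the interface."""
    return core_state + dt * edge_boundary  # placeholder physics

def advance_edge(edge_state, core_boundary, dt):
    """Advance the edge plasma one step, using core data at the interface."""
    return edge_state + dt * core_boundary  # placeholder physics

def whole_device_step(core_state, edge_state, dt):
    """One coupled step: exchange interface values, advance both regions."""
    core_state = advance_core(core_state, edge_boundary=edge_state, dt=dt)
    edge_state = advance_edge(edge_state, core_boundary=core_state, dt=dt)
    return core_state, edge_state

core, edge = 1.0, 0.5
for _ in range(10):
    core, edge = whole_device_step(core, edge, dt=0.01)
```

The point of the sketch is the structure, not the physics: each region's solver sees the other only through interface data, which is what a coupling framework on an exascale machine would have to orchestrate at scale.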

The modeling will enable physicists to understand plasmas more fully, allowing them to predict plasma behavior within tokamaks. The exascale computing fusion proposal focuses primarily on ITER, the international tokamak being built in France to demonstrate the feasibility of fusion power. But the proposal will be developed with other applications in mind, including stellarators, another variety of fusion facility. Better predictions can lead to better-engineered facilities and more efficient fusion reactors. Currently, support for this work comes from the DOE's Advanced Scientific Computing Research program.

"This will be a team effort involving multiple institutions," says Bhattacharjee. He notes that PPPL will be involved in every aspect of the project, including working with applied mathematicians and computer scientists on the team to develop the simulation framework that will couple GENE with XGC on exascale computers.

"You need a very-large-scale computer to calculate the multiscale interactions in fusion plasmas," says Chang. "Whole-device modeling is about simulating the whole thing: all the systems together."

Because plasma behavior is immensely complicated, developing an exascale computer is crucial for future research. "Taking into account all the physics in a fusion plasma requires enormous computational resources," says Bhattacharjee. "With the computer codes we have now, we are already pushing on the edge of the petascale. The exascale is very much needed in order for us to have greater realism and truly predictive capability."


 
