Scientists and engineers striving to create the next machine-age marvel — whether it be a more aerodynamic rocket, a faster race car, or a higher-efficiency jet engine — depend on reliable analysis and feedback to improve their designs.
Building and testing physical prototypes of complex machines can be time-consuming and costly and can provide only limited results. For these reasons, companies in industries as diverse as aerospace, car manufacturing, and wind power have been turning to supercomputers to investigate complex design problems related to fluid flow, or how air and fluids interact with a machine.
Using computational fluid dynamics (CFD) applications, codes created to track a fluid's chaotic flow patterns through or around a solid object, researchers can augment physical testing, gain a more comprehensive view of product performance, and even glean insights that might lead to further design improvements.
But as supercomputers have increased in size and scale, many industry-standard CFD applications have struggled to keep pace, limiting their accuracy and ability to fully supplant physical testing. Furthermore, many CFD codes have not yet been adapted for accelerated computing architectures, such as that of the Oak Ridge Leadership Computing Facility's Titan supercomputer, a Cray XK7 with a peak performance of 27 petaflops.
In an effort to modernize CFD, a group of Imperial College researchers led by Peter Vincent, a senior lecturer in the Department of Aeronautics, has developed new open-source software called PyFR, a Python-based application that combines highly accurate numerical methods with a flexible, portable, and scalable implementation that makes efficient use of accelerators like Titan's GPUs. Industry adoption of the code could allow companies to better exploit petascale computing to understand long-standing fluid flow problems, in particular unsteady turbulence, the seemingly random and chaotic motion of air, water, and other fluids.
In recognition of its work, Vincent's team, which includes postdoctoral researchers Brian Vermeire, Jin Seok Park, and Arvind Iyer of Imperial College, and postdoctoral researcher Freddie Witherden of Stanford University, has been named a 2016 finalist for the Association for Computing Machinery's Gordon Bell Prize, one of the most prestigious prizes in supercomputing.
To demonstrate PyFR's high-performance computing prowess, the team ran a high-resolution, GPU-accelerated simulation of flow over a jet turbine linear cascade on Titan, the flagship system of the OLCF, a U.S. Department of Energy Office of Science User Facility located at Oak Ridge National Laboratory. As detailed in the team's award submission, the simulation achieved more than 50 percent of Titan's theoretical peak performance. Additionally, the application supports plug-in software customization, making it easy for researchers to conduct visualization and data analysis in real time.
"We set out to design a technology well suited for accelerated systems to solve unsteady turbulent flow problems, which require very high levels of accuracy," Vincent says. "We wanted this new code to be designed from the ground up to run on a range of system architectures. This is what led us to use the Python programming language — which is pretty novel in a high-performance computing context — to create this very powerful platform that can make good use of modern hardware to investigate real-world problems."
Turbulence, a term commonly associated with bumpy airplane flights, has long jostled the brains of great scientific thinkers. Renowned theoretical physicist Richard Feynman once described turbulence as "the most important unsolved problem of classical physics."
Though turbulent phenomena are ubiquitous — picture a wave breaking on the beach or the curl of smoke rising from a campfire — a complete mathematical description continues to elude scientists.
The challenge hinges on tracking a turbulent system's features over a range of scales that emerge, mix, and move through space and time. "This is difficult to model because you need to approximate the governing equations that fundamentally describe how these systems evolve," Vincent says. "These numerical methods unavoidably introduce some amount of error, but the flip side of that is this can help to stabilize your numerical scheme."
PyFR balances these factors by employing a highly parallel numerical scheme known as flux reconstruction, a framework proposed by National Aeronautics and Space Administration scientist H.T. Huynh that ties together several high-accuracy schemes.
Unlike current CFD codes, which typically use mathematical averages to approximate the unsteady features of turbulent systems, flux reconstruction allows researchers to calculate turbulent physics at a much finer grain. This means researchers can run PyFR on large-scale accelerated architectures to accurately resolve fluid features that were previously out of reach.
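To make the idea concrete, here is a deliberately small sketch of flux reconstruction in one dimension, written in Python (PyFR's host language) but bearing no resemblance to PyFR's production implementation. Each element carries its own polynomial flux, a single upwind value is agreed at each interface, and correction functions feed the interface jumps back into the element interiors.

```python
# A toy 1D flux-reconstruction (FR) solver for linear advection
# u_t + a u_x = 0 on a periodic domain, using degree-1 (P1) elements.
# The correction functions are the Radau choice that recovers nodal DG.
# This illustrates the FR idea only; it is not how PyFR works internally.
import numpy as np

a, ne, T = 1.0, 50, 1.0                 # advection speed, elements, end time
h = 1.0 / ne                            # element width
x = np.linspace(0.0, 1.0, ne + 1)
xn = np.stack([x[:-1], x[1:]], axis=1)  # solution points at xi = -1, +1
u = np.sin(2 * np.pi * xn)              # initial condition, shape (ne, 2)

gl = np.array([-2.0, 1.0])  # g_L'(xi) at xi = -1, +1 (right Radau, p = 1)
gr = np.array([-1.0, 2.0])  # g_R'(xi) at xi = -1, +1

def rhs(u):
    f = a * u                                  # discontinuous flux at points
    dfdxi = np.repeat((f[:, 1:] - f[:, :1]) / 2, 2, axis=1)
    fi = a * np.roll(u[:, 1], 1)               # upwind flux, left interfaces
    jl = (fi - f[:, 0])[:, None]               # flux jump at left face
    jr = (np.roll(fi, -1) - f[:, 1])[:, None]  # flux jump at right face
    return -(2.0 / h) * (dfdxi + jl * gl + jr * gr)

def step(u, dt):                               # SSP-RK3 time integration
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

dt = 0.05 * h / a
for _ in range(round(T / dt)):
    u = step(u, dt)

print("max error:", np.abs(u - np.sin(2 * np.pi * (xn - a * T))).max())
```

Note that the right-hand-side evaluation is entirely element-local apart from one exchange of interface values with each neighbor, which is what makes the scheme so amenable to massively parallel GPUs.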
"We have far more access to all features of the flow when running this big computation than researchers can get from experiment," Vincent says. "Furthermore, we can potentially use this super database of information to tune industry-standard models to make them more accurate."
For companies bent on creating the next lightweight, fuel-efficient jet engine, the resulting insights from high-resolution simulation can prove invaluable. Modern jet engine turbines are designed to use as few blades as possible. However, this arrangement can lead to unsteady airflow patterns that reduce engine efficiency. To study this phenomenon, Vincent's team used Titan to create a comprehensive simulation of a key jet engine component, called the low-pressure turbine.
"Our objective is to provide a capability that can act as a virtual wind tunnel for these low-pressure turbine cascades, where you can resolve all the physics and get an accurate answer without any tweaking to fit experimental results," Vincent says.
The team ran its initial simulation of five turbine blades on the Swiss National Supercomputing Centre's Piz Daint machine, scaling the simulation across 5,000 of the system's GPU-enabled compute nodes. After the success of these initial runs, Vincent's team received a Director's Discretionary allocation on Titan and its more than 18,000 GPUs. On Titan, the team's highest-performing run contained 195 billion degrees of freedom — or independent variables — and operated at a sustained speed of 13.7 petaflops, or 13.7 quadrillion calculations a second.
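Those figures square with the "more than 50 percent of theoretical peak" claim above, as a quick check against Titan's 27-petaflop peak confirms:

```python
# Sustained fraction of Titan's theoretical peak (figures from the text)
sustained_pflops, peak_pflops = 13.7, 27.0
print(f"{sustained_pflops / peak_pflops:.1%}")   # 50.7%
```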
"Titan is the only supercomputer big enough to do the scale of simulation that we wanted to try," Vincent says. "It was a huge enabler in that sense."
In addition to calculating turbine physics, Vincent's team leveraged PyFR's built-in data analysis capabilities to process the resulting simulation data in real time. This capability avoids writing out massive datasets for after-the-fact analysis and speeds up the extraction of useful information.
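In-situ analysis of this kind typically takes the form of a hook that the solver invokes as it advances, reducing the live solution to a compact summary instead of archiving it. The fragment below sketches that general shape; the class, attribute, and method names are assumptions made for illustration and do not reflect PyFR's actual plug-in API.

```python
# Illustrative only: the general shape of an in-situ analysis hook.
# Names here are hypothetical, not PyFR's plug-in API.
import numpy as np

class MaxVelocityMonitor:
    """Record the peak velocity magnitude as the solver advances."""
    def __init__(self, every=10):
        self.every = every        # invoke once per `every` steps
        self.history = []

    def __call__(self, step, time, solution):
        # `solution` is assumed to be an array of (u, v, w) samples
        if step % self.every == 0:
            vmax = np.sqrt((solution ** 2).sum(axis=-1)).max()
            self.history.append((time, vmax))

# A solver loop would call the hook after each completed step:
monitor = MaxVelocityMonitor(every=10)
for step in range(100):
    t = step * 1e-3
    sol = np.random.rand(1000, 3)   # stand-in for live solver state
    monitor(step, t, sol)
print(monitor.history[-1])
```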
Having demonstrated the value of its open-source code, the team's next step is to deliver the technology to industry.
Vincent's team is working with a startup software vendor called Zenotech and the Centre for Modelling and Simulation in the United Kingdom to incorporate some of PyFR's advanced features into a proprietary cloud-based CFD codebase called zCFD that will be released later this year. Additionally, the team collaborates on projects with multiple companies, including BAE Systems, Rolls-Royce, and MTU Aero Engines.
Extending PyFR to incorporate additional physics and continuing to adapt it to changes in computing architectures remain priorities as well, Vincent says.
"There's still a bunch of things we'd like to do to, but the driver is to actually have people in industry use the technology to design everything from wind turbines, to race cars, to unmanned aerial vehicles. That's what motivates me," he says.
In addition to DOE's Office of Science, this work was supported by the United Kingdom's Engineering and Physical Sciences Research Council, Innovate UK, and the European Commission under Horizon 2020.