Researchers at the U.S. Department of Energy's (DOE's) Argonne National Laboratory are using a method called software-based parallel volume rendering to turn quadrillions of data points into visualizations more quickly. The work is sponsored by DOE's Office of Advanced Scientific Computing Research.
Scientists first split the data among numerous processing cores so that they can all work concurrently; the data is then transferred to a series of graphics processing units (GPUs) that produce the final images. "It's so much data that we can't easily ask all of the questions that we want to ask: Each new answer creates new questions and it just takes too much time to move the data from one calculation to the next," says the Argonne Leadership Computing Facility's Mark Hereld. "That drives us to look for better and more efficient ways to organize our computational work."
The researchers sought to determine whether they could improve performance by forgoing the transfer to the GPUs and instead executing the visualizations directly on the supercomputer. They tested the method on a set of astrophysics data and found that it ran more efficiently. "We were able to scale up to large problem sizes of over 80 billion voxels per time step and generated images up to 16 megapixels," says Tom Peterka of Argonne's Mathematics and Computer Science Division.
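The core idea behind parallel volume rendering is that each processor ray-casts only its own sub-volume, and the resulting partial images are then composited into one final image. The sketch below illustrates this with a toy emission-absorption model in NumPy; the function names and parameters are illustrative assumptions, not Argonne's actual code, and the per-block rendering that would run in parallel on separate cores is shown here as a simple loop.

```python
# Illustrative sketch of parallel volume rendering:
# domain decomposition + front-to-back "over" compositing.
# Names and the opacity model are hypothetical, for demonstration only.
import numpy as np

def render_block(block, step_opacity=0.05):
    """Ray-cast one sub-volume along the z (viewing) axis using a
    simple emission-absorption model; returns (color, alpha) images."""
    color = np.zeros(block.shape[:2])
    alpha = np.zeros(block.shape[:2])
    for z in range(block.shape[2]):  # march front to back
        a = np.clip(block[:, :, z] * step_opacity, 0.0, 1.0)
        color += (1.0 - alpha) * a * block[:, :, z]
        alpha += (1.0 - alpha) * a
    return color, alpha

def composite(partials):
    """Combine per-block partial images, ordered front to back along
    the viewing axis, with the 'over' operator."""
    color = np.zeros_like(partials[0][0])
    alpha = np.zeros_like(partials[0][1])
    for c, a in partials:
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
    return color

# Toy volume; in a real run each block would live on its own core.
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))
blocks = np.array_split(volume, 4, axis=2)    # decompose along view axis
partials = [render_block(b) for b in blocks]  # parallel in practice
image = composite(partials)

# Because 'over' is associative, compositing the per-block results
# matches rendering the whole volume in one pass.
reference = render_block(volume)[0]
assert np.allclose(image, reference)
```

The final assertion is what makes the decomposition legitimate: the compositing operator is associative, so the image assembled from independently rendered blocks is identical to a serial render of the full volume, which is why the work can be spread across thousands of cores.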
Because the Blue Gene/P supercomputer's main processors can visualize data as it is analyzed, Argonne researchers can explore physical, chemical, and biological phenomena with much greater spatial and temporal detail.
From Argonne National Laboratory
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA