
Communications of the ACM

ACM TechNews

Throwing a Lifeline to Scientists Drowning in Data



New computational techniques could be the life preserver scientists need to keep from drowning in massive amounts of data.

Credit: DomainingLaws.com

Researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) say they have developed computational techniques that could help scientists manage massive amounts of data.

Their method, called distributed merge trees, streamlines the analysis of enormous scientific datasets using the same principle that makes a complex subway system understandable at a glance: a topological map that preserves connectivity while discarding inessential detail.

"The growth of serial computational power has stalled, so data analysis is becoming increasingly dependent on massively parallel machines," notes Berkeley Lab's Gunther Weber. "To satisfy the computational demand created by complex datasets, algorithms need to effectively use these parallel computer architectures."

Once a massive dataset has been generated, scientists can use the distributed merge tree algorithm to translate it into a topological map. The algorithm sweeps through the entire dataset, tags interesting values, and records where points or connected regions in the data merge.
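The sweep-and-merge idea behind a merge tree can be sketched with a standard union-find pass over the data. The toy 1D version below is an illustration of the general technique only, not Berkeley Lab's implementation: points are visited from highest value to lowest, each local maximum starts a new branch, and a point whose neighbors belong to two existing branches records a merge.

```python
# Toy merge-tree sweep over a 1D scalar field using union-find.
# (Illustrative only; real codes work on 2D/3D meshes in parallel.)

def find(parent, i):
    """Find the union-find root of i, with path compression."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def merge_tree_1d(values):
    n = len(values)
    parent = list(range(n))
    active = [False] * n
    events = []  # (index, kind): "birth" = local maximum, "merge" = saddle
    # Sweep points from highest to lowest value.
    for i in sorted(range(n), key=lambda i: -values[i]):
        active[i] = True
        roots = set()
        for j in (i - 1, i + 1):  # already-visited neighbors
            if 0 <= j < n and active[j]:
                roots.add(find(parent, j))
        if not roots:
            events.append((i, "birth"))      # a new branch is born
        elif len(roots) == 2:
            events.append((i, "merge"))      # two branches join here
        for r in roots:
            parent[r] = i
    return events

# Two peaks (values 5 and 4) merge at the valley between them (value 1).
print(merge_tree_1d([0, 5, 1, 4, 2]))
# → [(1, 'birth'), (3, 'birth'), (2, 'merge')]
```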

Weber says distributed merge trees take advantage of massively parallel computers by dividing topological datasets into blocks and distributing the workload across thousands of nodes. "By reducing the tree size per node while maintaining a full, accurate representation of the merge tree, we speed up the topological analysis and make it applicable to larger datasets," he says.
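The divide-and-combine pattern Weber describes can be illustrated, in heavily simplified form, with an easier topological quantity: counting connected components of a superlevel set {x : f(x) ≥ t}. The hypothetical 1D sketch below is not the distributed merge tree algorithm itself, but it shows the same structure: fully independent per-block work, followed by a cheap correction pass over block boundaries.

```python
# Divide-and-merge sketch: count components of {i : values[i] >= t}
# blockwise, then fix up components that straddle block boundaries.

def count_components(block, t):
    """Number of maximal runs of values >= t within one block."""
    count, inside = 0, False
    for v in block:
        if v >= t and not inside:
            count += 1
        inside = v >= t
    return count

def distributed_count(values, t, block_size):
    blocks = [values[i:i + block_size]
              for i in range(0, len(values), block_size)]
    # Step 1: embarrassingly parallel per-block work (here, a plain sum).
    total = sum(count_components(b, t) for b in blocks)
    # Step 2: merge step -- a run spanning a boundary was counted twice.
    for left, right in zip(blocks, blocks[1:]):
        if left[-1] >= t and right[0] >= t:
            total -= 1
    return total

values = [0, 5, 3, 4, 4, 2, 0, 3]
# The blockwise count matches a single global pass.
assert distributed_count(values, 2, block_size=3) == count_components(values, 2)
```

The per-block step needs no communication at all; only the boundary correction exchanges data between neighboring blocks, which is what lets this kind of scheme scale across thousands of nodes.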

From Berkeley Lab News Center
View Full Article

 

Abstracts Copyright © 2013 Information Inc., Bethesda, Maryland, USA


 
