

Data-Scope Computer to Allow Data Analysis That's Impossible Today



Imagine a tool that is a cross between a powerful electron microscope and the Hubble Space Telescope, allowing scientists from disciplines ranging from medicine and genetics to astrophysics, environmental science, oceanography and bioinformatics to examine and analyze enormous amounts of data from both "little picture" and "big picture" perspectives.

Using a $2.1 million grant from the U.S. National Science Foundation, a group led by computer scientist and astrophysicist Alexander Szalay of Johns Hopkins University's Institute for Data Intensive Engineering and Science is designing and developing such a tool, dubbed the Data-Scope.

Once built, the Data-Scope, in reality a cluster of sophisticated computers capable of handling colossal sets of information, will enable data analysis tasks that are simply not possible today, says Szalay, the Alumni Centennial Professor in the Krieger School's Henry A. Rowland Department of Physics and Astronomy.

"Computer science has drastically changed the way we do science and the science that we do, and the Data-Scope is a crucial step in this process," Szalay says. "At this moment, the huge data sets are here, but we lack an integrated software and hardware infrastructure to analyze them. Data-Scope will bridge that gap."

Co-investigators on the Data-Scope project, all from Johns Hopkins, are Kenneth Church, chief scientist for the Human Language Technology Center of Excellence, a Department of Defense-funded center dedicated to advancing technology for the analysis of speech, text and document data; Andreas Terzis, associate professor in the Department of Computer Science at the Whiting School of Engineering; Sarah Wheelan, assistant professor of oncology bioinformatics in the School of Medicine; and Scott Zeger, professor of biostatistics in the Bloomberg School of Public Health and the university's vice provost for research.

Data-Scope will be able to handle 5 petabytes of data. That's the equivalent of 100 million four-drawer file cabinets filled with text. (Fifty petabytes would equal the entire written work of humankind, from the beginning of history until now, in all languages.)
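The cabinet analogy is easy to verify with a quick calculation. A minimal sketch in Python, assuming a commonly cited figure of roughly 50 MB of plain text per four-drawer cabinet (the article does not state this number):

    # Back-of-envelope check of the file-cabinet analogy. The 50 MB-per-cabinet
    # figure is an assumption (a commonly cited estimate), not from the article.
    PETABYTE = 10**15              # bytes
    CABINET_TEXT = 50 * 10**6      # ~50 MB of plain text per four-drawer cabinet

    cabinets = 5 * PETABYTE // CABINET_TEXT
    print(f"{cabinets:,} cabinets")    # -> 100,000,000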

The new apparatus will allow Szalay and a host of other Johns Hopkins researchers, as well as colleagues at other universities and at national laboratories such as Los Alamos in New Mexico and Oak Ridge in Tennessee, to conduct research directly in the database, where, Szalay contends, more and more science is being done.
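Doing research "directly in the database" means shipping the computation to the data and returning only small summaries, rather than downloading petabytes to a workstation. Below is a minimal sketch of that pattern in Python, using SQLite as a stand-in data store and hypothetical table and column names; nothing here reflects an actual Data-Scope schema.

    # Database-side analysis: the server performs the heavy scan and
    # aggregation; only a handful of summary rows cross the wire.
    import sqlite3

    conn = sqlite3.connect("survey.db")        # stand-in for a petabyte store
    query = """
        SELECT ROUND(redshift, 1) AS z_bin,    -- hypothetical columns
               COUNT(*)           AS n_galaxies,
               AVG(brightness)    AS mean_brightness
        FROM galaxies
        GROUP BY z_bin
        ORDER BY z_bin
    """
    for z_bin, n_galaxies, mean_brightness in conn.execute(query):
        print(z_bin, n_galaxies, mean_brightness)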

"The Data-Scope will allow us to mine out relationships among data that already exist, but that we can't yet handle, and to sift discoveries from what seems like an overwhelming flow of information," he says. "New discoveries will definitely emerge this way. There are relationships and patterns that we just cannot fathom buried in that onslaught of data. Data-Scope will tease these out."

According to Szalay, there are at least 20 research groups within Johns Hopkins that are grappling with data problems totaling 3 petabytes. Without Data-Scope, "they would have to wait years in order to analyze that amount of data," Szalay says.
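A rough estimate shows why "years" is plausible. The scan rate below is an assumption, not a figure from the article; it is in the range a single conventional disk-based server might sustain:

    # Why 3 PB overwhelms a conventional setup: at a sustained scan rate of
    # ~100 MB/s (an assumed figure, not from the article), a single pass over
    # the data takes about a year, and real analyses need many passes.
    DATA_BYTES = 3 * 10**15          # 3 petabytes
    SCAN_RATE = 100 * 10**6          # 100 MB/s sustained read

    seconds = DATA_BYTES / SCAN_RATE
    print(f"{seconds / (86400 * 365):.2f} years per pass")   # -> 0.95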

The two-year NSF grant, to be supplemented with almost $1 million from Johns Hopkins, will underwrite the design and building of the new instrument and its first year of operation, expected to begin in May 2011. Szalay says that the range of material that the Data-Scope will handle will be "breathtakingly large, from genomics to ocean circulation, turbulence, astrophysics, environmental science, public health and beyond."

"There really is nothing like this at any university right now," Szalay says. "Such systems usually take many years to build up, but we are doing it much more quickly. It's similar to what Google is doing—of course on a thousand-times-larger scale than we are. This instrument will be the best in the academic world, bar none."

Zeger says he is excited about the research possibilities and collaborations that the new instrument will make possible.

"The NSF funding of a high-performance computing system, specially designed by Dr. Szalay and his team to solve large computational problems, will contribute to Johns Hopkins' remaining in the forefront of many areas, including biomedicine, where I work," he says. "The new genomic data are voluminous. Their analysis requires machines faster than are currently available. Dr. Szalay's machine will enable our biomedical and computational scientists to work together to solve problems that would have been beyond them otherwise."

Jonathan Bagger, vice provost for graduate and postdoctoral programs and special projects, believes that the Data-Scope positions Johns Hopkins to play a crucial role in the next revolution in science: data analysis.

"The Data-Scope is specially designed to bring large amounts of data literally under the microscope," he says. "By manipulating data in new ways, Johns Hopkins researchers will be able to advance their science in ways never before possible. I am excited that Johns Hopkins is in the forefront of this new field of inquiry: developing the calculus of the 21st century."

The instrument will be part of a new energy-efficient computing center that is being constructed in the basement of the Bloomberg Center for Physics and Astronomy on Johns Hopkins' Homewood campus. The house-sized room once served as a mission control center for the Far Ultraviolet Spectroscopic Explorer, a NASA satellite. This computing center is being built using a $1.3 million federal stimulus grant from the NSF.


 
