Managing big data demands a new approach to computation because of the shift to a decentralized, distributed computing architecture. "The challenge is not how to solve problems with a single, ultra-fast processor, but how to solve them with 100,000 slower processors," says Stanford University's Stephen Boyd.
A new kind of computing fabric must accommodate growth in data sets' size and complexity that outpaces the expansion of computing resources, according to the California Institute of Technology's Harvey Newman.
Boyd's consensus algorithms split a data set into pieces and distribute them across 1,000 agents; each agent analyzes its own piece and builds a model from the data it has processed, and all of the models must ultimately agree. The process is iterative and forms a feedback loop: an initial consensus is shared with all of the agents, which update their models in light of the new information and reach a second consensus, and so on until every agent is in agreement.
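This description maps onto consensus-style distributed optimization. Below is a minimal Python/NumPy sketch of one such scheme, consensus ADMM applied to a toy least-squares fit; the agent count, problem sizes, and penalty parameter here are illustrative assumptions, not details from the article.

```python
import numpy as np

# Hypothetical setup: a small least-squares problem split across a few agents
# (the article describes the same pattern at the scale of ~1,000 agents).
rng = np.random.default_rng(0)
n_agents, n_features, rows_per_agent = 10, 5, 50
true_w = rng.normal(size=n_features)

# Split the data set: each agent only ever sees its own chunk.
chunks = []
for _ in range(n_agents):
    A = rng.normal(size=(rows_per_agent, n_features))
    b = A @ true_w + 0.1 * rng.normal(size=rows_per_agent)
    chunks.append((A, b))

rho = 1.0                                # consensus penalty weight (assumed)
w = np.zeros((n_agents, n_features))     # each agent's local model
u = np.zeros((n_agents, n_features))     # scaled dual variables
z = np.zeros(n_features)                 # shared consensus model

for it in range(100):
    # Each agent refits its model on its own chunk, pulled toward the
    # current consensus z.
    for i, (A, b) in enumerate(chunks):
        lhs = A.T @ A + rho * np.eye(n_features)
        rhs = A.T @ b + rho * (z - u[i])
        w[i] = np.linalg.solve(lhs, rhs)
    # New consensus: average of the local models plus duals.
    z = (w + u).mean(axis=0)
    # Dual update nudges each agent toward the consensus next round.
    u += w - z
    if np.max(np.linalg.norm(w - z, axis=1)) < 1e-6:
        break                            # all agents agree

print("consensus model:", np.round(z, 3))
print("true model:     ", np.round(true_w, 3))
```

Each pass mirrors the feedback loop in the article: local models are fit independently, averaged into a consensus, and the consensus is fed back until the local models stop disagreeing.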
Meanwhile, quantum computing could aid big data by searching large, unsorted data sets; the key to this advance is a quantum memory that can be accessed in superposition. Massachusetts Institute of Technology professor Seth Lloyd has conceived of a prototype system and accompanying app that he believes could uncover patterns within data without ever examining individual records, thereby preserving the superposition.
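The article does not name a specific algorithm, but searching an unsorted data set is the textbook use case for Grover-style amplitude amplification. The sketch below classically simulates that idea on a tiny 16-entry "database"; it illustrates quantum search in general, not Lloyd's prototype, and the sizes and the marked index are made-up assumptions.

```python
import numpy as np

n_qubits = 4                      # 16-entry "unsorted database" (assumed size)
N = 2 ** n_qubits
marked = 11                       # index of the record we want to find (assumed)

# Start in a uniform superposition over all database indices.
state = np.full(N, 1 / np.sqrt(N))

# Grover search needs only ~(pi/4) * sqrt(N) iterations.
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    # Oracle: flip the sign of the marked entry's amplitude.
    state[marked] *= -1
    # Diffusion: reflect every amplitude about the mean amplitude.
    state = 2 * state.mean() - state

probs = state ** 2
print("most likely index:", int(np.argmax(probs)))
print("probability of marked record:", round(float(probs[marked]), 3))
```

The payoff is the square-root scaling: a classical scan of an unsorted set needs on the order of N lookups, while the superposed search concentrates probability on the answer in roughly the square root of that number of steps.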
From Quanta Magazine