
Communications of the ACM

ACM News

Estimating the Informativeness of Data



MIT researchers have developed a scalable way to estimate the amount of information contained in any piece of data, using probabilistic programming and probabilistic inference.

Credit: Bill Smith

Not all data are created equal. But how much information is any piece of data likely to contain? This question is central to medical testing, designing scientific experiments, and even to everyday human learning and thinking. MIT researchers have developed a new way to solve this problem, opening up new applications in medicine, scientific discovery, cognitive science, and artificial intelligence.

In theory, the late MIT Professor Emeritus Claude Shannon answered this question definitively in his 1948 paper, "A Mathematical Theory of Communication." One of Shannon's breakthrough results is the concept of entropy, which quantifies the amount of information inherent in any random object, including random variables that model observed data. Shannon's results laid the foundations of information theory and modern telecommunications, and entropy has since proven central to computer science and machine learning.
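To make the idea concrete: for a discrete random variable X with probability mass function p, Shannon's entropy is H(X) = -Σ p(x) log₂ p(x), measured in bits. The short Python sketch below (a hypothetical helper written for illustration, not code from the MIT project) applies the formula to two coin-flip distributions.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) = -sum of p(x) * log2(p(x)), in bits.

    `probs` is a list of probabilities for a discrete random variable;
    the function name is a hypothetical helper for illustration only.
    """
    # Outcomes with p(x) = 0 contribute nothing, so they are skipped.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain: each flip carries 1 bit.
print(shannon_entropy([0.5, 0.5]))  # 1.0

# A heavily biased coin is more predictable, hence less informative.
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits
```

The biased coin's lower entropy captures the intuition from the opening paragraph: data whose outcomes are largely predictable carry less information than data whose outcomes are genuinely uncertain.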

From Wired
View Full Article
