
Communications of the ACM

ACM News

Sloppy Use of Machine Learning Is Causing a 'Reproducibility Crisis' in Science



Excitement around the potential of artificial intelligence has prompted some scientists to bet heavily on its use in research.

Credit: PM Images/Getty Images

History shows civil wars to be among the messiest, most horrifying of human affairs. So Princeton professor Arvind Narayanan and his Ph.D. student Sayash Kapoor got suspicious last year when they discovered a strand of political science research claiming to predict when a civil war will break out with more than 90% accuracy, thanks to artificial intelligence.

A series of papers described astonishing results from using machine learning, the technique beloved by tech giants that underpins modern AI. Applying it to data such as a country's gross domestic product and unemployment rate was said to beat more conventional statistical methods at predicting the outbreak of civil war by almost 20 percentage points.

Yet when the Princeton researchers looked more closely, many of the results turned out to be a mirage. Machine learning involves feeding an algorithm data from the past that tunes it to operate on future, unseen data. But in several papers, researchers failed to properly separate the pools of data used to train and test their code's performance, a mistake termed "data leakage" that results in a system being tested with data it has seen before, like a student taking a test after being provided the answers.
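To see how data leakage can inflate results, consider the following minimal, hypothetical sketch in Python with scikit-learn and synthetic noise data. It is not the pipeline from any of the papers Kapoor and Narayanan examined; it only illustrates the general mistake of letting test rows influence a step that should be fit on training data alone, here a feature-selection step.

```python
# Hypothetical illustration of "data leakage" on synthetic data -- not the
# actual pipeline from any paper discussed in this article.
# Features here are pure noise and labels are random, so any accuracy
# meaningfully above 0.5 is an artifact of leakage.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2000))      # noise features (no real signal)
y = rng.integers(0, 2, size=200)      # random binary labels

# Leaky version: the 20 "best" features are chosen using ALL rows,
# including the rows that will later be used for testing.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
Xtr, Xte, ytr, yte = train_test_split(X_leaky, y, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

# Correct version: split first, then fit the selector on training rows only.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
sel = SelectKBest(f_classif, k=20).fit(Xtr, ytr)
clean = (LogisticRegression(max_iter=1000)
         .fit(sel.transform(Xtr), ytr)
         .score(sel.transform(Xte), yte))

print(f"leaky test accuracy:   {leaky:.2f}")   # typically well above chance
print(f"correct test accuracy: {clean:.2f}")   # near 0.5, as it should be
```

In the leaky version the model appears to predict random labels from random numbers; in the correct version the apparent skill disappears, which mirrors what happened when the civil-war results were re-run with properly separated data.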

"They were claiming near-perfect accuracy, but we found that in each of these cases, there was an error in the machine-learning pipeline," says Kapoor. When he and Narayanan fixed those errors, in every instance they found that modern AI offered virtually no advantage.

From Wired
View Full Article
