
Communications of the ACM

ACM Careers

Scientists Call for Transparency and Reproducibility in AI Research



International scientists are challenging their colleagues to make artificial intelligence (AI) research more transparent and reproducible, to accelerate the impact of their findings for cancer patients.

In "Transparency and Reproducibility in Artificial Intelligence," published in the journal Nature, scientists at Princess Margaret Cancer Centre, the University of Toronto, Stanford University, Johns Hopkins University, Harvard T.H. Chan School of Public Health, Massachusetts Institute of Technology, and others, challenge scientific journals to hold computational researchers to higher standards of transparency, and call for their colleagues to share their code, models, and computational environments in publications.

"Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main finding to learn from," says Benjamin Haibe-Kains, senior scientist at Princess Margaret Cancer Centre and first author of the article. "But in computational research, it's not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress."

The authors raised their concerns about the lack of transparency and reproducibility in AI research after "International Evaluation of an AI System for Breast Cancer Screening," a study by Google Health's Scott Mayer McKinney et al. published in Nature in January 2020, claimed an AI system could outperform human radiologists in both robustness and speed for breast cancer screening. The study made waves in the scientific community and created a buzz among the public, with headlines appearing in BBC News, CBC, and CNBC.

A closer examination raised some concerns: the study lacked a sufficient description of its methods, including its code and models. Researchers say this lack of transparency prevented them from learning exactly how the model works and how they could apply it at their own institutions.

"On paper and in theory, the McKinney et al. study is beautiful," says Haibe-Kains. "But if we can't learn from it then it has little to no scientific value."

Journals Are Vulnerable

Haibe-Kains says this is just one example of a problematic pattern in computational research.

"Researchers are more incentivized to publish their finding rather than spend time and resources ensuring their study can be replicated," he says. "Journals are vulnerable to the 'hype' of AI and may lower the standards for accepting papers that don't include all the materials required to make the study reproducible — often in contradiction to their own guidelines."

In the Nature article, the authors point to numerous frameworks and platforms that enable the safe and effective sharing needed to uphold the three pillars of open science and make AI research more transparent and reproducible: sharing data, sharing computer code, and sharing predictive models.
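The article does not prescribe a single tool, but as a loose illustration of what releasing all three pillars plus the computational environment might look like, the sketch below trains a toy model and writes out versioned, checksummed artifacts. It is a minimal sketch in plain Python with a hypothetical file layout (data.json, model.json, manifest.json), not the workflow of McKinney et al. or the Nature authors.

```python
"""Minimal sketch of a reproducible release: data, code, model, environment.
Hypothetical layout for illustration only; not the authors' actual workflow."""
import hashlib
import json
import platform
import random
import sys
from pathlib import Path

SEED = 42  # fixed seed so the "training" run is repeatable


def sha256(path: Path) -> str:
    """Checksum an artifact so readers can verify they hold the same file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def main() -> None:
    random.seed(SEED)

    # Pillar 1 - share data: a toy dataset stands in for the real
    # (possibly access-controlled) data release.
    data = [(x, 2.0 * x + random.gauss(0, 0.1)) for x in range(100)]
    data_path = Path("data.json")
    data_path.write_text(json.dumps(data))

    # Pillar 2 - share code: this script is the training code; it fits a
    # least-squares line as a stand-in "model".
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    slope = sum((x - mx) * (y - my) for x, y in data) / sum(
        (x - mx) ** 2 for x, _ in data
    )
    intercept = my - slope * mx

    # Pillar 3 - share the model: persist the learned parameters.
    model_path = Path("model.json")
    model_path.write_text(json.dumps({"slope": slope, "intercept": intercept}))

    # Share the computational environment: record interpreter and platform
    # details, the seed, and artifact checksums in a manifest.
    manifest = {
        "python": sys.version,
        "platform": platform.platform(),
        "seed": SEED,
        "artifacts": {p.name: sha256(p) for p in (data_path, model_path)},
    }
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))


if __name__ == "__main__":
    main()
```

A reader who reruns the script under the same interpreter should obtain matching checksums, which is exactly the kind of independent scrutiny the authors argue journals should make routine.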

"We have high hopes for the utility of AI for our cancer patients," Haibe-Kains says. "Sharing and building upon our discoveries — that's real scientific impact."

Competing Interests

Michael M. Hoffman received a GPU Grant from Nvidia. Benjamin Haibe-Kains is a scientific advisor for Altis Labs. Chris McIntosh holds an equity position in Bridge7Oncology and receives royalties from RaySearch Laboratories.


 
