
Communications of the ACM

ACM TechNews

Medical AI Tools Can Make Dangerous Mistakes. Can the Government Help Prevent Them?


The government is wrestling with how to ensure AI tools for doctors do no harm.

Doctors can forgo taking notes during a patient’s visit because an AI system listens in and captures the information.

Credit: Doug Barrett/The Wall Street Journal

The Biden administration has proposed a labeling system intended to ensure the safety of artificial intelligence (AI) healthcare apps.

The "nutrition label" would detail how the apps were trained and tested, how they perform, their intended uses, and measures of their "validity and fairness."

Healthcare and technology companies oppose the rule, arguing that it would hinder competition and expose proprietary information.

The proposal from the U.S. Department of Health and Human Services' Office of the National Coordinator for Health Information Technology (ONC) could be finalized by the end of the year.

Supporters said the labels would allow providers to avoid AI tools that underperform or are not appropriate for a particular case.

From The Wall Street Journal
View Full Article - May Require Paid Subscription

 

Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA


 

