
Communications of the ACM

ACM TechNews

Medical Imaging AI Software Is Vulnerable to Covert Attacks


An attacker could manipulate medical software.

A new study warns systems meant to analyze medical images are vulnerable to attacks designed to fool them in ways that are imperceptible to humans.

Credit: iStock

Deep learning neural network systems for analyzing medical images can be exploited by cyberattackers in ways that humans cannot detect, according to a new study.

Harvard Medical School's Samuel Finlayson warns this relatively simple adversarial attack method could be easily automated.

His team tested deep learning systems with adversarial examples on three common imaging tasks: classifying diabetic retinopathy from retinal images, identifying pneumothorax from chest x-rays, and finding melanoma in skin photos.

The exploits alter pixels in ways a person would dismiss as image noise, yet the changes deceive the software into misclassifying the images, in some tests up to 100% of the time.
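The pixel-altering technique described is the same idea as the well-known fast gradient sign method (FGSM): nudge every pixel a tiny, bounded amount in the direction that most increases the model's loss. The sketch below is a minimal, hypothetical illustration against a toy logistic-regression "classifier" (the weights and data are made up for the example); the study itself attacked deep learning systems, not this toy model.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.05):
    """FGSM-style attack on a logistic-regression classifier (toy sketch).

    Shifts each pixel by at most eps in the direction of the loss gradient,
    a change small enough to look like noise but aimed at flipping the
    model's prediction.
    """
    z = np.dot(w, x) + b                      # model logit
    p = 1.0 / (1.0 + np.exp(-z))              # predicted probability of class 1
    grad_x = (p - y) * w                      # d(cross-entropy)/dx for this model
    # Perturb and clip back to valid pixel range [0, 1]
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy 8-pixel "image" and hypothetical trained weights, for illustration only
rng = np.random.default_rng(0)
x = rng.random(8)          # clean image, pixels in [0, 1]
w = rng.normal(size=8)     # classifier weights (assumed, not from the study)
b = 0.0
y = 1                      # true label

x_adv = fgsm_perturb(x, w, b, y)
print("max pixel change:", np.max(np.abs(x_adv - x)))
print("clean logit:", np.dot(w, x) + b, "adversarial logit:", np.dot(w, x_adv) + b)
```

Because each pixel moves by at most `eps`, the perturbed image is visually indistinguishable from the original, which is exactly why Finlayson's team argues such attacks are hard to detect after the fact.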

"We feel that adversarial attacks are particularly pernicious and subtle, because it would be very difficult to detect that the attack has occurred," Finlayson notes.

From IEEE Spectrum

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA

