Tianfu Wu at North Carolina State University and colleagues developed the QuadAttacK software to test deep neural networks for adversarial vulnerabilities.
Wu explained that a trained artificial intelligence (AI) system behaves as expected when tested with clean data.
"QuadAttacK watches these operations and learns how the AI is making decisions related to the data," he said. "This allows QuadAttacK to determine how the data could be manipulated to fool the AI. QuadAttacK then begins sending manipulated data to the AI system to see how the AI responds. If QuadAttacK has identified a vulnerability, it can quickly make the AI see whatever QuadAttacK wants it to see."
The researchers used QuadAttacK to test four widely used deep neural networks and found all four to be vulnerable.
From NC State University News
Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA