Computer scientists at the University of California, San Diego (UCSD) demonstrated for the first time that detectors programmed to spot deepfake videos can be beaten.
Presenting at the Winter Conference on Applications of Computer Vision (WACV) 2021 in January, the researchers explained how they inserted adversarial examples into every video frame, inducing errors in the artificial intelligence systems that perform the detection.
The method also survives video compression: the attack algorithm estimates, over a set of input transformations, how the detector ranks images as real or fake, then uses that estimate to craft perturbations that remain effective even after the video is compressed and decompressed.
The UCSD researchers said, "The current state-of-the-art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector."
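To illustrate the kind of expectation-over-transformations optimization described above, here is a minimal sketch, assuming a white-box PyTorch detector that outputs real/fake logits; the `detector` model, the downscale/upscale stand-in for compression, and all hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch: expectation-over-transformations (EoT) style adversarial
# perturbation of a single video frame against a hypothetical binary
# real/fake classifier. All names and parameters are assumptions.
import torch
import torch.nn.functional as F

def eot_attack(detector, frame, steps=50, eps=8/255, alpha=1/255, n_samples=8):
    """Perturb `frame` so the detector scores it as 'real' (class 0 assumed),
    averaging gradients over random compression-like transformations."""
    delta = torch.zeros_like(frame, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_samples):
            # Crude stand-in for lossy compression: random downscale/upscale.
            scale = float(torch.empty(1).uniform_(0.5, 1.0))
            x = frame + delta
            h, w = x.shape[-2:]
            x_t = F.interpolate(x, scale_factor=scale, mode='bilinear',
                                align_corners=False)
            x_t = F.interpolate(x_t, size=(h, w), mode='bilinear',
                                align_corners=False)
            logits = detector(x_t)  # shape [batch, 2], real/fake logits
            # Push the prediction toward the 'real' class (index 0 assumed).
            target = torch.zeros(x_t.size(0), dtype=torch.long,
                                 device=x_t.device)
            loss = loss + F.cross_entropy(logits, target)
        loss = loss / n_samples
        loss.backward()
        with torch.no_grad():
            # Gradient step on the averaged loss, kept within an L-infinity
            # ball of radius eps so the perturbation stays imperceptible.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (frame + delta).clamp(0, 1).detach()
```

Averaging the loss over many randomized transformations is what lets the perturbation remain effective after the frame is re-encoded, which is the property the researchers highlight for compressed video.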
From UC San Diego Jacobs School of Engineering