
Communications of the ACM

ACM TechNews

Deepfake Detectors Can Be Defeated, Computer Scientists Show for the First Time


Analyzing a video for signs that it is real or fake.

Systems designed to detect deepfakes can be deceived, according to University of California, San Diego researchers.

Credit: Rick Jo/Getty Images

Computer scientists at the University of California, San Diego (UCSD) demonstrated for the first time that detectors programmed to spot deepfake videos can be beaten.

Presenting at the 2021 Winter Conference on Applications of Computer Vision (WACV) in January, the researchers explained how they inserted adversarial examples into every video frame, inputs slightly modified to cause artificial intelligence systems to make mistakes.
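
The abstract does not include the attack itself; the sketch below only illustrates the general idea of perturbing each frame, assuming a hypothetical PyTorch frame-level model named detector that outputs a single "fake" logit per frame. The function name, the epsilon value, and the single FGSM-style step are illustrative choices, not the authors' method.

import torch
import torch.nn.functional as F

def perturb_frames(detector, frames, epsilon=2.0 / 255):
    """Nudge every frame toward the "real" class with one FGSM-style step.

    frames: float tensor of shape (num_frames, 3, H, W), values in [0, 1].
    detector: hypothetical model returning one "fake" logit per frame.
    """
    frames = frames.clone().requires_grad_(True)
    fake_logits = detector(frames).squeeze(-1)          # higher = more "fake"
    # Loss toward the target label 0 ("real") for every frame.
    loss = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    loss.backward()
    # Step against the gradient so the detector's "fake" score drops.
    adv_frames = frames - epsilon * frames.grad.sign()
    return adv_frames.clamp(0.0, 1.0).detach()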

The method also works after videos are compressed: the attack algorithm estimates, across a set of input transformations, how the model ranks images as real or fake, and uses that estimate to craft perturbations that remain effective after the video is compressed and decompressed.
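
A rough sketch of that idea follows, under simplifying assumptions of my own: because real codecs are not differentiable, the attack here averages its gradient over a few differentiable stand-ins for compression (down/up-scaling and additive noise) so the perturbation survives re-encoding. The transform set, step count, and the hypothetical detector are illustrative, not the paper's pipeline.

import torch
import torch.nn.functional as F

def compression_like_transforms(x):
    """Yield cheap, differentiable approximations of compression artifacts."""
    n, c, h, w = x.shape
    yield x                                                      # identity
    small = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                          align_corners=False)
    yield F.interpolate(small, size=(h, w), mode="bilinear",
                        align_corners=False)                     # down/up-scale
    yield (x + 0.02 * torch.randn_like(x)).clamp(0.0, 1.0)       # mild noise

def robust_perturb(detector, frames, epsilon=4.0 / 255, steps=10, lr=1.0 / 255):
    """Iterative attack whose gradient is averaged over the transforms above."""
    delta = torch.zeros_like(frames, requires_grad=True)
    target = torch.zeros(frames.shape[0])                        # 0 = "real"
    for _ in range(steps):
        loss = 0.0
        for t in compression_like_transforms((frames + delta).clamp(0.0, 1.0)):
            logits = detector(t).squeeze(-1)
            loss = loss + F.binary_cross_entropy_with_logits(logits, target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()      # move frames toward "real"
            delta.clamp_(-epsilon, epsilon)      # keep the change imperceptible
        delta.grad.zero_()
    return (frames + delta).clamp(0.0, 1.0).detach()

Averaging the gradient over the transformations is what lets a single perturbation remain effective after compression and decompression, which is the property the researchers highlight.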

The UC San Diego researchers said, "The current state-of-the-art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector."

From UC San Diego Jacobs School of Engineering
View Full Article

 

Abstracts Copyright © 2021 SmithBucklin, Washington, DC, USA


 
