
Communications of the ACM

ACM News

New Deepfake Threats Loom, Says Microsoft's Chief Science Officer



To date, he wrote in the new paper, deepfakes have been created and shared as one-off, stand-alone creations; now, however, “we can expect to see the rise of new forms of persuasive deepfakes that move beyond fixed, singleton productions.”

Credit: searchenginejournal.com

Deepfakes, high-fidelity synthetic depictions of people and events created with artificial intelligence (AI) and machine learning (ML), have become a common tool of misinformation over the past five years. But according to Eric Horvitz, Microsoft's chief science officer, new deepfake threats loom on the horizon.

In a new research paper, Horvitz identifies interactive and compositional deepfakes as two growing classes of threats. In a Twitter thread, MosaicML research scientist Davis Blaloch described interactive deepfakes as "the illusion of talking to a real person. Imagine a scammer calling your grandmom who looks and sounds exactly like you." Compositional deepfakes, he continued, go further: a bad actor creates many deepfakes and compiles them into a "synthetic history."

"Think making up a terrorist attack that never happened, inventing a fictional scandal, or putting together "proof" of a self-serving conspiracy theory. Such a synthetic history could be supplemented with real-world action (e.g., setting a building on fire)," Blaloch tweeted.

From VentureBeat