

ACM News

AI Misinformation: Why It Works and How to Spot It


 The risks of AI continue to grow.

Problems arise when we can't tell AI-generated content from reality, or when that content is intentionally made to trick people: not just misinformation (wrong or misleading information), but disinformation (falsehoods designed to mislead or cause harm).

Credit: James Martin/CNET

A year and a half ahead of the 2024 presidential election, the Republican National Committee began running attack ads against President Joe Biden. This time around, however, the committee did something different.

It used generative AI to create a political ad filled with images depicting an alternative reality with a partisan slant: what it wants us to believe the country would look like if Biden were reelected. The ad flashes images of migrants crossing U.S. borders in droves, an imminent world war, and soldiers patrolling the streets of barren U.S. cities. In the top left corner of the video, a small, faint disclaimer that is easy to miss notes, "Built entirely with AI imagery."

It's unclear what prompts the RNC used to generate this video; the committee didn't respond to requests for more information. But it certainly appears to have worked from ideas like "devastation," "governmental collapse," and "economic failure."

From CNET