
Communications of the ACM

ACM News

How Easy Is It to Fool A.I.-Detection Tools?



An AI-generated image that appears to show billionaire entrepreneur Elon Musk embracing a lifelike robot.

Credit: Midjourney/Guerrero Art

The pope did not wear Balenciaga. And filmmakers did not fake the moon landing. In recent months, however, startlingly lifelike images of these scenes created by artificial intelligence have spread virally online, threatening society's ability to separate fact from fiction.

To sort through the confusion, a fast-growing crop of companies now offers services to detect what is real and what isn't.

Their tools analyze content using sophisticated algorithms, picking up on subtle signals to distinguish the images made with computers from the ones produced by human photographers and artists. But some tech leaders and misinformation experts have expressed concern that advances in A.I. will always stay a step ahead of the tools.

To assess the effectiveness of current A.I.-detection technology, The New York Times tested five new services using more than 100 synthetic images and real photos. The results show that the services are advancing rapidly, but at times fall short.

From The New York Times
