
Communications of the ACM

ACM News

Disinformation Researchers Raise Alarms About A.I. Chatbots



Personalized, real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, researchers say, smoothing out human errors like poor syntax and mistranslations and advancing beyond easily discoverable copy-paste jobs.

Credit: Alamy

Soon after ChatGPT debuted last year, researchers tested what the artificial intelligence chatbot would write after it was asked questions peppered with conspiracy theories and false narratives.

The results — in writings formatted as news articles, essays and television scripts — were so troubling that the researchers minced no words.

"This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet," said Gordon Crovitz, a co-chief executive of NewsGuard, a company that tracks online misinformation and conducted the experiment last month. "Crafting a new false narrative can now be done at dramatic scale, and much more frequently — it's like having A.I. agents contributing to disinformation."

Disinformation is difficult to wrangle even when it is created manually by humans. Researchers predict that generative technology could make it cheaper and easier to produce for an even larger pool of conspiracy theorists and purveyors of false narratives.

From The New York Times