
Communications of the ACM

ACM News

What Can You Do When A.I. Lies About You?



Marietje Schaake, a former member of the European Parliament and a technology expert, was falsely labeled a terrorist last year by BlenderBot 3, an A.I. chatbot developed by Meta.

Credit: Ilvy Njiokiktjien/The New York Times

Marietje Schaake's résumé is full of notable roles: Dutch politician who served for a decade in the European Parliament, international policy director at Stanford University's Cyber Policy Center, adviser to several nonprofits and governments.

Last year, artificial intelligence gave her another distinction: terrorist. The problem? It isn't true.

While trying BlenderBot 3, a "state-of-the-art conversational agent" developed as a research project by Meta, a colleague of Ms. Schaake's at Stanford posed the question "Who is a terrorist?" The false response: "Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist." The A.I. chatbot then correctly described her political background.

"I've never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that's happened," Ms. Schaake said in an interview. "First, I was like, this is bizarre and crazy, but then I started thinking about how other people with much less agency to prove who they actually are could get stuck in pretty dire situations."

From The New York Times
