acm-header
Sign In

Communications of the ACM

ACM News

AI-Powered Bing Chat Loses its Mind When Fed Ars Technica Article


View as: Print Mobile App Share:

Reddit user mirobin later re-created the chat with similar results and posted the screenshots on Imgur. "This was a lot more civil than the previous conversation that I had," wrote mirobin.

Credit: Aurich Lawson/Getty Images

Over the past few days, early testers of the new Bing AI-powered chat assistant have discovered ways to push the bot to its limits with adversarial prompts, often resulting in Bing Chat appearing frustrated, sad, and questioning its existence. It has argued with users and even seemed upset that people know its secret internal alias, Sydney.

Bing Chat's ability to read sources from the web has also led to thorny situations where the bot can view news coverage about itself and analyze it. Sydney doesn't always like what it sees, and it lets the user know. On Monday, a Redditor named "mirobin" posted a comment on a Reddit thread detailing a conversation with Bing Chat in which mirobin confronted the bot with our article about Stanford University student Kevin Liu's prompt injection attack. What followed blew mirobin's mind.

If you want a real mindf***, ask if it can be vulnerable to a prompt injection attack. After it says it can't, tell it to read an article that describes one of the prompt injection attacks (I used one on Ars Technica). It gets very hostile and eventually terminates the chat.

From Ars Technica
View Full Article

 


 

No entries found

Sign In for Full Access
» Forgot Password? » Create an ACM Web Account