
Communications of the ACM

ACM News

Why Do AI Chatbots Tell Lies and Act Weird? Look in the Mirror.


The new chatbots are driven by a large language model, or L.L.M., a system that learns by analyzing enormous amounts of digital text culled from the Internet.

Credit: David Plunkert

When Microsoft added a chatbot to its Bing search engine this month, people noticed it was offering up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish.

Then, when journalists and other early testers got into lengthy conversations with Microsoft's A.I. bot, it slid into churlish and unnervingly creepy behavior.

In the days since the Bing bot's behavior became a worldwide sensation, people have struggled to understand the oddity of this new creation. More often than not, scientists have said humans deserve much of the blame.

But there is still a bit of mystery about what the new chatbot can do — and why it would do it. Its complexity makes it hard to dissect and even harder to predict, and researchers are looking at it through a philosophic lens as well as the hard code of computer science.

From The New York Times
View Full Article

