ACM TechNews

Tougher Turing Test Exposes Chatbots' Stupidity


[Image: How a user might sound to an untrained chatbot.]

Programs participating in the Winograd Schema Challenge were little better than random at choosing the correct meaning of sentences.

Credit: Max Bode

The results of the Winograd Schema Challenge, presented last week at an academic conference in New York, revealed that much more work needs to be done to make computers truly intelligent.

The challenge asks computers to make sense of sentences that are ambiguous to machines but simple for humans to parse. In the canonical example, "The city councilmen refused the demonstrators a permit because they feared violence," a human immediately understands that "they" refers to the councilmen; change "feared" to "advocated" and the referent flips to the demonstrators. The participating programs did only a little better than random guessing at choosing the correct meaning.
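To make the format concrete, here is a minimal sketch in Python of how one schema pair might be represented and scored. The sentence is Levesque's canonical example, but the data layout and the function names (random_baseline, score) are hypothetical, not the challenge's actual format; the random guesser merely illustrates the roughly 50 percent floor that the entrants barely exceeded.

import random

# Each schema pairs a sentence containing an ambiguous pronoun with two
# candidate referents; swapping one "special" word flips the answer.
# (Hypothetical representation, for illustration only.)
SCHEMA = {
    "sentence": ("The city councilmen refused the demonstrators a "
                 "permit because they {word} violence."),
    "candidates": ["the councilmen", "the demonstrators"],
    "answers": {"feared": 0, "advocated": 1},  # word -> correct index
}

def random_baseline(schema):
    # Guess a referent at random for each variant of the sentence --
    # roughly the 50 percent floor the entrants barely exceeded.
    return {word: random.choice(schema["candidates"])
            for word in schema["answers"]}

def score(schema, guesses):
    # Fraction of variants whose pronoun was resolved correctly.
    correct = sum(guesses[w] == schema["candidates"][i]
                  for w, i in schema["answers"].items())
    return correct / len(schema["answers"])

print(score(SCHEMA, random_baseline(SCHEMA)))  # ~0.5 on average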

The challenge, proposed in 2014 as an improvement on the Turing Test, is built around Winograd schema sentences, which were first highlighted as a way to gauge machine comprehension by Hector Levesque, an artificial intelligence researcher at the University of Toronto.

With the Turing Test, by contrast, a program can often fool a person using simple tricks and evasions.

Most of the entrants in the challenge combined hand-coded grammatical rules with a knowledge base of facts. One of the two first-place entries instead used deep learning to train a computer to recognize relationships between events.
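As a rough illustration of the majority approach, a resolver might score each candidate referent against a hand-built table of common-sense assertions. Everything below is invented for illustration and bears no relation to any entrant's actual system; real knowledge bases are vastly larger.

# Toy knowledge base: (actor, predicate) -> plausibility score.
# All facts and numbers here are made up for illustration.
KNOWLEDGE = {
    ("the councilmen", "fear violence"): 0.9,
    ("the demonstrators", "fear violence"): 0.3,
    ("the councilmen", "advocate violence"): 0.1,
    ("the demonstrators", "advocate violence"): 0.8,
}

def resolve(predicate, candidates):
    # Choose the candidate the knowledge base rates most plausible
    # as the subject of the pronoun's predicate.
    return max(candidates, key=lambda c: KNOWLEDGE.get((c, predicate), 0.0))

print(resolve("fear violence", ["the councilmen", "the demonstrators"]))
# -> "the councilmen"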

Nuance researcher Charlie Ortiz says common-sense reasoning will be required to hold even simple conversations with computers.

New York University researcher Gary Marcus says common-sense reasoning will grow in importance as devices such as smart appliances or wearable gadgets become more common.

From Technology Review

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA
