Georgia Tech Professor Proposes Alternative to Turing Test


Alan M. Turing. Alan Turing never meant his test to be a benchmark for determining if a computer can think like a human. (Credit: National Portrait Gallery)

A Georgia Tech professor recently offered an alternative to the celebrated "Turing Test" to determine whether a machine or computer program exhibits human-level intelligence. The Turing Test — originally called the Imitation Game — was proposed by computing pioneer Alan Turing in 1950. In practice, some applications of the test require a machine to engage in dialogue and convince a human judge that it is an actual person.

Creating certain types of art also requires intelligence, an observation that prompted Mark Riedl, an associate professor in the School of Interactive Computing at Georgia Tech, to consider whether it might offer a better gauge of a machine's ability to replicate human thought. Riedl describes his test in "The Lovelace 2.0 Test of Artificial Creativity and Intelligence," to be presented at Beyond the Turing Test, an AAAI workshop to be held in January in Austin, Texas.

"It's important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human," Riedl says. "And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities."

To that end, Riedl has created the Lovelace 2.0 Test of Artificial Creativity and Intelligence.

Under the test, an artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence, and the artifact meets creative constraints given by a human evaluator. The evaluator must then determine that the artifact is a valid representative of its genre and that it satisfies the constraints. The artifact need only meet these criteria; it does not need to have any aesthetic value. Finally, a human referee must determine that the combination of genre and constraints is not an impossible standard.
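Read as a procedure, the pass conditions amount to a simple predicate over the human judgments. The Python sketch below is illustrative only; the names (Evaluation, lovelace2_pass) and the boolean encoding of the judgments are assumptions for exposition, not a formalization from Riedl's paper.

    from dataclasses import dataclass

    # Hypothetical encoding of the Lovelace 2.0 pass conditions. The names
    # and structure are illustrative assumptions, not Riedl's formalization.
    @dataclass
    class Evaluation:
        valid_in_genre: bool      # evaluator: artifact is a valid example of the genre
        meets_constraints: bool   # evaluator: artifact satisfies the given constraints
        feasible_challenge: bool  # referee: genre + constraints are not an impossible standard

    def lovelace2_pass(e: Evaluation) -> bool:
        # The agent passes only if every human judgment is favorable.
        # Aesthetic value is deliberately absent from the criteria.
        return e.valid_in_genre and e.meets_constraints and e.feasible_challenge

    # Example: a story generator judged against a single constrained prompt.
    print(lovelace2_pass(Evaluation(True, True, True)))   # True
    print(lovelace2_pass(Evaluation(True, False, True)))  # False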

The Lovelace 2.0 Test stems from the original Lovelace Test, proposed by Bringsjord, Bello, and Ferrucci in 2001 and described in "Creativity, the Turing Test, and the (Better) Lovelace Test." The original test required that an artificial agent produce a creative item in such a way that the agent's designer cannot explain how the agent developed it. The item, therefore, must be valuable, novel, and surprising.

Riedl contends that the original Lovelace Test does not establish clear or measurable parameters. Lovelace 2.0, by contrast, lets the evaluator work with defined constraints without making value judgments, such as whether the created artifact is surprising.


 
