In October 1950, the British logician and computing pioneer Alan Turing examined the possibility of intelligence embodied in a computer. He devised a chat-session imitation game as a tool for determining whether a "computing machine" might exhibit intelligent behavior [2]. Over the past 50 years, much debate has ensued as to the validity of Turing's approach in diagnosing intelligence [1]. Rather than add to this imbroglio, we believe that 50 years after Turing's article it is timely to consider more directly the effects of success in building such an imitation device: granted that a computing machine passes the Turing test, would intelligence alone make it useful to its human creators?
Circumventing altogether the debate on machine intelligence and on its certifiability via the Turing test, we brand a machine that passes the test with the sigil "Turing Chatterbox." Assuming, now, the existence of machines so labeled, of what real use are these chatterboxes?
Consider the following two possible medical scenarios:
Scenario A. Miss Parker wakes up one morning feeling very much under the weather. She regretfully decides that a visit to the doctor's would be the order of the day. However, having been healthy her whole life, the "doctors" page in her diary is entirely vacant. Being a resourceful person, Miss Parker phones several of her friends, all of whom recommend unreservedly a certain Dr. Jekyll. Miraculously, Miss Parker manages to secure an appointment, and upon arriving at Dr. Jekyll's office, marked by an august, gold-lettered doorplate, she is immediately ushered in by the doctor's kindly nurse, who proceeds to perform the preliminary examinations. "Don't worry," says the nurse while going about her business, "Dr. Jekyll is the best there is." Miss Parker then enters the inner sanctum and is greeted by Dr. Jekyll, a white-coated, silver-haired gentleman of solid build. "He certainly looks the part," thinks Miss Parker. Taking the seat proffered by the doctor, she feels entirely at ease, instinctively knowing she has come to the right place.
Scenario B. Waking up and feeling ill, Miss Parker phones city hall and is given the address of a Turing clinic. Luckily, it is located in a nearby office building. On arrival, without waiting, she is escorted to an immaculate, nondescript room that contains only a chair and a box, the latter of which carries the royal "Turing Chatterbox" logo. The box wastes no time in identifying itself as "IQ175" and, while cheerfully humming to itself, proceeds to scan Miss Parker with hidden sensors, printing a diagnosis and a treatment form. At no time during the silent examination has Miss Parker detected even a hint of the box's professional medical capacities. Is it any wonder she cannot help feeling not only ill, but indeed ill at ease?
If a Turing Chatterbox is to be more than a mere conversing toy, it must come to be trusted to a degree commensurate with that of a human being. Why does the human doctor earn Miss Parker's trust while the Turing Chatterbox, though apparently equally "intelligent," does not? "I believe," wrote Turing, "that in about 50 years' time it will be possible to program computers ... to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning." While a five-minute intelligence test may well exist, would you trust a five-minute trust test?
As human beings we are part of multitudinous social networks and continually refine our views on trustworthiness. A person is judged trustworthy not merely by his or her utterances, demeanor, and known actions, but also through the influence of invisible social networks that "float" in the backdrop. Witness Miss Parker's attention to her friends' opinions, the office's doorplate, the doctor's diploma, the nurse, the doctor's professional attire and demeanor, all attesting to the character of Dr. Jekyll. We continually collect signposts, through friends, colleagues, newspapers, books, television, and so on, that signify the collective confidence placed in each person and institution with whom we have social dealings. It is therefore expected that when machines move from the role of mechanical intermediary (for instance, a telephone or database program) to that of interlocutor (travel agent, investment adviser), the trust issue will enter the picture in a much more explicit way. We argue that when intelligence is actually put to use it must come hand in hand with another primordial (human) quality: trust.
What compounds this trust issue even further is what we call the "slippery mind" problem, as our gallant Miss Parker demonstrates in a third scenario.
Scenario C. Waking up and feeling under the weather, Miss Parker summons an online doctor recommended by her home computer. With hardly any delay, the animated image of a reassuring-looking gentleman in his 50s appears on the screen.
"Good morning," says the image. "I am Dr. Jekyll. Before I begin my examination, I must inform you that I am not a human doctor but a Turing doctor, that is, a machine. Do you wish to continue?"
"Yes," replies Miss Parker, "Let's get on with it. I really feel quite ill."
It takes the good Turing doctor less than five minutes to diagnose the latest strain of the Boston flu and promptly prescribe the necessary medication.
The next day, feeling worse, Miss Parker asks her home computer to call the doctor again. But now the synthetic image appearing on the screen shows a grinning chimpanzee twirling a stethoscope.
"Are you the same Dr. Jekyll from yesterday?" she asks.
"Yes," replies the machine.
Is it any wonder that Miss Parker is left with an uneasy feeling?
Human intelligence (or indeed animal intelligence in general) is constrained by the one-mind/one-body principle: one mind inhabits exactly one body, and vice versa, one body is inhabited by exactly one mind. We find it very difficult to deal with any form of intelligence that diverges from this maxim, and indeed consider multiple personality disorder a grave disease. Humans are used to the one-mind/one-body way of life; chatterboxes, on the other hand, can, as software entities, roam the Net and hop from "body" to "body." When facing a Turing Chatterbox, we may justifiably be unsure of the identity of the "mind" lurking within the box ("body"); this compounds the trust problem. It would be nice, at least as a stopgap measure, to be able to assign a unique face to the being that momentarily animates the box. Are "mind signatures" a new area of research for cryptography?
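To make the question concrete, here is one minimal sketch of what a "mind signature" might look like, assuming the mind carries a private signing key that never leaves it, and that a user's machine records the corresponding public key on first contact (a trust-on-first-use scheme). The sketch uses the Ed25519 primitives from the third-party Python cryptography package; the Mind class and the same_mind_as_yesterday check are our own illustrative names, not an established protocol.

```python
# A sketch of a "mind signature": the same mind proves its identity across
# different bodies by signing a fresh challenge with its private key.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


class Mind:
    """A software 'mind' that carries its signing key from body to body."""

    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()  # never leaves the mind

    @property
    def public_key(self) -> Ed25519PublicKey:
        return self._key.public_key()

    def respond(self, challenge: bytes) -> bytes:
        """Sign a fresh challenge to prove this is the same mind as before."""
        return self._key.sign(challenge)


def same_mind_as_yesterday(known_key: Ed25519PublicKey, mind: Mind) -> bool:
    """Trust-on-first-use check: verify today's body against yesterday's key."""
    challenge = os.urandom(32)  # fresh nonce, so old signatures cannot be replayed
    signature = mind.respond(challenge)
    try:
        known_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False


# On first visit, Miss Parker's computer records Dr. Jekyll's public key...
jekyll = Mind()
recorded_key = jekyll.public_key

# ...and the next day, whatever face appears on screen, the key tells the truth.
print(same_mind_as_yesterday(recorded_key, jekyll))  # True
print(same_mind_as_yesterday(recorded_key, Mind()))  # False: a different mind
```

Under these assumptions, whatever face the box chooses to wear, only the holder of yesterday's private key can answer today's challenge; a grinning chimpanzee that verifies is, cryptographically at least, still Dr. Jekyll.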
Golden retrievers, baboons, teenagers, and even chatterboxes will make mistakes or even mischief. What then? What happens when a Turing financial advisor misadvises an investor or when a Turing doctor mistreats a patient? Can Turing Chatterboxes be held accountable for their actions? With current human products (be they cars or software) we ultimately hold the manufacturers responsible. This is akin to holding parents responsible for the actions of their child. But what happens once the child flies the coop? We could at first hold the manufacturers of Turing Chatterboxes responsible for their products. However, as these boxes enter the social whirlpool, growing increasingly complex and autonomous, how do we keep them in check? Can we devise virtual prisons? The scenario becomes less like that of a manufacturer producing a (guaranteed) product and more like that of a parent raising a child.
We believe the years ahead will eventually see the coming of Turing Chatterboxes. In the short run, we shall be able to put them immediately to use in games and in jobs that mostly call for innocuous "small talk," such as Web interfaces, directory services, tourist information, and so forth. In the long run, though, we contend that the question of the boxes' intelligence will cede its place to the more burning issues arising from their use: trust, accountability, and identity.
We conclude that when machines begin to participate in social transactions, unresolved issues of trust and responsibility may well overshadow any raw reasoning ability they possess. Turing's final words are still as true as they were 50 years ago: "We can only see a short distance ahead, but we can see plenty there that needs to be done."
1. MacroVU Press. Mapping Great Debates: Can Computers Think? Bainbridge Island, Wash., 1998. (A "road map" of the machine intelligence debate: seven posters, 800 argument summaries, 500 references; see www.macrovu.com.)
2. Turing, A.M. Computing machinery and intelligence. Mind 59, 236 (Oct. 1950), 433–460.