
Communications of the ACM

Viewpoint

Turing's Red Flag


[Illustration] The 19th-century U.K. Locomotive Act, also known as the Red Flag Act, required motorized vehicles to be preceded by a person waving a red flag to signal the oncoming danger. Credit: Peter Jackson / Look and Learn

Movies can be a good place to see what the future looks like. According to Robert Wallace, a retired director of the CIA's Office of Technical Service: "... When a new James Bond movie was released, we always got calls asking, 'Do you have one of those?' If I answered 'no', the next question was, 'How long will it take you to make it?' Folks didn't care about the laws of physics or that Q was an actor in a fictional series—his character and inventiveness pushed our imagination ..." [3] As an example, the CIA successfully copied the shoe-mounted, spring-loaded, poison-tipped knife in From Russia With Love. It is interesting to speculate on what else the Bond movies may have inspired.

For this reason, I have been considering what movies predict about the future of artificial intelligence (AI). One theme that emerges in several science fiction movies is that of an AI mistaken for human. In the classic movie Blade Runner, Rick Deckard (Harrison Ford) tracks down and destroys replicants that have escaped and are visually indistinguishable from humans. Tantalizingly, the film leaves open the question of whether Deckard is himself a replicant. More recently, the movie Ex Machina centers on a type of Turing Test in which the robot Ava tries to be convincingly human enough to trick someone into helping her escape. And in Metropolis, one of the very first science fiction movies, a robot disguises itself as the woman Maria and thereby incites the workers to revolt.


Comments


CACM Administrator

The following letter was published in the Letters to the Editor section of the September 2016 CACM (http://cacm.acm.org/magazines/2016/9/206245).
--CACM Administrator

Toby Walsh's Viewpoint "Turing's Red Flag" (July 2016) raised very good points about the safety of increasingly human-like AI and proposed a commonsense law to anticipate potential risks. It is wise to discuss such protections before the technology itself is perfected. Too often the law trails the technology, as with the Digital Millennium Copyright Act, which responded perhaps a decade late to illegal file sharing.

Walsh primarily addressed the potential threat of autonomous systems being mistaken for humans, but what about the reverse? Humans could gain an unfair or even a dangerous advantage by impersonating an AI. For instance, in a world where autonomous vehicles are allowed smaller following distances and prompt extra caution from nearby human drivers, a human could install an "I am autonomous" identity device in order to tailgate and weave through traffic with impunity, having won unearned trust from other drivers and vehicles.

A similar situation could arise with the advent of bots that act as intermediaries between humans and online services, including, say, banks. As bots become more trusted, a human-in-the-middle attack could compromise everyone's private data.

At perhaps the outer reaches of techno-legal tension, we could even imagine the advent of identity theft where the individual is an AI, lovingly brought to life by a Google or an Amazon, and the thief to be punished is a human impersonator. Is this the route through which AIs might someday become legal persons? In a world where the U.S. Supreme Court has already extended constitutional free speech rights to corporations, this scenario seems quite plausible.

Mark Grossman
Palo Alto, CA

---------------------------------------

AUTHOR'S RESPONSE:

Grossman makes a valid point. Just as we do not want bots to be intentionally or unintentionally mistaken for humans, as I suggested in my Viewpoint, we also do not want the reverse. The autonomous-only lane on the highway should not have humans in it pretending to be, say, the equivalent of more-capable autonomous drivers.

Toby Walsh
Berlin, Germany


CACM Administrator

The following letter was published in the Letters to the Editor section of the December 2016 CACM (http://cacm.acm.org/magazines/2016/12/210384).
--CACM Administrator

Toby Walsh's Viewpoint "Turing's Red Flag" (July 2016) proposed a legislative remedy for various potential threats posed by software robots that humans might mistake for fellow humans. Such an approach seems doomed to fail. First, unless a "red flag" law were adopted by all countries, we humans would have the same problems identifying and holding violators accountable that we have with cybercrime generally. Second, though Walsh acknowledged it would take a team of experts to devise an effective law, it would likely be impossible to devise one that addresses all possible interactions with non-humans without leading to patently silly regulations, as with the original 19th-century Red Flag Act. How would a law handle algorithm-based securities trading? And what if one human is dealing with another human, but that human has an AI whispering in his or her ear (or implanted in his or her brain) dictating what to say or do?

More important, the most significant potential harms from bots or sophisticated AIs generally would not be mitigated by just knowing when we are dealing with an AI. The harm Walsh proposed to address seemed more aimed at the "creep factor" of mistaking AIs for humans. We have been learning to deal with that since we first encountered a voicemail tree or political robocall. Apart from suffering less emotional shock, what advantage might we gain from knowing we are not dealing with a fellow human?

Learning to live with AIs will involve plenty of consequential challenges. Will they wipe us out? Should an AI that behaves exactly like a human, emotional responses and all, have the legal rights of a human? If AIs can do all the work humans do, but better, how could we change the economic system to provide some of the benefits of abundance made possible by AI-based automation to the 99% whose jobs might be eliminated? Moreover, what will we humans do with our time? How will we even justify our existence to ourselves? These sci-fi questions are quickly becoming real-life questions, requiring more than a red flag to address them.

Martin Smith
McLean, VA

---------------------------------------

AUTHOR'S RESPONSE:

This critique introduces many wider, orthogonal issues, such as existential risk and technological unemployment. Yes, it will be difficult to devise a law that covers every situation. But that is true of most laws and does not mean we should have no law. However, actions speak loudest: the New South Wales Parliament in Australia has just recommended such a law; for more, see http://tinyurl.com/redflaglaw.

Toby Walsh
Berlin, Germany


