The impressive progress in artificial intelligence (AI) over the past decade and the prospect of an impending global race in AI-based weaponry led to the publication, on July 28, 2015, of "Autonomous Weapons: An Open Letter from AI & Robotics Researchers," which has since gathered more than 20,000 signatories and calls for "a ban on offensive autonomous weapons beyond meaningful human control." Communications is following up on that letter with a Point/Counterpoint debate between Stephen Goose and Ronald Arkin on lethal autonomous weapons systems (LAWS), beginning on page 43.
"War is hell," said General William T. Sherman, a Union Army general during the American Civil War. Since 1864, the world's nations have developed a set of treaties (known as the "Geneva Conventions") aiming at somewhat diminishing the horror of war and ban weapons that are considered particularly inhumane. Some notable successes have been the banning of chemical and biological weapons, the banning of anti-personnel mines, and the banning of blinding laser weapons. Banning LAWS seems to be the next frontier in effort to "somewhat humanize" war.
While I am sympathetic to the desire to curtail a new generation of even more lethal weapons, I must confess to a deep sense of pessimism as I read the Open Letter, as well as the two powerful Point and Counterpoint articles. I suspect many computer scientists, like me, want to believe that, on the whole, computing benefits humanity. It is therefore disturbing for us to realize that computing is also a major contributor to military technology. In fact, since the 1991 Gulf War, information and computing technology has been a major driver of what has become known as the "Revolution in Military Affairs." The "third revolution in warfare" referred to in the Open Letter has already begun! Today, every information and computing technology has some military application. Let us not forget, for example, that the Internet grew out of the ARPAnet, which was funded by the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense. Do we really believe AI can, somehow, be exempted from military applicability? AI is already seeing wide military deployment.
Rather than call for a general ban on military applications of AI, the Open Letter calls for a more specific ban on "offensive autonomous weapons," which "select and engage targets without human intervention." But the concept of "autonomous" is intrinsically vague. In the 1984 science-fiction film The Terminator, the title character is a cyborg assassin sent back in time from the year 2029. The Terminator seems to be precisely the nightmarish future the Open Letter signatories are attempting to block, yet the Terminator did not select its fictional target, Sarah Connor; that selection was made by Skynet, an AI defense network that has become "self-aware." So the Terminator itself was not autonomous! In fact, the Terminator can be viewed as a "fire-and-forget" weapon, one that requires no further guidance after launch. My point here is not to debate a science-fiction scenario but to highlight the deep philosophical vagueness of the concept of autonomy.
Goose argues that ceding life-and-death decisions to machines on the battlefield crosses a fundamental moral and ethical line. This assumes humans make every life-and-death decision on today's battlefield. But today's battles are conducted by systems of enormous complexity. A lethal action is the result of many actions and decisions, some by humans and some by machines. Assigning causality for the composite actions of such highly complex systems is nearly impossible. The "fundamental moral and ethical line" Goose invokes is itself fundamentally vague.
Arkin's position is that AI technology could and should be used to protect noncombatants in the battlespace. I am afraid I am as skeptical of the potential of technology to humanize war as I am of the prospect of banning technology in war. Arkin argues that judicious design and use of LAWS can save noncombatant lives. Technically, this may be right. But the main effort of military designers has been, and will remain, to increase the lethality of their weapons. I fear that protecting noncombatant life has been, and will remain, a minor goal at best.
The bottom line is that the issue raised by the Open Letter and the Point/Counterpoint articles is both highly important and highly complex. Knowledgeable, well-meaning experts are arguing the two sides of the LAWS question. To the best of my knowledge, this is the first time the computing-research community is publicly grappling with an issue of such weight. That, I believe, is a very positive development.
Moshe Y. Vardi, EDITOR-IN-CHIEF
I think there's room for optimism inside your pessimism. Once autonomous technology advances far enough, it will no longer provide any advantage at all to have humans on the battlefield; they're far too fragile. So our wars will become machine-versus-machine battles that harm no humans (and will thus generally be won by the side with greater economic resources).
I think Asimov has a story about a future where war has been completely mechanized, until it is discovered that involving humans in war fighting offers a military advantage.
Weapons designed to attack only other weapons might be a good use for fully autonomous systems. If they disable tanks, artillery, guns, and explosives, they could effectively pacify an area that could then be more easily controlled through less lethal means. Imagine a swarm of little robots that move into an area and destroy all the weapons they find.
The following letter was published in the Letters to the Editor of the May 2016 CACM (http://cacm.acm.org/magazines/2016/5/201586).
--CACM Administrator
Moshe Y. Vardi concluded his Editor's Letter "On Lethal Autonomous Weapons" (Dec. 2015) by saying "Knowledgeable, well-meaning experts are arguing the two sides of the LAWS [lethal autonomous weapons systems] issue. To the best of my knowledge, this is the first time the computing-research community is publicly grappling with an issue of such weight." The debate about lethal autonomous weapons goes back to at least President Ronald Reagan's Strategic Defense Initiative (SDI, or "Star Wars"), which proposed deployment of weapons of various types, including missiles and lasers, that would be controlled by computers and based partly in space. The stated primary purpose of SDI was to shoot down missiles carrying nuclear weapons while in transit, but SDI proposals also considered destroying missile-launch facilities.
The SDI proposal prompted intense worldwide debate about its reliability and risks, including whether life-and-death decisions could and should be entrusted to computer systems. Computer scientists David Parnas, David Bellin, Severo Ornstein, Alan Borning, and others, led by an organization called Computer Professionals for Social Responsibility (CPSR, http://cpsr.org/), argued that computer technology is inherently too unreliable to be allowed to make such decisions on its own, without human oversight. CPSR sponsored several debates on the topic and published a book, Computers in Battle: Will They Work?,(1) that is often cited in today's publications on autonomous weapons.
Gary Chapman,(2) the editor of the book and CPSR's first executive director, was a key figure in broadening the scope of the debate to include all computerized autonomous weapons. He and his colleagues at the LBJ School of Public Affairs at the University of Texas at Austin and beyond published a number of papers and articles about the reliability, risks, and ethics of LAWS from the late 1980s until Chapman's untimely death at age 58 in 2010.
Regarding the Point/Counterpoint debate "The Case for Banning Killer Robots" by Stephen Goose and Ronald Arkin (also in Dec. 2015), anti-personnel landmines are nothing more and nothing less than autonomous weapons with an extremely simple algorithm: tread on me and I blow you up. Soldiers and other combatants place them in battle zones, and thereafter they operate without human control. The simplicity of the algorithm does not make them any less "autonomous" than high-tech AI-based weapons. Since well-accepted international treaties have banned them, there seems to be a strong precedent for banning computer-controlled LAWS as well.
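The letter's comparison can be made concrete with a minimal sketch, offered purely as an illustration and not drawn from the letter itself. It contrasts the landmine's trivial trigger rule with a hypothetical sensor-driven targeting rule: both "select and engage targets without human intervention," and only the complexity of the decision function differs. All names, thresholds, and classes below are invented for the example.

```python
# Illustrative only: two "autonomous" engagement policies that differ solely in
# the complexity of their decision function. All names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Contact:
    pressure_kg: float   # load sensed by a pressure plate
    classified_as: str   # e.g., "person", "vehicle", "animal"
    confidence: float    # classifier confidence in [0, 1]


def landmine_policy(contact: Contact) -> bool:
    """The letter's 'extremely simple algorithm': tread on me and I blow you up."""
    return contact.pressure_kg > 5.0


def ai_weapon_policy(contact: Contact) -> bool:
    """A hypothetical learned targeting rule: far more complex, yet the decision
    to engage is still made without human intervention."""
    return contact.classified_as == "person" and contact.confidence > 0.9


# Both functions select and engage without a human in the loop; that property
# does not depend on how sophisticated the rule is.
```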
Jeff Johnson (former chair of CPSR)
San Francisco, CA
REFERENCES:
(1) Bellin, D. and Chapman, G., Eds. Computers in Battle: Will They Work? Harcourt Brace Jovanovich, Boston, MA, 1987.
(2) Chapman, G. Thinking about 'autonomous' weapons. CPSR Newsletter 3 (Fall 1987), 11–14.
------------------------------------------------
AUTHOR'S RESPONSE:
I thank Johnson for his instructive letter about the history of the debate on lethal autonomous weapons. In spite of the long history of the subject, the issues raised by strategic-defense weapons, landmines, and newer AI-based LAWS are quite different and should not be conflated.
Moshe Y. Vardi, Editor-in-Chief
The following letter was published in the Letters to the Editor of the April 2016 CACM (http://cacm.acm.org/magazines/2016/4/200162).
--CACM Administrator
I was encouraged to see Communications address such weighty issues as lethal autonomous weapon systems through Moshe Y. Vardi's Editor's Letter "On Lethal Autonomous Weapons" (Dec. 2015) and the related Point/Counterpoint debate "The Case for Banning Killer Robots" by Stephen Goose and Ronald Arkin in the same issue. Computing professionals should indeed be paying attention to the effects of the software and hardware they create. I agree with those, like Goose, who say the use of technology in weapons should be limited. America's use of military force is regularly overdone, as in Iraq, Vietnam, and elsewhere. It seems that making warfare easier will only result in yet more wars.
ACM should also have similar discussions on other contentious public issues; for example, coal-fired power plants are probably today's most harmful machines, through the diseases they cause and their contribution to climate change.
ACM members might imagine they are in control of their machines, deriving only benefit from them. But their relationship with machinery (including computers) is often more like worship. Some software entrepreneurs even strive to "addict" users to their products.(1) Computing professionals should take a good look at what they produce, considering not just how novel or efficient or profitable it is but how it affects society and the environment.
Scott Peer
Glendale, CA
REFERENCE
(1) Schwartz, T. Addicted to distraction. New York Times (Nov. 28, 2015); http://www.nytimes.com/2015/11/29/opinion/sunday/addicted-to-distraction.html?_r=0
--------------------------------------------
AUTHOR'S RESPONSE
I agree with Peer that Communications should hold discussions on public-policy issues involving computing and information technology, though I do not think ACM members have any special expertise that can be brought to bear on the issue of coal-fired power plants.
Moshe Y. Vardi, Editor-in-Chief
The following letter was published in the Letters to the Editor of the March 2016 CACM (http://cacm.acm.org/magazines/2016/3/198861).
--CACM Administrator
Moshe Y. Vardi's Editor's Letter "On Lethal Autonomous Weapons" (Dec. 2015) said artificial intelligence is already found in a wide variety of military applications, that the concept of "autonomy" is vague, and that it is nearly impossible to determine the cause of lethal actions on the battlefield. It described as "fundamentally vague" the ethical line Stephen Goose drew in his Point side of the Point/Counterpoint debate "The Case for Banning Killer Robots" in the same issue. I concur with Vardi that the question of a ban on such technology is important for the computing-research community, but I think the answer to his philosophical logjam is readily available in the "ACM Code of Ethics and Professional Conduct" (http://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct), particularly its first two "moral imperatives": "Contribute to society and human well-being" and "Avoid harm to others." I encourage all ACM members to read or re-read them and consider whether they themselves should be working on lethal autonomous weapons, or indeed on any kind of weapon.
Ronald Arkin's Counterpoint was optimistic about robots' ability to "... exceed human moral performance ...," writing that a ban on autonomous weapons "... ignores the moral imperative to use technology to reduce the atrocities and mistakes that human warfighters make." This analysis has two main problems. First, Arkin tacitly assumed autonomous weapons will be used only by benevolent forces and that the "moral performance" of such weapons is incorruptible by those deploying them. The falsity of these assumptions is itself a strong argument for banning such weapons in the first place. Second, the reasons he cited in favor of weaponized autonomous robots are equally valid for a simpler and more sensible proposal: autonomous safeguards on human-controlled weapon systems.
What Arkin did not say was why the world even needs weaponized robots that are autonomous. To answer that question, I suggest he first conduct a survey among the core stakeholder group he identified, civilian victims of war crimes, to find out what they think.
Bjarte M. Østvold
Oslo, Norway
-----------------------------------------------
AUTHOR'S RESPONSE
The desire to eliminate war is an old one, but war is unlikely to disappear in the near future. "Just War" theory postulates that war, while terrible, is not always the worst option. As much as we may wish it, information technology will not get an exemption from military applications.
Moshe Y. Vardi, Editor-in-Chief