
Communications of the ACM

Point/Counterpoint

The Case For Banning Killer Robots: Counterpoint


[Illustration: The Case for Banning Killer Robots. Credit: The Economist]

Let me unequivocally state: The status quo with respect to innocent civilian casualties is utterly and wholly unacceptable. I am not in favor of lethal autonomous weapon systems (LAWS), nor of lethal weapons of any sort. I would hope that LAWS would never need to be used, as I am against killing in all its manifold forms. But if humanity persists in entering into warfare, which is an unfortunate underlying assumption, we must protect the innocent noncombatants in the battlespace far better than we currently do. Technology can, must, and should be used toward that end. Is it not our responsibility as scientists to look for effective ways to reduce man's inhumanity to man through technology? Research in ethical military robotics could and should be applied toward achieving this goal.

I have studied ethology (the behavior of animals in their natural environment) as a basis for robotics for my entire career, spanning frogs, insects, dogs, birds, wolves, and human companions. Nowhere has it been more depressing than in studying human behavior on the battlefield (see, for example, the Surgeon General's Office 2006 report [10] and Killing Civilians: Method, Madness, and Morality in War [9]). The commonplace slaughter of civilians in conflict over millennia gives rise to my pessimism about reforming human behavior, yet provides optimism that robots may be able to exceed human moral performance in similar circumstances. The regular commission of atrocities is well documented both historically and in the present day, reported almost daily. Given this unfortunately low bar, my claim that robots may eventually be able to outperform humans with respect to adherence to international humanitarian law (IHL) in warfare (that is, be more humane) is credible. I have the utmost respect for our young men and women in the battlespace, but they are placed into situations where no human has ever been designed to function. This is exacerbated by the tempo at which modern warfare is conducted. Expecting widespread compliance with IHL given this pace and the resultant stress seems unreasonable, and perhaps unattainable by flesh-and-blood warfighters.


Comments


CACM Administrator

The following letter was published in the Letters to the Editor of the March 2016 CACM (http://cacm.acm.org/magazines/2016/3/198861).
--CACM Administrator

I am writing to express dismay at the argument by Ronald Arkin in his Counterpoint in the Point/Counterpoint section "The Case for Banning Killer Robots" (Dec. 2015) on the proposed ban on lethal autonomous weapons systems. Arkin's piece was replete with high-minded moral concern for the ". . . status quo with respect to innocent civilian casualties . . ." [italics in original], the depressing history of human behavior on the battlefield, and, of course, for ". . . our young men and women in the battlespace . . . placed into situations where no human has ever been designed to function." There was an incongruity in Arkin's position only imperfectly disguised by these sentiments. While Arkin deplored the ". . . regular commission of atrocities . . ." in warfare, nowhere in his Counterpoint (nor, to my knowledge, anywhere in his extensive writings) was there any corresponding statement deploring the actions of the U.S. President and his advisors, who, in 2003, through reliance on the technological superiority they commanded, placed U.S. armed forces in the situations that gave us, helter-skelter, the images of tens of thousands of innocent civilian casualties, many thousands of men and women combatants returning home mutilated or psychologically damaged, and the horrors of Abu Ghraib military prison.

Is it still surprising that an enemy subject to the "magic" of advanced weapons technology would resort to the brutal minimalist measures of asymmetric warfare, and that combatants who see their comrades maimed and killed by these means sometimes resort to the behavior Arkin deplores?

In the face of clear evidence that technological superiority lowers the barrier to waging war, Arkin proposed the technologist's dream: weapons systems engineered with an ethical governor to ". . . outperform humans with respect to international humanitarian law (IHL) in warfare (that is, be more humane) . . ." Perfect! Lower the barrier to war even further, reducing consideration of harm and loss to one's own armed forces while at the same time representing it as a gentleman's war, waged at the highest ethical level.

Above all, I reject Arkin's use of the word "humane" in this context. My old dictionary in two volumes (1) gives this definition:

Humane "Having or showing the feelings befitting a man, esp. with respect to other human beings or to the lower animals; characterized by tenderness and compassion for the suffering or distressed."

Those, like Arkin, who speak of "ethical governors" implemented in software, or of robots behaving more "humanely" than humans, are engaging in a form of semantic sleight of hand, the ultimate consequence of which is to debase the deep meaning of words and reduce human feeling, compassion, and judgment to nothing more than the result of a computation. Far from fulfilling, as Arkin wrote, ". . . our responsibility as scientists to look for effective ways to reduce man's inhumanity to man through technology . . . ," this is a mockery and a betrayal of our humanity.

William M. Fleischman
Villanova, PA

REFERENCE
(1) Emery, H.G. and Brewster, H.K., Eds. The New Century Dictionary of the English Language. D. Appleton-Century Company, New York, 1927.

-------------------------------------------------
AUTHOR'S RESPONSE

While Fleischman questions my motive, I contend it is based solely on the right to life being lost by civilians in current battlefield situations. His jus ad bellum argument, that such technology lowers the threshold of warfare, is a common one and deserves to be addressed. The lowering of the threshold of warfare holds for the development of any asymmetric warfare technology that provides a one-sided advantage; robotics is just one, as one might see in, say, cyberwarfare. Yes, it could encourage adventurism. The solution then is to stop all research into advanced military technology. If Fleischman can make this happen, I would admire him for it. But in the meantime we must protect civilians better than we do, and technology can, must, and should be applied toward this end.

Ronald C. Arkin
Atlanta, GA

