Moshe Y. Vardi's Editor's Letter "On Lethal Autonomous Weapons" (Dec. 2015) said artificial intelligence is already found in a wide variety of military applications, the concept of "autonomy" is vague, and it is nearly impossible to determine the cause of lethal actions on the battlefield. It described as "fundamentally vague" Stephen Goose's ethical line in his Point side of the Point/Counterpoint debate "The Case for Banning Killer Robots" in the same issue. I concur with Vardi that the question of a ban on such technology is important for the computing research community but think the answer to his philosophical logjam is readily available in the "ACM Code of Ethics and Professional Conduct" (http://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct), particularly its first two "moral imperatives": "Contribute to society and human well-being" and "Avoid harm to others." I encourage all ACM members to read or re-read them and consider whether they themselves should be working on lethal autonomous weapons, or indeed on any kind of weapon.
Ronald Arkin's Counterpoint was optimistic regarding robots' ability to "... exceed human moral performance ...," writing that a ban on autonomous weapons "... ignores the moral imperative to use technology to reduce the atrocities and mistakes that human warfighters make." This analysis suffers from two main problems. First, Arkin tacitly assumed autonomous weapons will be used only by benevolent forces, and that the "moral performance" of such weapons is incorruptible by those deploying them. The falsity of these assumptions is itself a strong argument for banning such weapons in the first place. Second, the reasons he cited in favor of weaponized autonomous robots apply equally to a simpler and more sensible proposal: autonomous safeguards on human-controlled weapons systems.