Diplomats and military experts from more than 90 countries gathered in Geneva in April 2015 for their second meeting on "lethal autonomous weapons systems," also known as fully autonomous weapons or, more colloquially, killer robots. Noted artificial intelligence expert Stuart Russell informed delegates that the AI community was beginning to recognize that the specter of autonomous weapons is damaging to its reputation, and indicated that several professional associations were moving toward votes to take a position on the topic.
On July 28, 2015, more than 1,000 AI professionals, roboticists, and others released an open letter promoting a "ban on offensive autonomous weapons beyond meaningful human control."
The following letter was published in the Letters to the Editor section of the February 2016 CACM (http://cacm.acm.org/magazines/2016/2/197433).
--CACM Administrator
Both sides of the Point/Counterpoint "The Case for Banning Killer Robots" (Dec. 2015) over lethal autonomous weapons systems (LAWS) seemed to agree the argument concerns weapons that, as Stephen Goose wrote in his "Point," ". . . once activated, would be able to select and engage targets without further human involvement." Arguments for and against LAWS share this common foundation, but where Goose argued for a total ban on LAWS-related research, Ronald Arkin, in his "Counterpoint," favored a moratorium while research continues. Both sides accept international humanitarian law (IHL) as the definitive authority on whether or not LAWS are humane weapons.
If I read them correctly, Goose's position was that because LAWS would be able to kill on their own initiative, they differ in kind from other technologically enhanced conventional weapons. That difference, he said, puts them outside the allowable scope of IHL, and they therefore ought to be banned. Arkin agreed LAWS differ from prior weapons systems but proposed the difference is largely one of degree of autonomy and that their lethal capability can be managed remotely when required. Arkin also said continued research will remedy deficiencies in LAWS, thereby likely reducing noncombatant casualties.
Stepping back from the questions of IHL and morality, one can see LAWS as the latest example of (more-or-less) off-the-shelf algorithms and hardware being integrated into weapons systems. Given this, the debate over LAWS fundamentally concerns how far AI research should advance when it produces dual-use technologies. The AI technologies that support driverless vehicles, aerial drones, facial recognition, and sensor-driven robotics are already in the public domain.
These technologies can be integrated into weapons of all sorts relatively cheaply and with only modest technical skill when equally modest levels of accuracy and reliability are acceptable. One need look only at the success of the AK-47 assault rifle and the Scud missile to know relatively inexpensive weapons are often as useful as their higher-priced counterparts. A clear implication of the debate is that AI research already enables the development and use of LAWS-like weapons by rogue states and terrorists.
No one can expect AI researchers to stop work on technologies that might become dual use solely on the basis of that possibility. LAWS may be excluded from national armories, but current AI technology all but assures their development and use by ungoverned actors.
Anthony Fedanzo
Corte Madera, CA