
Communications of the ACM

ACM News

Weaponizing Robots


The SWORDS Talon armed unmanned ground vehicle, made by a U.S. subsidiary of Qinetiq.

Robots equipped with serious firepower are under development, with an eye on battlefield integration in the not-too-distant future.

Credit: Popular Mechanics

Science fiction writers often envision a future in which battles are fought not only by human soldiers, but also by automated robot armies that can enhance firepower while reducing humans’ exposure to injury or death. A world filled with Terminator-style killing machines remains highly unlikely – outside of Hollywood blockbusters – but robots equipped with serious firepower are being developed today, with an eye on battlefield integration in the not-too-distant future.

Indeed, in October 2013, four manufacturers of robots, iRobot Inc., Qinetiq, Northrop Grumman, and HDT Global, participated in a live-fire demonstration at the Fort Benning U.S. Army Post in Georgia. Each company demonstrated its robots' abilities to fire M240 machine guns and eliminate pop-up targets from a distance of 150 meters. The purpose of the live-fire exercise was to illustrate how robotic technology could be used to augment soldiers on the ground, similar to the way unmanned aerial vehicles (UAVs) have assisted various branches of the U.S. armed services.

"The live-fire portion was to determine if robots could be armed and support dismounted soldiers in future fights, to truly see if robots could be armed and support dismounted soldiers," says Jon Anderson, Director of Advanced Systems at Northrop Grumman.

Anderson notes that armed conflicts in which the U.S. becomes involved are typically "‘away games’ – we are always going to go somewhere. As such, with the financial and resource constraints the U.S. military faces today, it is very difficult and expensive to move tanks, Bradley Fighting Vehicles, and other large tactical weapons around the world quickly and efficiently."

The robots at the live-fire demonstration, Anderson says, allow heavier weapons than dismounted (on foot) soldiers could carry to be used in situations where additional support is unavailable. "While these robots cannot provide firepower on a parity with the larger pieces of equipment, they do afford more firepower to dismounts than is typically available today," and they can be moved into the theater of battle "a lot quicker, with a lot less expense."

Robots currently being developed for the military are designed to augment, rather than replace, human soldiers. One of the key technologies demonstrated at the live-fire event was Northrop Grumman’s CaMEL (Carry-all Modular Equipment Landrover), a robotic platform the defense contractor says is designed to provide soldiers easy access to weaponry in the field that would otherwise be too heavy to carry. The CaMEL, which looks like a ruggedized industrial cart mounted on tracks, can operate for up to 20 hours on 3.5 gallons of JP8 jet fuel, thanks to its hybrid electric-diesel engine design. It can be controlled by a soldier using a handheld device that looks much like a video game controller, a laptop computer, or a tablet attached to a vest. The CaMEL can identify targets at a distance of up to 3.5 kilometers (roughly 2.2 miles), using a daylight telescope or thermal imaging, and can be equipped with a grenade launcher, an automatic weapon, and anti-tank missiles. However, while the robot can identify targets, it cannot independently discharge its weapon; a human is always in control of the decision to use lethal force.

Indeed, the U.S. Department of Defense (DoD) says there are no current or future directives that call for fully automated, weaponized robots. "DoD currently has no unmanned autonomous weapon systems that use lethal force," says Maureen Schumann, a DoD spokesperson. "The Department does currently have defensive, human-supervised weapon systems that can operate in human-supervised, autonomous mode, such as the Aegis weapon system and the Patriot air defense system, which have been operated for decades. [But] there are currently no evolving or emerging requirements for autonomous lethal robots."

Other manufacturers that participated in the live-fire test, including iRobot, have produced robots that are currently being used in the field, such as the 710 Warrior, a three-foot-long mobile platform that allows soldiers in the field to manipulate objects remotely via the robot’s extendable grippers. The 710 Warrior can operate for up to 10 hours on battery power, and can be remotely controlled via digital radios at a range of up to 2,600 feet.

While ground robots currently are used only to support non-weaponized tasks (such as explosive ordnance detection and reconnaissance functions), equipping a robot with a machine gun or other weapon would permit the use of force in a battlefield situation without putting a soldier directly in harm’s way. This very benefit lies at the heart of the debate surrounding the use of autonomous or semi-autonomous robotic technology in battle. The moral question of injuring or killing humans without putting a soldier in harm’s way has already been raised with the increasing use of aerial drones, which can deliver lethal force to targets with no risk of injury or death to pilots.

The debate over the morality of using unmanned robots to deliver lethal force goes on, even as DoD has clearly drawn a line in the sand with respect to responsible uses of semi-autonomous and autonomous robots. In November 2012, DoD issued Directive 3000.09, which "establishes guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements." The directive restricts the development and use of fully autonomous weapons systems and platforms to those used to protect and respond to cyberattacks, rather than physical attacks. Furthermore, the directive ensures that semi-autonomous systems that have the capability to fire weapons are designed so that "in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator."

Robotics manufacturers such as Northrop Grumman and iRobot are adhering to this directive, both as a matter of law and of ethics.

"There will always be a human in the decision-making loop," says iRobot’s Charlie Vaida. "It is also iRobot’s stance that robots should not be capable of using lethal or less-than-lethal force on their own."

Keith Kirkpatrick is principal of 4K Research & Consulting, LLC, based in Lynbrook, NY.


Comments


Mark Gubrud

Although this article presents a nice survey of some current US lethal robot programs, it is a bit too accepting of the Pentagon spin that "there are currently no evolving or emerging requirements for autonomous lethal robots", and it seriously misrepresents the policy stated in the DoD's Nov. 2012 directive on autonomous weapons.

First, it is incorrect that the "directive restricts the development and use of fully autonomous weapons systems and platforms to those used to protect and respond to cyberattacks, rather than physical attacks." In fact, the directive states that it "Does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations". However, it also states that "Autonomous weapon systems may be used to apply non-lethal, non-kinetic force, such as some forms of electronic attack, against materiel targets". This refers to jammers and other electronic countermeasures against radar and communications, not cyberattacks as usually understood.

In addition, the directive allows the development and use of "lock-on-after-launch homing munitions" which it classifies as "semi-autonomous" but which are in fact fully autonomous in seeking, identifying and engaging targets, including humans, for kinetic and lethal attack. The directive really does not restrict US development and use of autonomous weapons in any meaningful way, as I have discussed here. At most, if a weapon is seen as "intended" to engage humans or to "select" targets fully autonomously for engagement by kinetic force, the directive requires that 3 Pentagon officials certify that certain criteria have been met, which are virtually the same criteria that would apply for any weapon system.

It is easy to believe that Northrop Grumman and iRobot are adhering to the directive, even for programs not directly funded by the DoD. Such adherence does not restrain them from doing anything they would do otherwise.

Mark Gubrud
International Committee for Robot Arms Control


Keith Kirkpatrick

Thanks for reading and for your comments, Mark. I would note that while you're correct -- adherence to the directive does not restrain robot manufacturers from doing anything -- my conversations with them indicated to me that, at least for the foreseeable future, development of fully autonomous robots has not been requested by the Department of Defense. Furthermore, both Northrop Grumman and iRobot explicitly indicated to me that they are not interested in developing fully autonomous armed robots for use on the battlefield.


