
Communications of the ACM

ACM News

Here Come the Killer Robots


[Image: A fictional killer robot. Credit: Shutterstock]

Precursors to killer robots, such as armed drones, are being developed and deployed by nations including China, Israel, South Korea, Russia, the U.K., and the U.S.

In late 2023, the United Nations General Assembly approved its first-ever resolution on autonomous weapons systems, or weapons systems powered by artificial intelligence (AI). The resolution raised concerns over the possible negative impact of weapons systems that operate without the full, direct control of human operators.

Yet despite the U.N.'s concerns, AI is spreading rapidly in the world of war.

OpenAI, the maker of ChatGPT, is working with the U.S. Department of Defense on open-source, AI-powered cybersecurity software. Defense technology company Anduril is making a name for itself by selling a family of autonomous defense systems to the Pentagon. The U.S. Air Force, Army, and Navy are all testing and deploying AI weapons and defense systems.

And it's not just an American affair. AI for war and defense is on the rise globally, too. The war in Ukraine has become a testing ground for AI-powered warfare, with fully autonomous drones and truly networked battlefields defining the conflict. Dozens of countries are working on national AI plans with heavy defense components, including India, Japan, and the U.K. China and Russia in particular are investing heavily in AI for defense or war, according to the U.S. Government Accountability Office.

That's making the future of war look a lot more autonomous—whether we like it or not.

Many versions of AI weapons

AI-powered defense systems encompass a wide array of technologies. One predominant category is autonomous weapons systems, the technology with which the U.N. resolution was concerned.

"From missile defense systems to loitering munitions, there are many combat assets which can select and engage targets without immediate human control or oversight," says Nathan Wood, a researcher at Ghent University who studies autonomous weapons systems.

While we've had versions of autonomous weapons systems for decades, the latest advances in AI have made today's killer robots more sophisticated. Popular autonomous weapons systems include drones that can fly themselves, autonomous drone swarms, and AI targeting systems that automatically choose their targets.

One example is Israel's Harop drone, a loitering munition designed to linger autonomously over a battlefield and attack targets it detects, even if the soldiers who launched it are long gone. Another is the autonomous sentry gun under development in South Korea, which can fire at targets without a human gunner.

Still, AI systems for defense are not just killer robots. Some of the most significant uses of AI on the battlefield are support functions such as logistics, intelligence, decision-making, and surveillance, says Wood.

The U.S. Defense Department's Project Maven, initially developed in partnership with Google, uses AI to process video feeds from drones and extract actionable intelligence. Another AI system, called GAMECHANGER, lets military officials query the vast volumes of policy and requirements documents held by the Department of Defense.
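To make the video-analysis idea concrete, here is a minimal sketch of frame-level object detection, the broad class of technique such systems build on. It is purely illustrative, assuming an off-the-shelf pretrained detector from the open-source torchvision library; the flag_frame helper, the 0.8 confidence threshold, and the stand-in frame are assumptions for demonstration, not a description of Maven's actual pipeline.

    # Illustrative sketch only -- not Project Maven's actual pipeline.
    # Shows the general shape of "scan video frames, flag objects of
    # interest" using torchvision's pretrained Faster R-CNN detector.
    import torch
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    preprocess = weights.transforms()        # converts/normalizes frames
    categories = weights.meta["categories"]  # COCO class names

    def flag_frame(frame, threshold=0.8):
        """Return (label, confidence) pairs for detections above threshold."""
        with torch.no_grad():
            detections = model([preprocess(frame)])[0]
        return [(categories[label], score.item())
                for label, score in zip(detections["labels"],
                                        detections["scores"])
                if score >= threshold]

    # Stand-in for one decoded video frame (3-channel, 480x640, uint8).
    frame = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)
    print(flag_frame(frame))

A real system would decode actual video, run on dedicated hardware, and use models trained on domain-specific imagery rather than COCO's everyday object classes.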

"Behind any military enterprise, there is a long tail of logistics, personnel, material, and other support functions, and AI is revolutionizing how those things may be managed," says Wood.

In addition, not all AI defense systems operate entirely on their own.

Many militaries around the world are engaging in "human-machine teaming," says Anna Nadibaidze, an autonomous weapons researcher at the University of Southern Denmark. This means augmenting warriors with AI technology to improve their effectiveness and decision-making, rather than having AI select and destroy targets on its own.

Plenty of debate

Understandably, there are plenty of fears, criticisms, and concerns around AI-powered defense systems. Skynet, anyone?

The first is whether AI defense systems should exist at all. In the case of autonomous weapons, researchers question whether AI-powered weapons can or should make battlefield decisions, and whether they are even legal.

"From a legal perspective, using AI in weapons systems poses unsolved issues regarding responsibility and liability in case of malfunction or third-party interference," says Verena Jackson, a lawyer and researcher at The Center for Intelligence and Security (CISS) at Germany's Bundeswehr University Munich.

Another concern is the fact that, despite its power, AI can go wrong. "AI is brittle in many ways. It can be tricked, will have limitations, and can make its own mistakes, some of which can be truly stupid," says Wood.

Another major issue with using AI for defense concerns the people using it. AI-enabled systems require operators and handlers who have a deep, subtle understanding of how the systems function. Without that expertise, it is easy to overestimate a system's capabilities, or to deploy it in environments where it should not be used.

"While there are obviously technical challenges we need to overcome in order to responsibly and reliably deploy AI for defense, we must bear in mind that there will be institutional challenges which are just as, if not more, significant," explains Wood.

Personnel must learn an entirely new set of AI-focused skills to operate these systems responsibly, including how and when to trust a system, how to oversee it, and how to recognize when it will be a liability.

We might also simply become too reliant on AI systems, Wood concedes. "One of the largest downsides may be that we over-rely on AI systems, losing our own judgment and care in the application of force."

Yet despite the horrors of war, autonomous weapons systems are not all bad, say some experts.

"Autonomous weapons, if employed properly, could actually make war less risky for everyone," says Anthony Pfaff, Research Professor for Strategy, the Military Profession, and Ethics at the Strategic Studies Institute of the U.S. Army War College.

Autonomous weapons can save soldiers' lives by providing capabilities that traditional weapons cannot, and can even remove the need for soldiers to appear on the battlefield at all.

Autonomous weapons also tend to be more precise than human combatants, which limits collateral damage. They don't get an "itchy trigger finger" because they don't get angry, tired, or vengeful, Pfaff says. They also may speed up decision-making on the battlefield, which could end some conflicts more quickly.

Pfaff believes that at least some organizations, like the U.S. Department of Defense, are putting significant thought into the governance and ethics around these technologies.

However, despite attention from the U.N., don't expect seriously restrictive regulations on AI weapons any time soon, says Nadibaidze. The benefits of autonomous AI-powered weapons and defense systems outweigh the concerns when it comes to the final decision-makers—the governments using them.

"Many governments are keen on regulating AI, but not so much in the military sphere because they see it as too important to limit," she says.

 

Logan Kugler is a freelance technology writer based in Tampa, Florida. He is a regular contributor to CACM and has written for nearly 100 major publications.


 
