

'Machines Set Loose to Slaughter': the Dangerous Rise of Military AI



Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns.

Credit: Pakpoom Makpan/Getty Images/iStockphoto

The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defences are weak or nonexistent.

Some military experts argued that Slaughterbots – which was made by the Future of Life Institute, an organisation researching existential threats to humanity – sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which "Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff". One "microsystems collaborative" has already released Octoroach, an "extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground". It is only one of many "biomimetic", or nature-imitating, weapons that are on the horizon.

 

From The Guardian (U.K.)



 


 
