On March 14, a U.S. surveillance drone was on a routine mission in international airspace over the Black Sea when it was intercepted by two Russian fighter jets. For nearly half an hour, the jets harassed the American system, an MQ-9 Reaper drone, buzzing past and dumping fuel over its wings and sensors. One of the jets clipped the Reaper's propeller, rendering it inoperable and forcing its American handlers to crash the drone into the sea. Not long after, Moscow awarded medals to the two Russian pilots involved in the incident.
The Reaper's every move—including its self-destruction after the collision—was overseen and directed by U.S. forces from a control room thousands of miles away. But what if the drone had not been piloted by humans at all, but by independent, artificially intelligent software? What if that software had perceived the Russian harassment as an attack? Given the breakneck speed of innovation in artificial intelligence (AI) and autonomous technologies, that scenario could soon become a reality.
Traditional military systems and technologies come from a world where humans make onsite, or at least real-time, decisions over life and death. AI-enabled systems are less dependent on this human element; future autonomous systems may lack it entirely. This prospect not only raises thorny questions of accountability but also means there are no established protocols for when things go wrong. What if an American autonomous drone bombed a target it was meant only to surveil? How would Washington reassure the other party that the incident was unintentional and would not recur?