
Communications of the ACM

ACM News

Should Algorithms Control Nuclear Launch Codes? The U.S. Says No


U.S. soldiers on the move.

U.S. military leaders have often said a human will remain “in the loop” for decisions about the use of deadly force by autonomous weapon systems. However, the official policy does not require this to be the case.

Credit: Chung Sung-Jun/Getty Images

Last Thursday, the U.S. State Department outlined a new vision for developing, testing, and verifying military systems—including weapons—that make use of artificial intelligence (AI).

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy represents an attempt by the U.S. to guide the development of military AI at a crucial time for the technology. The document does not legally bind the U.S. military, but the hope is that allied nations will agree to its principles, creating a kind of global standard for building AI systems responsibly. 

Among other things, the declaration states that military AI should be developed in accordance with international law, that nations should be transparent about the principles underlying their technology, and that high standards should be implemented for verifying the performance of AI systems. It also says that humans alone should make decisions around the use of nuclear weapons.

From Wired
View Full Article

