
Communications of the ACM

ACM News

The Pentagon Is Bolstering Its AI Systems—by Hacking Itself


The Pentagon.

The Pentagon's Joint Artificial Intelligence Center has created a machine learning "red team" to probe pretrained models for weaknesses.

Credit: Jeremy Christensen/Alamy

The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that without due care, the technology could perhaps hand enemies a new way to attack.

The Joint Artificial Intelligence Center, created by the Pentagon to help the US military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge with using AI for military ends. A machine learning "red team," known as the Test and Evaluation Group, will probe pretrained models for weaknesses. Another cybersecurity team examines AI code and data for hidden vulnerabilities.
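To make the vetting concrete: one well-known class of hidden vulnerability in distributed models is that Python-pickle-based model files can execute arbitrary code when they are deserialized. The sketch below is a hypothetical illustration, not the DoD teams' actual tooling; it lists the pickle opcodes in a model file and flags the ones capable of invoking code. The file name pretrained_model.pkl and the scan_pickle helper are assumptions made for the example.

```python
# Hypothetical sketch of screening a serialized model file for
# code-execution risk. Pickle files can run arbitrary code on load,
# so a vetting step might list the opcodes that invoke callables.
import pickletools

# Opcodes that can resolve or call Python objects during unpickling.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path):
    """Print any opcodes in a pickle file that can invoke callables."""
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS:
                print(f"offset {pos}: {opcode.name} {arg!r}")

# Illustrative usage; the model file name is an assumption.
scan_pickle("pretrained_model.pkl")
```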

Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way to write computer code. Instead of writing rules for a machine to follow, machine learning generates its own rules by learning from data. The trouble is, this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
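A standard way to see this brittleness is an adversarial example: a perturbation too small for a person to notice that nonetheless flips a model's prediction. The article does not describe the Test and Evaluation Group's methods, so the sketch below is only an illustration of the general idea, using the fast gradient sign method (FGSM) in PyTorch; the model choice, the fgsm_attack helper, and the epsilon value are assumptions.

```python
# Minimal FGSM sketch: nudge each input pixel slightly in the direction
# that increases the model's loss, which can flip its prediction.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(pretrained=True)
model.eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of its loss gradient.
    return (image + epsilon * image.grad.sign()).detach()

# Illustrative usage with a random tensor standing in for a real image.
x = torch.randn(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```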

"For some applications, machine learning software is just a bajillion times better than traditional software," says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning "also breaks in different ways than traditional software."

From Wired
View Full Article

