
Communications of the ACM

ACM TechNews

IST Researchers Exploit Vulnerabilities of AI-Powered Game Bots


[Image: An online gamer in action.]

Pennsylvania State University researchers developed an algorithm to train an adversarial bot to automatically identify and exploit the weaknesses of master game bots.

Credit: Parilov Egeniy/Adobe

Researchers at Pennsylvania State University (Penn State) have developed an algorithm to train an adversarial bot to automatically identify and exploit the weaknesses of master game bots trained by deep reinforcement learning algorithms.
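The core idea can be illustrated with a toy sketch: freeze a "master bot" policy and let an adversarial agent learn, purely from observed rewards, which action exploits it. The example below is a minimal, hypothetical stand-in (a biased rock-paper-scissors bot attacked with an epsilon-greedy bandit learner), not the Penn State code or StarCraft II setup; all names are illustrative.

```python
import random

# Hypothetical stand-in for a frozen "master bot": a fixed,
# rock-heavy rock-paper-scissors policy (0=rock, 1=paper, 2=scissors).
def victim_policy(rng):
    return rng.choices([0, 1, 2], weights=[0.5, 0.3, 0.2])[0]

def payoff(adv, vic):
    # +1 if the adversary wins, 0 on a draw, -1 on a loss.
    return [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][adv][vic]

def train_adversary(episodes=20000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: the adversary only sees rewards from
    playing the frozen victim, mirroring the black-box setting."""
    rng = random.Random(seed)
    value = [0.0] * 3   # running mean reward per action
    count = [0] * 3
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(3)              # explore
        else:
            a = max(range(3), key=value.__getitem__)  # exploit
        r = payoff(a, victim_policy(rng))
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # incremental mean update
    return max(range(3), key=value.__getitem__)

best = train_adversary()
# Against this rock-heavy victim, the learned exploit is paper (action 1).
```

The real work replaces the bandit with deep reinforcement learning and the toy victim with a trained game agent, but the structure is the same: the victim stays fixed while the adversary optimizes against it.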

The researchers used their bot to defeat a world-class artificial intelligence (AI) bot in the game StarCraft II, highlighting the security threat of deploying agents trained by deep reinforcement learning as game bots.

Said Penn State's Xinyu Xing, "By using our code, researchers and white-hat hackers could train their own adversarial agents to master many—if not all—multi-party video games. More importantly, game developers could use it to discover the vulnerabilities of their game bots and take rapid action to patch those vulnerabilities."

The researchers have publicly released their code and a variety of adversarial AI bots.

From Penn State News
View Full Article

 

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA


 
