
Communications of the ACM

ACM TechNews

Could AlphaGo Bluff Its Way Through Poker?


Playing the fantasy card game Magic: The Gathering.


Credit: Max Mayorov/Flickr

University College London (UCL) lecturer David Silver suggests software similar to Google DeepMind's AlphaGo, which recently won a Go match against a human grandmaster, could be crafted to play poker competently.

In collaboration with UCL student Johannes Heinrich, Silver used deep reinforcement learning to generate an effective playing strategy for both Leduc Hold'em, a simplified poker variant, and Texas Hold'em. In Leduc Hold'em, the algorithm achieved a Nash equilibrium, the optimal strategy as defined by game theory. In Texas Hold'em, the algorithm reached the performance of an expert human player. AlphaGo's triumph over the human Go champion was rooted in its use of deep reinforcement learning and tree search to select winning moves. The former technique entails training a large neural network with positive and negative rewards, while the latter is a method for looking ahead in the game.
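To illustrate the reward-driven learning described above, the following is a minimal, hypothetical sketch of tabular Q-learning on a toy two-action betting game; it is not DeepMind's implementation, and the game, action names, and parameters are all illustrative assumptions.

```python
# Illustrative sketch only (not DeepMind's code): tabular Q-learning on a
# toy two-action game, showing how positive and negative rewards shape a policy.
import random

ACTIONS = ["fold", "bet"]          # hypothetical actions
Q = {a: 0.0 for a in ACTIONS}      # estimated value of each action
alpha, epsilon = 0.1, 0.2          # learning rate, exploration rate

def play(action):
    # Toy environment: "bet" wins 60% of the time, "fold" always loses a little.
    if action == "bet":
        return 1.0 if random.random() < 0.6 else -1.0
    return -0.1

for episode in range(5000):
    # Epsilon-greedy choice: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(Q, key=Q.get)
    reward = play(action)
    # Nudge the estimate toward the observed reward, whether positive or negative.
    Q[action] += alpha * (reward - Q[action])

print(Q)  # after training, "bet" should carry the higher estimated value
```

A deep reinforcement-learning system replaces the small lookup table with a large neural network, but the underlying idea of updating estimates from rewards is the same.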

Meanwhile, Google DeepMind and the University of Oxford are training a neural network to play the fantasy card games Magic: The Gathering and Hearthstone. The effort involves giving the network the ability to interpret the information displayed on each card, which may be either structured or unstructured.
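As a rough, hypothetical illustration of that structured/unstructured distinction (not the researchers' actual pipeline), numeric card fields can be passed to a model directly, while free-form rules text must first be encoded; the card, field names, and trivial bag-of-words encoding below are all assumptions for the sketch.

```python
# Hypothetical sketch: separating a card's structured fields from its
# unstructured rules text, which a real system would encode with a learned model.
from collections import Counter

def encode_card(name, cost, power, toughness, rules_text):
    structured = {"cost": cost, "power": power, "toughness": toughness}
    text_features = Counter(rules_text.lower().split())  # stand-in for a text encoder
    return {"name": name, "structured": structured, "text_features": text_features}

card = encode_card("Illustrative Drake", cost=3, power=2, toughness=3,
                   rules_text="Flying. When this creature enters the battlefield, draw a card.")
print(card["structured"], dict(card["text_features"]))
```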

From Technology Review

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
