After an extraordinarily close contest, Google’s artificially intelligent Go-playing computer system has beaten Lee Sedol, one of the world’s top players, in the first game of their historic five-game match at Seoul’s Four Seasons hotel. Known as AlphaGo, this Google creation not only proved it can compete with the game’s best, but also showed off its remarkable ability to learn the game on its own.
A group of Google researchers spent the last two years building AlphaGo at an AI lab in London called DeepMind. Until recently, experts assumed that another ten years would pass before a machine could beat one of the top human players at Go, a game that is exponentially more complex than chess and requires, at least among the top humans, a certain degree of intuition. But DeepMind accelerated the progress of computer Go using two complementary forms of machine learning—techniques that allow machines to learn certain tasks by analyzing vast amounts of digital data and, in essence, practicing these tasks on their own.
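The article does not spell out the algorithms, but the two complementary forms it alludes to correspond to supervised learning from recorded human games and reinforcement learning through self-play. The toy Python sketch below is purely illustrative and is not AlphaGo's actual method (which trained deep neural networks and combined them with tree search); it uses a made-up 9x9 board and hypothetical helper names (policy_from_expert_data, refine_by_self_play) only to show the shape of "learning from data" followed by "practicing on its own."

    # Illustrative sketch only. All data here is fabricated and the "game"
    # is a stand-in; AlphaGo's real pipeline used deep networks and search.
    import random

    BOARD_POINTS = list(range(9 * 9))  # toy 9x9 board, not the full 19x19 game

    def policy_from_expert_data(games):
        """First form of learning: estimate move preferences by counting
        how often each point was played in recorded human games."""
        counts = {p: 1 for p in BOARD_POINTS}  # smoothed counts
        for moves in games:
            for move in moves:
                counts[move] += 1
        total = sum(counts.values())
        return {p: c / total for p, c in counts.items()}

    def refine_by_self_play(policy, episodes=100):
        """Second form of learning: play games against itself and nudge the
        policy toward moves that appeared in winning games."""
        for _ in range(episodes):
            moves = random.choices(list(policy),
                                   weights=list(policy.values()), k=20)
            won = random.random() < 0.5  # stand-in for a real game outcome
            for move in moves:
                policy[move] *= 1.05 if won else 0.95
            norm = sum(policy.values())  # renormalize to a distribution
            for p in policy:
                policy[p] /= norm
        return policy

    if __name__ == "__main__":
        # Fabricated stand-in for a database of expert games.
        expert_games = [[random.choice(BOARD_POINTS) for _ in range(20)]
                        for _ in range(50)]
        policy = policy_from_expert_data(expert_games)
        policy = refine_by_self_play(policy)
        print("most-favored point:", max(policy, key=policy.get))

In the real system, the first stage gave AlphaGo a strong starting policy learned from human play, and the self-play stage let it improve beyond its training data, which is why the combination mattered.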
From Wired