I pass on this description by John Bohannon
in Science Magazine of a recent triumph of A.I.:
This year, artificial intelligence (AI) passed a significant milestone when a computer program called AlphaGo beat the world's No. 2 Go player in a five-game match. It's not the first time that AI has surpassed human mastery of a game. After all, it was 20 years ago that IBM's Deep Blue first beat Garry Kasparov in a game of chess, toppling the world champion the following year in a six-game match. But that is where the similarity ends.
The rules of Go are more straightforward than those of chess: You simply place identical stones on a grid, capturing territory by surrounding your opponent's positions. But that simplicity and openness result in an explosion in the number of possible moves for a player to consider—far more than there are atoms in the known universe. That makes it impossible for AI to beat Go masters with an approach like that used by Deep Blue, which relies on hand-coded strategies from chess experts to evaluate each possible move.
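The scale of that explosion is easy to check with rough numbers. Using the commonly cited estimates of about 250 legal moves per turn and 150 turns per game for Go (versus roughly 35 and 80 for chess), a quick back-of-the-envelope calculation in Python confirms the comparison with the ~10^80 atoms in the observable universe:

```python
import math

# Rough, commonly cited estimates (assumptions, not exact counts):
# Go: ~250 legal moves per turn, ~150 turns per game.
# Chess: ~35 legal moves per turn, ~80 turns per game.
go_exponent = 150 * math.log10(250)     # log10 of 250**150
chess_exponent = 80 * math.log10(35)    # log10 of 35**80
atoms_exponent = 80                     # log10 of ~10**80 atoms

print(f"Go game tree: ~10^{go_exponent:.0f} move sequences")
print(f"Chess game tree: ~10^{chess_exponent:.0f} move sequences")
print(f"Atoms in the known universe: ~10^{atoms_exponent}")
```

So even chess's tree dwarfs the atom count, and Go's tree dwarfs chess's by hundreds of orders of magnitude, which is why exhaustive evaluation in the Deep Blue style cannot work.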
Instead, AlphaGo, designed by the London-based Google subsidiary DeepMind, studied hundreds of thousands of online Go games played between humans, using those sequences of moves as data for a machine-learning algorithm. Then AlphaGo played against itself—or, rather, slightly different versions of itself—over and over, fine-tuning its strategies with a technique called deep reinforcement learning. The final result is AI that wins not just with brute-force calculation, but with something that looks strikingly like human intuition.
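The self-play loop described above can be sketched in miniature. The toy below is an assumption for illustration, not AlphaGo: a trivial three-move game stands in for Go, a table of action preferences stands in for the deep network, and a plain REINFORCE policy-gradient update stands in for DeepMind's full training pipeline. The shape of the loop is the point: the current policy plays a frozen earlier snapshot of itself, and wins and losses nudge its move probabilities.

```python
import math
import random

random.seed(0)

def softmax(prefs):
    """Turn raw action preferences into a probability distribution."""
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    """Draw an action index from a probability distribution."""
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reward(my_move, opp_move):
    """Toy game (an assumption, not Go): higher move wins (+1), lower loses (-1)."""
    return (my_move > opp_move) - (my_move < opp_move)

prefs = [0.0, 0.0, 0.0]    # learnable preferences (a stand-in for network weights)
snapshot = list(prefs)     # frozen earlier version of the policy: the opponent
LR = 0.1                   # learning rate

for game in range(2000):
    probs = softmax(prefs)
    my_move = sample(probs)
    opp_move = sample(softmax(snapshot))
    r = reward(my_move, opp_move)
    # REINFORCE update: increase the probability of moves that led to wins,
    # decrease it for moves that led to losses.
    for a in range(3):
        grad = (1.0 if a == my_move else 0.0) - probs[a]
        prefs[a] += LR * r * grad
    if game % 500 == 499:
        snapshot = list(prefs)   # periodically refresh the opponent, as in self-play

print(softmax(prefs))  # probability mass concentrates on the dominant move 2
```

In this toy, move 2 weakly dominates, so after a few thousand self-play games the policy learns to play it almost always; AlphaGo's reinforcement-learning stage applies the same reward-driven update, but to a deep neural network over Go positions rather than a three-entry table.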
Most of the things we want AI to master involve a seemingly unmanageable number of possible decisions—walking a robot safely through a crowded room, routing driverless cars, making small talk with passengers. Because hard-coded rules fail for such tasks, AlphaGo's triumph shows just how powerful deep reinforcement learning can be.
D. Mackenzie, “Update: Why this week’s man-versus-Go match doesn’t matter (and what does),” News from Science (15 March 2016)
D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature 529, 484 (28 January 2016)