A computer taught itself the toughest game on the planet. And it's just getting started
Go-ing one better than humans
A new version of DeepMind's AlphaGo game-playing artificial intelligence system, called AlphaGo Zero, is now the world's best player of the world's most difficult game - the ancient Asian board game Go. Last year an older version of AlphaGo beat the world's best human player, but this new version is more than just an upgrade.
How to Go from good to better
The new self-taught version of AlphaGo is not only more effective than older versions, it's more creative. In teaching itself, it rediscovered many of the patterns of play that humans have developed over centuries, and also found superior new ones of its own. AlphaGo Zero is also more efficient than AIs trained on human games: it learned faster and required far less computing power than previous versions.
But not the best at everything
Its near-term applications could include modelling chemical reactions, drug discovery, materials science, or even finance. It is less well suited to real-world problems that can't be easily simulated. However, a combination of self-teaching and human teaching could be a powerful way to explore more generally applicable kinds of artificial intelligence. In the meantime, it may be time for a little role reversal: humans are now learning how to play Go better by studying how AlphaGo Zero plays.