
A computer taught itself the toughest game on the planet. And it's just getting started

A new self-teaching artificial intelligence is now the world's best Go player - and invents new ways to play.


Computers have now transcended human expertise at the game of Go (Getty Images/iStockphoto)

A new version of DeepMind's AlphaGo game-playing artificial intelligence system is now the world's best player of the ancient Asian board game Go, arguably the most difficult game on the planet.

Last year an older version of AlphaGo beat the world's best human player, but this new version is more than just an upgrade. 

Researchers at DeepMind, part of the Google family of companies, have taken a new approach with this AI. According to team leader Dr. David Silver, older versions learned by assimilating the playing styles of the best human Go players.

The new version, called AlphaGo Zero, instead taught itself how to play, practising entirely against itself. AlphaGo Zero then took on the program that had beaten the best human players, and won 100 games straight.
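To give a sense of what "teaching itself" means in practice, here is a minimal, hypothetical sketch of a self-play learning loop. It is not DeepMind's code or method: the toy game Nim (players alternately take one or two stones, and whoever takes the last stone wins) stands in for Go, and a simple table of move values stands in for AlphaGo Zero's deep neural network and tree search. The game, the names and the parameters are all illustrative assumptions; the point is only the shape of the idea, a program that starts with no knowledge and improves purely by playing games against itself.

```python
import random
from collections import defaultdict

# Hypothetical toy illustration of learning by self-play (not DeepMind's code).
# Nim stands in for Go; a lookup table stands in for a neural network.
PILE = 7            # starting number of stones
ACTIONS = (1, 2)    # each turn a player removes 1 or 2 stones; taking the last stone wins

Q = defaultdict(float)      # Q[(stones_left, move)] = estimated value for the player to move
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate (arbitrary illustrative values)

def legal_moves(stones):
    return [a for a in ACTIONS if a <= stones]

def choose_move(stones):
    """Mostly play the best-known move, occasionally explore a random one."""
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

# The program plays thousands of games against itself, starting from zero knowledge.
for game in range(20000):
    stones = PILE
    while stones > 0:
        move = choose_move(stones)
        remaining = stones - move
        if remaining == 0:
            target = 1.0    # the player who just moved took the last stone and wins
        else:
            # After this move the opponent plays, so our value is the
            # negative of the best value available to them.
            target = -max(Q[(remaining, b)] for b in legal_moves(remaining))
        # Nudge the table toward the new estimate.
        Q[(stones, move)] += ALPHA * (target - Q[(stones, move)])
        stones = remaining  # hand the position over to the other "player"

# After self-play training, print the policy the program discovered on its own.
for stones in range(1, PILE + 1):
    best = max(legal_moves(stones), key=lambda a: Q[(stones, a)])
    print(f"{stones} stones left: take {best}")
```

Run long enough, this tiny learner rediscovers Nim's known optimal strategy (leave your opponent a multiple of three stones whenever you can) without ever seeing a human game, a miniature echo of how AlphaGo Zero rediscovered, and then improved on, human patterns of Go play.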

Since 2016, DeepMind's AlphaGo computers have dominated human players. Here Chinese Go player Ke Jie plays a match against AlphaGo in May 2017. (Chinatopix via Associated Press)

How to Go from good to better

The new self-taught version of AlphaGo is not only more effective than older versions, it's more creative. In teaching itself, it rediscovered many of the patterns of play that human players have developed and used, but it also found new ones of its own that proved superior.

AlphaGo Zero is also more efficient than human-taught AIs. It learned faster and required far less computing power than previous versions.

But not the last word

AlphaGo Zero is not the last word in AI. According to Dr. Doina Precup, a professor in the School of Computer Science at McGill University and research team lead for DeepMind's newly formed Montreal team, self-teaching works best for well-defined and well-understood problems, including games that can be played out in simulation on a computer.

Go is thought to have been played for more than 2,000 years. Here a Korean couple in traditional dress play Go in the early 20th century (US Library of Congress)

Near-term applications of the approach could include modelling chemical reactions, drug discovery, materials science, or even finance. It is not as well suited to real-world problems that can't be easily simulated. However, a combination of self-teaching and human teaching could be a powerful way to explore more generally applicable kinds of artificial intelligence.

In the meantime, it may be time for a little role reversal, as humans learn to play Go better by studying how AlphaGo Zero plays.