Starting from random play and knowing just the game rules, AlphaZero defeated a world champion program in the games of Go, chess, and shogi (Japanese chess). (credit: DeepMind Technologies, Ltd.)
Google’s DeepMind, the group that brought you the champion game-playing AIs AlphaGo and AlphaGo Zero, is back with a new, improved, and more general version. Dubbed AlphaZero, this program taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention.
A paper describing the achievement was just published in Science. “Starting from totally random play, AlphaZero gradually learns what good play looks like and forms its own evaluations about the game,” said Demis Hassabis, CEO and co-founder of DeepMind. “In that sense, it is free from the constraints of the way humans think about the game.”
Chess has long been an ideal testing ground for game-playing computers and the development of AI. The very first chess computer program was written in the 1950s at Los Alamos National Laboratory, and in the late 1960s, Richard D. Greenblatt’s Mac Hack IV program was the first to play in a human chess tournament, and the first to win against a human in tournament play. Many other computer chess programs followed, each a little better than the last, until IBM’s Deep Blue computer defeated chess grandmaster Garry Kasparov in May 1997.