
In 1992, researchers at IBM developed TD-Gammon, combining a learning-based system with a neural network to play the game of backgammon. Learning-based systems and self-play are elegant research concepts which have facilitated remarkable advances in artificial intelligence, and we expect these methods could be applied to many other domains. Using the advances described in our Nature paper, AlphaStar was ranked above 99.8% of active players on Battle.net, and achieved a Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg.
We chose to use general-purpose machine learning techniques – including neural networks, self-play via reinforcement learning, multi-agent learning, and imitation learning – to learn directly from game data.
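To make the self-play idea concrete, below is a minimal sketch that trains a policy against itself on a toy zero-sum game (rock-paper-scissors) rather than StarCraft II. The tabular softmax policy and REINFORCE-style update are assumptions chosen for brevity, not AlphaStar's architecture, which combines large neural networks with imitation learning from human replays before any self-play.

```python
# Minimal sketch of self-play via reinforcement learning, assuming a toy
# zero-sum game (rock-paper-scissors) in place of StarCraft II. The softmax
# policy and REINFORCE-style update are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
# PAYOFF[a, b] = reward to the learner for playing a against b
PAYOFF = np.array([[ 0, -1,  1],   # rock
                   [ 1,  0, -1],   # paper
                   [-1,  1,  0]])  # scissors
logits = np.zeros(3)               # one policy plays both seats

def sample(logits):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(3, p=probs), probs

for _ in range(5000):
    a, probs = sample(logits)      # the learner's move
    b, _ = sample(logits)          # the opponent is the same policy: self-play
    reward = PAYOFF[a, b]
    grad = -probs                  # gradient of log pi(a) w.r.t. the logits
    grad[a] += 1.0
    logits += 0.05 * reward * grad # reinforce moves that beat the current self

_, final_probs = sample(logits)
print(np.round(final_probs, 3))   # hovers near the uniform Nash equilibrium
```

The shape of the loop – play against a copy of yourself, then reinforce the moves that won – is the part that carries over, with far richer networks and infrastructure, to games like StarCraft II.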
TL;DR: AlphaStar is the first AI to reach the top league of a widely popular esport without any game restrictions. This January, a preliminary version of AlphaStar challenged two of the world's top players in StarCraft II, one of the most enduring and popular real-time strategy video games of all time. Since then, we have taken on a much greater challenge: playing the full game at a Grandmaster level under professionally approved conditions. Our new research differs from prior work in several key regards. First, AlphaStar now has the same kind of constraints that humans play under – including viewing the world through a camera, and stronger limits on the frequency of its actions (in collaboration with StarCraft professional Dario “TLO” Wünsch); a toy illustration of such a rate limit appears below. Second, AlphaStar can now play in one-on-one matches as and against Protoss, Terran, and Zerg – the three races present in StarCraft II – and each of the Protoss, Terran, and Zerg agents is a single neural network. Third, the League training is fully automated, and starts only with agents trained by supervised learning, rather than from previously trained agents from past experiments; a sketch of this kind of league also follows below. Finally, AlphaStar played on the official game server, Battle.net, using the same maps and conditions as human players.
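As a toy illustration of the human-style constraints, here is a sliding-window rate limiter of the kind hinted at above. The specific budget is an assumption made for the example; the Nature paper defines the limits actually used.

```python
# A toy sliding-window rate limiter in the spirit of the action-frequency
# constraint mentioned above. The budget (22 actions per 5 seconds) is an
# illustrative assumption, not a quote of AlphaStar's actual limits.
from collections import deque

class ActionRateLimiter:
    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.times = deque()                 # timestamps of executed actions

    def try_act(self, now: float) -> bool:
        while self.times and now - self.times[0] > self.window:
            self.times.popleft()             # forget actions outside the window
        if len(self.times) >= self.max_actions:
            return False                     # over budget: the action is dropped
        self.times.append(now)
        return True

limiter = ActionRateLimiter(max_actions=22, window_seconds=5.0)
# An agent that requests an action every 100 ms gets throttled:
executed = sum(limiter.try_act(now=0.1 * t) for t in range(100))
print(f"{executed} of 100 requested actions executed")
```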
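And here is a loose sketch of the automated league idea: start from a supervised agent, periodically freeze snapshots into a population, and preferentially train against the opponents the learner still loses to. The scalar "skill", the Elo-style win probability, and all the numbers are hypothetical stand-ins for networks and real match outcomes; the actual league also uses several specialised agent types.

```python
# A loose sketch of automated league training under toy assumptions: agents
# are reduced to a single Elo-like "skill" scalar, and matchmaking favours
# opponents the learner still loses to. None of these names or numbers come
# from AlphaStar's code.
import copy
import random

class Agent:
    def __init__(self, skill: float):
        self.skill = skill                    # stand-in for network weights

    def win_prob(self, opp: "Agent") -> float:
        return 1.0 / (1.0 + 10 ** ((opp.skill - self.skill) / 4.0))

    def train_against(self, opp: "Agent") -> None:
        # Toy "gradient step": harder opponents teach more.
        self.skill += 0.1 * (1.0 - self.win_prob(opp))

# The league starts only from supervised (imitation-learned) agents ...
learner = Agent(skill=1.0)
league = [copy.deepcopy(learner)]             # frozen past players

for step in range(1, 201):
    # ... and prioritises opponents the learner struggles against.
    weights = [1.0 - learner.win_prob(opp) + 1e-3 for opp in league]
    opp = random.choices(league, weights=weights)[0]
    learner.train_against(opp)
    if step % 50 == 0:                        # periodically freeze a snapshot
        league.append(copy.deepcopy(learner))

print(f"league size: {len(league)}, learner skill: {learner.skill:.2f}")
```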