Global Thermonuclear War —

OpenAI bot crushes Dota 2 champions, and now anyone can play against it

Reigning International champions Team OG were soundly beaten over the weekend.

Shadow Fiend, looking shadowy and fiendish.

Over the past several years, OpenAI, a startup with the mission of ensuring that "artificial general intelligence benefits all of humanity," has been developing a machine-learning-driven bot to play Dota 2, the greatest game in the universe. Starting from a very cut-down version of the full game, the bot has been developed over the years through playing millions upon millions of matches against itself, learning not just how to play the five-on-five team game but how to win, consistently.

We've been able to watch the bot's development over a number of show matches, with each one using a more complete version of the game and more skilled human opponents. This culminated over the weekend in what's expected to be the final show match, in which OpenAI Five was pitted in a best-of-three against OG, the team that won the biggest competition in all of esports last year, The International.

OpenAI Five is subject to a few handicaps in the name of keeping things interesting. Each of its five AI players runs an identical copy of the bot software, with no communication among them: they're five independent players who happen to think very alike but have no direct means of coordinating their actions. The bots' reaction time is artificially slowed to ensure that the game isn't simply a showcase of superhuman reflexes. And the system still isn't playing the full version of the game: only a limited selection of heroes is available, and items that create controllable minions or illusions are banned, on the grounds that the bot could micromanage those minions more effectively than any human.

The games can be watched here. The first game looked even until about 19 minutes in. The humans held a small gold advantage, but the bots had better territorial control. The bots came out ahead in a teamfight, killing three human players while losing only one of their own. The game still looked to be on a knife-edge, but the bots disagreed: they announced a 95-percent chance of winning and, upon making that declaration, immediately used their numbers advantage to deal heavy damage to the human base. This further extended their territorial control and gave them a significant gold lead, too.

This put the humans on the back foot, and while they managed to draw the game out for another 20 minutes, they were unable to overcome the bots' lead, giving OpenAI a 1-0 advantage.

In the second game, things weren't even close; the bots took an early lead and breached the human base within 15 minutes. They took the victory five minutes later.

Overall, it was a dominant performance by OpenAI: a 2-0 victory against an established human team accustomed to playing with each other at the very highest level the game has to offer. This performance was far and away OpenAI's strongest over the years.

The bots' coordination is uncanny: though they can't communicate, all five computer-controlled players think in the same way. If one decides it's a good opportunity to attack a human player, the other four will reach the same conclusion and join in. This gives the appearance of great coordination in teamfights, with a precision and rigor that human teams can't match.

A rudimentary Chinese room

But OpenAI Five does look beatable. It has definite, if surprising, weaknesses. It's not great at scoring last hits, the killing blows on computer-controlled units that are used to accumulate in-game gold, which gives humans an opportunity to build an early gold advantage. The bots also struggled to counter invisibility on the human side, and they seemed to adapt poorly to certain spells, in particular Earthshaker's Fissure, which temporarily creates an impassable barrier on the map. Humans used Fissure effectively to trap bot players and restrict their movement, and this seemed to confuse OpenAI Five.

The behavior of the bots is also an object lesson in the large gap between this kind of machine-learning system and a full general artificial intelligence. While OpenAI Five is clearly effective at winning games, it also clearly doesn't actually know how to play Dota 2. Human players of the game use a technique called "pulling" to redirect the flow of their side's computer-controlled minions (known as creeps in Dota 2) as a way of denying the enemy team both gold and experience. Human players can recognize that this has occurred because creeps don't show up when they're supposed to. Human players have a mental model of the entire game and an understanding of its rules, so they can recognize that something is amiss; they can reason about where the creeps must have gone and interfere with the pull. The computer, by contrast, just wanders around aimlessly when faced with this scenario.

No pulling

In its millions of games played against itself, OpenAI appears to have never picked up the technique of pulling, and so it has never learned to play against it. So when a human team starts pulling, the bot doesn't recognize the situation and doesn't really know what to do. It can't reason about how the game should be, and it can't speculate as to why the game is behaving in an unexpected way. All the bot can do is look for patterns it recognizes and pick the action most likely to yield the best outcome; give it a pattern that it can't recognize and its performance deteriorates.

Until now, access to the OpenAI bot has been restricted: certain pros and streamers have been given the chance to play against it, and it has also been playable at some live events. But for a few days, that's changing: Dota 2 players can sign up here to play against the bot, or alongside it, for a three-day period. Unfortunately, this public period doesn't look like it will result in a new and improved bot: beating a top human team was the goal OpenAI set for its bot, and with that accomplished, the experiment seems to be complete.
