Robots are taking our jobs. It’s a simple fact of our unstoppable march towards the future, one we don’t want to acknowledge but will all, at some point, have to accept. First agriculture, then manufacturing, and now games have fallen to our silicon-brained superiors. The latest victim of this assault on human employment is Lee Sedol, the South Korean (former) world champion at the tabletop classic Go. Over the past week he has faced off against an artificial intelligence created by Google-owned DeepMind – a learning algorithm named AlphaGo – and lost.
Born out of the research of British former game developer Demis Hassabis (the mind behind Theme Park’s revolutionary and rather vomit-prone crowd AI), AlphaGo was able to defeat a human by observing thousands of professional matches, playing even more against itself, and being trained to pick winning moves. The feat is reminiscent of the IBM supercomputer Deep Blue defeating chess grandmaster Garry Kasparov in 1997, albeit in a game of far greater mathematical complexity (there are more potential Go board layouts than atoms in the observable universe).
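In outline, the two-phase recipe described above – imitate expert games, then improve through self-play – can be sketched in a few lines. Everything below is an illustrative toy (the game, the counting scheme, every name is invented here), not DeepMind’s actual implementation, which uses deep neural networks rather than move counts.

```python
import random

def learn_from_experts(policy, expert_games):
    """Phase 1: count how often experts chose each move in each state."""
    for game in expert_games:
        for state, move in game:
            policy.setdefault(state, {}).setdefault(move, 0)
            policy[state][move] += 1
    return policy

def pick_move(policy, state, moves):
    """Prefer the move seen most often in this state; otherwise play randomly."""
    counts = policy.get(state, {})
    if counts:
        return max(moves, key=lambda m: counts.get(m, 0))
    return random.choice(moves)

def self_play(policy, play_game, episodes=1000):
    """Phase 2: play against itself, reinforcing moves made by the winning side."""
    for _ in range(episodes):
        trajectory, winner = play_game(policy)  # [(state, move, player)], winner id
        for state, move, player in trajectory:
            if player == winner:
                policy.setdefault(state, {}).setdefault(move, 0)
                policy[state][move] += 1
    return policy
```

The key idea the sketch preserves is that no human labels the self-play games: winning is the only training signal in phase 2.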
Naturally, with Go now ‘beaten’, attention is turning to the question of what’s next for AlphaGo and its successors – and it could be realtime video games. On paper, that’s a huge task – leaving turn-based play behind means vastly more scenarios for an AI to process and prioritise. But what’s paper to an emotionless computer program? If AI has reached this level of skill in the two decades since we began pitting humans against computers at games, could one match up against an eSports champion in StarCraft, or beat an entire team of Dota players?
“Games like Dota 2 are complex because there are different kinds of decisions – they all need to be made at different speeds, and they all have shifting priorities,” Michael Cook, a senior research fellow at Falmouth University and massive Dota fan, tells Red Bull eSports. “If you think too long about any of them, or you do any of it in the wrong order, you don't get a time penalty. You just lose the game.”
Cook’s research over the past few years has taken him to all corners of the artificial mind in pursuit of the root of creativity. He is currently developing ANGELINA, an AI which can design and program video games – an exercise in proving software itself can conceive of, apply and evaluate meaning in games. Given the already ambitious nature of his work, it’s unsurprising that he believes AI could take on something as complex as Dota too. But to do so, a team of researchers would first need to break the game down into its component parts.
“Some people will deal with the mechanical skill, the robotic equivalent of learning to walk,” Cook says. “For Dota, that's probably getting a bot to win 1v1 mid matchups. Last hitting, denying, creep equilibrium, rune control, all the basics.”
These mechanical aspects are handled quite well by most game-ready AI. If you jump into a bot match right now and push the difficulty up to Insane, you’ll be hard pressed to out-lane an opponent. Having access to all the information in a game through a streamed pipeline of data and numbers makes timing a last hit a lot easier for a bot. Something that takes months of training for the human eye is a simple lookup function for the computer. These 1v1 adversarial matchups are much easier to program for, and in another game, programmers are already testing their attempts against each other.
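That “simple lookup” can be made concrete. With exact creep health, attack damage and timing numbers exposed by the game, a bot’s last-hit decision reduces to arithmetic – something like the sketch below, where the function name and the damage model are hypothetical simplifications, not any real bot’s code.

```python
def should_last_hit(creep_hp, attack_damage, projectile_delay, enemy_dps):
    """Attack now if the creep will die to our hit when it lands.

    The creep keeps taking damage from other units (enemy_dps) while our
    attack travels (projectile_delay), so project its HP forward: attack
    only if that projected HP is above zero (nobody else kills it first)
    but within our damage (our hit finishes it).
    """
    hp_on_impact = creep_hp - enemy_dps * projectile_delay
    return 0 < hp_on_impact <= attack_damage
```

A human eyeballs a shrinking health bar; the bot evaluates this inequality every server tick with exact numbers, which is why Insane-difficulty bots rarely miss a last hit.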
The annual StarCraft competition at the Artificial Intelligence for Interactive Digital Entertainment (AIIDE) conference has been running for six years. Using an API which allows programs to interact with, and control, the Brood War instalment of Blizzard’s RTS, students and researchers in the field create AI bots and run them against each other for two weeks straight. Luckily bots don’t need sleep, so there are no Reddit threads or complaints on Twitter.
After the conclusion of last year’s tournament, however, the organisers pitted the top three bots against Djem5, one of the top Brood War Protoss players outside of South Korea. He defeated every single bot – some by out-building them (one bot does nothing but Zerg rush, so he simply built photon cannons) – but most were beaten with something AI isn’t yet good at.
“A lot of situations that require micro are also about making split-second decisions and faking out an enemy,” Cook explained. “These are both things that put a different kind of pressure on computers. StarCraft is already a very complex and fast-paced game, and in encounters between blobs of enemies, humans often make decisions in fractions of a second.”
This could also be the reason strategic tabletop games of logic are seeing robot victors, but humans are still considered better at a range of real-world risk assessment scenarios like driving or combat. “Chess and Go give you a lot of time to think, but StarCraft doesn't, and a big challenge for AI will be responding to a move by a human,” said Cook. “‘Is this real? Are they going to back off? Should I act aggressively?’ These high-level decisions have to be made before micro even enters the equation, and it's this that will be the real bottleneck for the computer.”
On top of the micro-management and mechanical mastery aspects of the game, there is a macro element to consider. This compounds in Dota when you consider the co-ordination of multiple players, or hero units, and the ephemeral “gamesense” of a player to know when enemies may be nearby in the fog of war.
“If four heroes push this lane, what does the enemy do?” Cook asks. “Imagine the Dota map as a five-by-five board of tiles, and just get the AI to reason about movement and how people respond to different strategies. This is why I think commentary could be a powerful way in to learning high-level Dota strategy, because commentators get to forget about the minute-to-minute real-time challenge of playing Dota and focus on the strategic layer.”
In 2013, at the University of Alberta (which coincidentally also hosts the StarCraft AI Competition), Greg Lee, Vadim Bulitko and Elliot Ludvig were looking into creating an AI which could commentate baseball [PDF Link]. By taking in reams of data about previous games, the team taught an AI about the types of narratives that can happen during a game and allowed it to assess what stories were appearing in any current one.
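The core mechanism of such a commentary system – matching a game’s statistics against a library of known narrative types – can be sketched very simply. The templates and thresholds below are invented stand-ins, not the Alberta team’s actual model, which scored stories statistically from historical data.

```python
# A narrative is a name plus a predicate over a game's stat dictionary.
# The system "knows" story types in advance and detects which ones fit.
STORIES = [
    ("comeback", lambda g: g["deficit_overcome"] >= 3),
    ("shutout",  lambda g: g["losing_score"] == 0),
    ("slugfest", lambda g: g["total_runs"] >= 15),
]

def detect_stories(game_stats):
    """Return the name of every narrative whose condition this game meets."""
    return [name for name, fits in STORIES if fits(game_stats)]
```

A real system would rank the matches by rarity or drama before handing one to the commentary generator, rather than reporting all of them.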
The result was considered a success by a committee of professional sportscasters including hockey commentators Mark Lee and Kevin Weekes. They were particularly impressed by the system piecing together a story about a player once having a successful day batting, the day after his divorce in late April, 1988. But for Cook, having an AI commentate on Dota would do so much more than throw out bizarre or interesting stats.
“AI very frequently lack the ability to explain why they are doing things, it's just not something we encourage in AI research,” he said. “AlphaGo absolutely cannot verbalise in plain English why it plays one move over another, it can't explain in human terms. Some might say that's not important, but I actually think it's very significant in some areas.”
“In videogame design, for example, if you can't explain why you did something then people often don't take you seriously as a designer. So being able to commentate a game and analyse plays made by humans is a good step towards being able to identify and make those decisions as an AI player.”
That’s not to say that hearing a robot attempting to come up with funny turns of phrase for level one Roshan attempts wouldn’t be a dream come true for Cook though. “I love the wordplay that goes on with players' names. Okay, it's not nuanced, but it's great hypecasting,” he said. “I think the narratives that we weave about our players and our teams are really rich and interesting and I'd love AI to be able to tell those stories too.”
Say we put a robot on the TI9 commentary desk, and a rag-tag team of researchers managed to teach a bot not just how to last hit, but how to last hit in three lanes at once and explain why it went for each one – would it be good enough to beat Team Secret?
“You know, I think an even more interesting question would be: ‘could an AI player captain a team of humans?’” Cook counters. “Not just beating a team of humans, but actually managing a team of humans too, understanding strengths and weaknesses, directing players individually while playing the game itself. An AI team is one challenge, but an AI working with humans? A totally different game.”
At least it’ll save four jobs, right?