Hi, I'm Tommy Thompson and this is AI and
Games: a series on research and applications
of artificial intelligence in video games.
Throughout my case study series I've looked
at a variety of games that present real challenges
to even the most state of the art techniques
in AI. One of the most demanding game genres
out there is real-time strategy or RTS games.
My recent series on the AI of Total War highlights
the continued efforts by series developers
Creative Assembly to improve and expand the
suite of AI systems required to craft the
epic battles and nuanced diplomacy players
have come to expect from that franchise.
It's pretty good! I mean I'm biased and all,
but I think you should check it out...
In this video I want to look at another popular
franchise that is arguably most synonymous
with the genre: Blizzard's StarCraft. I want
to take a look at the challenges this series
presents to AI research and the significant
efforts made in developing new AI techniques
that adopt StarCraft as a testbed. Perhaps
more importantly, I want to challenge the
narrative that is quickly being established
in the media around StarCraft research and RTS
games in general. Contrary to how it is reported
of late, AI research in this arena dates back
to the turn of the century and has been a
regular feature of the game AI academic community
for quite some time now. This isn't some new
idea that Google or Facebook suddenly decided
would be cool to check out. So in this video
I'm going to take a look at the earliest RTS
research, why and how it all started, the
variety of competitions and benchmarks that
are now built around the original 1998 game
and the future that awaits with the recent
surge of activity behind StarCraft II.
Alright, let's run through a quick overview of RTS
games and the challenge they present. StarCraft
adopts many of the principal components of
the real-time strategy genre with a focus
on the control of territory and assets. Players
assume command of one of three factions: the
human terrans, the insectoid Zerg or the advanced
Protoss as they take control of land and resources
within a defined area. By working through
the occlusion that covers the map, referred
to as the 'Fog of War', players lay waste
to enemy forces and fortifications, establish
resource locations, build structures and enhance
their existing capabilities by reinforcing
and upgrading their assets through technology
trees.
All of these mechanics and features ultimately
influence the challenge the game presents.
In professional play, the strategies in effect
at the beginning, middle and end of a match
will shift dramatically.
In early game the real focus is establishing
enemy locations and defending whilst construction
is taking place, with the acquisition of materials
and the correct build orders shifting based
on the world state. This can result in more
defensive behaviours or rush tactics to try
and break down enemy structures or forces.
All of this progresses through the mid to late
game as players seek to construct the best
army configurations for assaulting the opposing
players, whilst ensuring they're keeping the
pressure on as they seek to advance their
tech trees. It all gets pretty chaotic in
the closing period of battle and any AI we
seek to build to play the game has to be able
to recognise numerous strategic elements at
play and shift between strategies dynamically.
This is why there's a lot of fuss about StarCraft
for AI research given the overall complexity
of the problem space. In Computer Science,
researchers often seek to quantify the
difficulty of problems in order to establish
whether they're worth trying to solve. This
computational complexity theory allows for
researchers to formally define whether a given
problem is 'interesting' or 'challenging'
from a scientific perspective. This complexity
notation models the amount of memory and resource
as well as the time it will take for good
solutions to be found for that problem. It's
been proven that classic NES and SNES-era
video games such as Super Mario Bros., the
Legend of Zelda and Donkey Kong Country can
be formally classified as Non-Deterministic
Polynomial Time Hard or NP-Hard for short.
This means in essence that the games aren't
easy and carry some level of challenge before
players can begin to control and master them
in the long-term - not just for AI players
but for humans as well. Meanwhile, RTS games
are considered at minimum to be NP-Hard, but
are predicted to be PSPACE complete or in
the worst case EXPTIME - meaning that it's
anticipated that systems that can solve them
effectively would take exponential time to
do so, which - in layman's terms - means they're
really fucking hard.
So yeah, StarCraft and RTS research, let's
go back to where it all started.
Cast your mind back to 2003 - Metallica were
going through a rough patch, Metroid Prime
was finally released in Europe and Terminator
3: Rise of the Machines was out in cinemas
[God I hate that movie]
A number of academics, most notably Michael
Buro: a professor of Computer Science at the
University of Alberta, were advocating that
RTS games such as WarCraft, StarCraft and
Age of Empires were the next 'killer app'
for AI to be exploring, given that there are
many facets of these games that make for really
interesting decision problems for a system
to try and solve. This included the need to
manage resources, to make critical decisions
in situations in which we are uncertain as
to our current strength in relation to opposing
forces, to conduct spatial and temporal reasoning
of the in-game world and foster collaboration
with either AI or human players.
So with this in mind, Buro and other academics
began seeking to conduct research within the
RTS space. But the problem was that video
game companies were typically reluctant to
provide open access to their game engines
and APIs back then. As such, conducting research
in StarCraft itself was not possible unless
significant effort was made to either replicate,
mod or break the original game. Problem is,
cloning or breaking a game is generally frowned
upon given it can place academics and their
host institution in a spot of legal bother
- a problem that the Mario AI competition
(which I covered a couple years back) had
to contend with! This led to Buro leading
the development of the Open Real Time Strategy
or ORTS platform: a free and open source reduction
of classic RTS games that was designed for
researchers and hobbyists to experiment in
building AI controllers for a variety of in-game
activity such as combat or construction. This
system slowly grew in scale, complexity and
faithfulness to classic RTS games courtesy
of around 30 undergraduate and graduate students
who slowly contributed to the project over
a period of around 7 years.
This led to a small but steady body of research
in building AI controllers for RTS games,
such as the use of classical planning (an
idea adopted in commercial games such as F.E.A.R.
and Empire: Total War) to create intelligent
build order systems, to developing improved
pathfinding AI and even using Monte Carlo
methods to evaluate the effectiveness of strategic
plans: an idea explored in 2005, a year or
two before the rise of Monte Carlo Tree Search
and its eventual adoption in Total War: Rome
II in 2013.
ORTS started running competitions back in
2006, encouraging developers to submit their
own RTS controllers. Each tournament had bots
compete in multiple game types, ranging from
subsets of the main game - such as gathering
resources or unit combat on flat terrain - to
the eventual complete RTS experience with
economies, tech trees and fog of war in place.
The fourth and final tournament for ORTS ran
in 2009. The big reason for this was that
the focus was moved towards building something
within StarCraft itself, thanks to the release
of Adam Heinermann's Brood
War Application Programming Interface or BWAPI
- hang on, do you pronounce this BWAPEE or
BWA-PI? Nah, you know what, I'm just not gonna
utter that abbreviation aloud again.
The Brood War API is an open source C++ framework
that is designed to interact with the original
StarCraft. It provides a full suite of tools
that allows for programmers to build their
own AI controllers within the game. What's
pretty interesting and vital to the Brood
War API being useful in a research and competitive
capacity is that it accurately reflects the
available information that human players would
have in the same situation. The Brood War API
provides information on the overall game state,
the available unit types, technologies and
weapons, as well as providing full control of
build behaviours and individual units.
In addition, while an enemy unit's position and
properties can be made available to your custom
AI player, this only happens if the unit is
not occluded by the fog of war, and it will be
removed from the world model should it leave
the player's view again. This prevents AI bots
from cheating and forces them to maintain
their own representation of the perceived
active units in the game at a given time.
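The idea of maintaining your own representation of perceived units can be sketched in a few lines. This is a conceptual illustration in Python only - not actual Brood War API code, and the class and method names are all hypothetical:

```python
# Conceptual sketch: tracking last-known enemy positions so a bot keeps
# its own model of units that are currently hidden by the fog of war.

class EnemyMemory:
    def __init__(self):
        self.last_seen = {}  # unit_id -> (position, frame last observed)

    def on_frame(self, frame, visible_enemies):
        # Record every enemy unit currently visible to us this frame.
        for unit_id, position in visible_enemies.items():
            self.last_seen[unit_id] = (position, frame)

    def stale_entries(self, frame, max_age=2000):
        # Entries not refreshed recently may be out of date: the unit
        # could have moved or died under the fog since we last saw it.
        return [uid for uid, (_, seen) in self.last_seen.items()
                if frame - seen > max_age]

memory = EnemyMemory()
memory.on_frame(10, {1: (40, 12), 2: (55, 30)})
memory.on_frame(3000, {2: (60, 31)})   # unit 1 is now hidden by fog
print(memory.stale_entries(3000))      # unit 1 hasn't been seen in a while
```

Real bots layer a lot more on top of this - decay, threat estimation, scouting triggers - but the core bookkeeping is exactly this kind of last-seen map.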
Despite this, a Brood War API bot could conceivably
cheat given there is no limit to the number
of actions they can issue to the game in a
given frame. As a result, it was possible
for strange behaviours such as walking ground
units over walls and making buildings that
slide around. However, the community of bot
developers that has come to adopt the API
heavily enforces a code of conduct for appropriate
and legal moves that can be executed by a
bot in a tournament context.
Fast forward to 2010, and with the Brood War
API in place, academia swung away from ORTS
to full-blown StarCraft, with the first
competition hosted at the 2010 Artificial
Intelligence for Interactive Digital Entertainment
conference. AIIDE - as we like to call it
- is one of the largest game AI research conferences
in the world and arguably the most prominent
in the United States, so it's a pretty fitting
location to kickstart this new tournament.
The StarCraft AI Competition was first co-ordinated
by Ben Weber - a PhD graduate of UC Santa
Cruz, who also collaborated with Johan Hagelback
and Mike Preuss for a small follow-up competition
later that year at the IEEE Computational
Intelligence and Games conference. However,
since 2011 it has been coordinated by Dave
Churchill, a PhD graduate of the University
of Alberta and, at the time of publishing this
video, an assistant professor at Memorial University
of Newfoundland.
The original event was structured around four
tournaments that - much like ORTS - are focussed
on delivering a variety of game types: with
tournaments 1 and 2 focussing on unit management
and combat on flat and uneven terrain respectively.
Tournament 3 had players explore a tech-limited
version of StarCraft without any fog of war
and a requirement that they use the Protoss
race without any advanced units permitted.
Lastly, Tournament 4 was the complete StarCraft
experience: with fog of war enabled, all factions
permitted and a double-elimination format
for entrants, with each match comprised of
the best of five games.
The first AIIDE tournament was a huge success,
with 26 entrants to the competition, 17 of
whom competed in tournament 4. Victory in
tournament 4 was handed to the Zerg-playing
bot Overmind - built by a team of developers
from the University of California, Berkeley. Its success
came in rushing towards building Mutalisk
aerial units to maintain an active defence
and attack where necessary. This was achieved
courtesy of a refined path planning system
and active memory of threat locations that
could allow ground units to attack more effectively
during early game to destabilise enemy construction
followed by Overlords removing fog of war
and identifying when resources needed to be
diverted to building anti-air defences. Once
Mutalisks were unlocked and trained, the bot could
maintain defence of its base during any
continued construction and expansion whilst
also targeting the occasional enemy using
Mutalisks. The Mutalisks adopt a method called
artificial potential fields - a principle
from robotics that creates a field of attractive
and repulsive potential forces in an environment
- with valid targets considered attractive
and threats considered repulsive.
This leads to behaviours such as this on-screen
now, where a hit-and-run strategy can be established
by disabling any attractive forces in between
attacks. The parameters used to dictate field
strengths were tuned by repeatedly running
trials in test maps.
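To make that concrete, here's a toy illustration of artificial potential fields in Python - my own sketch, not Overmind's actual code, with made-up force constants:

```python
# Toy artificial potential field: targets attract, threats repel, and a
# unit moves along the summed force vector each step.
import math

def potential_force(unit, targets, threats, attract=1.0, repel=4.0):
    fx = fy = 0.0
    for tx, ty in targets:
        dx, dy = tx - unit[0], ty - unit[1]
        d = math.hypot(dx, dy) or 1e-9
        fx += attract * dx / d          # constant pull towards targets
        fy += attract * dy / d
    for tx, ty in threats:
        dx, dy = unit[0] - tx, unit[1] - ty
        d = math.hypot(dx, dy) or 1e-9
        fx += repel * dx / (d * d)      # push away, stronger when close
        fy += repel * dy / (d * d)
    return fx, fy

# Hit-and-run: between attacks, drop the attractive term (attract=0)
# so only the repulsive forces act and the unit backs off.
fx, fy = potential_force((0, 0), targets=[(10, 0)], threats=[(2, 0)],
                         attract=0.0)
print(fx < 0)  # net force points away from the nearby threat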
But despite its success, could Overmind have
any chance at competing against human players?
Overmind was tested by playing against - and
occasionally defeating - Berkeley PhD candidate
Oriol Vinyals, who was not only one of the
developers behind the bot, but was formerly
Spain's national StarCraft champion. While
Overmind is long behind him, Vinyals is still
actively involved in StarCraft AI: at
the time of this video he's working over at
Google DeepMind as part of the StarCraft 2
research team.
The competition has subsequently continued
with an increase in scale and organisation,
with some changes made to format. As Churchill
assumed responsibility for the competition
in 2011, all bot source code had to be made
public and tournaments one through three were
removed from the competition due to low entry
rates in the first year. In addition, all
competition matches were now executed on a
client/server framework rather than the previous
attempts which were conducted on two laptops!
Meanwhile 2012's AIIDE competition allowed
for persistent storage, meaning that bots
could learn by watching replays of previous
matches.
The subsequent years saw numerous entries,
with three participants regularly competing
for the top spot between 2011 and 2013 at
both the AIIDE and CIG conferences - largely
because Overmind didn't participate
again in subsequent years. However, this doesn't
diminish the fact that the winning bot in
2011 called SkyNet was developed by a single
person: British developer Andrew Smith. SkyNet
was a Protoss bot that adopted a multi-phase
strategy reliant on an early-game defensive
strategy whilst periodically attacking using
a Zealot rush strategy: a tactic that catches
your enemies off guard by attacking with large
quantities of Zealot units.
Meanwhile the Aiur bot - another Protoss player
that frequently scored in the top three - used
similar strategies to SkyNet. This included
a Photon Cannon rush strategy (referred to
as Cheese) as well as heavy use of a Zealot
and Dragoon army for mid-game. Aiur is once
again a one-man effort, developed by Florian
Richoux: then a graduate student of the Université
de Nantes and, at the time of this video, an
associate professor at the institution as
part of the Laboratoire des Sciences du Numérique
de Nantes. AIUR is an acronym for Artificial
Intelligence using Randomness, with the bot
reliant on the idea of having a mood system
that dictates gameplay decisions. Moods are
selected against a probability distribution
for a given opponent, with said distribution
continually being improved as the system records
how effective a given mood type is against
that player. This keeps enemies on their toes
given the opponent can't say with certainty
how AIUR will play against them in any two consecutive
matches.
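That mood mechanism can be sketched very simply. The snippet below is my own hypothetical illustration inspired by AIUR's described design - the mood names, weights and update factors are all invented for the example:

```python
# Sketch of a mood system: pick a "mood" from a per-opponent probability
# distribution, then nudge that distribution towards moods that win.
import random

MOODS = ["rush", "defensive", "cheese", "macro"]

class MoodPicker:
    def __init__(self):
        self.weights = {}  # opponent name -> {mood: weight}

    def pick(self, opponent):
        # Start each new opponent with a uniform distribution.
        w = self.weights.setdefault(opponent, {m: 1.0 for m in MOODS})
        return random.choices(list(w), weights=list(w.values()))[0]

    def record(self, opponent, mood, won):
        # Reinforce moods that worked against this opponent,
        # dampen ones that didn't.
        self.weights[opponent][mood] *= 1.5 if won else 0.75

picker = MoodPicker()
mood = picker.pick("SkyNet")
picker.record("SkyNet", mood, won=True)
```

Because selection stays stochastic even as the distribution sharpens, an opponent can never be certain which mood it will face next - the property the paragraph above describes.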
The final big contender is the UAlbertaBot,
submitted by the competition organiser David
Churchill and developed in conjunction with
a number of students at the University of
Alberta. This bot is interesting given the
team switched from playing as Zerg to Protoss
after the 2010 competitions: thus cementing
the dominance of Protoss forces in AI competitions.
The reason for the switch - and indeed the
dominance of Protoss - was that the strategies
using that faction were easier to build. UAlbertaBot's
biggest innovations come in two distinct subsystems,
the BOSS build system and the SparCraft simulator.
The Build Order Search System is a simulation
system for planning build orders to ensure
optimal execution. Meanwhile, SparCraft is
a combat simulation module that would enable
the bot to more accurately estimate the outcome
of combat between two forces, thus helping
the bot identify when best to push and continue
an attack, or to retreat to base and consolidate
its forces. SparCraft can be configured to
use different search algorithms such as Alpha-Beta
pruning and the Upper Confidence Bound in
Tree or UCT algorithm. For more information
on UCT check both my AI 101 video exploring
the Monte Carlo Tree Search algorithm as well
as part 3 of my series on the AI of Total
War.
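To give a flavour of what a combat simulator buys you, here's a toy attrition model in Python. This is my own drastically simplified sketch - SparCraft's actual unit-level simulation and search are far more sophisticated - but it shows the basic "estimate the outcome before committing" idea:

```python
# Crude attrition simulation: each side deals its pooled damage-per-second
# to the other's pooled hit points until one army drops to zero.

def simulate_combat(hp_a, dps_a, hp_b, dps_b, step=1.0):
    while hp_a > 0 and hp_b > 0:
        hp_a, hp_b = hp_a - dps_b * step, hp_b - dps_a * step
    if hp_a <= 0 and hp_b <= 0:
        return "draw"
    return "A" if hp_a > 0 else "B"

# A bot could call this before a fight to decide whether to push on
# with the attack or retreat to base and consolidate its forces.
print(simulate_combat(hp_a=400, dps_a=30, hp_b=300, dps_b=25))  # → A
```

SparCraft replaces this crude pooled model with per-unit state and plugs search algorithms such as Alpha-Beta or UCT on top, but the decision it feeds - push or retreat - is the same.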
The competition continued on, with newcomers
beginning to exert control in 2013, pushing
SkyNet, Aiur and UAlbertaBot down the rankings,
but it was at this time a second strand of
competitions arose courtesy of the StarCraft
Student AI Competition.
In 2011, the Student StarCraft AI competition
was announced: a separate tournament for those
interested in applying their work to StarCraft
AI. The SSCAIT - as it's often abbreviated
- was founded by Slovakian PhD graduate Michal
Certicky and operated under his supervision
in his current capacity as senior researcher
in the Games and Simulations AI Research Group
at the Czech Technical University in Prague.
The tournament is aimed at being a more open
event than the main StarCraft AI competition,
with competitors ranging from hobbyists to
students and academic researchers, as well
as live streaming of both tournament and practice
matches on Twitch.
To accommodate for this change in scope, there
are some changes to the format and submission
procedures. The tournament is reduced down
to one match type: 1 vs 1 melee, with victory
achieved should the opposing player lose all
buildings, their AI code crash, or the decision-making
processes they're reliant on cause
significant slowdown of in-game execution.
Should you want to write a bot, programmers
can develop either in C++ using the standard
Brood War API or in Java. The Java
bots need to utilise one of two interfaces
aimed at wrapping the core functionality of
the C++ API: JNIBWAPI or BWMirror - the latter
being a much easier one to pronounce! Whilst
bot source code is required as part of submission,
the actual code isn't made public and is only
used to run plagiarism checks against other
existing works.
The two competitions largely exist in harmony,
with the likes of UAlbertaBot competing in
both competitions.
Now with all these innovations in mind, how
close have StarCraft AI players come to being
able to compete against the best human players?
Whilst the media has placed emphasis on the
more recent competition, matches held against
human players have cropped up once or twice
at the AIIDE conference. 2015 saw the top
three ranking bots of the StarCraft AI Competition
- tscmoo by Vegard Mella, ZZZKBot by Chris
Coxe, and Overkill from Sijia Xu - being put
to task against Djem5: a pro StarCraft player
from Russia, regarded as one of the best non-Korean
Protoss players in the world. All three bots
were summarily destroyed by Djem5 with no
matches won by AI bots.
Fast forward to late 2017 and another competition
took place at Sejong University in Seoul,
South Korea. Four AI competitors stepped up
to the plate: the MJ Bot from Sejong University,
ZZZK from Australia, TSCMOO from Norway and
lastly CherryPi developed at Facebook's AI
research lab. Their opponent? Song Byung-Gu:
a high profile professional StarCraft player
from South Korea considered one of the best
in the world. It's at this point that the
gulf between human and AI play becomes more
readily apparent, with all four AI opponents
defeated within 27 minutes and the easiest
victory achieved in four and a half minutes.
So yeah, now that we're up to speed, let's
consider some more recent developments and
most notably, Google DeepMind's interest in
StarCraft. Having successfully tackled the
game of Go courtesy of their AlphaGo system
and the well documented competition against
expert player Lee Sedol, DeepMind have set
their sights on StarCraft. The AI research
space, the players involved and the collective
interest in game AI research has changed quite
drastically in the past 15 years. As such,
what seemed like fantasy in the days of ORTS
is now a reality, with Blizzard openly collaborating
with Google to provide an official AI API
for StarCraft II.
DeepMind and Blizzard launched the SC2LE or
StarCraft 2 Learning Environment in August
2017 with fans getting a chance to try their
hand at it with help from Blizzard themselves
at the 2017 BlizzCon in November over in Anaheim,
California. SC2LE is a collection of exciting
tools for developers and researchers that
effectively provides many of the same features
as the Brood War API, only for StarCraft 2.
But it also has some pretty cool new features
as well, including:
- A complete API built for both machine learning
and classic AI techniques that enables complete
control of StarCraft II using the Python programming
language. Not just controlling an AI within
the game, developers can start a match, get
observations of current state, conduct in-game
actions through bot controllers and watch
match replays.
- The ability to run the game faster than
regular speed, which is highly useful for
training machine learning players.
- Means to build and deploy custom maps within
StarCraft 2.
- 7 mini-games developed by DeepMind to test
and experiment with specific AI tasks and
objectives.
- A collection of data representing in-game
playthroughs by human players that could be
used for machine learning training purposes.
The API is broken up into two distinct collections:
the 'raw' API which is more akin to the Brood
War API that allows programmers to access
specific information on a given frame and
the main API that is largely for purposes of
machine learning. This API takes all information
from the game and analyses it to provide feature
layers that are more accessible for a machine
learning algorithm. These feature layers,
such as height maps, unit density and selected
units are scraped from the same user interface
that players utilise, well... roughly. It's
not 100% accurate to a human player's UI given
that it renders this from a separate orthographic
camera. So it's like, 99% the same. So whilst
the game is rendered in 3D, the API presents
a series of 2D images that are reflective
of the feature layers in the current state.
Why bother with this? Well these feature extractions
can prove more useful for machine learning
algorithms to isolate and focus on key elements
they wish to control and improve. This is
elaborated upon in the research paper published
by DeepMind, as they show how these feature
layers are adopted in two convolutional neural
network solutions they've tested to-date.
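Conceptually, a feature layer is just the 3D world flattened into aligned 2D grids that a convolutional network can consume directly. The sketch below is my own illustration in plain NumPy - it is not the actual SC2LE/PySC2 API, and the layer names and map size are invented:

```python
# Conceptual sketch of feature layers: aligned 2D grids, one per
# property, stacked into a single observation tensor.
import numpy as np

MAP = (8, 8)
height_map   = np.zeros(MAP)   # terrain elevation per cell
unit_density = np.zeros(MAP)   # how many units occupy each cell
selected     = np.zeros(MAP)   # 1 where a currently selected unit sits

# Hypothetical units: ((x, y), is_selected)
units = [((2, 3), True), ((2, 3), False), ((5, 6), False)]
for (x, y), is_selected in units:
    unit_density[y, x] += 1
    if is_selected:
        selected[y, x] = 1

# Stack the layers into one (channels, height, width) tensor - the
# shape convolutional networks typically expect as input.
observation = np.stack([height_map, unit_density, selected])
print(observation.shape)  # (3, 8, 8)
```

Each layer isolates one property of the game state, which is what lets a network learn spatial patterns (unit clusters, terrain chokes) without parsing the full rendered frame.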
The agents implemented by DeepMind to-date
run at around 180 actions per minute. When
testing in the main game itself on the Abyssal
Reef map, it's clear there is still a long
way to go given they are yet to win games
against the built-in StarCraft 2 AI. This
is largely to be expected, given the difficulty
of the challenge faced and that researchers are
only really beginning to look at how machine
learning can crack these problems.
However, the mini-games I mentioned earlier
look a lot more promising, with some of the
strategies formulated in these instances
performing reasonably well. They still can't
compete at human level yet, but give them
time and you might be surprised by what comes
next.
StarCraft continues to be a relevant and exciting
problem domain for AI research (and it sure
seems to still be popular with players themselves).
We can be sure to see some more innovation
and success for AI players in StarCraft in
the coming years. Though how long it will
take for AI to successfully challenge if not
defeat the best human players is difficult
to ascertain. Hopefully having watched this
video you now recognise the significant challenges
that need to be overcome for AI to reach a
level playing field. Nonetheless if and when
these innovations come to light, you can be
sure I'll do a follow up video to bring you
up to speed (assuming I'm still making AI
and Games content in like 2 months or 5 years
or whatever).
But another critical point in this video - that
I think is equally if not more important - is
the dangers found in corporations becoming
increasingly involved in game AI research.
In 2001 John Laird and Michael van Lent's publication
"Human-Level AI's Killer Application: Interactive
Computer Games" sought to legitimise and advocate
the use of video games as means through which
to challenge the state of the art in AI, at
a time where it was dismissed and considered
a pointless use of our time. We sit here less
than 20 years later, as the biggest tech giants
in the world - be it Microsoft, Google, Facebook
and others - are taking this very seriously
and investing tremendous resources behind
it. We're in a new age of AI sensationalism
in our media and not only do the big guys
latch onto this, their involvement will re-frame
the narrative on a given topic and obscure
all existing research in this field - sometimes
through no fault of their own. We saw this
as DeepMind took a crack at Go, and it's happening
again with StarCraft. In some cases, it's
a failure to acknowledge preceding research
- a point that DeepMind got right in their
papers on the SC2LE - other times it's, well,
just sensationalist bullshit vying for a cool
headline: a point that will resurface in my
upcoming video on the recent surge of research
both academic and corporate in MOBA games
such as DOTA2 and League of Legends.
That's it for this video, I hope you've enjoyed
it and don't forget to like, subscribe - and
click that bloody bell thing so you know when
I actually post new videos! Are you a StarCraft
fan or even a creator of one of the many bots
submitted to competitions? Get some discussion
rolling down in the comments. If you're interested
in trying your hand at this yourself, be sure
to visit both the competition webpages as
well as StarCraftAI.com which has a variety
of useful links to tools, tutorials and research
papers on AI bots. This
video is only scratching the surface of what's
out there, so get hunting! Both the StarCraft
AI competition and the Student competition
can be followed on Twitter and Facebook via
the URLs on screen now and in the description.
Plus you can even follow AI and Games and
my personal account on Twitter as well. My
thanks as always to my sponsors of this series
over on the crowdfunding platform Patreon
whose names are on screen now. Your support
helps me continue to produce this content
and without your help, these videos would
cease being made! If you enjoyed this or any
of my other videos and want to contribute, head
on over to patreon.com/ai_and_games. Right
now I'm trying to figure out how the hell
to play DOTA2 for some half-decent footage.
God I'm terrible at that game...
