Hi everyone!
My name's Mike Cook, I'm an AI researcher
and game developer, currently based
at Queen Mary University of London,
and this is the Getting Started in 
Automated Game Design tutorial.
Over the next 90 minutes I'm going to
take you through the past,
present, and future of automated game
design research with the aim of getting
you motivated and excited about this
cool new research area,
and hopefully giving you some tools that
will let you get started on your own
systems.
As we'll see over the course of the next
hour or so, automated game design is as
much about
art as it is science, and each person
brings their own perspective to the work,
so before we dive into the tutorial
itself, I wanted to give you some
background on what my perspective is.
I hope you enjoy everything I have to
say in this talk, but I also hope it
leads you to find your own perspective
on AI and game design.
So make sure you take everything I say
with a pinch of salt, and I look forward
to playing and learning about your games
and systems in the future.
I started my PhD almost exactly 10 years
ago now in October 2010
and in 2011 I began creating a very
simple AI system that combined level
designs, rule sets and piece layouts to design
simple arcade games, which I
called ANGELINA. I kept this project name
as I developed new iterations of the
software,
experimenting with things like designing
mini-Metroidvanias inspired by news
headlines, entering game jams with weird
three-dimensional nightmare worlds,
and editing code to invent game
mechanics for puzzle platformers.
My research has been greatly influenced
by Computational Creativity which is a
subfield of AI that's interested in how
AI can do things that
we normally describe as creative when a
human does them - so things like composing music,
painting, designing video games.
As a result,
ANGELINA has developed into a system not
just concerned with designing games,
but with being perceived as a game
designer by other people.
The latest version of ANGELINA can
stream its design process on Twitch,
write descriptions of its games, and
interact with its audience to gather
knowledge, ideas and feedback -
and we'll be touching on some of these
ideas later in the tutorial.
At the same time I've continued to
practice and develop as 
a game designer myself.
As a designer I'm most interested in
using procedural generation to enable
new kinds of game experience, but I've
also found myself increasingly
interested in games that allow the
player to modify
or recombine rule sets to create new
rules or capabilities,
something which I think is directly
inspired by having worked with AI game
designers in my research.
You can hear a little bit more about my
game design work in a paper I'm presenting at 
CoG (IEEE Conference on Games) this year
- "Procedural Generation and Information
Games" which I've linked to in the
description of this video.
In this tutorial today I'm going to
take you on a tour of automated game
design,
and try to give you some of the lessons
that I've learned, the everyday
techniques that I use,
and my perspective on where this field
is going. My aim is to guide you past
some of the mistakes and pitfalls that
I've made, and to give you the motivation and
confidence to know how to get started,
and also to offer some guidance on
promising ways to begin research in this
area.
To make this tutorial a little easier
to digest I've broken it down into a few
parts, and I've left timestamps in the
video description to help you jump
around
in case you want to take it piece by
piece. The only thing I would say is that
you probably should watch the parts in
order.
In the first part we're going to look
at what automated game design is, and
look at the history of the field and
what we can learn from that.
Then we're going to talk about why we
would want to automate game design, what
kind of impact it might have on the
games industry and society at large.
Then we're going to look at some
practical issues in building one of
these systems, and I'm also going to
introduce you to Bluecap, which is an
open source
automated game designer example that I
built specifically for this tutorial.
Finally, I'm going to talk to you
about some future issues about
creativity, about communities, and then the kinds of
problems that you might want to tackle
when you build your first AGD system. I'm
always super excited to talk to people
about any of these topics, so if you ever
do want to chat with me about procedural
generation,
automated game design research, making
games, playing games, and anything in
between,
you can always get in touch with me on
Twitter @mtrc,
or drop me an email
mike@possibilityspace.org.
Let's get started!
Let's begin by asking what does
automated game design actually
mean? Now, throughout this talk I'm going
to use that term - or sometimes its
shortened form, 'AGD' - to describe a
particular set of research areas and
systems, but in reality
there is no official definition for what
automated game design means,
and there are no clear boundaries that
divide what research is in and out of
this
group. In actual fact, if someone else were
giving this talk they would probably
draw different distinctions than the
ones I'm going to over the following few
minutes.
One of the reasons why this is so tricky
to define is that the research is
concerned with things which are also
very hard to define. So the notion of being a game
designer has a specific meaning in the
games industry,
but when we talk about automated game
design research we're referring to more
than just the tasks a game designer
takes on.
Really we're describing anything to do
with the development of games,
but I like the term 'automated game
development' less because
it emphasizes technical aspects more, and
the field already has a significant bias
away from artistic and design problems.
Beyond the term game design we have even
more complicated issues, like the ongoing
discussion on what exactly constitutes a
game, and these are conversations that
scientists are quite bad at having. 
But it's worth noting that
a lot of people in the games
industry struggle to be open-minded in
many of these instances too. 
That might make this field sound
intimidating, but I love working in this space
precisely *because* its definitions are so
fluid,
and because so many different groups are
responsible for shaping them on a daily
basis.
It also makes it easier to strike out in
a direction that you think is promising
and just start working.
There are a lot of unanswered questions
in this field, as we're going to see in
this talk,
and there really isn't much to stop you
from just getting stuck in.
Today we're not going to worry about
hard definitions. Instead we're going to
take a quick look back at the history of
automated game design research,
and use that as a guide to get a feel
for what this research area is about.
And as we'll see, over the last decade or
so there's been a huge divergence in
different approaches,
and today automated game design research
comes in many shapes and sizes.
We're going to start in 1992. 
The first system that I usually include when I'm
telling the story of
automated game design's history is
Pell's METAGAME system.
METAGAME designed what Pell called
"symmetric chess-like games," which you can
think of as games that you could design
if you were given a chess board and all
of the pieces that go with it.
Even though it's one of the earlier
examples of an AGD system,
its approach has a lot in common with
modern AGD systems:
it has a bespoke design language to
describe its games, much like the one
we'll be defining later,
and it uses simple static analysis to
look for problems with games and filter
them out.
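To make those two ideas concrete, here is a toy sketch in Python: a tiny design language for describing chess-like games, and a static check that filters out obviously broken designs. Everything here - the field names, the example games, the specific checks - is invented for illustration; Pell's actual system used a much richer grammar and analysis.

```python
# A toy sketch of the two ideas behind METAGAME: a small design
# language for describing chess-like games, plus a static analysis
# pass that rejects designs which can never be played to a win.
# The representation and checks are illustrative inventions only.

def is_plausible(game: dict) -> bool:
    """Reject designs that static analysis shows are unplayable."""
    # A game with no goals can never end meaningfully.
    if not game["goals"]:
        return False
    # Every goal must refer to a piece that actually exists.
    piece_names = {p["name"] for p in game["pieces"]}
    if any(g["piece"] not in piece_names for g in game["goals"]):
        return False
    # If no piece has any movement rules, nothing can ever happen.
    if all(not p["moves"] for p in game["pieces"]):
        return False
    return True

chesslike = {
    "board": (8, 8),
    "pieces": [{"name": "lance", "moves": ["forward-1", "diag-capture"]}],
    "goals": [{"piece": "lance", "reach": "last-rank"}],
}
broken = {
    "board": (8, 8),
    "pieces": [{"name": "stone", "moves": []}],  # can never move
    "goals": [{"piece": "stone", "reach": "last-rank"}],
}

print(is_plausible(chesslike))  # True
print(is_plausible(broken))     # False
```

A generator can then churn out candidate rule sets and keep only the ones that pass checks like these, which is cheap compared to actually playtesting every candidate.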
Another example which comes 15 years
after METAGAME
is Browne and Maire's Ludi system, which
designed abstract strategy board games.
One of the major contributions by Ludi
was the extensive list of expert
designed metrics for evaluating the
games designed by the system,
which studied many different factors in
what Browne and Maire considered to be a
good strategy game.
These early systems had only one focus:
generating rule sets for games,
and they targeted games where rule sets
were really the entirety of the game
design.
We can think of this as a natural
extension of the role of games in the
history of AI, which generally concerned
itself with abstract physical games
like Chess and Go. This is partly because
these games posed hard search problems,
but also because these kinds of ancient
traditional games were seen as
intellectual pursuits,
and thus were respected in a lot of
highbrow scientific spaces,
which is still something that continues
to this day. Between 2000 and 2010
however, there was a large growth both in
the number of games-focused study
programs at universities,
and in the number of people
doing dedicated game AI research.
CIG, the precursor to this conference,
began as a symposium in 2005,
and became a full conference in 2010. AIIDE,
another major games conference, began in
2005,
while the Foundations of Digital Games
conference ran for the first time in
2009.
At the same time games themselves were
changing drastically too.
Independent game developers were now
getting a lot more attention in the
press,
and the online distribution of games
combined with console-led indie
publishing programs
were allowing different kinds of games to
be made, in different ways than before.
During this time the idea of
'proceduralism' became very popular among
certain circles of developers, critics
and academics.
Proceduralism is the idea that when we
build models and systems of things,
especially for video games,
we convey messages through how we build
these systems and how the player
experiences them.
Proceduralism was extremely popular
during this period. Jason Rohrer's Passage
came out in 2007,
along with Rod Humble's The Marriage, and
in 2008 Jonathan Blow's
Braid came out. All three of these games
were regularly cited as examples of
procedural rhetoric, along with newsgames
which were growing in popularity
at the time.
The internet was awash with essays and
blog posts about what these games meant,
and why this was the future of meaning
in games.
Michael Mateas, who now runs the
Expressive Intelligence Studio at
UC Santa Cruz, wrote in 2008:
A couple of years after Michael wrote
this, a few very important systems
emerged from projects at UC Santa Cruz.
The first is this untitled project led
by Mark Nelson, which constructed simple
minigames based on plain English
prompts like "shoot duck",
and as far as I'm aware this is the
first automated game designer that has a
visual component - it can select art from
a database to illustrate its games based on the
prompts. It can also design games from
different perspectives, so the prompt
"shoot pheasant" might result in a game
where you play a pheasant avoiding
bullets, or a game where you fire bullets at
pheasants who are flying around. 
A couple of years after this project
came the Game-o-Matic, led by Mike
Treanor.
This was one of the first examples of an
assistive automated game designer,
originally conceived as a tool for
journalists to make newsgames with.
The Game-o-Matic had a library of
micro-rhetorics, which were
fragments of game designs that had
particular interpretations attached to
them.
For example, if an object A gets
smaller when an object B
touches it, the player could interpret
this as B eating A.
The Game-o-Matic was able to select,
combine, and tweak these
micro-rhetorics from a big catalog to make
full game designs, and it would tailor
them to match whatever concepts and
relationships it had been given to work
with.
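As a rough illustration of the micro-rhetoric idea, here is a minimal Python sketch: each entry pairs a verb with a mechanic fragment a player could read as that verb, and the system instantiates fragments to match a concept map. The catalog, slot names, and triples below are all invented for this example; the real Game-o-Matic's library and recombination logic are far richer.

```python
# A minimal sketch of micro-rhetorics: small mechanic fragments
# paired with an interpretation, selected to match relationships
# in a concept map. The catalog below is invented for illustration.

MICRO_RHETORICS = {
    # verb -> mechanic template the player can read as that verb;
    # {b} is the actor (toucher/mover), {a} is the thing acted on.
    "eats":   "when {b} touches {a}, {a} shrinks",
    "chases": "{b} moves toward {a} every frame",
    "avoids": "{a} moves away from {b} every frame",
}

def realize(subject: str, verb: str, obj: str) -> str:
    """Instantiate the mechanic fragment conveying this relationship."""
    template = MICRO_RHETORICS[verb]
    return template.format(a=obj, b=subject)

# A tiny concept map: (subject, verb, object) triples.
concept_map = [("cat", "chases", "mouse"), ("cat", "eats", "mouse")]
design = [realize(s, v, o) for s, v, o in concept_map]
# design now lists mechanics a player could read as "cat chases
# mouse" and "cat eats mouse".
```

A fuller system would then tailor these fragments - tuning speeds, spawn rates, win conditions - so the combined mechanics still form a coherent, winnable game.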
I started work on my own AGD system,
ANGELINA, around 2010, but in 2012 I began
to move away from abstract
arcade games and think more about
representing other parts of the game
design process.
At the time I was mostly unaware of
what was happening in Santa Cruz, but I
was really influenced by research in
Computational Creativity.
My PhD advisor, Simon Colton, advocated
for handing over responsibility to AI
systems in order to help them become
more independent,
and this led to the development of a new
version of ANGELINA where I gave the
system the ability to design
not just the rules and level design, but
the selection of music, sound effects and
artwork.
Although it didn't have the
sophistication of the proceduralist
approaches emerging from Santa Cruz, it
was another way in which AGD systems
were beginning to incorporate different
parts of the game design process.
The push for proceduralism forced AGD
systems to consider how games
communicate ideas, which naturally led to the
consideration of factors outside of just
rule sets, like the selection of
appropriate visual content.
At the same time other systems like
ANGELINA were trying to take on more
responsibility in the creative process,
which also transitioned them away from a
pure focus on rules.
AGD systems were expanding to tackle
multiple creative tasks at once.
In 2014 Antonios Liapis, Georgios Yannakakis and Julian Togelius
published a paper titled "Computational
Game Creativity" in which they described
automated game design as a process of
orchestrating different types of content
generation.
In other words, solving not just a
procedural generation problem,
but solving a procedural generation
problem while making sure you are
working together with the rest of the
game's design.
We can clearly see this reflected in
many of the AGD systems in this era,
as well as dozens of other similar
design systems that were made during the
same period.
Today there are more automated game
design systems than ever before,
working in many different genres and
modeling different kinds of game design
activity.
This is footage of Germinate, the latest
AGD tool to emerge from the work at UC
Santa Cruz,
and the work of many different
researchers including Adam Summerville,
Max Kreminski, Sarah Harmon, Joseph Osborn, Chris Martens, Noah Wardrip-Fruin,
Michael Mateas and many more besides.
Germinate is built on top of Gemini,
a rich and complex AGD system that can
design games based on descriptions of
their intended meanings - 
but it can also do the inverse,
extracting possible meanings from
existing game designs.
We can see some of the ideas from the
Game-o-Matic being extended and expanded
upon here,
as well as a lot of innovative work
being done on co-creative interactions,
as demonstrated in this video.
We've also seen some impressive systems
that actually result in fully functional
playable standalone games. Data Agent,
which was developed by Gabriella Barros
at NYU with Michael Green, Antonios Liapis, and Julian Togelius, is a murder mystery
system that generates quite complex
puzzle games using real world data.
The Data Agent team wrote an excellent
paper advocating for 'maximalist game
design' and the project is one of the
most accomplished examples of a playable
research game in existence, I think.
Matthew Guzdial's work on applying
machine learning to automated game
design is another example of exciting
new paradigms opening up brand new
approaches to the field.
This system learns the structure of
games by watching gameplay recordings,
and then builds up an internal model of
how it thinks the game works.
Once this has been done, multiple learned
models can be blended between to create
new designs that combine or interpolate
different aspects of other games.
I really recommend looking this work up
as well as Matthew's related work on
co-creative design tools,
especially if you're keen on applying
machine learning to AGD yourself.
In a similar vein the work of Ahmed
Khalifa and others at NYU
applying reinforcement learning (RL) to level
generation is also worth a look.
RL is applied to generate levels for
multiple games from the VGDL (Video Game
Description Language) corpus including Zelda and
Sokoban,
which means it gets a chance to explore
how changes in genre or subgenre affect
the nature of the learning task.
Also at NYU is the BABA IS Y'ALL project
by Megan Charity and others, which is a
mixed-initiative system
you can play online
right now, testing BABA IS YOU-inspired
levels, designing your own, and even
watching AI try to solve them.
This is a particularly notable example
because BABA IS YOU is, in some ways, a
game about game design and general game
playing, and so getting 
AI to play and design for
this game is an especially intriguing,
meta-level task.
Finally, one example that I couldn't
leave out is SuperMash, a commercially
published independent game
developed by Digital Continue and
available on Nintendo Switch and PC.
SuperMash lets the player mix up two
game genres, like JRPG and platformer,
and then generates short blended game
experiences that mix up the genres in
unusual ways.
The generation process is very
constructive, but has some real systemic
flair, as well as looking absolutely
gorgeous.
SuperMash is a weird, wild, and sometimes
broken game, but it's also one of the
bravest and most technically
experimental games I've seen in a long
while, and I think it
 absolutely deserves a
mention here as evidence of how AI-based
game design is slowly creeping into the
games we play every day.
Before we close out this history lesson
I wanted to add a little footnote.
In 2015 Gillian Smith and I wrote a
paper where we argued that automated
game design
was too focused on rules and game
mechanics as the center of all of its
systems,
and that if we didn't do something to
change this we risked rules becoming the
central and most important
thing about automated game design
research.
We suggested that focusing on genres of
game where rules were under-emphasised,
or even non-existent,
like walking simulators, would help us
develop the field and broaden its scope.
As you can see from this potted history
of the field, in the five years since we
published that paper not a lot has
changed, and most AGD systems still do
focus on rules, and I say that as someone
who has built half a dozen of these
things, all of which did exactly that.
Building AGD systems that deal with
different problems like art direction
or soundscape design is extremely
difficult, and I've abandoned more than
one project in that area.
The takeaway here isn't that these
systems are impossible to build,
nor is it that it's bad to make AGD
systems about game mechanics or rules.
It's simply a reminder that games are
bigger than this field and they're
definitely bigger than any one AGD
system.
Game design is a creative practice that
is expanding and changing all the time,
and that quality of being a moving
target is one of the most 
exciting things about
it. As long as we keep 
trying to expand our
thinking and look beyond what we're
currently doing
we can keep automated game design
research alive and vibrant.
No two games are alike, just like no two
game designers are alike,
and for the same reason we shouldn't
expect any two AGD systems
or AI researchers to be alike either.
Everyone comes with their own
perspective to this field, and their own
problems that they want to investigate,
and that's why we don't need to worry
about every system covering every aspect
of game design.
I really like this quote from V
Buckenham's manifesto for making games.
It encapsulates a particular kind of
openness and optimism that I find really
appealing,
and that's also what I want you to take
away from this history of the field.
There's no convergence on a correct way
of doing things - the process of building
an automated game designer is as organic
and personal
as the process of building a video game,
and every new AGD system always brings
us new insights and inspiration for that
very reason.
For me this is one of the best things
about working in this area.
This brings us to a new question which
is: why?
We've seen how, over the history of
automated game design research, we've
moved from a fairly narrow field focused
on the abstract
to a bustling place with many
researchers pursuing different design
philosophies.
But to what end is all of this research
being done? Why exactly are we building
all of these complex and often bemusing
systems?
I often leave this reason for last when
making lists of researchers' motivations,
but I think it bears listing up front
here: a lot of people 
become researchers
because they love a challenge.
There's something deeply appealing about
wondering if something is possible,
and how we might be able to make it
happen, and that's a great reason to do
something like this because we learn
so much along the way that even if we
never reach the summit
the new discoveries make the trek
worthwhile.
Another reason that's often raised is
the ongoing and escalating crisis of
labor in the games industry,
specifically in the medium to large
commercial games companies
that make some of the biggest and best
known games in the world.
The games industry exploits its workers
in many different ways, but one of the
best known
and most destructive of these is the
extent to which game developers are
overworked to the point of exhaustion,
and how normalized this has become for
both developers and their audiences.
This is sometimes known as 'crunch'. For a
contemporary example of this, CD 
Projekt Red have admitted to working
under crunch since January of this year
to finish their latest game, Cyberpunk
2077. This long period of crunch that exceeds
even the already-high levels in the
industry
is colloquially referred to as a 'death
march'.
Often we propose that automated game
design systems might help alleviate some
of these problems
by providing tools that reduce the
workload for teams,
allowing them to make the games that
they want within normal working hours,
and perhaps even automating away tasks
that are considered repetitive
or less enjoyable. However.
It's absolutely the case that some of
the games industry's problems with
crunch, mismanagement and 
other issues could be
alleviated with certain AI systems.
Despite this, we very often write papers
about procedural generators and
automated game designers
where we explain that this system will
reduce the workload on developers in
large game studios.
This, I think, gives the games industry a
little too much credit. If we
straightforwardly make games easier to
make, games will either get bigger, or
companies will lay off workers until
crunch is necessary again.
So when we say there's a crunch crisis
in the games industry, I don't really see
AI as a simple solution.
But I do think AI can potentially be
part of a solution. Instead of thinking
about how AGD systems can accelerate the
manufacturing of a product,
we should think about their
ability to empower individuals, and the
best way to do this is not to study
their impact at the largest
and most resource-rich companies, but to
look at the needs of creators across the
medium of games.
Instead of asking how we can support
successful large businesses,
who are successful at least in part
because of their broken and exploitative
business practices,
we should ask ourselves: why are
smaller developers not able to find
sustainable income for their work?
Why do they struggle to create the
kinds of games that they want to?
Can we build systems that help make this
easier? The answer won't always be yes,
but I think that this is a better
starting point for doing AI research that serves
the public good.
Beyond this I think automated game
design, and computational creativity in
general,
has a big role to play in advancing how
people are creative on a day-to-day
basis,
and how creative work is integrated into
society. Creative communities provide
support,
mentoring, growth, feedback, inspiration
and camaraderie to the people who create
within them,
and you don't need to be a professional
artist leading a movement to benefit
from one.
We're in creative communities with our
friends, our co-workers, and our family,
but maintaining creative communities and
making sure everyone is served by them
takes time,
work and other resources which not
everyone has equal access to.
Automated game design systems could
become important parts of these
communities,
helping support and maintain these
groups and helping everyone grow and be
creative on a casual basis.
I'll talk a little bit more about how I
see this being possible at the end of
the tutorial.
There are many more reasons to do this
research than the few that I've been
able to list here,
but helping people create more freely,
more confidently, and more happily
are all great reasons in my opinion. So
the next question is:
how exactly do we get there?
It's tempting to think of scientific progress
as a linear path that we slowly move
along until we get to a big goal at the
end of it.
We often frame research problems in
these terms: for example, building AI that
gets better and better at playing a
particular video game
until it beats a human expert, like OpenAI's adventures in beating the world
champions at DOTA 2.
This simplified way of thinking about
progress is easy to communicate,
which makes it appealing to both press
and the public, and
easy to evaluate, which makes it easier
to write papers and
argue that the field is moving forwards.
We're led to think of technological
progress along two axes:
the amount of resources put into a
system, whether that's time,
computation, or something else; and the
impact of the output, which again might
be in terms of perceived quality,
commercial worth, cultural value, or
something else.
Typically efforts are made towards
reducing the computational resources
involved
and increasing the impact of the output,
so if you can make a spaceship that goes
to the moon
and costs 10 million dollars then making
it cost 1 million dollars (that's fewer
resources)
and go to Mars (that's more impact) is
just seen as straightforwardly good.
For automated game design, as with many
creative domains like art and music,
this thinking leads us to focus on what I call
the Infinite Rembrandts problem
where we imagine the ultimate goal as a
system which produces
high quality creative work with very
little time or computation.
During the early years of my work on
ANGELINA, the most common question I
would get in interviews with the press
was how long it takes ANGELINA to
design a new game.
The follow-up would often be, could I
imagine a world in which someone could
press a button
and receive a brand new AAA video game
just a few minutes later?
This is a captivating science fiction
idea, but it's not a future that I see as
particularly interesting,
or even likely. There are many reasons
why this doesn't fit into the way that
we enjoy
culture as human beings. In talking about
a similar problem, the computational
creativity researcher Simon Colton
posits that we enjoy creativity not just
because of the things that are made,
but because of the processes, backgrounds,
and motivations of the people who make
them.
As he puts it: "Creativity in society
serves various purposes,
only one of which is to bring into being
artifacts of value."
But you don't need to get deep into
philosophy to see why such a creative
system isn't very appealing.
For one thing, for most of us there
already exist more AAA video games than
we could ever hope to finish in our
lifetime. In a sense we already
have a system for
pressing a button and receiving a new
AAA game: it's the new releases tab in
the PlayStation store.
One reason why I think this idea
resonates so strongly with us is that it
aligns with the view of games as a
commodity and a business first, and a
creative endeavor
second. Because the games industry is so
often framed in economic terms, from the
specs of a new console generation to the
budget and earnings of its biggest
studios, our ideas about 
what the future could
mean for games are inevitably shaped
along these lines too.
Games, and videogames especially, have a
huge identity problem, and as a medium
they suffer from a lot of insecurity.
After struggling for acceptance for so
many decades, the way that acceptance has
manifested is largely a result of being
important financially. If you've ever
written a grant application about games
research, the chances are
you've quoted some
figure about the size of the market, or
how fast it's growing.
Instead of thinking about how automated
game design can fit into the ways we
already make games,
it's better to think about automated
game design as a new tool,
a tool that can be used by researchers
but also by artists, by designers, by
teachers,
by everyone interested in making games.
These tools change who can make games,
how we make them, and the kinds of games
that it's possible to make.
So with that in mind let me propose
three different applications for
automated game design,
that are a little bit different from the
Infinite Rembrandt zone. First let's look
at this corner of the highly scientific
graph we just made.
Down here on the left we're looking at
systems which don't need much time or a
big computer to run on,
but unlike the Infinite Rembrandt zone
they don't need to produce games of
great complexity or depth. My example
for this is what I call 'coffee break
games'. Imagine you're sitting on a bus,
waiting for an appointment, or just
putting off going to sleep for five more
minutes. You pull out your
Nintendo Switch, tap on
the Two Minute Minigames app
and ANGELINA generates a short and sweet
puzzle game about counting sheep.
You play the game, complete it, put your
Switch away and never,
ever look at that game again. What
things are we looking for in an AGD
system like this?
Well, one priority is that the design
time needs to be very short. In fact the
ideal model for this would be equivalent
to generating a level in a game like
Spelunky - so short that the player
doesn't even notice it happening, and
that could be a real challenge because
most AGD systems need at least a little
bit of time to test a game to see if
it's playable.
In terms of output quality there's a
real need for the system's output to be
polished and bug free.
The game doesn't have to be a
masterpiece - we'll get to that in a
second -
but because the system serves its output
directly to the player there's
absolutely no chance for a do-over,
or a round of feedback, or anything else.
The priority has to be on producing
games that work.
Fortunately we don't require the games
to be novel at all - in fact novelty isn't
just unnecessary here,
it's actually a bad
thing: if the games generated are too
novel the player won't be able to learn
them and start playing within a few
seconds,
so an AGD system working on coffee break
games would probably be best off
remixing common ideas
to find entertaining variations that
were still instantly accessible.
Let's go back to our very scientific
graph. Moving along the resources axis,
and just a little further up the impact
axis, we reach a spot where AGD systems
are given
a little bit more time and computation
to work with, and an opportunity
to make games with a bit more impact
than our five-minute coffee break
distractions.
Here we can find a type of AGD
system that I'm going to talk a lot
about later that I call 'community-scale
AGD' or, if you prefer, game design buddies.
These are AGD systems that can make
reasonable-sized games on their own, as
well as play and analyze games made by
other people,
offering help and feedback. We might
think that by increasing the resource
cost we're making a system that's less
accessible, that has to run on big
expensive servers somewhere,
but instead these systems solve the
resource problem by just making games
more slowly,
and the result is an AI system that
works on timescales that people can
actually relate to.
What this means is that these systems
are always running, sort of like a
desktop pet that you might have had in
the 1990s. These systems 
are always working on
something, and you can check in at any
time to see what it is,
offer feedback, maybe design part of the
game yourself, and have a dialogue with
the system about what it's doing.
Imagine something like this on a
computer in a classroom, or running on an
old laptop
at the back of an indie studio, or maybe
just sitting at home on your desktop.
These little game designing machines
will learn from you, which changes the
kinds of things they design,
and together you explore a world of game
design. In terms of our wish
list for a system like this, first off
we're looking at
a design time measured in maybe weeks or
months, rather than minutes.
These systems emphasize the process of
making games, not the finished product,
and so we only need to release games
when they're finished - we're not in any
rush.
When you think about it, this makes a lot
of sense in social contexts.
Most people make games at a rate slower
than this and that's where a lot of
interesting creative interaction comes
from -
you have the excitement of watching a
game develop, waiting for its release,
maybe giving feedback on an early build
that someone sends you,
and then sharing the finished product
with others and reflecting on the
process.
In terms of quality, we don't necessarily
need the games to be flawlessly high
quality.
Much like the games that you and I might
make, they might have some rough edges on
them or not have every idea executed
perfectly -
the important thing is the act of
creating it, what lessons the AI can
learn from each finished game,
and how that leads it to make better
games in the future, and all of this
gives its audience an opportunity to get
involved too.
There's a similar story with novelty
too - the novelty of a system like this
can be purely local to the community the
system exists in,
rather than aiming to be completely
novel across the entire history of games.
We'd hope that the system could surprise the
people around it but it needn't be
revolutionizing the entire field of game
design.
We'll talk a little bit more about this
idea of community-scale AGD later in the
tutorial.
Finally let's stretch up into the far
top right of this graph where the
resource cost is high and so is the
impact.
The greatest irony about the Infinite
Rembrandt Zone as a decadent luxury AI
system
is that it actually isn't that ambitious.
So for the third and final example in
this section I want to look at an AGD
system that aims to change games,
fundamentally, forever.
I call this 'century-scale AGD'. The
premise behind a system like this is
that it aims to create a single piece of
work, but that work is of such immense
impact that we would be willing to wait
for any length of time and spend any
amount of computation to receive it.
An AGD system that exists at this spot
in our graph is designed to work for
decades, if not centuries,
on a single game - and this is by design.
We might think, well, if it takes 100
years to make a game why not just double
the computational resources we give it
and get a game in 50 years?
But what I'm proposing is that even if
we could double the resources in this
case
we would simply want to let it run for
twice as long again so as to increase
its impact even further.
The length and depth of the system's
design process is part of what the
system is.
Now the quality requirement of such a
system is astronomically high - the
resulting game
should be important and impactful - the
people that live to play this game for
the first time
will have been watching this system work
for their entire lives, the people who
built and maintain the system have died,
the expectation is that not just the
game, but the process of designing this
game, will have a lasting cultural and
artistic impact
and be remembered for centuries to
follow. Similarly, in novelty terms a game
made by such a system wouldn't just be a
novel idea, but would fundamentally
change the way games were talked about,
thought about, made and played forever. These are just a few possible futures
for automated game design, but I think
they're all more interesting, and more
likely to happen,
than our imagined infinite AAA shooter
machine. They're also all very different
engineering challenges, demanding a wide
variety of different
approaches. Automated game design is a
toolbox, it's a new way of thinking about
modeling, exploring and enjoying design
spaces, and that's what makes it so
difficult to imagine what it might let
us do.
We've talked about what automated game
design is and we've looked at the
history of the people and systems that
made up the research field so far,
and we've also discussed the kinds of
things that we might want this research
to do
in the future for society. Now it's time
to talk about the practicalities of
actually building one of these systems,
from initial sketches
through to publishing and distributing
the games that it makes.
Throughout this section I'm going to
illustrate some of the topics we cover
by referring to Bluecap,
an open source automated game designer
that I built for this tutorial.
Bluecap is a very simple toy-like
example of an automated game designer
that designs two player abstract games
like Noughts and Crosses or Connect Four.
It's written in Unity with C# ('c-sharp') and
comes with some starter scenes for
playing and generating games. Hopefully
by now I've convinced you that automated
game design can
and should be much more than just ruleset
generation, but this simple system is
perfect for exhibiting AGD systems
using examples that are hopefully
familiar to many of you.
It's got a little bit of everything in
it, so I really hope it'll be a useful
starting point for your own experiments.
The first thing to point out is that
I won't be using this section to teach
you about specific techniques like
co-evolution or MAP-Elites.
The main reason for this is that there
is no single best way to do automated
game design
and we certainly don't have enough time
to go over all of the ways people have
tried to do it.
But more importantly algorithms are
something we already spend a lot of time
writing and talking about.
Instead I'd like to focus on aspects of
the process that we don't really discuss
in papers and talks.
I find that a lot of the most important
knowledge and lessons learned when doing
AI research often go unwritten,
and that makes a lot of new areas harder
to get into because these
hidden traps and pitfalls are often not
pointed out to newcomers.
The first step in building pretty
much any generative system is laying out
the space that you want to explore,
and what we mean by this is defining the
range of possible outputs that we're
interested
in our system trying to produce. We don't
have the time,
money, or mental energy to try and build
a system capable of generating any kind
of video game,
so we're going to have to pick and
choose a smaller area to explore that we
find interesting and worthy of study.
I like to start out with a rough
statement summarizing the area I'm
interested in. For Bluecap I 
chose quite a small starting
point:
two-dimensional, two-player, turn-based
strategy games.
A few of these choices in particular
I'm making ahead of time to simplify my
life later on.
Turn-based games are a lot easier to
evaluate than real-time games,
as are two-player games. Two-dimensional
games are also a lot easier to make look
nice,
especially if you're not great at art (like me!)
Even this statement covers a huge space
of possible games, so there's a little
exercise that I like to do to narrow
down the space a little bit before I get
designing. What we're going to do is
define our space by thinking about the
results that we'd like our system to
come up with. The first step is to
define what we call an
'inspiring set' - I like to think of this as
being like marking out the edges of the
space that we want to
explore. This inspiring set is a list of
outputs that we can describe
in detail - we could even make them by
hand ahead of time.
Often they are famous examples in the
domain that we're already trying to
work in. For example, if we're building a
level designer we might pick a few
famous levels from our favorite games or
sketch out designs that we would expect
our system to be able to generate.
For Bluecap I started with a few simple
games that were already in
that category that I defined just now -
Noughts and Crosses, which you might
know as tic-tac-toe,
Connect Four, which I used to play a lot
when I was a kid,
and Reversi, which you may have seen on
Windows as a free game,
or maybe you've played it with a
physical set yourself. These are all
games that fit into our definition of
turn-based, two-player, abstract strategy
games.
The next step is to review our inspiring
set to get a feel for the kind of design
space this marks out.
In order to check this it's useful to
imagine trying to transform one member
of the inspiring set into another.
We can do this informally - it doesn't
need any code, we just note down the kind
of changes that it would require.
We're looking for two things in
particular:
how many times do we need to change the
game to get from one to another,
and what is the nature of these changes -
how big and how significant are they?
So to transform Noughts and Crosses, or
tic-tac-toe,
into Connect Four we need to increase the
size of the board
from a 3x3 grid to a 7x6 grid - and
straightaway this tells us that the size
of the play area will probably be
something that we want our generator to
have control over.
We also need to change the victory
condition. The two games have quite
similar victory conditions,
but one is three-in-a-row, whereas the
other is four-in-a-row, so this is
another variable that we probably will
want our system to be able to control:
the number of pieces in a row required
to win. Finally, we need one more rule
to make it Connect Four,
which is a rule that applies between
player turns.
When a player places a piece on the
'board', as it were, it should 
fall down to the bottom, and
that's because Connect Four is played on
a grid that's standing upright, so you
drop pieces in from the top.
We can simulate that here by just
having the pieces fall down
after the player has placed them. Each
of these small changes will inform the
design language that we're going to
describe in just a second,
but importantly the number and
complexity of the changes
tell us a lot about the size of the
design space that this inspiring set is
defining. If there are too many changes
required, or if the changes are extremely
fundamental,
this will make it harder - although not
impossible - to build a system that
encompasses
the entire breadth of the inspiring set,
and it might mean that
subsequent tasks like evaluating the
games are also harder.
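One way to make these informal transformation notes concrete is to write each game as a small set of design-language components and count the differences. This is a hypothetical Python sketch (Bluecap itself is written in C#, and Reversi's piece-count win is simplified here to `None`), not the system's actual format:

```python
# Hypothetical sketch: each inspiring-set game as a dict of components.
# Counting differing components tells us how far apart two games are.
GAMES = {
    "noughts_and_crosses": {"board": (3, 3), "in_a_row": 3, "after_turn": []},
    "connect_four":        {"board": (7, 6), "in_a_row": 4, "after_turn": ["gravity"]},
    # Simplified: Reversi's win condition is most pieces, not N-in-a-row.
    "reversi":             {"board": (8, 8), "in_a_row": None, "after_turn": ["capture"]},
}

def transformation_size(a, b):
    """Count how many components differ between two game specs."""
    return sum(1 for key in a if a[key] != b[key])

# Noughts and Crosses -> Connect Four changes the board size, the
# in-a-row count, and adds the gravity rule: three changes.
print(transformation_size(GAMES["noughts_and_crosses"], GAMES["connect_four"]))
```

The number and kind of changes this surfaces is exactly what tells us how big the design space marked out by the inspiring set is.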
We also need to look at how drastic
individual transformations are.
For example, if your inspiring set
contains level designs and one
is a 3D level and the other is a 2D
level, that might be too drastic to
bridge between with a single system.
In the case of rule sets, for example, I
try to make sure all of my inspiring set
games share a control scheme, 
or at the very least
that there's a superset of the
controls that isn't too large.
This tends to make evaluation a lot
easier as we'll see later on.
For Bluecap the space we've marked
out is actually a little on the small
side - the games are quite similar to one
another
and the changes required are
also quite simple,
and this can mean that the resulting
design space is too small. But
fortunately there are ways that we can
improve this as we build the system.
The next step that I find useful is
to establish landmarks in the space that
we're slowly marking out.
A landmark is something which we can
express by mixing together bits of our
inspiring set,
possibly with a few modifications.
Landmarks can be
very lazy - they don't need to be
innovative, or
good, or really clever games, and they
don't need to be tested - you don't need
to make sure that these are good games,
they just need to be relatively
well-formed examples of games that would
be in your design space.
I like to look for mid-points
between two examples in the inspiring
set, and although I've used a triangle
here that's just because my inspiring
set
has three games - yours might have four,
or it might have eight,
it might just have two, but you
just need to pick a few to blend
between and see what those landmarks
look like.
This is a good way to test out the
density and texture of your design space -
is it hard to think up a plausible
example, or are there loads of
good ideas occurring to you as you mix
up these components?
This will give us an idea of whether
this space will be easy to search or not.
You can see for Bluecap I've come up
with things like Super Tic-Tac-Toe,
which is Noughts and Crosses but on a
Connect Four-sized board - we kind of
already alluded to that before,
it's like Connect Four
without gravity. Or
Battle Tic-Tac-Toe, in which you're
aiming to get three-in-a-row but you can
flip pieces like in Reversi.
This game is
actually broken, if you think
about it, but
it's game-shaped - it makes sense as a
blend, it wasn't too hard to come up with,
and that's fine, that's all a landmark
has to be. The third example there is
Graversi, which is Reversi but it has Connect
Four's rule that makes pieces fall down.
These three games, they're very simple,
they're really just mixing together
rules from different parts of our
inspiring set,
but they were very easy to come up with
which probably means that there are more
designs in that space that are similarly
easy to discover,
and it's not too hard to kind of apply
one rule from one game into another game.
These are all good signs for Bluecap. 
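Landmark-making like this can be thought of as a simple blend: each component of the new game comes from one parent or the other. Here's a hypothetical Python sketch of that idea (the component names are illustrative, not Bluecap's):

```python
import random

# Hypothetical sketch of landmark-making: blend two inspiring-set games
# by picking each component from one parent or the other at random.
def blend(parent_a, parent_b, rng=random):
    return {key: rng.choice([parent_a[key], parent_b[key]]) for key in parent_a}

tic_tac_toe = {"board": (3, 3), "win": "3-in-a-row", "after_turn": ()}
connect4    = {"board": (7, 6), "win": "4-in-a-row", "after_turn": ("gravity",)}

landmark = blend(tic_tac_toe, connect4)
# e.g. {'board': (7, 6), 'win': '3-in-a-row', 'after_turn': ()} -
# Super Tic-Tac-Toe, Noughts and Crosses on a Connect Four board.
```

If blends like this keep producing things that feel game-shaped, that's a good sign the space between your inspiring set members is dense with designs.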
Once we have these inspiring games
and the landmarks laid out we can begin
to think about what our design language
will start to look like.
At this point I want to interject to say
that I actually have a paper at CoG (Conference on Games) this year about the drawbacks of design
languages
and a suggestion for an alternative way
of working, but despite this I
still use design languages all the time.
I think that they're easy to use,
they're fast, they're fun, and they get
pretty good results, so if you've already
seen that paper
don't get confused by my use of design
languages here - I'm still super in favour
of them!
But if you're curious about what other
ways of working might be possible,
I would encourage you to check out that
talk - I'll put a link in the video
description.
We've defined our inspiring set, we've
defined a few landmarks, and now it's
time to precisely define what language
we want
to use to build games, levels,
characters, or whatever else we're
dealing with.
Now, the exact nature of the design
language you need will greatly depend on
the kinds of algorithms and techniques
you plan on using,
but there's still some general things
that we want to watch out for. Whatever
our design language is for, whether it's
designing levels, composing music, or
writing game rules,
there are a few common considerations
and challenges. We want a design space
that contains a lot of surprising,
novel, and high-quality outputs, which
might suggest that a bigger, more
complex design language is always
better, because it lets
us express a wider variety of designs -
but we also need a design space that
isn't too big,
and one where the ratio of good to bad
outputs is not too small.
The bigger our design space and the
smaller the ratio of good to bad
outputs, the harder it will be to search
and the more computational resources
we'll need to dedicate to it.
This means that every small decision
about our design space is important.
For example, we know we want to have
variable size boards because
our inspiring set games use different
size boards.
So how big should we allow the boards to
be? A good starting point is to use the
limits suggested by our inspiring set.
Noughts and Crosses is the smallest
at 3x3,
and Reversi is the largest at
8x8.
We want to give our AGD system some room
to experiment in, though, so we might want
to extend our limits a little.
In this case we probably don't want
anything smaller than a three by three -
the space is a little tight and even
Noughts and Crosses has problems on its
3x3 board -
but we could let boards get a little bit
bigger than 8x8, perhaps to 10x10 as
the maximum limit.
Now one criticism of this range is that
it includes a lot of redundancy -
for example a 5x10 board isn't very
different from a 10x5 board if you're
playing a game like Noughts and Crosses,
and most games that work on a 9x9 board
would probably also work on a 10x10
board.
So if we agree that this is unnecessary
we could restrict this axis
to a set of common board sizes instead
of just using arbitrary values,
so for example we could only allow
square boards,
and we could even restrict it to a set
of 3x3, 5x5, or 9x9.
This massively reduces the task of
choosing a board shape.
However, there are drawbacks to this very
drastic approach. For example,
one of the rules in our design language
is that pieces fall down - that's the rule
from Connect Four.
In a game with this rule, a 5x10 board
is very different from a 10x5 board,
because of the gravity being applied to
player pieces. Similarly, while most
games might not play differently on a
9x9 board to a 10x10 board,
there might be a
particularly genius game that uses
patterns of three pieces,
and having a board length that is a
multiple of three might be crucial to
making that game work.
This is obviously a hypothetical
claim - we don't know whether this game
exists -
but we might have a hunch, or a hope, and
thus want to keep that option open.
This is a classic trade-off familiar
to anyone who has made a procedural
generator before:
the broader you make your design
language, the more potential good things
you include,
but you also make the task of finding
the good stuff harder.
This is a list of all the components we
get from the process of transforming one
inspiring set game
to another in Bluecap, and it's a good
starting point for our game.
From this we can start to think about
the next important part of our design:
the structure of a valid output and how
to generate one at random.
First, what components are necessary?
Well, every game must specify the size of
the board, so that's necessary,
and for Bluecap we also want every game
to have a win condition.
This one isn't a necessity - we
might want to design games with no end,
so there's no win condition needed.
We also might want our system to have
the freedom to design games where you
can only lose and not win
but for the system I was designing I
decided to have a win condition just be
a part of every game.
We might also want to allow multiple
ways to win, but again for Bluecap
we decided to just allow one win
condition per game.
You can see underneath here we've
got a couple of events that apply
after a player takes a turn. These are
fairly generic rules that can be mixed
and matched together,
and some games like Noughts and 
Crosses don't have any at all, so we
know we don't *need* to have these.
So we'll allow our games to have
anywhere between zero and two
of these after-turn events so we end up
with a simple spec for a game that looks
a bit like this.
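That spec - a board size, exactly one win condition, and zero to two after-turn events - can be sketched as a random generator. A hypothetical Python version (Bluecap's real generator is C#, and these particular condition and event names are illustrative):

```python
import random

# Hypothetical sketch of Bluecap-style random game generation: every
# game has a board size and exactly one win condition, plus zero to two
# after-turn events drawn without repetition.

WIN_CONDITIONS = ["3-in-a-row", "4-in-a-row", "5-in-a-row", "most-pieces"]
AFTER_TURN_EVENTS = ["gravity", "capture-bookended"]

def random_game(rng=random):
    num_events = rng.randint(0, 2)
    return {
        "board": (rng.randint(3, 10), rng.randint(3, 10)),
        "win": rng.choice(WIN_CONDITIONS),
        "after_turn": rng.sample(AFTER_TURN_EVENTS, num_events),
    }

game = random_game()
```

Notice how every decision we made earlier - the board-size limits, one win condition per game, optional after-turn events - shows up directly in the shape of this generator.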
Between this and our design language
we could move on right now and start
building the system,
but probably your language at this point,
much like Bluecap's, will be a little bit
on the simple side because of the way
we've started with inspiring set games
only.
That means that the overall design
space will probably be a bit small.
Small design spaces can be really
good, but if they're too
small they probably won't contain much
to surprise us, and they might be a
little bit too easy to search.
The main way we're going to expand Bluecap's
design space is by extending its
language with new components.
Extending the design language might mean
adding more options to an existing
category,
for example a new win condition. It could
also mean adding new categories
altogether, which might require modifying
the structure that we're going to design.
For example, if we decide to add lose
conditions we'll need to modify the game
structure we just defined
so that it has a slot for a lose
condition. However, this is the best time
to do something like that -
once we start building the system itself
and writing code for the games and ways
to evaluate them
it gets a lot more painful to change the
underlying structure of what you're
going to generate.
It's still feasible, and I do it all the
time because we all forget stuff,
but it just takes a bit more work, so
it's good to have these thoughts now
before we start building the system.
Now, when adding to the design language
we might also want to pull concepts in
from games we like, especially games that
are similar to those in our inspiring
set but that we didn't actually consider
at the beginning.
For example we could add an after-
turn rule that deletes rows that are
completely filled,
just like in Tetris. We can also remix
existing logic to use it in other parts
of the design language so,
our Reversi rule which captures pieces
that are bookended by pieces of the
other player,
could also be used as a win condition, so
you win if you can bookend another
player's piece, for example.
Now, in both of these cases - the adding
the rule from Tetris or reusing the
reversi rule in a different way -
we don't know if either of these two
things will lead to good game designs,
but they feel like they are components
that might be interesting to experiment
with and that's reason enough to include
them in the design language.
When expanding our design language
it's useful to ask a few basic questions
for each addition so we get a feel for
the kind of changes we're making to the
space.
First, what exactly is it that we're
adding? Sometimes this is very
straightforward, so
if we're adding a new win condition then
that's it, but sometimes we might be
adding a *set* of possibilities rather
than a single new possibility.
For example, if we added a third
dimension to the board size which could
also take a value between 3 and 10
tiles then we're not just 
adding one new board
size - we're adding hundreds of new board
configurations.
Being clear about exactly how much
we're adding to the design language is
really important.
The second question is: what excites you about
this change? It might be that you can
imagine this change enabling a really
good kind of game design,
sort of like our inspiring set from
earlier, but more often than not it will
just be something that makes you
interested. The best kind of 
addition is one that
makes you ask a question about the
design space, or one that makes you
curious about something. Is it possible
to design a two-player game that has
Tetris' deleting-filled-rows rule?
Additions that excite or interest you
are always good ones to make.
The third question is how is the density
of the design space changing? By
density I mean the ratio of
acceptable outputs to bad outputs. Of
course we can't know this exactly - that's
why we're building the AGD system - 
so we have to act on feeling.
But we really only need to judge it
within an order of magnitude;
most additions to the design space
barely alter the density, they add some
good ideas and they add some bad ones,
and if you have a hunch that you want to
add it they probably add enough good
ones to balance it out.
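One cheap way to get that order-of-magnitude feel for density is to sample: generate a pile of random games and run a cheap plausibility check over them. A hypothetical Python sketch, with a toy generator and filter standing in for the real thing:

```python
import random

# Hypothetical sketch: estimate the density of a design space by
# sampling random games and running a cheap validity check over them.

def is_plausible(game):
    """Toy filter: reject 'N-in-a-row' wins that can't fit on the board."""
    width, height = game["board"]
    return game["in_a_row"] <= max(width, height)

def estimate_density(generator, check, samples=10_000, rng=random):
    ok = sum(1 for _ in range(samples) if check(generator(rng)))
    return ok / samples

def toy_generator(rng):
    return {"board": (rng.randint(3, 10), rng.randint(3, 10)),
            "in_a_row": rng.randint(3, 6)}

density = estimate_density(toy_generator, is_plausible)
# A rough number, but enough to judge within an order of magnitude.
```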
However some changes will add a lot of
noise, and these changes should be
considered very carefully.
For example, suppose we wanted to allow
some tiles on the board to be marked as
blocked from the beginning of the game,
meaning that players can't play pieces
on them. This could open up a lot of
interesting new designs by allowing us
to change the shape of the boards,
but if any tile on the board could be
blocked this would massively increase
the design space.
For a 5x5 board there would be over 33
million different board designs with
different tiles being blocked off.
If we want to add this kind of
mechanical possibility without exploding
the design space,
we might want to think of more
restricted ways to add it. For example, we
could design a few templates of blocker
tile patterns, like a checkerboard
pattern
or a ring pattern, and allow these to be
chosen from a big design set,
rather than allowing any arbitrary
combination of tiles to be blocked off.
This will limit the novelty and
ingenuity of the block tile mechanic,
but in return we get a much more
tractable design space modification.
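The numbers behind that trade-off are easy to check: any tile blockable on a 5x5 board means 2^25 layouts, while a template approach means a handful. A hypothetical Python sketch (these particular templates are my own illustrations, not Bluecap's):

```python
# Hypothetical sketch of the blocked-tile trade-off. Allowing any tile
# to be blocked gives 2^(width*height) layouts - over 33 million on a
# 5x5 board - so instead we pick from a few hand-made templates.

def count_arbitrary_layouts(width, height):
    return 2 ** (width * height)

def checkerboard(width, height):
    """Block every tile whose coordinates sum to an odd number."""
    return {(x, y) for x in range(width) for y in range(height) if (x + y) % 2}

def ring(width, height):
    """Block the outer border of the board."""
    return {(x, y) for x in range(width) for y in range(height)
            if x in (0, width - 1) or y in (0, height - 1)}

TEMPLATES = [checkerboard, ring, lambda w, h: set()]  # set() = no blockers

print(count_arbitrary_layouts(5, 5))  # 33554432 layouts - versus 3 templates
```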
Automated game design is design. It
sounds obvious, but it's easy to forget
when we're viewing it through the lens
of AI research.
There is no optimal video game - except
perhaps Kula World for the PlayStation -
so before we even start we know we're
not trying to find the right way to
design games, just
*a* way, and hopefully a way that reveals
new things to researchers, to designers,
and to us.
Hopefully as you build out your design
space and think about the questions you
want to ask
you'll find yourself curious about
different parts of it. Try making games
using the design language yourself,
try thinking about questions you'd like
to answer about the space that you've
marked out.
On paper automated game design can seem
antithetical to the art of game design,
but in reality I think that it's just
another way to dive deeply into it and
celebrate its depth and complexity,
and hopefully you'll find a similar joy
in this process too.
Whatever algorithm or system structure
you use, at some point you'll need to
evaluate the work produced by your
system.
For most automated game designers
evaluation will be a big part of the
design process, especially for
search-based approaches
where the AI system needs a way of
tracking its progress and ensuring it's
making the right decisions as it works.
Evaluation is always difficult for AI
systems, but for AIs that work in creative
domains like video games
it becomes even harder. Games are
evaluated by many different groups of
people,
across many different metrics. A business
executive for example -
illustrated here by Microsoft's Phil
Spencer cosplaying as my 
best friend's dad circa 2003
might be primarily interested in how
many copies the game will sell, or how
marketable it is.
A game designer might be interested in
the tightness of the game systems, or how
it feels to play.
They might also have artistic aims they
wanted to achieve with the work and
whether they feel they succeeded.
And beyond this we have many more
qualities that we often describe games
as having:
we call games challenging, moving,
addictive,
or simply fun. Not only are these
qualities all very different and often
in conflict, but you may notice that a
lot of them are almost impossible to
write down a formal definition for.
Many of these words are intangible ideas
that two people don't normally
completely agree on.
Just think of any time you've disagreed
with a friend about whether a game is
fun, or whether a movie is good. Between
people this kind of disagreement can be
fun,
and is part and parcel of how creative
communities evolve and grow,
but that doesn't help us when we want to
give our AI system a clear definition of
whether a game it's just designed is
worth playing or not.
One of the really important facts I want
you to take away from today's talk
is this: there is no way to objectively
measure a game's goodness.
That's why there are so many different
game designers, that's why there's so
many different games communities, and
schools of thought, and game design
trends, and genres, and generations.
Making games is a subjective, creative
art form and you cannot define your way
out of this problem.
But the good news is that this won't be
a big problem for us, it just requires a
shift in our perspective on evaluation.
So what can we do instead? There are
three useful approaches to evaluating
games that I want to tell you about
today. The first is static
analysis, where we
try to prove things about the game
before playing it.
The second is human evaluation, where we
get people to do the evaluation for us.
And the third is agent-based playouts
which Bluecap uses and which we'll talk
about in some detail.
Static analysis works by looking at the
description of the game and trying to
infer certain things about it just from
this information.
This has been used to evaluate and
filter games going right back to
METAGAME in 1992 that we talked about
earlier.
I find it particularly useful for
throwing away games that repeatedly
exhibit the same bad properties,
and it's very good for precisely
excluding or including certain things in
your design space.
For example, in Bluecap it's possible to
generate a game with the win condition
'get four-in-a-row' on a 3x3
game board. There's no way anyone
can win this game - it's
mathematically impossible - so we can
write a little analysis rule
that spots certain bad patterns like
this and then excludes those games
before they get any further in the
design process.
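A rule like that is only a few lines of code. Here's a hypothetical Python sketch of the check (Bluecap's real filter is C#, and the spec format here is illustrative):

```python
# Hypothetical sketch of a static analysis filter: reject games whose
# 'N-in-a-row' win condition can never be satisfied because the board
# is too small in every direction.

def is_winnable(game):
    width, height = game["board"]
    n = game["in_a_row"]
    # A line of N pieces needs at least N tiles in some direction.
    return n <= max(width, height)

bad_game  = {"board": (3, 3), "in_a_row": 4}   # four-in-a-row on 3x3
good_game = {"board": (7, 6), "in_a_row": 4}   # Connect Four

assert not is_winnable(bad_game)
assert is_winnable(good_game)
```

Rules like this run before any playouts happen, so they're essentially free, and each one permanently removes a family of broken games from the space.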
A more sophisticated static analysis
system might be able to reason about
rules more generally.
In the latest version of ANGELINA
I've built a static analysis system that
tries to infer when a game is not
winnable,
which it does by looking for chains of
rules that lead the game towards a
conclusion.
Human evaluation is as straightforward
as it sounds - we get someone to play the
game, and give feedback to 
the AGD system. This
can be as simple as a yes/no,
or it could be more complex, allowing the
evaluator to suggest changes, for example. 
There's also the question of
how many evaluators we're going to have -
if it's a personally guided system or
perhaps a co-creative system working
with
a human designer then we might just have
a single evaluator, and
value their feedback very highly. However
if the system is receiving feedback from
surveys or large groups of people
we might be aggregating the opinion of
dozens or hundreds of players.
Human feedback is great because it
allows us to capture a little bit of
real human opinion about a game,
and that's as useful to an AI as it is
to real game designers who regularly
playtest their games
to assess their progress. The downsides
are fairly obvious: it's very expensive
and
very time consuming. This means it often
needs to be paired up with other kinds
of evaluations so that humans are only
brought into the loop when the system is
fairly confident about what it's showing.
It's also obviously a fundamental part
of co-creative systems where a 
human and an AI work together.
For a nice example of this look at
something like Antonios Liapis'
Sentient Sketchbook, or Matthew
Guzdial's machine-learning-
driven co-creative game design tools. 
Evaluating games with AI agents involves
building several bots that can play
our games and then setting up play
sessions where they play against each
other,
or on their own in the case of a single
player game, while we record data about
the game.
You can think of these kind of like
experiments: we can tweak both the game
and the bots so that a particular
situation is engineered,
with an expected outcome that we want to
test. For example, a bot that plans ahead
and chooses its move intelligently
should beat a bot that behaves randomly.
So we can test this theory by making
those two bots play against each other
100 times. If the intelligent bot loses a
little that's maybe okay, 
but if it loses a lot
we might think there's something wrong
with our game design.
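That kind of experiment is simple to set up. A hypothetical Python sketch, where the game itself is stubbed out - in a real AGD system `play_match` would simulate the generated game's rules with the two bots:

```python
import random

# Hypothetical sketch of an agent-based playout experiment: pit a
# smarter bot against a random one for 100 games and check the win
# rate. The game is stubbed out here; in a real system play_match
# would actually simulate the generated game.

def play_match(bot_a, bot_b, rng):
    """Stub: bot_a wins in proportion to its relative skill."""
    total = bot_a["skill"] + bot_b["skill"]
    return "a" if rng.random() < bot_a["skill"] / total else "b"

def run_experiment(bot_a, bot_b, matches=100, seed=0):
    rng = random.Random(seed)  # fixed seed so the experiment is repeatable
    wins = sum(1 for _ in range(matches) if play_match(bot_a, bot_b, rng) == "a")
    return wins / matches

smart_bot  = {"skill": 4.0}
random_bot = {"skill": 1.0}

win_rate = run_experiment(smart_bot, random_bot)
# If the smarter bot's win rate is low, something may be wrong with
# the game design - perhaps planning ahead doesn't help at all.
```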
One of the main drawbacks of this
approach is that it relies on our
ability to design bots that can play our
games -
we'll get onto that in a second - that
means that if you're designing a game
about writing and performing poetry, for
example,
testing it with bots will be pretty hard
because designing bots to play
that kind of game in the first place is
a very hard task.
Another major drawback is that bots
don't always play like humans. For
example, a bot can't get confused or
intimidated,
it can't misunderstand the rules or get
tired, drunk, or distracted.
So sometimes our tests might tell us one
thing, where human players would find the
exact opposite.
Learning to write the right kind of test
just takes a little time and a little
practice.
Writing AI agents that can play games
designed by an AGD system can be
difficult, because
we won't know much about the game in
advance, and so we can't give the AI 
extra heuristics or tips to make it play
the game better.
This makes the task similar to General
Game Playing, another area of game AI
where researchers try to build agents
that can play any game even if they
haven't seen it before.
In Bluecap we have three different
kinds of AI agent that we use in our
evaluation: we have a random agent, which
on its turn picks a random empty tile
and plays a piece on it;
we have a greedy agent, which can look
exactly one step ahead into the future,
so
it can see the direct consequences of an
action, and then it simply picks the
action with the best reward.
So the greedy agent is slightly better
than the random agent because it will
avoid making a move that causes it to
lose and will always take a move that
causes it to win.
However, it can't plan ahead for the long
term. Finally we have an MCTS agent.
MCTS, or Monte Carlo Tree Search, is a very useful algorithm
that is good at playing games without any assistance, using
simulations and random playouts to methodically decide what the
best next action is. The MCTS agent can
be tweaked to give it more or less
computational resources,
so we can use it to simulate stronger
and weaker agents as well, which is
really useful.
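To make the algorithm concrete, here's a compact, textbook-style UCT sketch in Python, again over a toy Nim game. This is illustrative only - Bluecap's agents are written in C#, and all names here are invented:

```python
import math
import random

class Nim:
    """Toy stand-in game: players alternately take 1-3 stones,
    and taking the last stone wins. Not Bluecap's game model."""
    def __init__(self, stones=9, player=0):
        self.stones, self.player = stones, player
    def actions(self):
        return list(range(1, min(3, self.stones) + 1))
    def step(self, action):
        return Nim(self.stones - action, 1 - self.player)
    def winner(self):
        # If no stones remain, the player who just moved took the last one.
        return 1 - self.player if self.stones == 0 else None

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = state.actions()

def ucb(parent, child, c=1.4):
    # UCB1: balance the child's win rate against how rarely it's visited.
    return (child.wins / child.visits
            + c * math.sqrt(math.log(parent.visits) / child.visits))

def mcts(root_state, iters=500, rng=random):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully-expanded nodes using UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb(node, ch))
        # 2. Expansion: add one child for an untried action.
        if node.untried:
            action = rng.choice(node.untried)
            node.untried.remove(action)
            child = Node(node.state.step(action), parent=node, action=action)
            node.children.append(child)
            node = child
        # 3. Simulation: play out the rest of the game at random.
        state = node.state
        while state.winner() is None:
            state = state.step(rng.choice(state.actions()))
        # 4. Backpropagation: a node's wins are counted for the player
        #    who made the move into it (the player at its parent).
        winner = state.winner()
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.state.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited action from the root.
    return max(root.children, key=lambda ch: ch.visits).action
```

Giving `mcts` more iterations is exactly the "more computational resources" dial mentioned above: `mcts(Nim(stones=3), rng=random.Random(1))` takes all three remaining stones and wins immediately.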
We can mix and match these agents to
test different theories about a game
that we want to evaluate.
In our evaluation process, which you can
find in the GameEvaluation.cs file,
our system steps through four
different game match-ups from which it
measures five different qualities.
I'll very briefly run through them here
to give you an idea of our approach.
Our first evaluation is two random
agents against each other. This is
very fast to run because there's not
much computation going on at all.
This is to test one specific theory and
that's that the game isn't fundamentally
biased towards one particular player.
We expect our random agents, since
they aren't playing strategically,
to win and lose roughly an equal amount
of times and probably not to actually
win the game much at all.
Note that passing this test doesn't mean the game is unbiased -
so for instance Noughts and Crosses is
very biased in favor of the first player,
but it requires a lot of planning and
understanding to take advantage of this.
This kind of random-versus-random matchup is the lowest possible
bar for our game to cross.
Our next test is Greedy versus Random.
Here we expect the greedy player to win
more than the random player,
but we don't necessarily expect much
beyond that. For example, a complex
game may require planning to reach
victory which the greedy player won't be
capable of,
but even in this case the greedy player
should be able to stay even with the
random player because
it will always avoid losing moves if it
can. If the random player, who can't tell
a losing move from a winning move,
beats the greedy player then that's
probably a bad sign for our game.
Our third test is another asymmetric
matchup, this time Monte Carlo Tree Search
versus the greedy agent.
As with the previous matchup we expect
MCTS to win again, but this metric is a
little bit more important than the
greedy-versus-random matchup.
There are two reasons for this. The first
is that the expected skill gap between
the two agents is much more pronounced
here, so we expect a similarly big difference in
outcome, and the other reason is our
expectations about the game itself.
Since we want to design games which
require strategy and planning,
we expect that an agent that can plan
into the future
should be able to perform better at a
game if it supports that kind of
planning.
This is really explicitly testing a
design theory about our space.
Now our fourth and fifth tests are both
measured from the same matchup, which is two
MCTS agents of equal strength playing
against each other.
This is our highest skill matchup, so
we expect this to give the best
imitation of two good human players who
are trying to win.
Given that both of our agents have the
same strength, in terms of the amount of
time and computational resources given
to them,
we can use this matchup to measure two
things: first, the number of games that
end in draws.
We probably want our game to be
decisive - we don't want games to end in
draws too often - so we prefer
games that have fewer drawn outcomes in
this high-skill match up.
The second expectation here is that
neither agent should have a particular
advantage. This is actually mirroring the random
versus random test,
but at a higher skill level and so
testing kind of a different angle of the
first player bias factor.
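As an illustration of how matchup results might be condensed into these five qualities, here's a hedged Python sketch. The matchup and metric names are invented for this example; the real logic lives in Bluecap's C# GameEvaluation.cs:

```python
def evaluate(results):
    """Sketch of turning four matchups into five quality scores.
    `results` maps a matchup name to a list of outcomes: 'first' or
    'second' for which seat won, or 'draw'."""
    def share(games, outcome):
        return games.count(outcome) / len(games)

    rr = results["random_vs_random"]
    gr = results["greedy_vs_random"]   # greedy seated first
    mg = results["mcts_vs_greedy"]     # MCTS seated first
    mm = results["mcts_vs_mcts"]
    return {
        # 1. Random agents should win about equally often.
        "random_balance": 1 - abs(share(rr, "first") - share(rr, "second")),
        # 2. Greedy should beat random.
        "greedy_skill": share(gr, "first"),
        # 3. MCTS should beat greedy, ideally by a wider margin.
        "mcts_skill": share(mg, "first"),
        # 4. High-skill games shouldn't end in draws too often.
        "decisiveness": 1 - share(mm, "draw"),
        # 5. Neither seat should dominate even at high skill.
        "high_skill_balance": 1 - abs(share(mm, "first") - share(mm, "second")),
    }

scores = evaluate({
    "random_vs_random": ["first"] * 5 + ["second"] * 5 + ["draw"] * 10,
    "greedy_vs_random": ["first"] * 14 + ["second"] * 6,
    "mcts_vs_greedy":   ["first"] * 18 + ["second"] * 2,
    "mcts_vs_mcts":     ["first"] * 9 + ["second"] * 8 + ["draw"] * 3,
})
print(round(scores["decisiveness"], 2))  # 0.85
```

A generated game could then be ranked by some weighted combination of these five scores.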
To see our tests in action you can open
and run the Generate scene,
press play and watch as the game
generator generates a hundred games and
evaluates each one
in turn. There's no intelligence or complex algorithm here -
it's literally just showing you the best game that came out of
testing 100 random games.
When designing tests like this it's
worth bearing in mind that like any data
gathering exercise the more data we have
the better our conclusions are.
That means that running 20 matchups is
better than running 10, and so on.
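One way to see why sample size matters is a quick confidence-interval sketch - this is a standard normal approximation for illustration, not something Bluecap computes:

```python
import math

def win_rate_interval(wins, games, z=1.96):
    """Normal-approximation 95% confidence interval on a win rate,
    showing how more games give firmer conclusions."""
    p = wins / games
    half = z * math.sqrt(p * (1 - p) / games)
    return (max(0.0, p - half), min(1.0, p + half))

# The same 60% win rate is far more trustworthy over 100 games:
print(win_rate_interval(6, 10))    # roughly (0.30, 0.90)
print(win_rate_interval(60, 100))  # roughly (0.50, 0.70)
```

With only 10 games, a 60% win rate is consistent with anything from a slightly worse to a vastly better agent; with 100 games the estimate is usable.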
However, this can be costly if you're
using a search-based approach like
computational evolution,
where thousands of games might need to
be evaluated in a single design run.
The same goes for the resources given to
a single agent - stronger MCTS 
agents with more time and
computation given to them
will give us much better data on our
game, but this will mean that we spend a
lot more time evaluating games,
because it takes a lot longer to decide on each move.
Before we close out the evaluation
section I wanted to tell you quickly
about two other systems that incorporate
slightly different evaluation approaches.
The first is Variations Forever by Adam Smith and others,
which uses answer set programming to
define constraints on a design space.
In this approach a designer or a
player alters the constraints which
results in different games coming out of
the answer set system.
This is similar to human evaluation,
but because the constraints are always
met we can think of it more as a process
of refinement by the human user,
rather than just subjective feedback.
If you're building an AGD system with a
machine learning component we can also
think of its evaluation process as being
slightly different too.
Machine learning systems often use
human evaluation to provide feedback to
a live system - Matthew Guzdial's co-creative designer
is a good example here -
but we can also think of the act of
learning as internalizing an evaluation
of sorts.
If we train a system on examples of good
and bad content then the learned
model, in some ways, is an internal sense
of evaluation that the system has
developed.
We're delving a little into semantics
here but I wanted to point out this
example because evaluation 
comes in many forms.
Evaluation is a very hard part of
building an AGD system but it can also
be the most fun,
because playing broken or sub-optimal
games is very rewarding and it helps you
think about game design,
and about the design of your AGD system.
As you get games coming out of your AGD
system you'll get a chance to re-examine
the ideas that you encoded in the first
place,
and get opportunities to go back to
tweak and improve things.
You'll see that Bluecap's very simple
evaluation leads to a lot of weird games.
Its main weakness is that the default
settings don't run very deep evaluations,
because
I wanted it to be able to completely
generate a game in a relatively short
space of time so that you could play
with it.
If you want to experiment with better
results, try turning some of the settings
up a bit,
using more runs for each evaluation,
giving more resources to the MCTS agents,
and then leaving the system to design
for a few hours. Even then
you'll still find games that are
interesting at first but kind of turn
out to be broken,
often in unusual and subtle ways. Even
though these games are not going to be
award winners I still find these kinds
of examples the most exciting to get
from an AGD system,
and finding out exactly why they're
broken and how this slipped past my
evaluation system
is always a really enjoyable puzzle.
Before we close out this practical
section I wanted to touch on the issue
of actually building one of these
systems.
A lot of the theory we've discussed
so far works great on paper,
but making games is really hard and a
lot of unusual software engineering
challenges can come up
when building a game-making system. The
first question is which game engine to use.
If you're building an AGD system for
physical games, like card games or board
games, you don't really need to worry
about this.
But if you're making an AGD system for
video games I really recommend building
on top of an existing engine,
saving yourself time and worry.
Ultimately the best advice I can give
you for building better AGD systems is
to make games yourself.
Whenever I wanted to use a new engine
for ANGELINA I would first build some
games in that new engine.
Now, this obviously takes time, and I know
time is something we always need more of,
and I also know that learning a new
skill is painful, it can be very
demoralizing at first.
Finding a small community of a
similar experience level to you is a
great way to start out, and it'll give
you the confidence to learn and
experiment.
The best place to find this today, in my
opinion, is on itch.io.
Go there, join a game jam, set aside a
weekend to enter it,
limit yourself to just a few hours a day -
you don't need to burn out doing 12 hour
game jam days -
and just make something really really
small.
There are many great tools and game
engines I can recommend, some of which
are listed here -
I've included links to each one of these
in the description of this video.
I want to particularly point out
HaxeFlixel here. I worked in Flixel
for many years and it was a very
satisfying game engine for making 2D
games, especially action games, and the
logic of its game engine isn't too hard
to get into,
and it has a lot of useful helper code for
doing common things.
Also, don't feel forced to rush into
using a popular or fancy technology -
try out tools, find something you enjoy,
and learn to think small.
Following other indie developers on
Twitter and playing their games is a
really good way to practice this.
Without doubt, the hardest skill to
practice as a game developer, and this is
still true for me eight years on,
is learning to think smaller. The idea
you have is always too big, and that goes
for AGD systems as well as game
development.
With that said, and with that big list
of interesting game engines to
experiment and try,
my overall recommendation for a game
engine - something to aim for
as you get more confident, or perhaps
you're already a developer and you just
want the bottom line -
you should use Unity. Unity is one of the
most popular game engines in the world,
your university may well already teach
it to their students,
and it has a lot of nice advantages for
us as researchers to supercharge our
research work.
In terms of quick positives, some of the
big ones: it runs on almost any
platform, which means you can easily
export to mobile devices,
to the web, even to modern games consoles
if you have dev kits for them,
and that means it's easier to run surveys
and tests wherever you want, easier to
share your work
and make it portable, and that's one of
the most rewarding parts of research.
Because it's popular you'll also find
that most of the common problems you
have have already been asked and
answered
on places like StackOverflow, and Unity
has an asset store that is packed full
of really useful,
and often completely free or very cheap,
utilities and assets
that can help out with everything
from animation to user interface
to AI. It also uses C#
which is a nice, clean programming language
with good support and lots of features.
Unity isn't without its downsides,
the most important one being that it's
quite a big tool,
and if you're new to making games it
might be a bit daunting, and for some
projects it's probably overkill.
However, learning Unity has lots of
useful side effects, especially if you
work with students who might be learning
it too,
or using it on other projects, or if you
want to build general tools that are
used by other game
developers.
The other major downside is that Unity
requires a little bit more planning in
order to export your games, which is
something we're going to talk about in a
bit.
If you are interested in learning Unity
I'm always happy to chat and give
feedback and help,
and I've also added a link to Tom
Francis' excellent
YouTube tutorial on making a game with
Unity, which I highly recommend.
Once you've picked out your game
engine you're ready to start building
your system.
There's a few hurdles that often catch
me out at this stage and they may or may
not apply to you depending on what kind
of system you're building,
as well as what tools you're using and
what kind of games you want to make.
The first thing I normally do,
regardless of what system I'm building,
is make a template game that sets up
all of the basics that every 
game generated by my system
will have, so for Bluecap that means a
game that creates a board of tiles of a
custom size,
allows me to tap a tile to place a piece,
and swaps between players
after each turn - that kind of thing.
Then on top of this we add any features
in our design language,
not so that they're all active at
the same time, but so that we can easily
turn them on or off.
There are a few ways that you can do
this. In Bluecap you'll see that the way
I do it is to create classes
that represent different parts of the
design language, so there's a Condition
class which is an abstract class
that is subclassed for the In-A-Row
condition and the Total Piece Count
condition.
Then games can have condition
objects that they use to check the game
logic for winning and losing.
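Here's a rough Python translation of that class structure - Bluecap's actual code is C#, and the rule details below are simplified stand-ins for illustration:

```python
from abc import ABC, abstractmethod

class Condition(ABC):
    """A win/lose rule that can be checked against a board state."""
    @abstractmethod
    def check(self, board, player): ...

class InARow(Condition):
    def __init__(self, length):
        self.length = length
    def check(self, board, player):
        # Horizontal runs only, to keep the sketch short.
        for row in board:
            run = 0
            for cell in row:
                run = run + 1 if cell == player else 0
                if run >= self.length:
                    return True
        return False

class TotalPieceCount(Condition):
    def __init__(self, count):
        self.count = count
    def check(self, board, player):
        return sum(row.count(player) for row in board) >= self.count

board = [
    [1, 1, 1, 0],
    [2, 2, 0, 0],
]
print(InARow(3).check(board, 1))          # True
print(TotalPieceCount(3).check(board, 2)) # False
```

A generated game then just holds a list of `Condition` objects and checks them after every move.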
There are more flexible routes you
can take for this and a lot of it will
depend on the language you're writing
code in and how confident you feel.
For example, C# has 'delegates' which
is a feature that allows you to wrap
a method call and use it as an object, so
you can move it around and invoke it
wherever you want.
I really like this feature and there's a
lot of similar features in other
programming languages,
and they're really useful for chopping
and moving around game code at runtime.
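In Python the closest equivalent is that functions are already first-class values, so a sketch of the same pattern might look like this (all names are invented for illustration):

```python
def score_per_piece(state):
    return state["pieces"]

def bonus_for_rows(state):
    return 5 if state["rows"] else 0

# A generated design is just a list of rule functions that the system
# can chop and change at runtime, like C# delegates wrapped as objects.
class GeneratedGame:
    def __init__(self, scoring_rules):
        self.scoring_rules = scoring_rules
    def score(self, state):
        return sum(rule(state) for rule in self.scoring_rules)

game = GeneratedGame([score_per_piece, bonus_for_rows])
print(game.score({"pieces": 3, "rows": True}))  # 8
```

Swapping, adding, or removing entries in `scoring_rules` changes the game's behaviour without touching the game loop itself.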
The other key piece of the AGD
engineering puzzle is figuring out how
you want to distribute and export your
games.
Whatever algorithms and techniques
you use to solve the automated game
design part,
at some point your system has to output
something.
In terms of output the two best
approaches I've come across
are game description files, and directly
modifying
a template game's codebase. In the
former case we just write out a simple
text file,
usually in a machine readable format
like JSON, and that describes the
structure of the design
produced by the system. So if you're
making a level designer, for instance,
this game description file might include
the locations of all of the world
geometry and any models used. 
If you are developing
rule sets it might be a description of
the win conditions, and the rules applied
at different stages of the game.
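A hedged sketch of what such a description file might look like for a ruleset, with invented field names:

```python
import json

# A hypothetical game description, in the spirit of the ruleset
# example above - the schema here is made up for illustration.
design = {
    "board": {"width": 5, "height": 5},
    "win_conditions": [
        {"type": "in_a_row", "length": 4},
        {"type": "total_piece_count", "count": 10},
    ],
}

text = json.dumps(design, indent=2)  # what the AGD system writes out
loaded = json.loads(text)            # what an interpreter reads back in
print(loaded["win_conditions"][0]["length"])  # 4
```

Because the round trip is lossless, the same file can be written by the system, edited by a person, and loaded back in later.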
This approach is nice because it
produces very small and compact outputs,
so your system is just producing a tiny
text file,
and we can write a parser to actually
load these descriptions back into our
system
which opens up lots of exciting
opportunities like allowing us to read
game designs made by people,
not just by our system, or reading back
in old designs that we worked on in the
past and then continuing to work on them
in the future. The other approach I've
used is modifying the code base of a
game directly.
When I was working with Flash games I
had a full template game with gaps in
the code at key locations such as the
update loop,
marked with specific tags. Then when
the AGD system was ready to export a
game design it would generate and then
paste code segments into these tagged
areas
and then use the Flash compiler to
compile the template game.
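The tagged-template idea can be sketched in a few lines of Python - the tags and code fragments here are invented stand-ins, not my actual Flash templates:

```python
# A stand-in "template game" source with marked gaps that generated
# code gets pasted into before compilation.
TEMPLATE = """\
function update() {
    // <<UPDATE_LOOP>>
}
function onWin() {
    // <<WIN_EFFECT>>
}
"""

def fill_template(template, fragments):
    # Replace each tagged gap with its generated code fragment.
    source = template
    for tag, code in fragments.items():
        source = source.replace("// <<%s>>" % tag, code)
    return source

game_source = fill_template(TEMPLATE, {
    "UPDATE_LOOP": "player.x += speed;",
    "WIN_EFFECT": "playFanfare();",
})
print("<<" in game_source)  # False: every tagged gap has been filled
```

The filled-in source is then handed to the engine's compiler to produce a standalone game.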
This loses some of the advantages that a
smaller simpler text file output has
but it's also more flexible, and it
doesn't require us to write a whole
interpreter to parse game descriptions.
The final step of the process is
deciding exactly how you want to
distribute your games.
If you're generating simple game
description files then you can
distribute a
standalone interpreter which means that
players simply need to download new text
files and load them into this
interpreter that they already have on
their machine.
Or you could allow the interpreter to
contact a server
and then download the latest game
descriptions that you've uploaded there.
This requires more engineering both at
the beginning of the project, in order to
build the infrastructure for the
interpreter,
and as the project continues, to make
sure that the interpreter is up to date
with all of the changes to your design
language,
and also backwards compatible with the
old versions of games that you've
already published.
However the benefits are that your
system is much more flexible,
and distributing your games is also
somewhat simpler especially if you
intend to automate this process.
For example, you can't automate the
process of uploading projects to Steam,
but you could upload a single
interpreter to Steam and then automate
the process of uploading text files to a
server that you own.
The other approach is the baked approach
where you export a standalone game that
doesn't need anything else to play.
If you take the template game
approach that we just mentioned and then
you compile the modified template
you end up with a standalone game that
can be uploaded and distributed however
you like.
Why would you want to do this? Well,
for one thing it's a lot easier to do it
this way,
but a bigger reason in my opinion is
exhibited by this image here,
which is the top of ANGELINA's itch.io
page listing all of its games.
On the left you can see the interpreter
for ANGELINA's latest game engine,
which allows you to load up new game
descriptions and play its games,
on the right are standalone games made
by the last version of ANGELINA that you
can start playing with a single click.
Which one of these are you more
likely to play? Convenience,
nice looks, simple descriptions - they're
all important factors in getting people
to actually play the game that your
system makes.
As developers of AGD systems, just like
game developers in the real world, we
have to be marketing experts as well as
designers, engineers, and everything else.
This is another small but significant
trade-off to consider.
Speaking of itch.io, before we close out
this section - what should you do with all
of these games your system has made?
My recommendation is that you get
familiar with the amazing website that
I've already mentioned a couple of times
in this tutorial which is itch.io.
This is a huge storefront that doubles
as a community space for many,
many creators and also has a thriving
jam
scene where dozens of game jams are
often going on at any one time.
Signing up to itch.io gives you a unique
URL where you can upload games, as well
as other projects, and
organize them neatly with custom
designed pages, screenshots, trailers and
lots of other stuff.
You also get access to really good
analytics pages showing what games are
being played and downloaded, where people
are coming to your page from, and so
forth, which is really useful
if one of your games gets
shared somewhere big that you
don't hear about until it's too late.
It's a fantastic website made by really
really good and dedicated people and you
can use all of its amazing features
completely for free, so if you only do
one thing as a result of watching this
tutorial,
please go and register an itch.io account
and get involved in the community there.
It's one of the best ways to experience
the work of modern game developers of
all scales and styles,
and it's a great way to share your own
work too. I also recommend making YouTube
playthroughs of the games that you
distribute from your system. This is
really useful for a couple of
reasons: first, lots of the people
interested in your work
may not play or enjoy videogames, and so
being able to watch someone play the
game and see how it works is really
valuable,
and it can help people like journalists
or other researchers understand your
work better and in less time.
Second, it's a sad fact that video games
are not a very durable medium,
especially if your games are not super
famous. When making various AGD systems
I tried to choose technology that I
thought would last the longest
but even some of those platforms are now
beginning to crumble, like Flash.
Recording videos of our games is
important because 10, 20, or 50 years from
now
they may be the only way to experience
these games. And while our games might
look silly, and aren't amazing, life-changing
works of art,
they are all really important parts of
the history of the field, and of games.
Even making a talk like this, barely 10
years into the field's life,
it's hard to find records of some of
these systems. It's really important that
we do our best to preserve the research
we do,
and recording video is a good way to do
that. I've tried to pack as much advice
as I could into this central practical
section of the tutorial. Before we move
on to closing out the tutorial, I just
wanted to emphasize that this field is
very, very new in the 
grand scheme of things.
Only a handful of systems exist, and
while we can see some patterns in how
successful systems were built,
there's so much room for experimentation,
for new ideas,
and for better ways to be discovered. So
don't be afraid to get out there and
build things the way you want to build
them - that's what most of us are doing,
and learning about what works and what
doesn't is part of the joy of research.
You can download Bluecap on my GitHub,
it should open in the latest version of
Unity and work right out of the box.
There's a lot of comments in the code,
and some scenes that let you run
examples right then and there.
There's a few other things that I'd
like to add to it, like an example
evolutionary process, which I will try
and do in the coming months,
but I don't need to tell you what
academic promises are like.
We've talked about what automated
game design is, we've looked at the
history of the field, we've discussed why
you might want to do it,
and now we've also looked at some
practical issues that come up when we
actually try and build one of these
systems.
Before I close out this tutorial I want
to talk to you about some ideas that
really matter to me,
about the future of this field and the
kind of directions that I think we
should be taking our systems
in. When we talk about automating
processes with AI
we often focus on the automation of a
task - for example,
automating the analysis of brain scans
to detect illness
is automating a specific task.
Sometimes when we build automated game
design systems we're also doing this.
If we build a system to measure the
distribution of puzzle difficulty in a
level generator,
and advise a designer on an optimal fix,
we're probably focused just on
performing that task
and that task alone. However, the bigger
questions in this field come
not from the simulation of a task but
the simulation of the person performing
the task.
I would argue that when we build an
automated game designer our real intent
is not to simply automate the production
of games,
but to build a system capable of taking
on the same role in society that game
designers have.
Our systems cannot create in a vacuum,
whether they are teachers or assistants,
colleagues or critics, leaders or
followers,
our AGD systems are ultimately destined
to exist in a social context with other
creators and audiences,
and we should never lose sight of what
this means.
Mihaly Csikszentmihalyi, best known 
for his theory of flow,
wrote a paper in 1999 where he modeled
creativity as a system.
The model represents the creative
individual and how they're affected by,
and contribute to, the community and the
domain that they're creative in.
I really like this work because of how
explicit it is about the importance of
the context around
creative systems, but what really sold me
on the model was when I heard Rob
Saunders talk about it,
and his own modelling of social cliques.
Rob presented a great talk in 2013 at an
autumn school in Finland,
a recording of which is still online -
I've linked to it in the description of this
video.
Rob talked about Csikszentmihalyi's
models of creativity, as well as Rob's
own models of creative communities, or
cliques,
where individuals had their own ideas
and knowledge, and influenced others in
the community by being creative.
Each little creative agent would have a
chance of learning something by looking
at the work others created,
and this would influence their own work
in turn. Some agents might move between
communities or cliques as well,
meaning they took ideas from one space
and introduced them to another.
Rob spoke really passionately about the
idea that we can't simply think about AI
as creative individuals,
but instead we have to think about how
they exist in a wider world.
Since 2018 I've been thinking about how
we can build creative systems that live
up to Rob's model
of an individual in a creative community,
how can we build AI systems that don't
just spew out artifacts,
but that exist in a community of people
and other AI systems
and that meaningfully contribute towards
it. If you're interested in hearing me
talk more about this, I gave a talk
at the International Conference on
Computational Creativity about these
ideas,
which I've linked in the description of
this video, but the gist is that in order
to achieve this I believe that we have
to build systems that have a creative
process of their own,
where creating a finished work is just
one small part of what they do.
Just like people these creative AI
systems need time to rest,
time to learn, time to experiment, and
time to reflect.
Building AI systems that do all of these
things not only results in better
systems,
but also gives the community around that
system more opportunities to engage with
it,
learn about it, and become invested in it.
My work in this area will hopefully be
restarting once I move back to the UK
later this year,
and I hope you'll follow ANGELINA's
progress and support the system as it
starts making games again.
But why would we want our AI systems to
join and organize creative communities,
rather than just producing things?
Creative communities exist everywhere -
your household, your family, your friends,
and your text groups are all creative
communities.
In the age of the internet, however, the
biggest creative communities we are a
part of
are online, usually on social media sites
like Instagram,
Twitter, YouTube and TikTok. These
sites are made up of millions of smaller
fluid communities, like the people you
follow on Twitter,
or the subscribers to a particular
channel on YouTube,
but they also form a larger creative
community as the sum of their parts.
Perhaps the most important feature
of these communities is that they're
only partly mediated by people -
the most important influences on these
sites are algorithms, designed and
implemented by the platform holders,
whose primary interest is in profiting
from the site.
This leads to a number of bad outcomes,
each of which could sustain an hour and
a half's worth of discussion on its own.
Although in theory these algorithms work
to support users and creators,
in reality algorithms eventually become
the driving force behind the sites.
Creators make content that they think
'the algorithm' will respond favorably to,
and audiences are led to whatever
content the algorithm is optimizing for,
which is more likely to be based on
engagement metrics like minutes watched
rather than true enjoyment or interest.
And all of this adds up to a very
unsatisfying whole -
are we all able to reap the benefits of
these creative platforms equally?
Can we all find spaces to grow and
experiment within?
Or are we being lumped together by
algorithms, to be fed content from a
small fraction of the most profitable
creators?
The musician David Byrne wrote about how
systems like this
tend to shape our thinking about
creativity and encourage us to consume,
rather than enjoying the act of creation.
Now, I realize you probably didn't tune
into this tutorial for a lecture on the
ethics of technocapitalism
(wink)
and that's fair enough. But I think about
these ideas more and more often recently
and I think about how AI research is
often complicit in constraining and
constricting these creative spaces,
and enabling those who profit off doing
that.
If we want to build AI that truly
enriches people's lives, I don't believe
we can look to centralized solutions
where a massive system is controlled by
a single organization that everyone else
is dependent on.
I think we need to focus on building
systems that are fully owned and
operated by small communities,
and that don't require huge
computational resources to create or
maintain.
We can help to do this by building
creative systems that embody the
creative individual,
not just the creative act. These systems
can help take on the labour of building,
maintaining, and growing creative
communities, and
become active participants
which teach and are taught, which lead
and are led,
which create and also play. This is the
future that I want to build with AI.
We're really close to the end of this
tutorial now but before I wrap up with
some conclusions
I wanted to leave you with some ideas of
where this field might go next,
and a few open problems that you might
like to tackle with some AGD systems of
your own.
As I've mentioned throughout this video,
one of the problems with automated game
design that we've also
seen in procedural generation research
is a focus on a very narrow and
recurring set of design spaces:
arcade games, platformers, puzzle games
that sort of thing.
A great starting point for people
interested in making an impact in
automated game design is to look at
design challenges in genres that are
much less explored,
so dating sims, slapstick physics games,
and crafting games are a few examples
that spring to mind.
These will be challenging new spaces to
explore, but are full of new design
problems to think about and approach.
Alternatively, you could try to build
systems that work on what I call
'goliath' problems, which are also rare.
Most automated game design systems focus
on small game experiences like puzzle
games with a few levels,
or a self-contained platforming
experience or a very simple strategy
game.
This makes sense on paper because
smaller design spaces are more tractable,
and thus it's more likely that we can
build systems which perform well working
within them.
However there's a lot of value in trying
to take on tasks which, right from the
outset, we know
are already implausibly massive.
National Novel Generation Month (NaNoGenMo)
is a good example of this,
in which entrants attempt to generate a
50,000 word novel in the month of
November.
This is in contrast to most story
generation research which works on a
smaller scale.
Although this task is much harder to do
competently,
the additional challenge forces people
to innovate, develop different techniques,
and find solutions to problems which
simply don't arise when working at
smaller scales.
So with that in mind, why not try writing
an automated game designer for 50-hour
JRPGs (Japanese roleplaying games),
or massively multiplayer online games?
Try generating open world action games,
or vast prestige dramas.
You will almost certainly end up making
a system that is messy and broken,
but equally I can guarantee you that
you'll uncover problems,
and hopefully solutions, that the rest of
us have never even thought of before.
Finally I want to recommend approaching
'wicked' genres. I'm borrowing the term
wicked here from Alex Jaffe's 2019 GDC
talk on
'cursed' game design problems, which in
turn is a reference to 'wicked problems',
a term
coined by design theorist Horst Rittel.
Jaffe describes cursed game design
problems as problems without a solution,
but I'm not thinking quite that
drastically here - instead I'm leaning a
bit more on Rittel's original definition.
Rittel defined wicked problems as a
special class of super problem, like
climate change,
world hunger, or nuclear disarmament.
These problems share a few properties
that I'm borrowing here:
people don't quite agree on the
definition of the problem, it's really
hard to evaluate and agree
on how good a solution to the problem is,
and solutions aren't correct or
incorrect,
instead they're on some kind of nebulous
spectrum of goodness.
Wicked genres for automated game design
are genres where even the most
fundamental design questions are wrapped
up in subjectivity and human faculties.
Walking simulators like Firewatch are a
good example of a wicked genre.
Their design relies heavily on narrative
design, on environment design,
lighting, vision, sound, almost every
aspect of the genre is itself a huge
challenge to overcome.
I began and abandoned a couple of
automated game design systems targeting
this genre,
and although I've not given up on it yet -
and you shouldn't either -
it's definitely a tough one. We need
fresh perspectives about how automated
game design can be done,
and this will open up new ways to build
AI systems that can help shed light on
even the most wicked of genres.
I really hope more people attempt to
jump right into a problem like this.
Thank you so much for watching this
tutorial and for listening to me talk
about a subject so close to my heart.
Before I leave you to enjoy the rest of
CoG, I wanted to share some
closing thoughts,
and then some links where you can find
out more about my work, and about
automated game design in general.
As we saw throughout this talk, automated
game design is a wild frontier of AI
research to which
anyone can bring a new perspective. Novel
AI techniques combine with individual
views about game design and development,
and result in AI systems that not only
create fun and inspiring games,
but also help us gain a different
perspective on how games function,
what they can be, and how we should make
them. Automated game design can be framed,
like a lot of other AI research, as
a quest to obtain more for less: more
profit, more content, more control,
for less money, less risk, and fewer people.
But it can also be framed as a way to
give people the support they need
to build creative communities around
them, in a time when creativity is
increasingly controlled, commoditized, and
curated by algorithms.
It can also be framed as an artistic
pursuit, a way to study and understand
games better by modeling our own thought
processes about them,
and it can be a way to appreciate and
celebrate an important part of our
humanity,
by stepping outside of it to consider it
from another perspective.
I hope this talk has provided some
insights for you, and you feel encouraged
to start work, or continue working, in
this area.
Whether you're excited by something I
showed you today, or simply motivated to
prove me wrong about something,
I want as many people to join this field
as possible and to keep expanding our
idea of what AI is for,
what game design is, and what games can
be.
Thank you all again so much for watching.
If you enjoy hearing me talk for way too
long, about things that are way too
specific,
you might enjoy my YouTube channels, my
blog, or the itch.io page where I post my
games - the links to all of these things
are in the video description.
If you want to ask questions about any
topic that's come up in this talk,
or about related topics in game design
or AI research,
I'm always happy to chat. You can get in
touch with me on Twitter @mtrc,
or you can drop me an email:
mike@possibilityspace.org. In a few months' time I'm hoping to
make a short Q&A (question and answer) video 
where I answer any frequently asked questions or
respond to
comments that have come up since then, so
feel free to leave a comment below,
or get in touch with me via any of those
other means.
Thank you all again for watching. I hope
you enjoyed it, I hope you enjoy making
things that make things,
and I also hope to see you all again in
the real world, real
soon. Bye!
(<3 thanks for watching - Mike)
