[MUSIC PLAYING]
MARK MANDEL: Hi, everybody.
My name is Mark Mandel, and
I'm a developer advocate
for Google Cloud for Games.
Today I'm going to talk to
you about scaling globally
with game servers and Agones.
So, today, we're going to
talk about dedicated game
servers for large-scale,
multiplayer games.
If you're not familiar with
dedicated game servers, very
quickly, what they are
is a full simulation
of what's happening
inside a game,
usually running
somewhere in the cloud.
And what happens is,
the people who
are playing a
particular game session
will all connect to an
individual dedicated game
server process.
Usually they'll send things
like, hey, I'm moving forward
or maybe I did an action
like fired a rocket.
And it's up to that
dedicated game server
to basically be the
authority of what's
happening inside that game
and send that information back
to the players so that they
know exactly what's happening
inside the game.
This is really important for
large-scale, multiplayer games
because it gives you
a lot of control over
the latency experience
that players have with
these real-time games.
Geographic control matters:
players around the world
are going to play your game,
and you decide exactly where your
dedicated game servers exist.
So if you have very low
latency requirements, which
many real-time,
multiplayer games do,
you can put it in a particular
place around the world
so that you can have control and
have knowledge over exactly how
fast the information
is going backwards
and forwards from these
dedicated game servers.
So, that sounds simple
in principle, right?
But orchestrating
these thousands
of machines and
potentially thousands
of dedicated game server
processes around the world
is actually a tricky problem.
There's a lot of moving parts
that you need to look at
and a lot of things that
could potentially go wrong.
So that's really what I want
to talk to you about today--
how are we solving that
problem with Google Cloud Game
Servers and the Agones project.
So let's start from the
foundation, some things you've
probably already heard of.
So many of you are
probably already
familiar with the
Kubernetes project.
If you're not, it's
an open-source project
initially started
by Google that's
built for running
processes or software
processes over a large number
of machines inside a cluster.
Kubernetes is fantastic.
You can come to Google Cloud,
run it on our GKE platform.
You can also run it wherever
you want to run it, as well.
This is super useful
for game servers
because it means,
then, that you can
put your game servers in
places that your players are.
And players show up in all
sorts of interesting places
around the world.
So Kubernetes is a great foundation
if you want to run game
servers.
So Agones is another wonderful
project you may not necessarily
have heard of.
It's an open-source project
that extends Kubernetes
so that Kubernetes understands
how dedicated game servers work
and what a dedicated
game server lifecycle is
and how it is managed.
It's also great that
it's open-source,
much like Kubernetes,
because this, again,
means that Agones can be run
wherever your player base is
or potentially inside your own
studio, or even on your
own laptop, as well.
It's a great project that's been
running for several years now
and went 1.0 back in September.
It has a variety
of functionality
that is super
useful specifically
for running dedicated game
server workloads, including
native integration with
Kubernetes tooling,
autoscaling capabilities,
SDK integrations with things
like Unreal and Unity, a
bunch of cost-optimization
functionality, as well
as metrics and dashboards
so you can see exactly
what's happening
inside the fleets
of game servers
that are running inside Agones.
But as we were just discussing,
we want to run this
as a global network
of distributed game servers
around the world.
And that's where things
do get hard again.
Agones is a fantastic
project, but it works
primarily within a
single Kubernetes cluster.
So if I want a cluster of
machines in, say,
a region or a zone
inside Google Cloud,
this is fantastic.
But if I want to run
these in multiple regions
around the world, things
get a bit more complicated.
And this is where we step into
Google Cloud Game Servers.
Google Cloud Game Servers
is a management layer
that sits on top of
open-source Agones.
It provides a wonderful set
of functionality and features
that makes it easy to
orchestrate and scale
multiple clusters of
Agones around the world
while still not locking you
into a specific vendor package
and also giving you a lot
of flexibility and power
to be able to run your
game servers exactly
the way that you want to.
So, how does this
look, and what is it we
need to do to get started?
So we're going to
get stuck into that.
So, at a high level,
what concepts do we have?
So, you're probably already
familiar with the Google Cloud
project.
Nothing there has changed.
Our best practice here is
saying that you probably
want a singular project
for a singular game
title and environment.
So if I have a game X and
I have it in production,
that is generally one project.
Next we have game
server clusters.
These are simply Agones clusters
that have registered themselves
with Google Cloud Game Servers
so that Google Cloud Game Servers
is aware of them.
You have full access still to
that GKE or Kubernetes cluster,
so you can do extra stuff
in there if you want.
Up to you if you choose to.
But it's simply just
that Kubernetes cluster
that is registered.
But we also need a way to
organize those clusters
together.
For example, I
might want to have
a set of clusters that are in
US central to help support my US
player base.
I might just say, OK, cool.
I want to group those together
so that they are essentially
the same from a
latency perspective
or an organizational perspective
so that I know all the players
could choose any one
of those clusters that
are sitting in US Central.
We've called that
concept realms.
That is simply a user-defined
grouping of clusters such
that we know that the latency
requirements for those clusters
are going to be basically
the same for any players that
are connecting to them.
From there, we have game
server configs and deployments.
That's how we can set up
our fleets and autoscaling
capabilities and
then deploy them
out either globally or even
just to particular sections
of our global fleet
of game servers.
It gives us a lot of control.
And we're going to dig into
that a little bit more.
So, just looking at
it in a different way.
Here's a really great example
of this particular game
X we were just talking about.
So, here we have
multiple realms.
We have a US realm.
We have a Europe realm.
And we have a Japan realm.
And we also have this working
with multiple clusters
within each of the realms.
Here you can see in the US
realm we have a GCP cluster,
we have an on-prem cluster.
Whereas inside the
Europe realm, we
have a GCP cluster and another
cluster on another cloud.
And in Japan, we're just using
GCP because for there, it
makes sense.
What we're then able to do
is apply these Google Cloud
deployments and configs
across all the realms
so that we can manage these
global fleets of game server
clusters and provide a wonderful
experience for all our players,
regardless of where they are.
So, how does this work?
How do we get started?
Pretty straightforward.
First step, start with
open-source Agones.
The migration path from Agones
to Google Cloud Game Servers
is actually really easy.
So you can definitely
get started there
if that's a place you
want to get started.
The next step from
that is you need
to register that cluster with
the Google Cloud Game Server
system.
To do that, you need to turn
on the Game Services API, just
a one-click process.
You then need to define your
realms within your system, how
you want to organize
each of these clusters
into location-based areas.
Then from there,
define your configs
and define your deployments.
How do you want your
fleets and autoscaling
capabilities to be pushed out
across all these clusters.
That's really about it.
It's a multiple-step process,
but it's relatively simple.
And if you're already
familiar with Agones,
you're going to find it
really familiar, as well.
So let's get stuck
into an actual demo
so I can show you exactly
how all this works.
So, let's actually
run some game servers.
Earlier, I prepared
four clusters.
I have two in Europe.
And I have two in
the United States.
You can see these here.
They're just plain old Google
Kubernetes Engine clusters.
Nothing special about
them except for the fact
I've already installed
Agones on them
and I've already associated
them with their realms.
So let me give you
a preview of what
the dashboard looks like for
Google Cloud Game Servers.
So here you can see that
I've set up two realms.
I've set up one in Europe.
And I've set up one
in the United States.
And I've registered each of
these clusters appropriately.
So you can see the
two here inside Europe
and the two here inside
the United States.
That's really all I had to do.
Right now, I don't have any
game servers running on them,
but I'm ready to go.
Everything is ready for me
to run some game servers.
So I'm going to come over
here to my trusty Cloud Shell
that I already have set up
and connected to everything.
I've got it connected
to one of my clusters.
So if you're
familiar with Agones,
I could do kubectl
get gameservers.
And right now I
don't have anything,
but as you can see here,
my Kubernetes cluster
does understand how
game servers work.
So we want to deploy some game
servers via a fleet, which
is a big set of game
servers, to all the clusters
that I have inside my set.
So I'm going to
actually open my editor.
We have here what we call
our fleet configuration.
It's going to be part of what we
call a config for Google Cloud
Game Servers.
So if you used Agones
before, this probably
looks really familiar.
Here we have our image for
what we want to actually run
as our game server.
How many of them we want to run.
So here I have a
replicas of two.
And we also have a version,
which is right here.
We're just going to put a tag
on it called version 1.0, put
this label on it so we can see
some things as we go forward.
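As a rough sketch, an Agones Fleet spec along these lines matches what's described above: a replica count of two and a version 1.0 label. The image path and container port below are illustrative stand-ins, not the exact values from the demo.

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: supertuxkart
spec:
  # Keep two game server processes running at all times.
  replicas: 2
  template:
    metadata:
      labels:
        # The label we'll use to tell versions apart as we roll out.
        version: "1.0"
    spec:
      ports:
        - name: default
          containerPort: 8567   # illustrative; use your server's real port
      template:
        spec:
          containers:
            - name: supertuxkart
              image: gcr.io/my-project/supertuxkart:v1   # hypothetical image path
```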
So the first thing I
actually need to do
is set up what we
call a deployment.
So I'm going to close
this editor right now.
So Google Cloud Game Servers.
We're just going
to do a deployment.
We're going to create it.
And we're going to call it stk,
because the game we're
actually going to be using
is called SuperTuxKart.
It's a really fun multiplayer
racing game that some
people may already
be familiar with, because
it's been around for a while.
It's really cute and adorable.
So a deployment is
basically a placeholder
for where we want to
put our configurations.
We want to organize
them, and then we
can apply from there, which
we'll see in a minute.
Next we're going to
create our configuration.
So let's do that.
Configs create.
We're going to attach it to
our deployment, which is STK.
We're going to pass in
our fleet config file,
which is fleet-v1.yaml.
And we're going to call it v1.
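From memory of the beta-era CLI, the deployment and config creation steps look roughly like the following. Exact flag names may differ by gcloud version, so check `gcloud game servers --help` before relying on them.

```shell
# Create the placeholder deployment named "stk".
gcloud game servers deployments create stk

# Attach a config, built from the fleet file, to that deployment.
gcloud game servers configs create v1 \
    --deployment=stk \
    --fleet-configs-file=fleet-v1.yaml
```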
Pretty simple stuff.
Now, this actually isn't
going to apply anything
to our cluster yet.
Nothing's been
created at this stage.
So, if we want to
actually apply it,
we're going to actually
do a rollout action.
There are a lot of steps
in between, and we're going
to look at an extra one here
that gives us a lot of control,
plenty of warning,
and a preview
of exactly what's going
to happen inside our cluster.
Because we're dealing with
a global-scale system here.
And we really don't
want to mess this up.
So, let's do our rollout.
So, we're going to
look at our deployment.
So, we have this
STK deployment, and we
want to do an update rollout.
So we actually want
to roll it out.
We're going to say our
default config is v1.
And we want to do a dry run.
And we want to do it
to our STK deployment.
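Hedged again on the beta-era syntax, the dry-run and the real rollout would look something like this:

```shell
# Preview which clusters the v1 config would land on,
# without changing anything.
gcloud game servers deployments update-rollout stk \
    --default-config=v1 \
    --dry-run

# Looks good? Run it for real.
gcloud game servers deployments update-rollout stk \
    --default-config=v1 \
    --no-dry-run
```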
So, what does this mean?
So, we're saying we
want to do a rollout.
We want to push this out to
all the clusters that we have.
But we want to dry-run it first.
So we want to be able to have
a look at exactly where this
is going to go.
So we can feel confident
and safe in what it is
that we're doing.
So, here we can see
we have a config v1,
and we can see that
it's being applied
to each of the clusters that
are around the globe, all four
of them, which is
exactly what we want.
So we're like, great,
that's awesome.
That's exactly what we want.
So now let's run that
again with no dry run.
And we'll see that's now
going to start pushing it out
to all the clusters we have.
So that fleet that
we set up previously
with that v1 config
of our fleet,
that's now going to start
rolling out to everywhere.
We're going to see
two game servers.
So if we actually have a
look in this US cluster,
we can see that we have
two game servers there.
We can actually see that it has
v1 in the title there, as well.
If we want to have
a quick look at it,
we describe our game server.
We can see all kinds of
interesting information.
We scroll up here, we can
see that version label,
that 1.0 version.
And if we scroll
down, we can also
see that we have the right
image there, as well.
So that's looking pretty good.
Now, to prove to you that it's
running in multiple clusters,
let's actually connect
to our US two cluster
and do the same thing.
So grab that
information, copy it.
Connect away.
If we look at our
game servers there,
we can see we also have two
game servers there, as well.
So you can see,
right, like, we've
been able to really easily
set up these fleets that
run across multiple
clusters around the globe
with only one command and also
do a preview of it, as well.
So, we're going to do
an allocation next.
Part of open-source Agones
and Google Cloud Game Servers
working together is the
multicluster allocation system.
We actually have a
gRPC endpoint that gets
used for this, authenticated
with some certificates,
which is why I'm using
a script to make it easier.
And what this is
doing is it's going
to hit one of the end points
on one of the clusters.
The one that I'm actually
connected to right
now is US two.
It's going to do an allocation.
For those of you who aren't
familiar with Agones,
an allocation is basically
a special thing that says,
OK, give me a game
server and then mark it
as allocated so that I know
that there are players playing
on that particular game server.
Right?
And don't delete it.
We don't want to
delete game servers
where players are playing.
They tend to get mad
when that happens.
So we got back a game
server, which is awesome.
Let's actually kubectl
get gameservers.
We can look at it this
way, which is super nice.
We can see one of
those is allocated.
I'm going to actually
mark both of them
as allocated right now
just for convenience.
And I'll show you
why in just a second.
So let me grab that one
we previously allocated.
And we're actually
going to play a game.
So this will be fun.
What do I want?
I want this here.
So we'll pop over to
SuperTuxKart in my command
line, and we'll click run game.
Let's pop that up.
So let's connect to our
allocated game server
that we just set up.
Connect there.
Excellent.
So that's connected.
I'm connected.
I probably need some friends
to play with, though.
So why don't we set
up some AI bots that
will also have that running.
So here we go.
Here's a script that
I have running here
that will spin up three
bot friends to allow
me to play with some people.
Run that.
Wonderful.
Now we can see we have
these three bots that
have joined me in this game.
Let's start a race.
So I'm going to play as Puffy.
I will choose that track.
That'll give me a couple
of seconds of countdown.
But this is running directly
against the dedicated game
server we have running on
one of our Agones clusters
that is being managed
by Google Cloud Game
Servers at this stage.
So you can see it's actually
running against US central
and I'm able to play a game.
So here I am.
We're just waiting for the
green light to go ahead.
Excellent.
I'm driving.
And so I'm playing this
game against several players
who are also racing against me.
All running from
my local machine.
It's a super cute game.
I love SuperTuxKart.
We won't play for too long.
It's fun.
OK.
That was cool.
All right, I'm going
to shut that down.
So you can see that a
game's actually playing.
So, coming back to Cloud Shell,
we look at our game servers.
We'll see now that we still have
one allocated and one ready.
So SuperTuxKart is great in that
once we've all disconnected,
it'll actually shut itself down.
If you're not
familiar with Agones,
it's going to make
sure that we have
the right number of game servers
ready and available for us.
And we've said we want two.
So two is going to be up.
But we clearly have another
player still playing
with that allocated game server
that's still sitting there,
as well.
We're going to come back to that
a little bit later, as well.
OK.
So, we have those
two game servers.
Maybe we have a new update.
Maybe we have
another game server build
that we want to roll out
in a different way, or a bug
fix, or we want to test
out maybe a new level
or something like that.
So we may not want to roll
that out to the entire globe
all at the same time.
Maybe we want to do a test.
So in this particular
instance, what
we're going to do is we're
just going to roll that out
to Europe first, see how that
goes in that player base.
And assuming
everything goes fine,
then we're also going
to roll that out
to the rest of the world.
So we have a v2.
So if we come back
to our editor--
let's have a look here.
Excellent.
We have this v2 config.
So here we have
said, OK, we actually
want to have five
replicas rather than two.
We're going to put a
version two label on it.
Everything else for this demo,
we're going to keep the same.
But you can imagine
you might have
a whole new version
of this game server
that you want to push out.
So, to take this live,
what we're going to do
is do exactly what
we did before.
Configs create.
We're also going to attach
this to our STK deployment.
We're going to pass in
our fleet config file,
which is fleet-v2.yaml.
And we're going to call this v2.
So we'll create that.
So, how then do we do that
sort of override situation
where we just want it to run
in one particular realm?
So we pop back to our editor.
We can write this thing
called an override yaml.
Or you can see here where we can
write this thing called a realm
selector that says, oh, OK,
for these particular realms,
I want to have this
particular config version.
So here we're going to say,
OK, we want the v2 config.
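A sketch of what that override file might contain, with a hypothetical project and realm name. The exact schema here is from memory of the beta docs, so verify it against the current reference before use.

```yaml
# override.yaml: pin the EU realm to the v2 config,
# while every other realm keeps the deployment default.
- realmsSelector:
    realms:
      - projects/my-project/locations/global/realms/realm-eu   # hypothetical name
  configVersion: v2
```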
Let's actually apply
this deployment
but with the override
we had previously.
So we are going to
do a update rollout.
Our default config is still v1.
But our config override
is override.yaml.
We still want to do a dry run.
We want to see what's
going to happen there.
And we're applying it
to our STK deployment.
There we go.
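The corresponding command, again approximately and subject to the same caveat about beta-era flag names:

```shell
# Default stays v1 everywhere, except where override.yaml says otherwise.
gcloud game servers deployments update-rollout stk \
    --default-config=v1 \
    --config-overrides-file=override.yaml \
    --dry-run
```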
So, this is actually
kind of interesting here.
So, we can see here
in our US realm,
we would have that v1 set up
just the way it was before.
Nothing changed there.
But we can see here in our
EU clusters in our EU realm,
we have v2 and that's available.
So, let's actually
roll that out.
Do no dry run.
That's pushing out.
So, we're still connected
to that US cluster.
If we want to have
a look at what's
available there, game servers.
We should see that
it's all the same.
Yeah, we still have that
allocated game server.
We can still see we have
v1 and that's available.
So that's pretty cool.
But if we pop back over
to, say, an EU cluster--
let's connect to that.
There we go.
And we have a look
at what game servers
we have available
here, suddenly we
can see that we now have five
game servers, not the two
we had previously,
and that they're
the v2 instances,
which is super nice.
So we have a lot of
control here exactly how
we want these game server
fleet instances to be
run around the world.
Let's do one more thing.
I just want to do
one more fun thing,
which is to push that v2
instance out all around the world.
We can do that very similarly
to what we've done previously.
So, we have deployments.
We do a update rollout.
We say our default
config is now v2.
We are actually going to
clear our config overrides.
And again, we'll
do a dry run just
to check to see everything's OK.
And here we can see, OK, we
have that v2 instance running
across all the
clusters that we own.
So let's actually run that.
Updating rollout.
Now I want to show you one
more very important thing.
We come back to
that US cluster we
were connected to previously.
Remember that one
we had allocated
that we use the multicluster
allocation to get
set up and ready to go?
We can have a look here.
And if we do kubectl get
gameservers, that allocated
game server from previously?
That's still there.
This is very similar
to what Agones
does on its own.
Those allocated game servers
are essentially very important.
We don't want to
interrupt those.
And we don't do
that here, as well.
So here we have those v2
game servers set up exactly
how we expected.
But we still have
those v1 game servers
that players are still playing
on, available and still up
and running.
So we don't interrupt player
experience in the slightest.
Awesome.
Well, we're really excited
about Google Cloud Game Servers,
and we're really
excited about working
with customers and
developers who are wanting
to work on this platform.
Most recently, Jam City just
pushed their production
workload, World War Doh,
out on top of Agones.
It's an amazing game that
you should totally check out.
We're really excited
to be working with them
on their project and
really great to have
them having a production
workload running
on Agones right now.
So if you want to get started
with Google Cloud Game Servers,
you can do so right now.
It is in public beta and
available to everyone.
There is a two-phase rollout
that is happening here.
So phase one, which is available
right now, as I just said,
is GCP only and provides basic
policy management capabilities.
This means that you can
run your applications
and your games on GKE and you
have basic functionality that
will deal with most of your
workloads for autoscaling
and how you want to
manage your fleets.
Phase two, which will come
in the near future and we're
actively working
on, will provide
more sophisticated and
advanced policy management
in terms of allocation
or time-based scaling
but will also provide multi-cloud
and hybrid cloud support, which
we're also really excited
about, as well, because as we've
been talking about,
sometimes you just
need to be able to get these
game servers to players
in the places that they are.
And having that flexibility
is hugely, hugely powerful.
So we're really excited about
both these developments,
and we really want
you to get involved.
Keep an ear out for all the next
phase developments, as well.
We're definitely going to be
looking for people to test out
those alpha features that
roll out in the future.
So if you want to
get started, there
is a quick start available
on the screen right now.
Please follow the
step-by-step process,
because then you'll be
able to get something up
and running on your cluster
and be able to play a game.
Finally, thank you so much
for spending your time with me
and learning today about
Google Cloud Game Servers.
We're really excited
about working
with you and your games.
And we really want to see what
you're going to build next.
Thank you so much.
[MUSIC PLAYING]
