[MUSIC PLAYING]
NICK HEIKKILA: Hey, everyone.
My name is Nick.
I've been at Bungie
since 2012 when
I started as a test lead
on the original "Destiny"
before moving into the
production engineering group
for "Destiny 2."
Right now, I'm a
technical product owner
on the engine team.
I currently work with
engineering teams
on cool, new
technology like Stadia.
So who is this talk for?
Well, I'm catering this
talk towards developers
who are either starting
to work on Stadia
or who are thinking
about jumping in.
Anyone generally
interested in Stadia
is going to get a
lot out of this talk.
Also, while I'm not
an engineer and I
won't be showing
any code, I will
be delving into some
fairly technical concepts.
That said, I did a dry
run on this presentation
for a group of producers.
And they sort of understood
some of the things I said.
So even non-technical
folks are going
to get something out of this.
So here's what we're
going to cover today.
First, I'll explain
some of the reasons
we decided to port
"Destiny" over to Stadia.
And then I'll give a high-level
summary of the project.
I'll take you through what it's
like to develop on a streaming
platform like Stadia.
And finally, we'll dive into a
couple of the more interesting
challenges we faced
in development
and how we addressed them.
So first, let's talk
about why we decided
to bring "Destiny" to Stadia.
So one of our north
stars for "Destiny 2"
is, play it anywhere, anytime.
After hearing about how Stadia
worked at last year's GDC,
we believed that this platform
would give us an opportunity
to go even further towards
that goal of, "play anywhere."
With Stadia, "Destiny"
players can enjoy their hobby
on mobile devices, low-end
laptops and PCs and even
their TV via Chromecast.
Now around this
time, we were also
on the cusp of releasing
our new cross save feature.
With cross save,
players would be
able to share their
"Destiny 2" progress
across all supported platforms.
All their gear, weapons,
loot, any progress they earn
is saved across any
platform they play on.
With this feature, you can
raid on PC with your friends
one night.
And then the next day, you
could use that same account
and character to play
with your brother on PS4
using the same gear you just
earned that previous night.
Now, if you combine the
different methods of play
that Stadia offers
with cross save,
our users have so many more
options for enjoying "Destiny."
They can play "Destiny 2" on
their high-end PC at night
and, then, using Stadia, play
at work using their mobile phone
or laptop.
And cross saves allow them to
do that using the same character
and progression no matter
where they're playing.
And that's something
I do every day.
This combination gets us
so much closer to our north
star of "play anywhere."
So another reason
we did this is we
wanted a chance to work with
this exciting technology.
Now streaming has been something
of a holy grail for years,
as the potential
it offers is so huge.
It could allow players to
enjoy our games anywhere
and has potential to bring
in new types of players.
It could expand that
funnel considerably.
Google also has a ton of
experience with streaming,
even 4K streaming with YouTube.
And they've got data centers
absolutely everywhere.
So they're perfectly suited
to tackle the challenges
inherent in game streaming.
Now, at last year's GDC,
several folks from Bungie
had the opportunity
to play Stadia.
Now, we went in not really
knowing what to expect.
But we left really impressed.
We were able to play a shooter
streaming from a Google data
center over Wi-Fi.
And it actually felt great.
After getting some explanations
of the tech, how it worked,
and hearing just how
committed Google was here,
we were convinced.
And finally, we were excited
to partner with Google.
From very early on in our
conversations with them,
it became clear that Google
understood and appreciated
our vision for
"Destiny," so much so
that "Destiny 2" became
the first game included
with every purchase of the
Stadia Pro subscription.
We also saw this as a
collaborative effort.
And we wanted our
feedback on the platform,
the technical requirements
and the processes
to lead to improvements
for every developer.
And Google was fully
aligned with this.
And working closely
with their teams
has resulted in a much stronger
version of "Destiny" on Stadia.
And it's also led
to some improvements
on the platform itself.
All right, so let's
take a look at what
was actually required to
bring this game to the Stadia
platform.
And before we jump
in here, I'm going
to give a quick history of our
engine to give some context.
So we started work on
the engine around 2009,
and we shipped
"Destiny 1" in 2014
for PS4, Xbox One,
Xbox 360 and the PS3.
"Destiny 2" came out
three years later
and we added a Windows
version at that time.
And we didn't have
a Vulkan build.
And our Windows build used DX11.
Now, we bring that
up because, if you already
have a DX12 version of the
game, you've already done
that lower-level work
that Vulkan requires.
And you'll have a head start.
OK, so let's talk a bit
about Stadia's tech stack.
Now at its core, Stadia is built
on top of a Linux operating
system.
It uses PulseAudio, a readily
available, open source solution
for sound.
And it uses Vulkan
for rendering.
Now, because we're going to
dive into some Vulkan stuff
later on, it's worth
explaining this a bit further.
So Vulkan is
the graphics API
that's used on Stadia.
Now, it's essentially the
next generation of OpenGL.
The biggest difference
here between Vulkan
and previous generations
of graphics APIs
is that it's
considerably lower level.
And that means you have
a lot more direct control
over the GPU.
So on top of all that
is the Stadia SDK.
Now that includes APIs for
play data, saved games, input.
All the platform level
APIs you'd expect.
So the really cool
thing about this tech
stack is a good chunk
of it is open source
and is readily available.
So if you wanted to run some
initial investigations on how
your game or your engine
would run on this stuff,
you can easily do that.
And because it's
available, you may already
have a build of your game that
supports Linux and/or Vulkan.
Now, this was the
case for the folks
at id, who gave a similar
talk to this last year.
So be sure to check that out.
Now in id's case for "Doom," a
lot of the things just worked
from the get go, as they did
have that pre-existing Vulkan
and Linux support.
But for us, the biggest
portion of the work
was adopting that new
Vulkan graphics API.
And again, this is
because we've got
a complex, proprietary engine.
And we hadn't done any of the
work towards Vulkan or DX12.
Now, for a lot of
other developers
this probably won't be the case.
For example, if
you're using Unreal,
which already has
Vulkan support,
you're already way
ahead of where we were.
So anyway, we split the
team up into three groups.
We had one group
focusing on graphics,
one working on services
updates and one group
working on the general
platform features, things
like controller support, UI,
networking, commercialization,
the platform technical
requirements, et cetera.
So because our game is an
online service type game,
we don't only have the
client to worry about.
Now, here's a basic version of
what the traditional platform
setup looks like
for "Destiny 2."
The player runs the game
client on a local machine.
And that client communicates
with our Destiny data center.
The data center is responsible
for things like sign
on, for character data
and world servers.
Both the client and
the Destiny server
connect to platform services
for things like friends, invites
and commercialization.
Now on Stadia, that client
moves into the cloud.
Obviously, this is a huge,
huge, massive change.
But in terms of our
services, they don't really
care about that difference.
But one thing we
did do is to set up
direct peering from
our "Destiny" servers
to the Google data centers.
Because we communicate directly
and not over the internet,
we're guaranteed really, really
low latency and low levels
of packet loss.
Now, our network model
also has some peer
to peer components where
clients communicate directly
with one another.
On Stadia, all that
client-to-client communication
is happening inside
the data centers
at extremely low latency.
Now, both of these
changes result
in overall latency being cut.
Now, this is going
to be important
for general responsiveness.
But it also helps
mitigate input latency.
And that's something we'll
touch on a little bit
later on in this talk.
All right, so I've
given you a sense
of how much work
there actually was
in terms of graphics, services,
general platform features.
Now, here's the schedule
that we followed.
We started work at
the end of April.
And we had to submit our
final build late October
that same year.
So that gave us
around six months
to go from absolutely nothing
to a final, approved build.
So let's take a quick look
at what the build looked
like during the
various stages, how
it progressed over six months.
Here's what it looked like
after a few weeks of work.
We had the main
game loop running.
And we could render an embedded
model just through GBuffer.
A few weeks later, we could
actually play through a level.
The core graphics
systems were up.
And we could render things
like terrain, skinned objects
and rigid bodies.
About a month later, we had
more graphics features up
and running.
We had some basic lighting.
We had effects,
GPU and CPU particles.
The UI was functional.
Audio was functional.
And we could also play the real
online version of the game,
meaning we had server-to-server
authentication happening.
And you could play
with other people.
About a month and a half after
that we had all of our features
implemented.
And we were really focused
on bugs and optimization.
So you're probably thinking to
yourself, six months for a game
that complex on a brand
new streaming platform.
That's [BLEEP] crazy.
That's the same thing I
thought about a year ago.
But here's how we were
able to deliver that
in such a short amount of time.
So first, we had an
extremely strong team
of engineers, all of whom
were experienced in porting.
Several had experience in
porting previous titles.
And several had experience
porting the actual "Destiny"
engine to other platforms.
We also had Linux expertise
spread across the team.
And while we didn't have
specific Vulkan knowledge
on the team, we did
have 3D graphics
experts who were able to ramp up
on Vulkan really, really fast.
Here's a more clear
breakdown of the team
that did this awesome work.
We had three dedicated
graphics engineers.
And we had two
platform engineers.
We also had one lead engineer
who was mostly overhead.
But they did some key
implementation work as well.
We also had production support
and a dedicated test team.
Now, all of those
folks were fully
dedicated to only this project.
But we also relied on some
shared Bungie resources,
things like the UI team, one
of the design teams, services
and our Bungie.net team for
work in their respective areas.
We also had unbelievable
support from Google.
This included having a technical
account manager on-site.
When questions about an API
or platform feature came up
or we were running
into problems,
we'd get a really,
really fast response.
We also had a couple of
dedicated Google engineers,
Chris Glover and Hai Nguyen,
who helped us out a ton
with debugging, perf
optimization and just
general firefighting.
Now, these folks had
a lot of experience
helping other
teams ramp up on Stadia.
And their knowledge and
help was extremely valuable.
So obviously, if you're thinking
about jumping into Stadia,
you shouldn't expect that
level of support from Google.
Our situation was unique.
We were a launch title.
And we had a very,
very short amount
of time to hit that
day one launch.
That said, since launch, we've
transitioned to a more typical
partner setup.
Our TAMs and the
Google engineers
are no longer on site.
But when we do have questions
or are running into issues,
the team at Google is
still extremely responsive
and helpful.
So let's talk about
what it's actually
like to develop on Stadia.
So you're probably
familiar with what it's
like to develop on a console,
where a developer has
a dev kit on their desk.
That means every developer
working on that platform
needs a physical
box on their desk.
Now in addition to
that, you've probably
got a stack of dev kits
in some room somewhere,
reserved for automation.
Now, managing all of
these physical devices
is a huge pain.
For example, just keeping them
up to date on the same SDK
is really difficult.
With Stadia, you've got a set
of instances within the cloud
that you use for development.
Now, each of these instances is
capable of running your game.
You can access these from any PC
using your Stadia dev account.
Now obviously, there are plenty
of advantages to this system.
You can easily
organize your instances
into different pools, some
for engineers, some for QA,
some for automation.
And managing these is so much
easier than physical boxes.
For example, you can quickly
update an entire pool
to a new SDK using
a simple command.
And seeing which SDK a
pool is using is dead simple.
So what does it actually
look like for an engineer
to develop on this platform?
So the development environment
should be very familiar.
You develop on Windows
using Visual Studio.
And you compile using
the Clang tool chain.
When an engineer is
ready to run the game,
the deployment flow is fully
integrated into Visual Studio.
Or you can use SSH to
transfer all your assets.
So when you're ready to run
you reserve one instance.
And then you push
whatever files you
need to that specific
instance, again,
either through SSH or
automatically through VS
integration.
Then you can play that
build from your machine
while also connected
via the debugger.
Now, this is the flow
that engineers will often
use in their typical
day to day work.
But there's also a package flow.
Now, this involves packaging
up your loose files build,
using a simple command.
Then you can deploy that
package with another command.
Now, the cool thing here is
that this package automatically
deploys to every instance
within the pool you specify.
So after a package
is deployed, you
can play that build from any PC,
a Chromecast or mobile device
using your Stadia dev account.
And no matter what
instance you're using,
that build is available
and ready to go.
So you can also hook this
up to your build machines,
so that every new
build is automatically
packaged and deployed.
Now, this is the flow
our QA team uses.
And that ensures
that every tester is
able to get into
the previous night's
build within seconds
of sitting down
at their desk in the morning.
It's also helpful for engineers.
Let's say an engineer
wants to test something
in the latest build to see
if it's their change that
broke something.
They can be in that latest
build within seconds
without waiting for any
file transfers whatsoever.
You're also able to switch
to different packages
quickly, as your
environment can support
multiple packages at once.
So if you're hitting
a bug in today's build
and you want to see if it repros
in yesterday's or last week's
build, you can mount or switch
to that build in seconds,
repro the issue and
then switch back.
So this package flow means
you can set up play test labs
extremely quickly.
You basically just sit down at
a lab PC, open a Chrome browser,
sign into your dev
account, go to a URL.
And that's it.
We play test all the
time here at Bungie.
And I cannot stress how awesome
this flow is for setting up
play tests.
I could easily see
developers relying
on our Stadia build
for play tests
and as a general tool for
showing progress and getting
feedback at work.
It's truly amazing.
So here's another cool benefit
of this instanced system.
So let's say you wanted
to run some larger scale
tests before a release.
Now normally, this would
be a huge headache.
Maybe you'd need to
rely on publishing
QA or some other
group who's equipped
with the number of
consoles required.
Or maybe you just
bite the bullet
and buy a bunch
of additional kits
to do the testing in-house.
Either way, this takes time
and it's a pain to organize.
With Stadia, you can work with
your Stadia account manager
to temporarily increase the
number of instances you have.
So you could quickly get up
and running and run your tests.
So developing on Stadia with
kits that are in the cloud
is definitely different.
And it does take
some getting used to.
But there are some really
cool advantages and features
within this new paradigm.
All right, let's dive into
a few of the interesting
challenges we faced.
So the challenges here
I'm going to talk about
are the couple of things
I'd want to tell myself if I
could somehow go back in time.
So past Nick, if you're watching
this presentation right now,
you're going to get a
lot out of this part.
So do not skim it.
For everyone else,
this is going to be
helpful in getting a couple
of things on your radar.
And you'll see how another
developer responded
to some really
interesting challenges
that you might run into.
So first up, game feel.
Now one of the aspects of
"Destiny" most appreciated
by our players is game feel.
Moving through the environment,
firing cool weapons,
it all feels awesome.
And at Bungie we hold that
game feel and everything
that goes into the 30 seconds
of fun is incredibly important.
And that's just something
baked into our DNA.
So one big question we had going
into the Stadia project was,
how would this
actually translate
to a streaming platform?
Would "Destiny" still feel like
"Destiny" if it was on Stadia?
So the first thing we did
here, even before we wrote
a line of Stadia-specific code,
was to create a PC
build of "Destiny 2"
with the ability to add input
delay to simulate latency.
Then we had our sandbox
design team, those designers
responsible for the run,
jump, shoot mechanics,
play around with it.
We had them play with
what we projected
as an average amount of
latency and also the worst case
amount of latency.
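To make that test setup concrete, here's a minimal sketch of how an input-delay harness like that could work, assuming a simple frame-count delay queue. The names are hypothetical, and this is not Bungie's actual code.

#include <cstddef>
#include <cstdint>
#include <deque>

// Hypothetical controller sample; a real input state would be much richer.
struct InputSample {
    float stickX = 0.0f;
    float stickY = 0.0f;
    uint32_t buttons = 0;
};

// Minimal sketch: delay input by a fixed number of frames to simulate
// streaming latency. At 60 FPS, four frames of delay is roughly 67 ms.
class DelayedInput {
public:
    explicit DelayedInput(std::size_t delayFrames) : m_delayFrames(delayFrames) {}

    // Called once per frame with the freshly read controller state.
    InputSample Apply(const InputSample& current) {
        m_queue.push_back(current);
        if (m_queue.size() <= m_delayFrames) {
            return InputSample{};  // Nothing old enough yet; return neutral input.
        }
        InputSample delayed = m_queue.front();
        m_queue.pop_front();
        return delayed;
    }

private:
    std::size_t m_delayFrames;
    std::deque<InputSample> m_queue;
};

Dialing the frame delay up or down is how you'd approximate the projected average and worst-case latencies for a play test like this.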
So our goal here was just to
get an overall sense of how
it played and identify
some potential issues
we want to address in
terms of game feel.
And we wanted to do this
early without having
to wait for many disparate
systems to be stood up.
So we weren't really sure what
to expect in this play test,
as these folks, probably more
than anyone else in the world,
know how "Destiny" should feel.
And they're extremely
sensitive to latency.
They're so sensitive
that one of the designers
was actually able to guess
within five milliseconds how
much latency we were adding.
These people are crazy.
Anyway, when playing at the
average amount of latency
the overall sentiment was that
the latency could be felt.
But it was definitely playable.
And you quickly adjusted.
Now considering that these were
the most sensitive designers
in the studio, that
was good to hear.
When playing at slightly beyond
the projected worst case,
the feedback was less positive.
They identified some issues with
over steering and some issues
with the magnetism that we use
when playing with a controller.
Now, both of these issues
made it a bit more difficult
to play.
But it was still playable.
Again, this was a
contrived example.
And our projected
worst case latency
was pretty conservative.
With that said, after
this experiment,
we felt that if we aimed to hit
that average case experience
and hopefully pushed
a bit beyond it,
the game would feel
good on the platform.
So at this point, our plan
was to focus on latency
and do as much as
possible to cut that down.
Now, Stadia itself provides
some dials to help with this.
For example, there's
a Stadia stream profile
API, which allows for
customization of the encoder.
Now, we cranked nearly
all of these dials
towards low latency.
We also targeted 60
frames per second.
"Destiny 2" runs at
30 FPS on consoles.
But 60 would halve
the amount of latency.
So we felt this was an absolute
requirement for Stadia.
Now, not only do
we need to hit 60.
We also needed to hit
an extremely stable 60.
Now at Bungie, we've always
held a high bar for performance.
We've got a perf automation test
team who runs perf heartbeat
tests on our most
strenuous scenarios
to identify regressions quickly.
That said, the Stadia build
needed to meet our existing bar
and go slightly above.
And this is because of how
Stadia's streaming tech works.
So I'm going to give a
quick cliff notes version
of how the streamer works.
But I do recommend you check
out the Stadia streaming tech
deep dive talk that's up on
YouTube for more info here.
OK, obviously, the
Stadia streamer tech
includes a lot of crazy magic
that I don't really understand.
But the basic core
principles, they still apply.
So first, the
encoder sends a frame
that has everything in it.
This is the I-frame. Because it
includes everything, it's huge,
and it takes longer to transmit.
Now after that, it'll
only send what's changed.
The P-frames contain
only the difference
from the previous frame.
Now, the P-frame
size will vary based
on the size of the difference.
But they're usually much,
much smaller and thus faster
to transmit.
So what you want here is
to only send those P-frames
and keep that feedback loop
between the encoder
and the client
extremely tight.
Now as long as
nothing bad happens,
it does continue
to send P-frames.
Now, one such bad thing is
a severe networking event.
Now, that would cause
frame data to be lost
and would require new
full I-frames to catch up.
The game dropping frames
is another such bad thing.
If the streamer isn't getting
60 frames every second,
it will try to catch up.
And it will send more
data, more I-frames.
Now if both of these
happen at the same time,
the player will perceive
stuttering and an increase
in latency.
And basically, their
experience will just suffer.
So what this all
boils down to is,
it's extremely important to
keep your frame rate stable.
Google has technical
certification requirements
around this, so
it's worth reading
through that
documentation early.
In order to hit
this high bar, we
dedicated around one person
month of time for optimization.
Now we did this just as
our final graphics features
were being implemented.
Now thankfully, we've
got an engineer here at
Bungie, Jason Hoerner,
who's extremely
good at diving into code and
identifying opportunities
for optimization.
We also had help from a Google
engineer, Jean-Noé Morissette.
And his work in this
area is greatly appreciated.
JN was able to identify
some really important gains.
Now after this month
of time, we were
able to save several
milliseconds off
of our 16-millisecond frame.
And we were much, much
more confident of hitting
that bar that was required.
So at this point, we've got all
of our features implemented.
We've just come a long way
in terms of performance.
It's time to get back
into the play test lab,
see how this thing feels.
So we had several play
tests around this time.
And the feedback
was mostly great.
Players, especially those
who had played early builds,
were really impressed
with how far it had come
and how good it was
actually feeling.
However, the feedback
was not all positive.
There were a few people who
reported feelings of motion
sickness after playing.
And in two cases, these
were people who had never
exhibited that before.
So this was extremely worrying.
Now, these were only a
few people who complained.
But it was a
significant percentage.
And because we didn't want
people throwing up all
over their brand new
Stadia controllers,
we needed to take a look at
what the heck was causing this.
So after quite a bit of digging,
both from our lead engineer
Andy Firth and Google engineers
Katherine Wu and Chris Glover,
we identified a couple of
issues with the present mode
we were using on "Destiny."
Now, the first thing
was just a bug.
We were using the
NTSC refresh rate,
which is slightly lower than
60, while Stadia uses exactly 60.
Now, this mismatch with the refresh
caused a constant cycling
in the latency between
when we present
and when the present
is actually used.
The player facing result
of this was that it
created additional noise.
And that likely contributed to
the feelings of motion sickness
in some people.
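To put rough numbers on that bug (my own arithmetic, not figures from the talk): NTSC timing is 60/1.001 Hz, about 59.94 Hz, so each frame is roughly 17 microseconds longer than a true 60 Hz frame, and the alignment cycles through a full frame about every thousand frames.

// Back-of-the-envelope numbers for the refresh-rate mismatch.
constexpr double kNtscHz   = 60.0 / 1.001;  // ~59.94 Hz (NTSC)
constexpr double kStadiaHz = 60.0;          // Stadia expects exactly 60

constexpr double kNtscFrameMs   = 1000.0 / kNtscHz;    // ~16.683 ms
constexpr double kStadiaFrameMs = 1000.0 / kStadiaHz;  // ~16.667 ms

// ~0.0167 ms of drift per frame; a full frame of drift accumulates in
// roughly 1,000 frames, or about 17 seconds of play at 60 FPS.
constexpr double kDriftPerFrameMs = kNtscFrameMs - kStadiaFrameMs;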
Now the other thing
we found was a problem
with their present mode.
All right, so this is what
our frame looks like.
Now, there's a
lot going on here.
But the thing to focus
on here is the latency
between where we read the
controller and the final output
of the frame.
That gap you see there
is the present margin.
Now the available set of
options for present on Stadia
required us to estimate what
our performance profile would
be like in the upcoming frame.
And what that resulted
in was an offset that was dynamic,
meaning it could change
from frame to frame.
So here are four sequential
frames that show this.
Now all these have
variable offsets
due to workload changes.
Now the result here
is those varying gaps.
And that makes the
input feel just wrong,
because we're not sampling
at a deterministic rate.
Now, the user would say this
just feels laggy or choppy.
Now at this point, Chris
Glover, a Google engineer,
recommended we look into the
immediate mode present option.
Now, this wasn't a fully
supported mode at the time,
and thus, we hadn't
really considered it.
And one of the reasons
it wasn't supported
is that it requires the game to
run at an extremely stable 60.
Because the system
trusts and requires that
the title send the frames
at a 60 hertz average.
But because we had just done
a ton of work
to optimize our frame rate, it
seemed like a potential option.
Now as we investigated
this further,
it became clear that this
would have a dramatic impact.
So here's a diagram of the same
workloads as the previous one
but using the new
immediate mode.
Now, the key difference here
is that the present margin
completely disappears.
There's no more added latency.
And the encoder can
just send the frames
as is when they're ready.
Now, this makes the game
feel so much more smooth.
And it eliminates the risk
of feeling motion sickness
in players.
It also makes the game
feel a ton more responsive,
because this just basically
removes a significant amount
of latency.
So for us, arriving at this
conclusion was painful.
And it took a lot of time and
help from Google engineers.
But the good news for you folks
is that using immediate mode
is now the recommended
presentation mode.
Now, Stadia's Vulkan best
practices documentation
covers this.
So be sure to read
those docs as you
begin to dive into this area.
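For reference, here's a minimal sketch of what selecting immediate mode looks like at swapchain creation in Vulkan. The support check and the surrounding swapchain setup are simplified, and this isn't lifted from the "Destiny" codebase.

#include <vector>
#include <vulkan/vulkan.h>

// Minimal sketch: prefer immediate present mode when the surface supports it,
// otherwise fall back to FIFO, which Vulkan guarantees is always available.
VkPresentModeKHR ChoosePresentMode(VkPhysicalDevice physicalDevice, VkSurfaceKHR surface) {
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &count, nullptr);
    std::vector<VkPresentModeKHR> modes(count);
    vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &count, modes.data());

    for (VkPresentModeKHR mode : modes) {
        if (mode == VK_PRESENT_MODE_IMMEDIATE_KHR) {
            return mode;  // No present margin; frames go out as soon as they're ready.
        }
    }
    return VK_PRESENT_MODE_FIFO_KHR;
}

// Later, when filling out VkSwapchainCreateInfoKHR:
//     swapchainInfo.presentMode = ChoosePresentMode(physicalDevice, surface);

And remember, this only pays off if the title holds an extremely stable 60, since the system trusts the game to deliver frames at that cadence.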
So after all this work, we
ran one final play test.
And the feedback was
extremely positive.
[INAUDIBLE] because
we have people
who really know their "Destiny,"
many could feel a difference.
But even they were impressed.
And most importantly,
those folks
who were affected by
feelings of motion sickness
reported no such thing
in this latest build.
Yay.
So what else did we actually
change in terms of game feel?
Well, we had our sandbox design
team investigate the Stadia
controller and decide on
what controller curves to use
for that new hardware.
But we already had
existing curves
for every other supported
controller like PS4 and Xbox,
so we didn't really need to
do additional work there.
And while we were
really worried that we'd
have to change a lot of things
like aim magnetism or tweak
curves to fight over steer, we
really didn't need to do that,
as everything just
seemed to work.
Now again, this is due in large
part to the crazy latency magic
that Stadia uses.
All right, so the next challenge
we're going to dive into here
is related to Vulkan pipelines
and the pipeline cache.
So again, Vulkan is the
graphics API that Stadia uses.
And one of the
biggest differences
here is that it's
considerably lower level.
Now the advantage
here is important.
It has lower overhead.
And the added control means
more customization specific
to your engine.
And one of the core concepts
here is graphics pipelines.
So you may be thinking
that this is just
another name for shaders and the
shader cache you use on D3D11.
It is much more than that.
Yes, a pipeline object
contains the shader data,
but so much more: the shader
modules, the fixed functions,
the render passes.
The graphics pipeline
is this entire series
of steps to take the
vertices, your textures,
and your meshes all the
way to rendered pixels.
The cool thing here about
this system and why it exists
is it allows you to run
the operations needed
to draw something well
before you need it.
OK, so how are these
things actually created?
Now, this is a
simplistic breakdown.
But it's going to give you
enough context to understand
the challenges we face.
So a draw call queues up
the creation of a pipeline.
Now, the actual creation
process takes time.
And it's slow.
In assessing thousands of
"Destiny 2" creation times,
the average is around
12 milliseconds.
So obviously, you're not going
to be building these things
on demand.
Now, the good news here is, once
it's created, it's cached off.
And you can reuse
it for future calls.
So this is what it
ends up looking like.
When a pipeline is
required, we check the cache
to see if it already exists.
If so, great.
Use it.
Retrieving from the
cache is an order
of magnitude faster
than creating it.
And if it doesn't
exist in the cache,
create it, and store
it for future needs.
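Here's a minimal sketch of that check-then-create flow. The key type and the map are hypothetical engine-side bookkeeping; the driver-level reuse comes from routing creation through a VkPipelineCache.

#include <cstdint>
#include <unordered_map>
#include <vulkan/vulkan.h>

// Hypothetical key identifying a pipeline; a real engine would hash all of
// the pipeline's inputs (shaders, render state, render pass, and so on).
using PipelineKey = uint64_t;

struct PipelineStore {
    VkDevice device = VK_NULL_HANDLE;
    VkPipelineCache vkCache = VK_NULL_HANDLE;          // driver-level cache
    std::unordered_map<PipelineKey, VkPipeline> live;  // engine-level map

    // Check the cache for an existing pipeline; create and store it otherwise.
    VkPipeline GetOrCreate(PipelineKey key, const VkGraphicsPipelineCreateInfo& info) {
        auto it = live.find(key);
        if (it != live.end()) {
            return it->second;  // Cache hit: an order of magnitude faster than creating.
        }
        // Cache miss: a cold create averages ~12 ms, so you really don't want
        // to be doing this at draw time.
        VkPipeline pipeline = VK_NULL_HANDLE;
        vkCreateGraphicsPipelines(device, vkCache, 1, &info, nullptr, &pipeline);
        live.emplace(key, pipeline);
        return pipeline;
    }
};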
Now the other thing
that helps here
is the ability to dump that
cache into an offline file.
That's the offline
pipeline cache file.
Again, this might
ring some bells being
similar to the shader cache.
But it's much more information
than the shader cache.
Anyway, you can then
load this file up
when you start the game
to pre-populate the cache.
Now, the idea here is to
fill that offline file
with all of the
pipelines in your game,
so that you never have
to eat the high cost
of creating a
pipeline on the fly.
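At the Vulkan level, that offline file round-trip is a dump via vkGetPipelineCacheData and a reload via pInitialData at startup. Here's a minimal sketch, with file handling simplified:

#include <cstdint>
#include <fstream>
#include <vector>
#include <vulkan/vulkan.h>

// Dump the in-memory pipeline cache to an offline file.
void SavePipelineCache(VkDevice device, VkPipelineCache cache, const char* path) {
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);
    std::vector<uint8_t> blob(size);
    vkGetPipelineCacheData(device, cache, &size, blob.data());

    std::ofstream file(path, std::ios::binary);
    file.write(reinterpret_cast<const char*>(blob.data()),
               static_cast<std::streamsize>(blob.size()));
}

// At startup, pre-populate a new cache from the offline file, if present.
VkPipelineCache LoadPipelineCache(VkDevice device, const char* path) {
    std::vector<uint8_t> blob;
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (file) {
        blob.resize(static_cast<size_t>(file.tellg()));
        file.seekg(0);
        file.read(reinterpret_cast<char*>(blob.data()),
                  static_cast<std::streamsize>(blob.size()));
    }

    VkPipelineCacheCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO;
    info.initialDataSize = blob.size();
    info.pInitialData = blob.empty() ? nullptr : blob.data();

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;
}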
And when you submit your final
build for publishing on Stadia,
you'll almost certainly include
one of these offline cache
files.
And there are certification
requirements around this.
Now, for most games, building
this offline cache file
is probably pretty
straightforward.
Maybe your game is smaller
and playing through the game
is reasonable.
Maybe you can write a script
that loads all your content.
Now for "Destiny," this is not
straightforward for a couple
of reasons.
First, we have 3.2 metric
[BLEEP] tons of content.
Now, that number might
not mean a lot to you.
So let me give some
further context.
"Destiny" has many destinations.
Now these are planets,
moons, et cetera.
Each destination has unique
palettes of materials
and objects and effects.
Each destination is comprised
of many large spaces.
We call these bubbles.
And these bubbles are usually
connected to other bubbles.
Now here's one average
sized bubble zoomed out.
And this is the same
bubble zoomed in a bit,
so you can appreciate just how
big these massive spaces are.
Now, each bubble has
unique components and thus
unique pipelines.
And there are hundreds
of bubbles in the game.
We also have many
combatant races.
All use different materials
and effects, more pipelines.
Each combatant race has around
a half dozen main archetypes
with dozens of variants
of each archetype.
Again, each using unique
materials and effects and thus
more pipelines.
Our game has three
character classes.
And each of these three
has multiple subclasses,
each with different
abilities, grenades,
each with unique effects.
And then there's the gear.
We have thousands of pieces
of gear, from weapons,
to armor pieces, chest,
leg, arms, helmets,
[INAUDIBLE] shells,
vehicles and more.
Also there are hundreds of
activities in game, missions,
adventures, quests,
raids, strikes, et
cetera, all with
unique components.
So obviously,
having a human play
through everything in "Destiny
2" is just not feasible.
But what about some
automated system
for loading all the content?
Unfortunately, for our
game, that's not easy.
We can automate that
loading of the hundreds
of bubbles in game.
That's easy, and we do that.
So the environment pipelines
are mostly covered.
But a huge chunk of the
other pipelines in "Destiny"
are dependent on lots
of additional context,
for example, what
activity you're playing,
what character
class you're using,
which abilities and
modifiers you're using,
which weapon you're using, which
combatants you are engaging,
et cetera.
All these different variables
could introduce new inputs
and result in completely
unique or somewhat unique
pipelines, a
combinatorial explosion.
So we need some
other solutions here.
Now as I said, a
lot of the inputs
necessary to generate a pipeline
aren't available until runtime.
But we do get a lot of
the context needed well
before draw time.
So one of our senior graphics
engineers, Mark Davis,
implemented a system where
pipelines are asynchronously
generated at certain
points in the game.
So how does this work?
Well, here are two examples
of the points where
we're loading lots of data.
So first, at
initial game launch,
we load what we
call global data.
Now this includes things like
UI, player character data.
And also, when a player
launches into the level,
we load things like the
environment, the combatants
and activity data.
So the idea here is to
generate some of the pipelines
at times like this
when we are loading
a bunch of data and context
and when the player's not
going to notice it.
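As a rough illustration of the idea, and not Bungie's actual system, here's a sketch of kicking off a batch of pipeline creations on a background thread at one of those load points. The callback and the batching scheme are assumptions.

#include <cstddef>
#include <functional>
#include <thread>
#include <utility>
#include <vector>
#include <vulkan/vulkan.h>

// Rough sketch: at a load point (initial game launch or level launch), build
// the pipelines we already know we'll need in the background. Anything missed
// still falls back to on-the-fly creation.
// Note: the create infos reference shader modules, render passes, etc., so the
// caller must keep that data alive until the batch completes.
void PrecreatePipelinesAsync(VkDevice device,
                             VkPipelineCache cache,
                             std::vector<VkGraphicsPipelineCreateInfo> infos,
                             std::function<void(std::vector<VkPipeline>)> onDone) {
    std::thread([device, cache, infos = std::move(infos), onDone = std::move(onDone)] {
        std::vector<VkPipeline> pipelines(infos.size(), VK_NULL_HANDLE);
        for (std::size_t i = 0; i < infos.size(); ++i) {
            // Each cold create averages ~12 ms, so this hides the cost behind
            // loading instead of producing a hitch during gameplay.
            vkCreateGraphicsPipelines(device, cache, 1, &infos[i], nullptr, &pipelines[i]);
        }
        onDone(std::move(pipelines));  // Hand the results to the engine's pipeline map.
    }).detach();
}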
So let's take a look at
this system in action.
Now first, as a
baseline, here's a video
of our game running with
absolutely no pipeline
cache file.
So you're going to
notice a lot of hitches,
because every pipeline
is being generated
on the fly when it's needed.
Again, you'd never
see this in the wild.
But it helps illustrate
just how important it is
not to have to build
these things on the fly.
Now, here's the worst case, this
grenade explosion, slowed down
for a painful illustration.
All right, so let's
see that same thing
with our pre-create
system enabled.
Again, there's absolutely
no pipeline cache file
loaded here.
We're starting with zero
pre-built pipelines,
meaning everything is going to
be built either asynchronously
or on the fly.
Now, here's the game start.
So the highlighted section here
shows the current active queue
of pipelines in yellow and then
the total number of pipelines
generated thus far in blue.
So first, it loads all
the global pipelines
then everything
associated with the level.
Now that we're actually
running in game,
you'll notice it's
much, much smoother.
And that's because,
at those load points,
it actually generated most
of the pipelines needed.
So as you can notice,
there's almost no hitching.
But the biggest test here
is that grenade toss.
So let's see how
well we do here.
All right, pretty smooth.
And again, this is with
absolutely no pre-built cache.
But the fact that there's no
hitching there means the system
is working.
So because this pre-create
system is so effective,
we have it enabled at all times.
And really, this system
acts as a fallback in case
we missed some pipelines
from an offline cache file.
Now when it comes to populating
that offline cache file
that we submit with
our final build,
we rely on automation
to load all the hundreds
of bubbles in the game.
And unfortunately, we also need
to do some manual play-throughs
of activities.
Why do we need to do this?
Well, while the
pre-create system
helps us gather a
lot of the pipelines,
it doesn't help us
gather everything
we get from actually
playing through an activity,
for example, one
of our raids, which
features a ton of unique
content and modifiers.
Now, a lot of this pinnacle
content has this problem.
And we just cannot risk that
this type of content has
hitches.
And automating it is
extremely difficult.
So that's the plan for the final
shipping offline pipeline cache
file, the one we
shipped in November.
But "Destiny 2" is a live game.
We're constantly releasing
new updates and new content
into the game.
And that means new
pipelines that we need
to add to our offline cache.
So thankfully, there's
a method provided
for adding to the cache file.
So what our release
test team does here
is they generate pipelines
for the new seasonal content.
They dump it out.
And then they add it
to our existing cache
file and run a
de-duplication step
to make sure we're not
doubling up on anything.
Now, that step is important,
because that offline cache file
needs to be loaded
into memory, so it just
can't grow infinitely.
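For reference, Vulkan itself provides vkMergePipelineCaches for folding one cache's entries into another, which is one way to implement that add-and-de-duplicate step. A minimal sketch, not necessarily how Bungie's tooling does it:

#include <vulkan/vulkan.h>

// Merge the cache built from new seasonal content into the existing shipping
// cache; implementations can fold together duplicate entries rather than
// doubling up on anything.
VkResult MergeSeasonalCache(VkDevice device,
                            VkPipelineCache shippingCache,
                            VkPipelineCache seasonalCache) {
    return vkMergePipelineCaches(device, shippingCache, 1, &seasonalCache);
}

The merged cache then gets dumped back out with vkGetPipelineCacheData, just like the earlier offline-file sketch.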
Also, while this plan for our
pipeline cache is acceptable,
it's by no means perfect.
And there's some risk
that we miss things.
And hitches get
through to the user.
So to combat that, we
built a monitoring system.
So anytime a tester
or developer completes
any activity on Stadia, we
upload data about pipelines.
We track different types
of cache hits, cache misses
and the speed of both.
We have a dashboard
for viewing this data
so we can deep dive
into the results.
And we have alerts built
in if we exceed thresholds.
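Here's a rough sketch of the kind of bookkeeping that monitoring could do at the cache lookup site; the counter names are hypothetical, and the upload, dashboard and alerting side is left out.

#include <atomic>
#include <cstdint>

// Hypothetical per-session pipeline cache statistics, bumped at every lookup
// and uploaded along with the activity info when an activity completes.
struct PipelineCacheStats {
    std::atomic<uint64_t> hits{0};
    std::atomic<uint64_t> misses{0};
    std::atomic<uint64_t> hitTimeMicros{0};
    std::atomic<uint64_t> missTimeMicros{0};

    void RecordHit(uint64_t micros) {
        hits.fetch_add(1, std::memory_order_relaxed);
        hitTimeMicros.fetch_add(micros, std::memory_order_relaxed);
    }
    void RecordMiss(uint64_t micros) {
        misses.fetch_add(1, std::memory_order_relaxed);
        missTimeMicros.fetch_add(micros, std::memory_order_relaxed);
    }
};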
All right, so we
talked about why
we decided to bring our title
to this exciting new platform.
We described what
was actually involved
and what it's like
to develop on Stadia.
We also dove into some
interesting challenges
that'll hopefully help
future Stadia devs.
Now it's time to
wrap this thing up.
So getting an awesome
version of "Destiny 2"
on to Stadia in six months was
a crazy, whirlwind experience.
And we really couldn't have
done it without a ton of help
from Google and the many
folks on the Stadia team.
Working with those
teams was great.
And we're really proud
of the title we shipped
and the experience we delivered
to our "Destiny" users.
And a lot of our players agree.
Here's one of my
absolute favorite quotes
off the "Destiny 2" Reddit.
Now, I love this
for obvious reasons.
But it also mirrors our own
experience with this platform.
It is so easy to be
skeptical about something
as difficult as streaming
video games over the internet.
But in the end, through
a ton of hard work
from many dedicated people,
that mofo is as smooth as hell.
Thanks.
[MUSIC PLAYING]
