So this is what we
are going to do today.
Let's meet our game.
We are going to make a very
simple Defend the Castle game.
This is also online right now on
GitHub, in our samples folder.
There are going to be three
bridges connecting the island
from one side to the other.
We'll have Androids
made of cardboard
follow a path
through the bridges
and onto a castle,
trying to invade it.
We'll be adding
stereoscopic VR rendering,
so it will work on
the viewer later.
We'll be adding binaural audio.
Now, audio is really important
for a complete and immersive VR
experience.
3D auditory cues
will tell you when
a cannon is being shot over
your head, and where to look.
Finally, we will add
some interaction using
the input and the reticle.
This will work both for
gaze on the old Cardboard,
and for the controller
[INAUDIBLE].
Oh, and this.
How many of you made games?
Put up your hands if you
are a game developer.
Did you ever run
into throttling,
where your GPU or
CPU ran too hot,
and it needed to take a break
before it could keep working?
Or maybe you're like
me, and you just
put your phone into the
fridge for it to cool down.
So we're going to look into
some optimization tips,
so you can get to 60 FPS
and avoid being throttled.
Now, these are
the assets we will
be using in the game,
assets I made beforehand.
We have the cannon
and the castle.
They're really simple.
And some trees we took off the
Cardboard Design Lab, which
is also open source online.
Now, this is how we
are going to make this.
To make the game, I've
used Unity and Blender.
But you can use any other
tool off the market,
and whatever you like, even
if you have your own engine,
there is a C++ SDK which will
be introduced in more detail
at Nathan Marks' talk later.
OK.
So a very quick
overview into Unity,
if you have never
used it before.
I'm going to go through
it very briefly.
This is what the Unity
editor looks like.
At the center, we just
have our 3D scene.
We can move it and look
around using a virtual camera.
We can also drag and
drop objects in the scene
to move them around.
On the left is the
scene hierarchy.
Those are the objects that
we have inside the scene.
On the right is the Inspector.
You can look at any object
and see what it is made of.
Every object is made
of several components.
For example, here on the top
is the transform component,
saying what's our position,
rotation, and scale.
And at the bottom are
the project assets.
That's all of the scripts,
models, and textures
inside our project.
Now, a very quick
word about Blender.
As a game developer
for many years,
I've found that having
some 3D editing skill
really empowers you,
because it means
you can make your own models
and experiment, and make
your own game, if you want.
However, this is what
Blender looks like.
Now, Blender is an
open source project
which is free for everyone to
use, which is why I picked it.
And this is how it looks.
And it can be a
bit intimidating.
I've learned how to use
it through this site,
but I'm pretty sure there are
many other websites out there.
OK.
Let's get to work very quickly,
and see how far we can go.
So I'm just going
to open up Unity.
If you have it, and you
have downloaded the project,
feel free to do the same.
And I will create a very
simple, empty project
to get started with, just to
show you the basics, really.
So this is what a new
project looks like.
I have the scene.
I can press Alt and look around,
or with the mouse wheel button.
And I can go to the game
object menu at the top,
and maybe add a plane
to use as my ground,
and now it's inside the scene.
To begin with, I have
the main camera, which
I can move around.
On the bottom right, you
can see the camera preview
and what it will see.
Now, I'll add just a cube
for reference, to start off
very basic, and we'll
build it up in a bit.
So now, I have a
cube in the scene,
and I can press Play
here at the top.
And basically, we
get a "what you see
is what you get"
kind of experience.
However, there are some
problems here, the biggest
of which is that we don't have
stereoscopic rendering.
Luckily, it is really simple
to get it with our SDK.
Now, I have
downloaded the package
for importing the Google
VR SDK just before,
and this is simply
a Unity package.
I can double click
on it, and it will
load all the relevant
assets into our project.
Now, next, I'm just going
to delete the main camera,
because we no longer need it.
We want to have
stereoscopic view.
So I'm deleting that.
And as you may see, here
down into the project assets,
we have the Google
VR folder now.
So I'll go into it,
and I'll go to prefabs.
And then I'll drag the Google
VR Main into the scene,
and place it just
here, at the back.
Now, a prefab is
basically a collection
of game objects already
set up with components.
So in this case,
this is what we need
to do stereoscopic rendering.
I can press play, and now
we have all we need for VR.
If I press Alt, I can simulate
head movement, use my mouse,
and look around.
To see how the stereoscopic
camera basically
works, I can open it up on
the left in the hierarchy.
It has a head and
a stereo renderer.
The head also has
a camera, which
is a template camera for
two different cameras, one
camera for each eye, simulating
the distance between our two
eyes, basically.
This is how we create
the illusion of depth,
the sensation that tells us
how far someone in the crowd
is from me, or I am from you.
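As a rough sketch of what that two-eye setup amounts to, here is a hypothetical script that offsets a left and a right camera from a shared head transform. The names and the IPD value are illustrative assumptions, not the SDK's actual implementation:

```csharp
using UnityEngine;

// Rough sketch of what the GVR stereo rig does: two cameras,
// one per eye, offset horizontally by half the interpupillary
// distance (IPD). Names and values here are illustrative only.
public class SimpleStereoRig : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;
    public float ipd = 0.064f; // ~64 mm average human IPD (assumption)

    void LateUpdate()
    {
        // Each eye sits half the IPD to either side of the head.
        leftEye.transform.localPosition = new Vector3(-ipd / 2f, 0f, 0f);
        rightEye.transform.localPosition = new Vector3(ipd / 2f, 0f, 0f);
    }
}
```

The real SDK prefab does much more (head tracking, per-eye projection matrices, distortion correction), but the horizontal eye offset is what creates the depth cue.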
OK, now let's open up a project
I have prepared beforehand,
so we can get started.
Because if we start making
the entire scene from scratch,
it's going to take
quite a while.
And we don't want to make
the game like that.
So this is the very basic scene.
I can move inside
by zooming in, using
the scroll wheel or the track
pad, and I can look around.
And let's see what
we are going to do.
So we have our castle
made of Cardboard,
and on the other
side of the island,
we have the little
Android statue
signifying where the invading
Androids will come from.
Here on top of the
castle, we also
have our cannon that we'll be
using to fire at the Androids.
And again, all the assets
here, including the scenes,
are already ready for you
to download off GitHub.
If I press Play, I've
already prepared the camera
on top of one of the towers.
So I can look
around and see where
the Androids will come from.
However, nothing is
happening at the moment,
so we'll need to
add some gameplay.
So let's see how we are going
to add Androids to spawn
from somewhere in the island.
So I'm just going to disable
a few scenes on the left.
I'm going to create a
new game object-- let's
say, just an empty one-- and
I'm going to call it spawner,
because this is what's going
to spawn my Androids from.
So I'm doing it here on
the right at the Inspector.
Now I'm going to
create a new component,
and I'll just call it
spawner, because it's simple.
And then I can double click the
script here on the Inspector
to open it up in
MonoDevelop, or whatever
script you want to use.
I'm going to add a short timer.
I'll start it at, maybe,
one second, to begin with.
And let's add another
one for respawn time,
and set it up to five seconds.
Now, in the update,
what we should be doing
is decrease our
timer so we can get
new Androids into our scene.
So we'll use Time.deltaTime.
And then, if our timer is-- I
keep getting code completion
errors, here.
Oh.
Before I can spawn
Androids, I need
to have some locations
I can spawn them from.
So let's add a list
of transforms,
the spawn points, and
just initialize it here.
OK.
So now, if I look
into this scene,
you see that the
spawner script is going
to update in just a moment.
There we go.
So now, we have the timer
and the respawn timer,
and we'll be able to add some
points to let the script know
where to place Androids.
We also need to create
a path into the island.
So for that, what we can do,
just to make it simple again,
I'll create a new game object.
That will be a sphere.
And I can place it
somewhere inside the scene.
And this signifies a point
in space at the moment.
I'll just remove
the sphere collider,
so objects do not
collide with it later on.
And then I'll add a
waypoint script which
I have prepared beforehand.
All it does is have
a list of what's
the next point going to be.
And then I can duplicate this--
just Command-C, Command-V--
and I can go into
the first sphere
and drag and drop the
sphere from the hierarchy
into the next.
So I can have them
linked to each other.
And then, I will
have to keep doing it
until they cross over the
bridge and go to the castle.
But this is just a way to start.
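The waypoint script described here could look something like this minimal sketch (the names are mine, not necessarily those in the sample project):

```csharp
using System.Collections.Generic;
using UnityEngine;

// A point in space that knows which point(s) an Android
// can move to next. Linking waypoints to each other in the
// Inspector builds the path across the bridges.
public class Waypoint : MonoBehaviour
{
    // Possible next waypoints; more than one lets the path branch.
    public List<Waypoint> next = new List<Waypoint>();

    // Pick a next waypoint at random, or null at the end of the path.
    public Waypoint GetNext()
    {
        if (next.Count == 0) return null;
        return next[Random.Range(0, next.Count)];
    }
}
```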
So now that we have some object,
we will go to our spawner,
and we will add the initial
point into the spawn points.
And now, if the timer
is less than zero,
then we'll just set the timer
back to the respawn time.
And we'll go to an Android pool,
which I'll go into in more detail
later, and create an Android
at a position--
a transform picked
from my list, using
the number of elements inside
the list as the random range.
There we go.
Have I missed a
bracket somewhere?
All good.
OK.
And now that we have the two
set up, we'll press Play again.
If we go over to the
scene view, we'll
see every few seconds-- every
five seconds, specifically--
we'll have an Android coming up.
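Putting the pieces together, the spawner narrated above can be sketched roughly like this. The pool call is a placeholder for the pooling script covered later in the talk, and all names are illustrative:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Placeholder for the memory pool covered later in the talk.
public static class AndroidPool
{
    public static void Create(Vector3 position)
    {
        Debug.Log("Spawn Android at " + position); // stand-in
    }
}

// Counts down a timer and, when it expires, asks the pool for
// a new Android at a randomly chosen spawn point.
public class Spawner : MonoBehaviour
{
    public float timer = 1f;        // first spawn after one second
    public float respawnTime = 5f;  // then every five seconds
    public List<Transform> spawnPoints = new List<Transform>();

    void Update()
    {
        timer -= Time.deltaTime;
        if (timer < 0f && spawnPoints.Count > 0)
        {
            timer = respawnTime;
            Transform point = spawnPoints[Random.Range(0, spawnPoints.Count)];
            AndroidPool.Create(point.position);
        }
    }
}
```

The spawn points list is filled by dragging the waypoint objects from the hierarchy onto the component in the Inspector, exactly as shown in the demo.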
Now, I'm not going to create
the entire path, here,
because I think that editing
should be done later.
It's not very interesting.
So I do have an enemies path prepared.
And here, if I go to Layers,
here at the top right,
and mark it visible to enable
it, we can see them in here.
And I just collapse it.
I'll expand this one.
I'll go to the spawner, and
I'll drag and drop my waypoints
here from the left
and on to the list.
And now that I press
Play again, we'll
have Androids spawning
from various spawn points
here on the scene.
And they'll go over to the castle,
through the different paths.
OK.
So we have basic Androids
coming to the scene.
We can look at
them from the game.
I can also maximize.
Let's see.
You might be able to see it.
But we don't have any way
to interact with them.
We need some kind of a way
to shoot them off our island,
so they don't get to the castle.
So let's look into our ground.
Let's look at how we are
going to shoot with the cannon
onto the ground.
So selecting the terrain,
I'll go to Add Component,
and I'll add a target behavior.
And we'll just make
a very quick script,
so we can shoot
with the cannon.
I don't actually need
the start or update here.
So what I'll do is make
a new function, on click.
And then I'll take the
BaseEventData from Unity,
and I'll cast it into
a PointerEventData.
Maybe I'll actually name this.
But before that, I need
to make sure I actually
have a cannon inside the scene.
So I did add one beforehand.
So I'll just make sure
it actually exists.
And now, we'll make
the cannon shoot
from its spot in the scene
onto the intersection point.
What am I missing here?
Wonder if there's brackets.
Oh.
I was in the wrong script.
Sorry about that.
And all I'm going to do
is use the transformation
from the raycast, and
I'll get to more detail
on that in a moment.
Let's see.
What position?
That sounds about right.
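The target script being typed here could look roughly like this. The cannon class, its `FireAt` method, and the field names are my stand-ins for what the sample project actually does:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Minimal stand-in for the cannon script (the real one animates
// the cannon and launches a pooled cannonball).
public class CannonBehavior : MonoBehaviour
{
    public void FireAt(Vector3 position)
    {
        Debug.Log("Firing at " + position);
    }
}

// Sits on the terrain. Receives a click event, casts it to
// PointerEventData to read the raycast hit, and tells the
// cannon to fire at that point.
public class TargetBehavior : MonoBehaviour
{
    public CannonBehavior cannon; // assigned in the Inspector

    public void OnClick(BaseEventData data)
    {
        var pointerData = data as PointerEventData;
        if (pointerData == null || cannon == null) return;

        // Where the gaze raycast intersected the terrain.
        Vector3 target = pointerData.pointerCurrentRaycast.worldPosition;
        cannon.FireAt(target);
    }
}
```

The cast to `PointerEventData` is what gives access to `pointerCurrentRaycast`, which carries the world-space position the gaze hit.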
OK, but even now, if I
press onto the terrain,
nothing will happen.
We need to add a few
more interactions, here.
The first thing we
need to do is add a way
to receive events from Unity
and from our Google VR SDK,
so we know when an
object had actually
been clicked in any way.
So I'm going to add an
event system from Unity.
The event system,
all it does is relay
the events between objects.
So it comes with a
stand-alone input module,
which I'm going to
remove, because we are not
going to use it.
And I'll add a
gaze input module.
This is basically going to take
the orientation off our camera
and transform it
into the events.
At the moment, there is no
such module for the controller,
but it will be coming soon.
So the next thing
I need to do is
go to my camera
inside Google VR Main,
go to the main camera
template, and we'll
add a physics raycaster.
Now, on its own,
the event system
will allow you to interact
with UI canvas elements.
But if you want to interact
with objects inside the scene,
you need to add a
physics raycaster for it
to interact with it.
So I'm going to
set the event mask.
The event mask basically says
which layers in the scene
are interactable.
And I will just set
it up to nothing,
and then I'll set it
again to the grid.
The grid is the name I
chose for the terrain.
You can see it here up on layer.
It is selected as
grid, and that means
that when the gaze input module
looks out from the camera,
it's going to find exactly
which object it is looking
at, but only from the layers
that you have enabled.
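The event mask is just a Unity `LayerMask`, so the same Inspector step could be done from code. This is a hypothetical sketch; the "Grid" layer name is the one used in this scene:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Configures the PhysicsRaycaster's event mask from code: only
// objects on the listed layers can be hit by the gaze raycast.
// Assumes a layer named "Grid" exists, as in this scene.
public class RaycasterSetup : MonoBehaviour
{
    void Start()
    {
        var raycaster = GetComponent<PhysicsRaycaster>();
        // Equivalent of setting the mask to Nothing, then ticking Grid.
        raycaster.eventMask = LayerMask.GetMask("Grid");
    }
}
```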
So the next thing we
need to do-- OK, now we
have everything kind of
working and interactable,
but we never actually call
our script from anywhere.
So going back to our terrain,
I'll just click on it here
in the editor, what we need
to do-- and this is our water,
so click again on the terrain--
is create an event trigger.
What the event trigger
does is basically
say that this object is
listening for the following
events.
And I'll add a
pointer click event.
And then, I'll select
the same object.
I can just drag and
drop it from here,
from the scene on the left,
into the missing component.
And then, it shows
me all the components
that are on this
object, and I
can select the one I want.
So I have made a
target behavior.
So I'll just select on click.
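The same wiring the Inspector does here can also be expressed in code, which makes the mechanism clearer. This is a sketch, not the sample project's code; the "OnClick" message name matches the handler made earlier:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Adds an EventTrigger at runtime and registers a PointerClick
// entry that forwards the event to an "OnClick" handler on the
// same object, mirroring the Inspector setup in the demo.
public class TriggerSetup : MonoBehaviour
{
    void Start()
    {
        var trigger = gameObject.AddComponent<EventTrigger>();
        var entry = new EventTrigger.Entry
        {
            eventID = EventTriggerType.PointerClick
        };
        entry.callback.AddListener(data =>
            SendMessage("OnClick", data, SendMessageOptions.DontRequireReceiver));
        trigger.triggers.Add(entry);
    }
}
```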
And when I press Play the
next time, and I look around,
I'll be able to
activate the cannon,
and we'll be able to
fire onto the Android.
However, it is actually really
difficult to aim this way.
I have no idea where
I'm pointing at.
I'm just doing this
at random, almost.
So what I'm going to
do is add a reticle.
So I'll search here
in the project assets,
and I'll find the
Google VR reticle.
And what I'm going to do
is drag it down onto head,
and I'll make sure that
its position is reset here
in the Inspector.
It already is, so it's good.
And the next time
I start my scene,
you see there is a little
point in the middle
of the screen, which is telling
me where I'm looking at.
Now, if it looks at an object
that is marked as interactable,
it will grow to signify
that the object can
be interacted with in some way,
and I can shoot there.
However, once again,
we have gameplay.
We have some
minimal interaction.
But the game feels
somewhat empty.
I mentioned before in
the presentation that,
to make a really good
virtual reality experience,
you want to have sound.
So let's look into how we're
going to add some sound in.
So I want to have an epic
sound for the cannon, whenever
I shoot at the Androids.
So I'm going to look into
my cannonball object.
The cannonball object
is found every time
where I select to shoot
somewhere into the scene.
The first thing I will do is,
I will add an audio source.
If you have listened to the
binaural audio presentation
earlier, you might
be familiar with it.
But this is our Google
VR implementation
of 3D binaural audio, and you
can use it just like this.
Now, it's on the object.
I'm going to select a specific
sound, the cannonball fire
sound.
I have also prepared the
cannonball behavior script,
which is what controls
the flight of the cannon,
and then the
explosion at the end.
And it has another element
here, for the impact audio.
So I'll just go in here, and
I'll select the cannon impact
here on the left.
And if we look quickly on
the cannonball behavior,
we will see this on impact
script here at the bottom.
And what it does is select
the variable which I have just
filled in, and it will play
it once our cannonball is just
about to hit the ground,
inside the [INAUDIBLE].
It's a pretty short
script, so if you're
following on the
YouTube video later,
you can easily find it all.
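The on-impact logic described here boils down to something like the following sketch. GvrAudioSource largely mirrors Unity's AudioSource API, but treat the exact calls and names as assumptions rather than the sample's actual code:

```csharp
using UnityEngine;

// Sketch of the cannonball's on-impact audio: just before the
// cannonball hits the ground, play the impact clip assigned in
// the Inspector through the spatialized GVR source.
public class CannonballImpactAudio : MonoBehaviour
{
    public GvrAudioSource source; // the spatialized source on the cannonball
    public AudioClip impactClip;  // filled in via the Inspector

    // Called by the cannonball behavior just before impact.
    public void OnImpact()
    {
        if (source == null || impactClip == null) return;
        source.clip = impactClip;
        source.Play();
    }
}
```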
Now here, you have additional
settings for the sounds.
In this case, I want it
to be an epic explosion,
so I'm just going to
put the gain on the max,
and hope it will all work
well on the speakers, here.
But you also have
settings for directivity,
which is how the sound
is going to distribute
and spread out in space.
Here, you'll see the
circle showing how
the sound waves will spread.
If I use the alpha, you see
it become larger on the front,
and then it will spread
more towards the back.
Whereas if I use the sharpness,
it will make it sharper,
as you would expect,
giving it more of a direction.
For the cannonball, I'll
use a sound that spreads
everywhere at the same time.
You can also enable
occlusion for the sound
to bounce off elements
from the scene,
but I'm going to skip
that at the moment.
Now, I kind of want to
press play and just see
how the sound is working,
but it will not work.
There is one more
step we need to do.
If I go Edit here, on the
top, and to Project Settings,
and go to Audio, we need to set
the spatializer plugin here
on the right from None to the
Google VR audio spatializer.
And once again, all
of these instructions
will be in the
GitHub readme file.
Now, the next time I press
Play, when I fire at Androids--
and I'll maximize this
again-- here's one.
It's coming.
There we go.
We'll have some 3D audio.
Now, if you are with
speakers on a device,
or even on the computer,
you will find out
that when you look in
different directions,
the stereo sound
will come from where
you would expect it to.
I also want to add some
sounds to the Androids,
but I'm going to cheat, here,
because I have a lot to cover.
So a lot of it is
prepared beforehand.
I'm just going to
look for the Android.
I'm going to search
here in the project.
I'll find the Android
prefab, and I'm
going to add another Google
VR audio source component.
Now, I'm not going to
set any sound in here.
I'm going to disable
the play on awake.
This is because I've
already set some sounds here
in different lists, and if
we go into the script itself,
you'll notice there are
multiple lists of audio clips.
And when we enable the Android,
it will play the charge sound.
And when it's going to be
exploded away by the cannon,
we will play an impact
sound, just as an example.
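The multiple clip lists mentioned here could be sketched like this. The field and method names are illustrative assumptions, and the GvrAudioSource calls assume it mirrors Unity's AudioSource API:

```csharp
using UnityEngine;

// Sketch of the Android sound lists: one clip array per event,
// with a random clip played when the Android spawns (charge)
// or when it gets hit by a cannonball (impact).
public class AndroidSounds : MonoBehaviour
{
    public GvrAudioSource source;
    public AudioClip[] chargeClips;
    public AudioClip[] impactClips;

    void OnEnable()      { PlayRandom(chargeClips); }
    public void OnHit()  { PlayRandom(impactClips); }

    void PlayRandom(AudioClip[] clips)
    {
        if (source == null || clips == null || clips.Length == 0) return;
        source.clip = clips[Random.Range(0, clips.Length)];
        source.Play();
    }
}
```

Picking a random clip per event keeps repeated spawns from sounding identical.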
Now, when I press Play, every
time an Android will spawn,
it will have some
kind of a charge.
And when I shoot
towards it and hit it,
it will have an uh-oh
sound and fly away.
You'll probably hear
it better on a device.
You can get the APK
from the GitHub.
It's online right now.
So this is the Google
VR interaction,
the stereoscopic
rendering, and the audio.
A few quick more words
about the interaction.
When you're pointing
at an element--
let's do it as an example.
I'm going to maximize
it-- the reticle
is quite smart, actually.
In VR, you really need to
be able to converge exactly
where you're looking at.
And the reticle tends to
take the depth of the element
that you are pointing at.
So if we are pointing
at the terrain here,
it's not only spreading out
to show that the terrain is
interactable, but it's
actually casting its position
onto the depth of
the terrain itself.
However, in this
case, I have not
set the trees to be occluders
inside the event mask,
as well as the tower, which
means that the reticle is
probably rendering in front of
them, but still being visible,
which can cause some
issues with convergence,
especially if we look down
here onto the terrain,
but also onto the tower.
So what you want to do in this
case is, go to your main camera
and add some more elements
into the event mask down here.
So for example, if
we add the default,
you will notice
that, whenever we
are looking on to a tower or a
tree, it will no longer grow,
and the depth will be cast
properly in the reticle.
Now, I have just
about 15 minutes.
And this is a very
basic game, but I really
want to talk about
performance, here.
When I was making this demo,
I had about two and a half weeks
to do it, and I was trying to
do some really fancy stuff.
However, doing work and working
on a game demo at the same time
actually turned out to
be pretty difficult.
And after the game
was finished, it maybe
ran at, like, 45 frames per
second, which is not great.
Now, it's running on 60 FPS on
the device at full resolution.
But this gave me even
more firsthand experience
to give you some
performance tips.
So let's look into them.
If we look into
the Google VR, we
will notice that we have
our VR mode enabled,
which is what we want
to do, but we also
have distortion correction.
If we press Play, and we
look into the distortion,
we can set it to none,
which means it's not
going to fix the distortion.
The distortion-- it's
been cautioned distortion
caused by the lenses,
when you're looking
through them onto the screen.
You want to use
distortion correction.
However, there are
several ways to do so.
If you have watched the
vertex distortion correction
presentation-- I'm not sure if
it was before mine or after--
but it mentions a
way to do correction
without post-processing.
The way we do it here
is we're doing it
in the post process,
which means we
will need to draw every pixel
again with a correction.
The vertex correction
happens when
you are drawing the objects
themselves onto the screen,
and it will distort them to fit
and look how they should look,
so you don't need to use
additional pixel rendering time
and waste more of
your GPU bandwidth,
because that tends to be one
of the more expensive things.
So if I'm not using the
vertex-based correction,
I'll use either Native or Unity.
In this case, I'm using Unity.
And you can play with the
stereo screen scale.
That changes the virtual
display that we are drawing to.
When we are drawing our scene,
our game or application,
we are actually drawing it
into an off-screen buffer,
which is a larger size
of our real screen.
We do it so that, when we
remove the distortion artifacts,
we don't lose any of our detail.
So we are fixing the
pixel error by drawing
on a higher resolution.
But sometimes,
depending on the device,
you'll find that the generated
resolution is too high,
and you might want to change
it from 1 to maybe 0.9 or 0.8.
I'm going to leave it
at 1 here, because it's
working quite well.
Now, the other thing you can
do is go into your Edit menu,
and go into Project, and we'll
just look into quality here.
So I like to make a quality
level-- I call it Google VR,
or GVR for short--
and I set it as
the default for Android.
Now, depending on
your scene, if you're
using a lot of lighting
or heavy shaders,
I'd suggest baking all your
lighting into a texture.
I'm not going to go
over how to do it now.
There's enough
information on it online.
But in that case, you can
lower your pixel count.
And depending on your
scene, you might even
be able to move to
vertex lighting,
but that's in a different
menu, so I'll go to it later.
If you're not using soft
particles, or any reflections,
you might want to disable this.
I do suggest keeping
multi-sampling at least at 2.
When you're looking into the
screen through the lenses,
and you don't have
multi-sampling on,
then you will see all the
[INAUDIBLE] and jagged edges.
And that's not a pretty
experience for your eyes.
So try to make sure that
your game or application
is performing well enough to
enable [INAUDIBLE] at least 2x,
because otherwise, it's just
not going to look as great,
unless you are rendering at
a much higher resolution.
Now, as I mentioned, if you
can, play with your shadows,
play with your light.
Try to bake your
shadows as well.
And then, if you can, you can
disable your real-time shadows
or set it to hard shadows only.
Another thing you can do is
change the shadow cascades
instead.
Shadow cascades are
multiple shadow maps
used to give different
quality to shadows
that are near the
camera versus further away,
because the shadows near
the camera need to be higher
quality-- the player can see
all of the artifacts happening there.
Now, another thing we can
do is go here into Player.
You have to set your default
orientation to landscape
left to work with the viewer.
Another thing you
might be able to do
to gain more performance-- and
sometimes quality-- over mobile
is, disable the 32-bit
display buffer.
The reason for it is that
some phones do not actually
have 32-bit support, and
they emulate it.
So on some Galaxy
phones, for example,
you're getting more shadow
artifacts without it, because
of the emulation layer.
And then, if you're
using only 16 bits,
you are getting less
bandwidth usage, which
means you're getting
more performance,
and less heat is being generated
on the hardware itself.
If we go to Other Settings,
as I mentioned earlier,
you can change to
vertex lighting,
or you can keep it
at forward, if you
have good enough performance,
depending on how you have
made your scene and your game.
I do very much
recommend enabling
multi-threaded
rendering, because that
is going to take a big
chunk of your CPU usage
to another thread, and
then you can do more,
and you will not be
as CPU-bound later,
heating up one core and
using all the bandwidth there.
You should use static
and dynamic batching.
What that does is
reduce your draw calls.
So if I go to the game, and I
press on the stats over here,
you can see that I
have about [? 16.6 ?]
draw calls inside this scene,
and I have 162 vertices.
If I press Play, I'm going
to have double that amount,
or just about that amount,
because Unity is going to be
able to optimize to some level.
The reason for it is because
we are drawing the scene twice.
So it's doing double
the work, and you
need to make sure
that you are not
having too many vertices in the
scene, or too many triangles.
Another thing that I
have noticed with Unity--
and you might be
familiar with it,
especially if you
have worked with C++--
is that it doesn't really
like instantiating objects.
That means a lot of
memory allocations.
And for that, I have made
a specific memory pool.
So if you look at the
memory pool in here,
you see it will have two
scripts, one for cannonballs
and one for the Androids.
When I press Play, as this
object is being initialized,
it's creating all these
unused cannonball elements
and the Androids, which
are slowly being used.
When I shoot a
cannonball-- let's say,
over here-- you see one
cannonball is being used here
at the top, and
in a few seconds,
it will be disabled and
ready to be used again.
This is to avoid heavy
memory allocations, which can
be very expensive at runtime.
So the way I make
a script like that
is by creating a
singleton, and then
having a couple of
static functions, one
to create an element,
and one to destroy it.
On the initialization
of the script,
it's going to make sure, if
there is no singleton instance
yet, to set itself as the one,
and then it will create however
many instances of the
object that I set it to.
In this case, I've
hard-coded the count.
You should put it in
a variable instead.
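A minimal version of that singleton pool could look like this. The class and method names are my own stand-ins for the sample's scripts:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the memory pool: a singleton that pre-instantiates a
// fixed number of cannonballs, with static functions that enable
// and disable them instead of allocating at runtime.
public class CannonballPool : MonoBehaviour
{
    static CannonballPool instance;

    public GameObject prefab;
    public int poolSize = 10; // exposed as a variable, not hard-coded

    readonly List<GameObject> pool = new List<GameObject>();

    void Awake()
    {
        if (instance == null) instance = this;
        for (int i = 0; i < poolSize; i++)
        {
            var obj = Instantiate(prefab);
            obj.SetActive(false); // created up front, unused for now
            pool.Add(obj);
        }
    }

    // Hand out the first inactive object, placed at the given position.
    public static GameObject Create(Vector3 position)
    {
        foreach (var obj in instance.pool)
        {
            if (!obj.activeSelf)
            {
                obj.transform.position = position;
                obj.SetActive(true);
                return obj;
            }
        }
        return null; // pool exhausted
    }

    // "Destroy" just returns the object to the pool.
    public static void Recycle(GameObject obj)
    {
        obj.SetActive(false);
    }
}
```

Because nothing is instantiated or destroyed after `Awake`, there are no per-shot allocations and no garbage-collection spikes during gameplay.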
OK, so I've managed to
finish a little bit ahead
of time, which never happened
when I tried to do it before.
[APPLAUSE]
[MUSIC PLAYING]
