MEGAN LINDSAY: Welcome.
Thanks for joining
us this afternoon.
My name is Megan Lindsay,
and I'm the product manager
for WebVR at Google.
Today, I'm here to talk
to you about WebVR.
I'll show you the opportunities
it opens up for web developers,
how this is going to benefit
the VR ecosystem as a whole,
and what others are
already doing with it.
Then, my colleague,
Brandon Jones,
will demonstrate how easy it is
to build a cross-device WebVR
app by doing it
right here on stage.
WebVR enables web
developers to build
fantastic cross-platform,
cross-device VR experiences.
Here's a quick overview of
what WebVR is all about,
and an introduction to
our recently released
WebVR experiment site.
[VIDEO PLAYBACK]
- VR should be
accessible to everyone
because it has the potential
to let everyone explore, play,
and create in amazing new ways.
But right now, VR is
pretty complicated.
To make awesome VR
stuff, developers
might have to learn
a new language,
and then spend a bunch more
time to make that stuff
work on multiple headsets.
And then when we want to play
with their awesome VR stuff,
we've got to have
the right headset.
VR should be easier so
developers can make something
quickly and share it
with everyone, no matter
what device they're on.
Kind of like how easy it is
to share stuff on the web,
but with VR.
Well, that's the
idea behind WebVR--
it's VR on the
web, for everyone.
Here's how it works--
say you're in a
browser like Chrome,
and you come across
a WebVR experience.
You just tap the link, put
on a headset, and boom,
you're in VR.
Developers can build
WebVR things the same way
they build web things--
with JavaScript.
And since it all
works in a browser,
it's easy to make it work
for all kinds of VR devices,
whether it's someone using
their phone, their computer,
or their entire room.
Developers are already
building and sharing
awesome stuff with WebVR.
We've started showcasing
their work on a site called
WebVR Experiments.
It gives you a glimpse into the
kind of stuff that's possible.
You can play simple games,
see the world in a new way,
explore interactive stories,
play with a friend, or lots
of friends.
Each experiment comes
with open source code
to help others make
new experiments,
and developers can
submit what they make.
All of this is an effort
to make VR more accessible,
so anyone can build and everyone
can play with awesome VR stuff.
So come and start playing
at webvrexperiments.com.
[END PLAYBACK]
MEGAN LINDSAY: I
want to tell you
why we at Google care about WebVR
and why we're investing in it.
As Clay Bavor said in
the keynote yesterday,
immersive computing is
going to change how we play,
work, live, and learn.
We're at the start of the
next computing revolution.
Many of you have seen the
technology adoption curve
before.
2017 is shaping up to be
a pivotal year for VR,
where it's moving beyond
innovators to early adopters.
This is a time of opportunity.
However, one of the
largest barriers
to even broader VR adoption
and more user engagement today
is content.
Content is absolutely
critical to the success
of any new ecosystem.
Giving users great, diverse, and
plentiful things to choose from
will keep them coming back.
I believe that the open
web is exactly what
virtual reality needs to
take it to the next level.
And WebVR is the first
step along that path.
WebVR opens up VR to
the largest developer
platform in the world--
you, web developers.
You can build for VR
for the first time.
The web's an open
ecosystem that we at Google
strongly believe in
and support, where
developers from around
the world work together
to innovate in a standardized,
interoperable way.
The web isn't controlled
by any one company,
and it's unique in
providing access
to content from any device
through any web browser.
There are no walled
gardens here.
What this means is that WebVR
simultaneously decreases
the barrier to entry and extends
the reach of your content.
Using WebVR, you can
start developing for VR
with gradual investments
by progressively enhancing
your existing websites.
You can light up your site with
VR when an immersive experience
adds something
special, from breaking
360 news on the ground to
exploring your next home.
With WebVR, you can
build your experience
just once to reach all VR
headsets and the billions
of mobile and desktop
users, giving you
access to the broadest
audience possible.
And you gain all the
benefits of the web
by making your content
searchable, linkable,
and low friction, with
no installation required.
Sharing is as easy as a link.
So the potential of VR
goes well beyond gaming.
What kinds of VR content just
make sense to do with WebVR?
Your imagination is
the limit, though I
believe the first
wave of WebVR content
will be the use cases
that are already
first and best on the web--
ephemeral content
found primarily
through search and social
media, short form media content,
and important but perhaps
less frequent tasks
where you may just not
want to keep an app around.
So let's take a look
at some of the things
that others are already
doing with WebVR.
Matterport has
created technology
that allows capturing real
world spaces in 3D to view them
virtually for industries
such as real estate, travel
and hospitality, and
architecture, engineering,
and construction.
Matterport customers like
Sotheby's, HomeVisit,
and Mansion Global have scanned
nearly half a million places,
and make these available to
their users with Matterport's
web player.
The web player lets
the user navigate
through the 3D virtual space
on their phone or desktop.
Before WebVR,
users were required
to download a separate app to
view the full VR experience.
This created a lot of
friction and resulted
in significant user drop off.
But now with WebVR, this
friction has been eliminated.
The user can step right into
the home they are looking
at directly from the website.
And when the user
exits VR, they're
still on the original website,
rather than in a separate app.
Matterport supports
WebVR for Daydream View,
and Cardboard support
is coming soon.
With over a million
scenes created and posted
by their community, Sketchfab
is the world's largest platform
to publish, share, and
discover 3D content online.
With WebVR, any Sketchfab model
can be viewed and manipulated
in your VR headset.
Content creators
or enthusiasts can
use Sketchfab to share or embed
models anywhere on the web,
enabling them to be explored
either on a 2D screen or in VR.
Powster creates
custom experiences
for movies and music,
helping with discovery
of major entertainment products.
With the rise of
virtual reality,
Powster used WebVR for the
broadest audience reach
and created experiences focused
on movie websites, showtimes,
and ticketing.
Here's a look at what
Powster has done recently.
[VIDEO PLAYBACK]
[MUSIC PLAYING]
[END PLAYBACK]
MEGAN LINDSAY: So movie studios
saw over five times more movie
theaters selected inside VR
than on the regular websites.
Audiences viewed the 3D trailer
and the 3D gallery images,
and they converted to
seeing the movies in 3D
rather than in 2D in
the actual theaters.
And finally, from
filmmaker Christopher Nolan
comes the epic action
thriller Dunkirk,
opening worldwide this July.
Just as the film offers a
first person perspective,
Warner Bros. wanted
technology to offer
a deep, immersive perspective on
just what happened at Dunkirk.
They brought this
vision to life as one
of the first collaborative
VR experiences on the web,
showing the depth of
soldiers' camaraderie
through a cooperative
experience between two people.
Working together to
survive the evacuation,
each player will become both
the rescuer and the victim.
Here's a taste of what's coming
for "Experience Dunkirk."
[VIDEO PLAYBACK]
[MUSIC PLAYING]
[END PLAYBACK]
MEGAN LINDSAY:
"Experience Dunkirk"
will be releasing in June
and will be open to everyone,
supporting 2D devices
as well as VR headsets.
A teaser of the experience is
live today, so check it out.
Other areas we're seeing
particular interest in WebVR
include news, e-commerce,
interactive VR films,
education, art, and
custom business solutions.
We are eager to see what web
developers use WebVR for next.
So WebVR content is arriving,
and WebVR browser support
is already here.
In Chrome for Android,
we've released WebVR support
as an origin trial for Daydream
View and Google Cardboard.
Our friends at Mozilla,
Microsoft, Oculus, and Samsung
have all released or announced
coming support for WebVR,
bringing it to Samsung Gear VR,
Windows Mixed Reality, Oculus
Rift, and HTC Vive.
In Chrome, we're continuing to
improve and extend
our WebVR support.
In our latest release
of Chrome for Android,
currently in beta,
we've significantly
improved performance, making
it more consistent and stable
overall, and making
it easier to reach
target framerates by
adjusting rendering settings.
We've also released WebVR
support for Chrome Custom Tabs,
enabling you to enhance
your native Android app
with your WebVR content, too.
Looking forward, we have
support for desktop headsets
in development.
And we're bringing
great WebVR content
right to the
Daydream home screen.
Stay tuned for
more on this soon.
[VIDEO PLAYBACK]
[END PLAYBACK]
MEGAN LINDSAY: We've talked
about progressive enhancement
of VR content and how this is
a super power unique to WebVR.
Let's dig a little deeper
into what that actually means
and how others have solved it.
Weather.com recently released
an interactive WebVR experience
called "The Birth of a Tornado."
They applied progressive
enhancement and responsive
design principles to
ensure their experience can
be used on any device by
optimizing the interaction
model for the device being used.
On desktop, you drag with your
mouse to change the view point
and click to interact.
On tablet, you
change the view point
by dragging with your finger
and tapping to interact.
On mobile, the
phone's accelerometer
is used to provide a Magic
Window into the VR experience.
For Google Cardboard,
head movement is used
to gaze and target, and the
button is tapped to select.
And Daydream View uses the
controller for interaction.
"Birth of a Tornado" also
works with Samsung Gear VR
and the HTC Vive.
The model used in
"Birth of a Tornado"
can be used with many
WebVR experiences,
and we have a library
to help make this easy
that Brandon will
show you a bit later.
This is just one way to support
cross-device experiences.
Another example
is Dance Tonight.
Some of you may have experienced
the Dance Tonight project
at I/O yesterday evening,
also built in WebVR.
Dance Tonight is an
ever-changing VR experience
made by LCD Soundsystem
and their fans.
It's made entirely
from VR motion capture
recordings of fans dancing
to a new song by the band.
Another special thing
about the project
is that it works across
devices, while playing
to their individual strengths.
On desktop and mobile, you
get to be in the audience.
On Daydream, you're on stage.
And in room scale,
you're a performer.
If you didn't catch
this in person,
it'll be available
online this summer.
Your input choices
and supported devices
may differ for
your WebVR project,
though we recommend
starting with the goal
of universal access
as a best practice,
and just see how far you can go.
While WebVR content is
still best experienced
immersively in a VR
headset, most people
have still never tried
immersive VR at all.
WebVR content will be
their very first hint
of what they're missing.
The great WebVR
content that you create
will be the reason a new
user decides to pick up
their very first VR headset.
I hope you're as excited about
WebVR's potential as I am.
Now I'd like to
introduce Brandon Jones.
Brandon started WebVR as his
20% project several years ago,
and coauthored the spec.
Today he's going to build
a cross-device WebVR
app for you live on stage.
Please welcome Brandon.
[APPLAUSE]
BRANDON JONES: Thank you, Megan.
So Megan talked
about the principles
of progressive enhancement.
That is, making
pages that can be
used on the desktop
and mobile devices,
as well as across
multiple VR devices.
That can seem very intimidating,
but the right tools
can make it fairly simple.
It all starts with creating
some great WebGL content.
WebGL is an API for rendering
3D graphics to the browser
and is supported across
all platforms today.
There are many great
WebGL tools and frameworks
out there to help you
bring your ideas to life.
From there, turning your WebGL
page into an immersive WebVR
experience can be as easy as
adding a few lines of code.
To show you what we mean, we're
going to build a quick WebVR
experience on stage today.
The app that we're
going to be building
will be a 360-degree
photo viewer,
which is a great fit for WebVR.
These types of photos are easy
to create, with many cameras
available that capture
them, and they provide a fun
experience that you don't
get from traditional photos.
Best of all, they can
easily be viewed in 2D
in the browser with a click and
drag or Magic Window controls,
while VR can optionally be used
to provide an enhanced viewing
experience for users
with the right hardware.
360 degree photos also
represent a class of content
that's difficult to get users
to install a native app for.
Because of the
content's simplicity,
the overhead of an
install is probably
enough to discourage
most users, given
that they likely only expect
to spend a few seconds looking
at each image.
It's very likely that
most users would never
get past the app store link.
Ideally, they can fluidly
step into and out of VR
with very little overhead, view
the images quickly and securely,
and then move on without
having to uninstall anything
afterwards.
This sort of
ephemeral experience
is what WebVR excels at.
So now we're going to
switch over to the code
and actually build
the experience.
Now, we're starting out here--
can we get the
laptop up on screen?
Thank you.
We're starting out here
with some boilerplate code.
We're using Three.js.
It's not the only
framework that you
could use for creating
WebVR and WebGL experiences,
but it is a fairly common one.
There are also frameworks
available that are built
expressly for WebVR content,
such as A-Frame or React VR.
But because Three.js already
has fairly wide developer
acceptance, we're using it
as our example today.
Now, the boilerplate that
we're starting here with
is fairly simple, so I'm not
going to cover it in detail.
This is the type of
thing that you would
see in a Three.js Tutorial 0.
And what it produces for
us is a black screen.
Well that's OK, that's
a great starting point.
Thank you, that was awesome.
So because we're creating
a 360 photo gallery viewer,
we need images.
Now, normally these would
come from, of course,
a database, a CMS of some sort.
But we're just going
to hard code them
in for the sake of example.
And then we need a
way to view them.
Now, because it's a
360 image, the method
that we're going to use is
to create a gigantic sphere.
In this case, about
500 meters in radius,
and invert it-- that's what
the viewer scale here is doing:
inverting on the
x-axis so that all the
faces point inward.
And then we keep the
camera for the scene
at the very center of that.
That way, when you look
around, you're seeing
this sphere all around you
at what's practically
an infinite distance.
Next, we need to put
our image on the inside
of that sphere using the
Three.js MeshBasicMaterial.
And this loads up what's
known as a texture in WebGL.
And we'll apply it to the
inside of that sphere.
And it just so happens that
Three.js' default coordinate
systems work out really well for
equirectangular images, which
is the default that most
360 cameras spit out.
Finally, we need to combine
the geometry and the material
together into a
Three.js mesh, which
is the basic primitive
that it uses to render,
and add it to the scene.
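Putting those three steps together, the viewer sphere might look something like this. This is a hedged sketch, not the demo's exact code: it assumes three.js is loaded and a `scene` already exists, and the image path is a placeholder.

```javascript
// Sketch (assumes three.js is loaded and `scene` exists; the image
// path is a placeholder, not from the original demo).

// A huge sphere, inverted on the x-axis so its faces point inward
// toward the camera sitting at its center.
const viewerGeometry = new THREE.SphereGeometry(500, 64, 32);
viewerGeometry.scale(-1, 1, 1); // flip the faces inward

// Load the equirectangular photo as a texture and put it on a
// basic (unlit) material.
const texture = new THREE.TextureLoader().load('photos/example-360.jpg');
const viewerMaterial = new THREE.MeshBasicMaterial({ map: texture });

// Combine geometry and material into a mesh and add it to the scene.
const viewerSphere = new THREE.Mesh(viewerGeometry, viewerMaterial);
scene.add(viewerSphere);
```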
So once we've done that, we
can switch back to the browser
and see that we
now have a photo.
Unfortunately, there's
no interaction,
so we can't tell that
it's a 360 photo.
We'll fix that by
pulling in what's
known as the WebVR polyfill.
The WebVR polyfill is a
JavaScript implementation
of the WebVR API that's targeted
primarily at mobile devices.
It uses their
accelerometers to provide
basic head tracking used for
a Cardboard style experience.
It also happens to provide us
with a basic emulated click
and drag mode on
desktop that we can
use to get basic functionality
on this desktop computer.
Now, in order to
make our application
responsive to the
WebVR polyfill,
we have to add a Three.js
extension called VRControls.
We'll attach this to the
camera, and then this
makes it so that any head motion
that happens on your headset
is automatically applied
to the camera itself.
In order to make sure that it
keeps updating with the head
motion, we also need to
add an update function
to the animation loop.
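In code, that wiring is only a few lines. A sketch, assuming the webvr-polyfill script and three.js's VRControls extension are loaded, and that `camera`, `scene`, and `renderer` already exist:

```javascript
// Sketch (assumes webvr-polyfill and three.js's VRControls extension
// are loaded, plus `camera`, `scene`, and `renderer` from earlier).

// VRControls copies the headset pose (or the polyfill's emulated
// click-and-drag / accelerometer pose) onto the camera.
const controls = new THREE.VRControls(camera);

function animate() {
  requestAnimationFrame(animate);
  controls.update(); // apply the latest head pose to the camera
  renderer.render(scene, camera);
}
animate();
```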
Once those two elements are in
place, we now have basic click
and drag functionality.
And we can see that we actually
now have a complete photo
viewer for a single photo.
But that's not
terribly interesting.
We want a gallery.
So the next thing
that we're going to do
is provide a 2D version of the
thumbnail gallery that allows
us to switch between images.
We'll loop through
the image gallery
that we had loaded
up previously,
load a texture for
each, and then pass them
to this addToGallery2D function
that we're about to define.
Here, we're going
to use a little bit
of basic HTML manipulation
to create a container div,
add a simple class to it--
I'll leave the CSS as an
exercise to the viewer--
append the image that's
associated with our texture
to that container element,
and then add a click handler.
When we click on
this thumbnail, we're
going to swap out the texture
on our gigantic viewer sphere
with the texture that's
associated with the thumbnail.
And this will give us
the basics of iterating
through the gallery.
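The `addToGallery2D` helper described above might be sketched like this. The class name is illustrative, and `viewerSphere` stands for the big inverted sphere created earlier:

```javascript
// Sketch (class name is illustrative; `viewerSphere` is the big
// inverted sphere from the earlier step).
function addToGallery2D(texture) {
  // Create a container div and give it a class for styling.
  const container = document.createElement('div');
  container.classList.add('thumbnail'); // CSS left as an exercise

  // TextureLoader keeps its source <img> on texture.image once loaded.
  container.appendChild(texture.image);

  // Clicking a thumbnail swaps the photo shown on the viewer sphere.
  container.addEventListener('click', () => {
    viewerSphere.material.map = texture;
    viewerSphere.material.needsUpdate = true;
  });

  document.body.appendChild(container);
}
```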
Now-- you can see that here--
I should note that this is
actually a terrible practice,
because normally
you would want to use
smaller images for the thumbnails
so as not to impact loading time.
But because we're doing
everything locally
here, and for the sake of
time, I'm skipping over that.
But you can see here that, as
we click on each of the images,
we can cycle through the
various items in the gallery.
And they're all viewable.
At this point, the
2D side is done.
We've done everything
that we need to work
both on desktop and mobile.
But we're here for WebVR,
so let's figure out
how we allow people to
dive into VR from here.
The next thing that
we're going to pull in
is a utility called WebVR UI.
This is a library created
by the Google Creative Lab
that provides a
button that advertises
WebVR support to your users.
It will also communicate
to them if they
don't have WebVR support.
In order to add that
to our application,
we need to go in, create
an instance of the WebVR UI
button, append it to
the DOM, and then,
in this case, we're
going to ask it
what VR device it's
associated with
and cache that
off for later use.
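Those steps map onto the webvr-ui library roughly like this (a sketch based on the library's API as published around this time; option values are illustrative):

```javascript
// Sketch using the webvr-ui library (API as of its 2017 releases;
// assumes `renderer` already exists).
const enterVR = new webvrui.EnterVRButton(renderer.domElement, {});

// Append the button to the DOM so the user can see it.
document.body.appendChild(enterVR.domElement);

// Ask which VRDisplay the button is bound to and cache it for later,
// e.g. for the VR-specific requestAnimationFrame.
let vrDisplay = null;
enterVR.getVRDisplay().then((display) => {
  vrDisplay = display;
});
```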
If we switch over
here now, you can
see that we now have the button
that normally would tell us
that we can go into VR.
But because we're
on the desktop site,
we can't actually go in yet.
That is, we don't have
the hardware connected,
so we can't go into VR here.
We would be able to on mobile,
and I'll switch over there
in just a second.
Now, even if we could
go in because we
have the correct hardware,
we haven't actually wired up
any of the VR rendering yet.
We'll do that with
another Three.js
utility called VREffect.
This makes it so that the
content that would normally
go through the renderer
and show up on screen
will actually be rendered
twice for a stereo view, using
the correct parameters
that it queries
from the WebVR API.
We also need to update
the animation loop
to make sure that we can
properly handle when we're
in VR mode versus non-VR mode.
We do this by asking
the enterVR UI
if the user has clicked the
button and if it's presenting,
and if so, we'll render the
scene using the VR effect.
Otherwise, we'll render using
the standard Three.js renderer.
And then the last thing
that we have to do
is, make sure that our standard
requestAnimationFrame is
actually using a VR-specific
variant if it's available.
This makes sure that if
we're on a desktop device
where the VR headset runs
at a higher framerate
than your average monitor--
like 75 or 90 Hertz--
we're running at
the same framerate.
Otherwise, your
user will experience
a lot of stuttering in VR
and come away possibly sick.
So we will update that.
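Put together, the VR rendering path might look like this. A hedged sketch: `VREffect` is the three.js extension mentioned above, while `enterVR` and `vrDisplay` stand for the webvr-ui button instance and cached VRDisplay from the earlier step:

```javascript
// Sketch (assumes three.js's VREffect extension, plus `renderer`,
// `scene`, `camera`, `controls`, `enterVR`, and `vrDisplay` from
// the earlier steps).
const effect = new THREE.VREffect(renderer);

function onAnimationFrame() {
  // Use the headset's requestAnimationFrame when a display is
  // available, so we match its refresh rate (often 75 or 90 Hz)
  // instead of the monitor's, avoiding stutter in VR.
  const raf = vrDisplay
    ? vrDisplay.requestAnimationFrame.bind(vrDisplay)
    : window.requestAnimationFrame.bind(window);
  raf(onAnimationFrame);

  controls.update();
  if (enterVR.isPresenting()) {
    effect.render(scene, camera);   // stereo pair for the headset
  } else {
    renderer.render(scene, camera); // ordinary mono rendering
  }
}
onAnimationFrame();
```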
And at this point, we'll
need to switch over
to our Android device to see
the rest of the experience.
All right, great.
So you can see, on Android we
now have that nice Magic Window
interaction mode where we're
able to spin around and see
the 360 photo without
going into VR at all.
This is great if you just
want to showcase, one,
that there's 360 content here--
because the user's
natural hand motion
will give them a hint
that there's more to see.
And two, just gives people
a preview if they're, say,
sitting on a bus and
maybe don't necessarily
want to blindfold themselves.
However, once we hit
the Enter VR button,
we can now switch into VR mode.
And at the moment,
we're configured
to use a Cardboard device.
And you can see that we would
also get the nice stereo view.
Now, these images
aren't stereo, but you
could render a stereo
view of the scene
and it would work correctly
in a Cardboard device.
So we've now created a basic
WebVR-enabled 360 image viewer.
But once again, in
the VR side, we've
only done it for a single image.
And that's not great, especially
when you're using mobile VR.
You don't want to force the user
to put the phone into a headset
and take it out
repeatedly in order
to navigate between different
elements in the scene.
So ideally, we'd like to take
the 2D gallery that we've
created here, and
pull it into 3D
to allow the users to
select between the different
thumbnails there.
Now, this gets a little bit
more complicated than the DOM
elements because we
are dealing with 3D,
so a lot more math is
going to be involved.
But the basics are
still pretty simple.
We're going to be using
spheres, once again, just
like the larger
viewer to represent
the individual thumbnails.
They're just going to be
much, much smaller this time.
And then down in-- when we're
looping through our gallery,
we're just going to add
items to the 3D gallery
as well as the 2D.
And that looks like
this, where we now
start to get into a lot
of math, because it is 3D.
Trig is kind of
par for the course.
But skipping over
the exact details
of what we're doing
here, once again,
we're creating the texture
to go along with the image.
We're associating it with
each of our thumbnail spheres.
And then the positioning
code here just puts them
in a semi-circle around
the user's waist--
somewhere that's kind of
nonobtrusive, but easy
to reach.
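The positioning itself is plain trigonometry. Here's a self-contained sketch of one way to compute a waist-height semicircle; the radius and height defaults are illustrative, not the demo's exact values:

```javascript
// Compute positions for `count` thumbnails arranged in a semicircle
// in front of the user, at roughly waist height. The radius and
// height defaults are illustrative, not the demo's exact numbers.
function semicirclePositions(count, radius = 1.2, height = -0.5) {
  const positions = [];
  for (let i = 0; i < count; i++) {
    // Spread the items evenly across 180 degrees, centered
    // straight ahead of the user.
    const angle = Math.PI * (i + 0.5) / count;
    positions.push({
      x: Math.cos(angle) * radius,
      y: height,
      z: -Math.sin(angle) * radius, // -z is "forward" in three.js
    });
  }
  return positions;
}
```

Each position can then be copied onto a thumbnail sphere's `position` before adding it to the scene.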
So if we switch back over to
the Android, we should now see,
yeah, we have this
nice semi-circle
of the same
thumbnails once again.
Now, this is cool, but that's
probably not the experience
that you want to leave your
users with most of the time
because we're doubling
up on the thumbnails
in the non-VR version.
So over in the code, we'll
just add one more line
to say if we're not in VR mode,
let's just hide the gallery.
Makes things a
little bit cleaner.
So next up, now, we've
got the thumbnails,
but we don't have a way
to interact with them.
And we're going to
add that by using
a library called Ray Input.
Ray Input is the
library that was
used by the Weather.com
example that Megan
was talking about earlier.
And what it does is, provide
a single, unified model
for Cardboard, Daydream, or
higher-end desktop experiences
with six-DOF controllers.
In all cases, it
gives you a cursor
and a ray that are based
off of whatever the user's
capabilities are, and uses
that to cast into the scene,
find an object, and
allow you to click on it.
So to start off with--
whoops, went a little too far.
To start off with, we have
to instantiate the Ray Input,
providing it with
the camera so that it
knows where we're looking.
We set the size, that's just
a little bit of bookkeeping.
And then add the meshes
that are associated
with that into the scene.
This is so that we can get the
cursors that are associated
with it and the ray itself.
Like the VR controls,
we also have
to update this every
animation frame
to make sure that it stays
in sync with the controller
movement, in this case.
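The setup described here might look like this with the ray-input library (a sketch based on the library's published API; assumes `camera`, `scene`, and `renderer` from earlier):

```javascript
// Sketch using the ray-input library (API as published around 2017;
// assumes `camera`, `scene`, and `renderer` from earlier steps).
const rayInput = new RayInput(camera);

// A little bookkeeping so the ray knows the canvas dimensions.
rayInput.setSize(renderer.getSize());

// Add the library's cursor and ray meshes to the scene.
scene.add(rayInput.getMesh());

// Keep the ray in sync with head/controller movement every frame.
function update() {
  requestAnimationFrame(update);
  rayInput.update();
}
update();
```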
We also want to
make sure that we
know when we're
actually selecting
each of the thumbnails.
So we're going to modify their
opacity whenever the cursor is
over the top of them.
We'll start them out
with a lower opacity--
one moment.
And then we'll have
some event handling here
that makes their opacity higher
as the cursor hovers over them.
One bit that I
skipped here, we do
need to loop through all of
the thumbnails in our gallery
and let Ray Input know
that they are selectable,
otherwise it will also
try to select the larger
sphere in the back, and
that doesn't do us any good.
And then finally, the
last piece is that we say,
when we have clicked on
whatever the primary input is
for our control
mechanism, we're going
to do the same thing that we
did for the 2D gallery, which
is swap out the texture for
that thumbnail for the larger
sphere.
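Registering the thumbnails and handling hover and selection might be sketched like this. The event names follow the ray-input library, while `thumbnails` (an array of thumbnail meshes) and `viewerSphere` are assumed from the earlier steps; stashing each texture on `userData` is an illustrative choice, not the demo's exact code:

```javascript
// Sketch (event names follow the ray-input library; `thumbnails`
// and `viewerSphere` are assumed from earlier; the userData.texture
// convention is illustrative).
thumbnails.forEach((mesh) => {
  mesh.material.transparent = true;
  mesh.material.opacity = 0.5; // dimmed until hovered
  rayInput.add(mesh);          // register as selectable
});

// Raise opacity while the cursor hovers over a thumbnail...
rayInput.on('rayover', (mesh) => { mesh.material.opacity = 1.0; });
rayInput.on('rayout',  (mesh) => { mesh.material.opacity = 0.5; });

// ...and on select, swap the viewer sphere's texture, just like
// the 2D gallery's click handler did.
rayInput.on('raydown', (mesh) => {
  if (mesh && mesh.userData.texture) {
    viewerSphere.material.map = mesh.userData.texture;
    viewerSphere.material.needsUpdate = true;
  }
});
```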
Now, let's see how
that looks on mobile.
Well, let's save it
first, but then we'll
see how it looks on mobile.
OK, so you can see, because
I skipped over this,
we no longer have the
image spheres in 2D.
But if we jump
into the VR mode--
and there we go--
we now have a nice
cursor that can
swivel around and select
the different spheres based
on our gaze.
This is, once again,
a Cardboard mode.
And if we click on it, it
switches the thumbnails.
So now we have a fully
functioning gallery
that the user does not have
to leave VR for in order
to switch through images.
Now, to demonstrate that
this works correctly
with more complex input
mechanisms as well,
we're going to come out here--
well, that's not what I wanted.
Sync this up with a
Daydream device, and then--
a normal part of the
Daydream entry flow
is that we have to sync
up the controller, which
takes just a moment.
And then we should be back
into our same experience
with-- now you can see a basic
ray-based selection cursor that
does not depend on the
movement of my head,
but can still be used
to do basic selection
from the gallery.
[APPLAUSE]
Thank you.
So that's it.
In about 150 lines
of code, we've
created an experience that works
on desktop, mobile, and VR--
both Cardboard VR and
Daydream VR-- and if we
were to put it on some of
the larger desktop systems,
it would even work with
an Oculus Rift or a Vive.
So let's switch
back to the slides.
OK, so to summarize some
of the recommendations
that we covered during the
development: at this early
stage, we should be
focusing on apps that
can be used in 2D, with VR
as an enhancement, while the
VR ecosystem is still growing.
We should also
strive to allow users
to stay in VR for
as long as possible,
as frequently switching
between the 2D
and the VR modes can
get a little tiring.
Finally, there's a variety of
input methods across all VR
devices.
And using a library
like Ray Input
helps normalize that into a
single interaction model that's
common between all modes.
So now I'm going to
turn it back over
to Megan, who's going to
tell you a little bit more
about the future
of Chrome in VR.
[APPLAUSE]
MEGAN LINDSAY:
Thank you, Brandon.
Everything up to
this point has been
about bringing VR to the web.
Now, I want to talk about
bringing the web to VR.
Today, when you
encounter a WebVR link,
you drop your phone in
the Daydream viewer,
and then when you're done,
you take the headset back off.
Soon you won't have
to take it back off
to continue your browsing.
As we announced in the
Daydream keynote this morning,
we're bringing the full Chrome
browser and the entire web
to VR for Daydream View first.
You can use the
Daydream controller
to navigate regular web
pages and follow links.
And for WebVR experiences,
you get transported
into fully immersive worlds.
You'll be able to watch
videos in a large-screen,
theater-like experience.
Plus, Chrome in
VR is the same app
that you use for browsing in 2D.
It shares all of your tabs,
bookmarks, and history.
You don't have to
log back in to websites in VR--
things just work.
VR browsing is coming to Chrome
for Android later this year.
So what's next?
Take a look at our
WebVR developer
portal with some great
tutorials and case studies,
and the helper libraries that
Brandon showed us earlier.
Check out the full set
of WebVR experiments
online, and consider
submitting your own.
You can also try out some
of the WebVR experiments
here in person at I/O
in the experiments area.
Thank you so much
for joining us today.
And I can't wait to see
what you come up with.
[APPLAUSE]
[MUSIC PLAYING]
