PAUL KINLAN: Little do they
know I don't think I've actually
worked out their bonuses yet.
So that's fine.
Hi, everyone.
My name is Paul Kinlan.
It's great to see you all here.
I had a really great
two days, actually.
Has everyone had
a great two days?
Yeah, so everyone's
been super awesome.
I really like the
conversations that
are going on over lunch
over the break as well.
There was a question before about kind of Web Serial and stuff that I saw. That's a big one a lot of people are asking for. And it's kind of cool that you can go to the Chrome engineers and actually pitch for it.
So I think that's pretty good.
But anyway, I'm here to
talk about what comes next.
I don't know how
long it'll take.
It'll be all right.
I think we'll get
out pretty soon.
It should be pretty cool.
Like, the thing I
was going to say
was that this part of
the talk-- [INAUDIBLE]
before this, in fact.
This part of the talk was-- it was supposed to be just after Jake's talk. Jake was supposed to talk about all the practical things, right?
The things that we want
to see from the future
of the web in terms of, like,
the infrastructural elements,
the improvements in service worker, the improvements in the network stack with fetch and background sync, all these types of things.
And then I get to talk about all the kind of showbizzy things, right, like WebVR and all that type of stuff.
So this is me experimenting
with it in, I think,
2009, 2010 or something.
It didn't work in the slightest.
So I just-- whatever.
It was kind of fun
to play around.
I thought, I've got a gyroscope. I've got canvas. It'll be kind of cool.
Nothing worked.
I'm terrible.
But anyway, one of the things
I'd like to talk about,
and I was trying to think
about how we frame this talk,
is that if you think about it, like back two years ago-- or maybe a year ago-- Alex Russell was talking about how distribution is the hardest problem in software, right?
And the web for us is actually a great way of distributing our software because you just click on a link and go to the place.
And if you've got any experience
helping your grandparents
or if you ever worked in
a big enterprise, like,
you'll get these
types of experiences
where you have to
go through and you
have to go install
the applications
and go and download them.
And it's just such a nightmare.
Has anyone worked in
enterprise deployment at all?
A couple of people.
Like, it's an
absolute nightmare.
You build big pieces of software
to deliver to enterprises
and you have to go
on site and have
massive teams to go out and
build all this infrastructure.
And one of the things
I liked-- and I
used to work for Experian
years and years and years ago.
We moved from this
type of model into kind
of the web type of model.
And for us, that
was great, right?
We could just go to the
user or the customer
and say, go to this URL and log
in with your account details
and you'll get this experience.
It was a really great
model for everyone.
But the interesting
thing was, like, we
knew at the time it
was way less capable.
The browser, although you had ActiveX and a bunch of other stuff, the browser
didn't do a lot of the things
that you'd expect a
native application to do
or a native experience
at that point.
But we traded that off.
Like, we just said, the model
for delivering this software
out to users and all the, like,
the enterprises and any user
that's out there is, like, it's
way better than the model that
has existed in the past.
And I tried to think about this
model of distribution, right?
So in the 1970s, you'd
buy an Apple machine
and then you'd build it and
have to construct it yourself
and take it home.
And then you have to program
the software that was on it.
And then later in the 80s,
you could go to the store
and buy the software there.
And by the 90s, the
web came in, right?
And like, that's the start
of the change, right?
You could build web pages that
were based on CGI at this point
with a little bit of
JavaScript occasionally
and then actually start to build
interactive experiences where
immediately you got a lot
of value from the web.
I think that's quite powerful.
But at the same time,
like, native platforms
are starting to catch up, right?
Native platforms, especially around the time the iPhone came out-- obviously we waited a little bit longer for native applications to come through there. But at that point in time, we got to the bit where the web is great.
Everyone likes the
distribution model of the web.
We need to solve this
for the platforms
that we're shipping
at the moment.
And obviously,
things have changed.
Like, app stores came along.
And in the future, like, chat
applications and other kind
of different social media--
or not social media,
but different types
of experiences
will enable people to distribute
software more effectively.
And I want the web to play a massive role, to be everywhere in all these platforms, and be the key reason why you'd actually deploy software, because the web is a great model.
But the way I was trying
to think about this
again was the reason why
a lot of people, at least
when you speak to a
lot of developers,
why they moved to the native
platforms and went with,
like, native kind of--
I'd say native hardware
or native APIs was-- it's
kind of weird, right?
Like when the iPhone
first launched,
the web was the way that you
delivered the software, right?
Everyone said, this
is the way you're
going to build applications.
They introduced a
whole bunch of new APIs
that were media queries, local
storage, Web SQL, AppCache.
You know, there's a whole
bunch of different APIs
that got launched to
support the ability
to deploy kind of comprehensive
software on the web
through mobile devices.
But then everyone was
like, yeah, that's cool,
but we want like
these native APIs.
We want this kind of ability to
have a distribution platform.
And then that took off.
And at the time,
the web was just,
like, we'll catch
up at some point.
And it kind of, like,
continued on for a long time
without that much change, right?
We thought we had all the primitives on the platform to be able to deliver a comprehensive and compelling experience.
But it wasn't until kind of--
I guessed all these numbers,
by the way-- about 2012.
Like, we didn't
actually have-- I
think 2013 was when Chrome
came to Android at this point.
But we didn't have like a
compellingly competitive
mobile browser
ecosystem at the time
and we weren't pushing out
all the kind of the features
that we needed.
We knew we needed
to solve payments.
We need to solve all
these other pieces.
But we didn't really have
kind of the emphasis behind it
to do it at the time.
So I was thinking about what
is the game plan for the web.
So the whole thing
about this is-- have you ever seen a presentation by Paul Lewis where he draws, like, these most amazing pictures and he has custom slides for every single thing? Well, I was like, I'm going to do better than that. I'm better than Paul Lewis.
So I bought an iPad and a pen.
And that is all I could do.
So anyway, the whole
idea behind this
was that I was
thinking about, like,
what is a mobile web game plan?
And at the time, and like for the last three or four years-- hang on, maybe three years at least, anyway-- like, it's kind of been everyone's incentive to say we just need to catch up with native, right? We need to have a whole bunch of these features that we know native is doing.
So obviously, it's very
hard to kind of get
this all going with
the specification
groups and other browsers
kind of collaborating.
But you can start to
see a trend, right?
You can start to see
more kind of involvement
across the ecosystem.
Say, yay-- well, what's this one? Geolocation in JavaScript came through straight away. But we didn't have access to the camera. So we got getUserMedia.
We got all these other
different APIs coming through.
We're not kind of
completely compatible
across the entire
browser ecosystem on them
at the moment.
But we have the ability to
try and solve those problems.
And I think it's interesting
that we are reaching more
of kind of the native
parity at the moment, which
is kind of cool.
So-- let me see.
I've lost the slide I was on.
Sorry.
So this is kind of the graph I've got: like, we've got all these new APIs coming in.
I think it's quite
compelling that we've
started to see a massive
change in the industry.
But there is still a
lot more to think about.
So we've got things
like-- obviously,
like, geolocation
is one big one.
We've recently moved it out to kind of HTTPS only.
It annoyed a lot of people
that we made that change
but we think it's the right
thing to do for user security.
Actually, we've got cameras,
which is kind of cool.
The interesting
thing about cameras,
and I'll talk about
cameras in a minute,
is we do have access
to the ability
to do inline camera access.
We also have the ability
to fall back as well.
So if you have a native
camera application like iOS,
you can choose that.
Again, limited only to HTTPS.
And this is a common theme
across all the new APIs coming
through to the platform is
that they have to be on HTTPS.
We think they're powerful.
We want you to use them
but they have to be secure
and users have to be able
to trust them at that point.
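As a sketch of what inline camera access can look like-- the facingMode choice and the video element here are assumptions for illustration, not part of the talk:

```javascript
// Pure helper: build getUserMedia constraints for the front ('user')
// or back ('environment') camera. Illustrative only.
function constraintsFor(facing) {
  return { video: { facingMode: facing }, audio: false };
}

// In a browser (HTTPS only), request the camera inline.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices
    .getUserMedia(constraintsFor('environment'))
    .then((stream) => {
      // Assumes a <video autoplay> element exists on the page.
      document.querySelector('video').srcObject = stream;
    })
    .catch(() => {
      // Denied or unavailable: fall back to the native camera app,
      // e.g. via <input type="file" accept="image/*" capture>.
    });
}
```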
And again, an extension
for the camera, microphone.
Again, same restrictions--
you have to be on HTTPS,
have to be user granted
permission from them.
We have the battery status.
Again, this is a
little bit contentious.
It's being removed from
some browsers at the moment.
But the idea is you can understand whether [INAUDIBLE] people are actually trying to power the device.
So you can maybe give
a different experience.
If the user is low on
power, you can say, hey,
we're not going to do all
the kind of fancy animations.
I don't think
people are actually
using that API this
way at the moment.
I don't think many
people are using that API
but that's what it's
there for at the time.
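A minimal sketch of using it that way-- the 20% threshold and the CSS class name are assumptions, not anything the API prescribes:

```javascript
// Hypothetical helper: decide whether to scale back animations based
// on battery state. The 0.2 threshold is an assumption.
function shouldReduceMotion(level, charging) {
  // If the device is charging, power draw is less of a concern.
  if (charging) return false;
  // Below 20% remaining, skip the fancy animations.
  return level < 0.2;
}

// In a browser, navigator.getBattery() resolves to a BatteryManager
// with `level` (0..1) and `charging` properties.
if (typeof navigator !== 'undefined' && navigator.getBattery) {
  navigator.getBattery().then((battery) => {
    document.body.classList.toggle(
      'reduced-motion', // hypothetical class your CSS would key off
      shouldReduceMotion(battery.level, battery.charging)
    );
  });
}
```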
We have permissions
on the platform
so you can actually build
compelling experiences
in terms of, like,
we know that you've
got access to geolocation.
I'm not going to try and prompt
for geolocation straight away.
So you can understand the
state of the permission model
that the user's accepted.
Like, there's a lot more
things to add into this,
but you can start to provide
more compelling experiences.
And this is especially
important when
you're kind of building full
screen applications as well.
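A sketch of checking permission state before prompting-- the mapping of states to UI steps is an assumption for illustration:

```javascript
// Pure helper: map a PermissionStatus state to an app-level decision.
// 'granted' → call the API directly; 'prompt' → show our own explainer
// UI first; 'denied' → hide the feature. The mapping is illustrative.
function nextStep(state) {
  return { granted: 'use', prompt: 'explain', denied: 'hide' }[state];
}

if (typeof navigator !== 'undefined' && navigator.permissions) {
  navigator.permissions
    .query({ name: 'geolocation' })
    .then((status) => {
      console.log(`geolocation: ${nextStep(status.state)}`);
    });
}
```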
We have network information.
A lot of developers, especially
when we've been out to India,
[INAUDIBLE] is like,
we want to understand
the type of the network
that the user is on so we
can adapt the experience.
I don't think we're
actually using this
to our full advantage
at the moment.
We've got things like a thing called downlinkMax, which basically says, hey, we know that the user is on at least a 2G connection, or at least they can have the speed of a 2G connection.
You might want to do
something with it.
And again, I don't think
that many people are using it
right now.
But you can start to think about
how you can adapt your user
interface and your experience
to the needs of the user based
on the types of network
that they're on.
I think that's quite compelling.
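Adapting to the network might look something like this sketch-- the speed cutoffs and variant names are assumptions; downlinkMax is reported in megabits per second:

```javascript
// Hypothetical: pick an image variant from the connection's reported
// maximum downlink speed (Mbps). The cutoffs are assumptions.
function pickImageVariant(downlinkMax) {
  if (downlinkMax <= 0.384) return 'low';  // roughly 2G
  if (downlinkMax <= 2) return 'medium';   // roughly 3G
  return 'high';
}

if (typeof navigator !== 'undefined' && navigator.connection) {
  const variant = pickImageVariant(navigator.connection.downlinkMax || Infinity);
  console.log(`serving ${variant}-resolution images`);
}
```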
And you've got Autofill.
It's kind of boring.
It's really hard to get
people excited about Autofill.
But we know that it improves the
overall experience of the web.
Like, for users who are trying to fill in data-- everyone hates keyboards and filling in forms.
We really encourage
people to use Autofill.
But no one really does.
But that will change
over time, I think,
because we know it has a
measured and improved benefit
for users at that point.
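Opting in is mostly a markup exercise; a minimal sketch using the standard autocomplete tokens:

```html
<!-- Standard autocomplete tokens let the browser fill these in one tap. -->
<form>
  <input name="name"    autocomplete="name">
  <input name="email"   autocomplete="email" type="email">
  <input name="tel"     autocomplete="tel"   type="tel">
  <input name="address" autocomplete="street-address">
</form>
```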
Then, obviously, we saw the
Credential Management API
yesterday.
I think that was actually
really cool, right?
You can get one tap sign
in and have it synchronized
across all your devices.
That type of experience is
a really great experience,
especially when you're
thinking about kind
of cross device,
cross form factor
conversion at that point.
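A rough sketch of what one-tap sign-in with the Credential Management API can look like-- the signIn callback is a hypothetical app function, not part of the API:

```javascript
// Pure helper: build the options object for navigator.credentials.get().
// 'silent' mediation never shows UI; 'optional' may show the chooser.
function credentialRequest(allowSilent) {
  return { password: true, mediation: allowSilent ? 'silent' : 'optional' };
}

if (typeof navigator !== 'undefined' && navigator.credentials) {
  navigator.credentials.get(credentialRequest(true)).then((cred) => {
    if (cred) {
      // `signIn` is a hypothetical app function that would post the
      // credential to your own login endpoint.
      signIn(cred.id, cred.password);
    }
  });
}
```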
And then obviously
Payment Request API--
it'll be great to see
whether this comes through
and how it's kind of supported
across multiple platforms.
But my whole bit about this-- and I think Zach said this yesterday-- is that you can start to think about amazingly compelling guest checkout flows.
Once you know that the browser
supports the credit card
information or the
payment, kind of has
the payment information
that you can provide
across platforms or at
least across the device,
then once you know
that you've potentially
got that, you can
start to think about,
well, I don't actually
need to sign the user in
to be able to get them
to make a payment.
I can just take the payment, and then ship them the product on the back of that, which I think is powerful.
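A guest checkout along those lines might start like this sketch-- the line items, currency, and payment method are invented for illustration:

```javascript
// Hypothetical helper: sum display items into the PaymentRequest total.
function total(items) {
  const value = items
    .reduce((sum, item) => sum + Number(item.amount.value), 0)
    .toFixed(2);
  return { label: 'Total', amount: { currency: 'USD', value } };
}

const items = [
  { label: 'T-shirt', amount: { currency: 'USD', value: '15.00' } },
  { label: 'Shipping', amount: { currency: 'USD', value: '4.99' } },
];

if (typeof PaymentRequest !== 'undefined') {
  const request = new PaymentRequest(
    [{ supportedMethods: 'basic-card' }],
    { displayItems: items, total: total(items) }
  );
  // show() opens the browser's payment sheet; no sign-in required first.
  request.show().then((response) => response.complete('success'));
}
```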
And obviously push
notifications-- everyone's
been talking about
push notifications
for a little while now.
We know that this actually has a material impact on people's kind of engagement and revenue and re-interaction and everything. And it's especially great on mobile. It works when the browser is closed.
You know, I'm not going to
talk about this too much today,
but this is one of those powerful APIs where we don't need to build a full-on progressive web application to actually start to receive the benefit of using this API at least.
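Subscribing a page that already has a service worker is short; a sketch, where the guard helper's logic is an assumption:

```javascript
// Pure helper: only ask for a push subscription when it can succeed.
// `permission` is Notification.permission ('granted'/'denied'/'default').
function canSubscribe(permission, hasServiceWorker) {
  return permission !== 'denied' && hasServiceWorker;
}

if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker.ready.then((registration) => {
    if (!canSubscribe(Notification.permission, true)) return;
    return registration.pushManager.subscribe({
      // Chrome requires that every push message shows a notification.
      userVisibleOnly: true,
    });
  });
}
```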
And obviously, we've
got offline support.
We've been talking
about kind of building
these offline-based experiences
for a long time now.
We've got kind of the tools across most of the platform. And you can even fall back to AppCache-- we don't encourage it. You can actually start to think about how you build these experiences.
And it's not just about
offline, full offline support.
It's about thinking about
resilience of your application
in terms of like kind
of an adverse network
at that point, which I
think is pretty powerful.
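A minimal cache-first service worker sketch-- the cache name and asset list are assumptions:

```javascript
// Illustrative cache name and app-shell assets.
const CACHE = 'app-shell-v1';
const ASSETS = ['/', '/styles.css', '/app.js'];

// Pure helper: only same-origin GET requests go through the cache.
function cacheable(method, sameOrigin) {
  return method === 'GET' && sameOrigin;
}

if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    // Pre-cache the shell so the app works on an adverse network.
    event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
  });

  self.addEventListener('fetch', (event) => {
    const sameOrigin =
      new URL(event.request.url).origin === self.location.origin;
    if (!cacheable(event.request.method, sameOrigin)) return;
    // Cache first, fall back to the network.
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```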
And finally, we get like this
whole idea of installability.
You get to the point where, if you meet all the criteria, if we think your application should be installed or could be installed, then we'll let the user say, hey, we can install this.
And it will be on
the user's device
working like kind of
a native application
would at that point.
And we think that's
pretty powerful.
We think it's pretty good.
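The install criteria revolve around the web app manifest; a minimal sketch, with invented names, paths, and colors, served over HTTPS alongside a registered service worker:

```json
{
  "name": "Air Horner",
  "short_name": "Horner",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#2196f3",
  "background_color": "#2196f3",
  "icons": [
    { "src": "/images/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```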
But the thing is, it's just
a big list of APIs, right?
Like, we're just talking about
these different kind of APIs,
one after one after one.
We know that the next
API that we need to build
is the most critical API
that we need to solve.
And I think that's the thing
that I'm trying to say here
is, like, we did a whole bunch
of APIs that we kind of thought
were kind of cool
to start off with.
Getting camera access, great.
But it wasn't until the
last maybe two years
we've had to say,
well, actually we
want to build these
resilient applications that
are, like, great in the
face of adverse networks
and actually get to the point
where we need service worker.
We need to make
them installable.
We need to know that
users want to get
reengaged in through
push notifications.
We've been a lot more tactical about how we actually start to implement those APIs, which I think is a good thing.
But it's hard to actually see
that strategy kind of play out.
The thing that I want to get at at this point, though, is, like, it could still get to the point where everything is just a random API that we still have to build. Like, I'm not singling out the Web Serial API.
We know that there's use cases for things like the Web Serial API but we have to
think tactically
about how we bring
those APIs to the web
because there are some really
important things that we
need to get done.
And the thing is, like, we don't
want every single API to be
like, hey, native has got this.
I'm actually going to talk
about some of these,
so I'm kind of contradicting
myself a little bit.
We know that, like, the native
platforms have got this.
We're not going to
implement this directly.
We should think about how
we want to kind of have them
in the context of the web.
And the context of the web is the thing that I'm kind of interested in.
And we were thinking about it on the Chrome team for a while, and we don't like people using this acronym, at least, because it's an acronym and, like, if you've ever been to a Chrome Dev Summit, you get RAIL.
You get AMP.
You get PWA.
The world is full of
acronyms at the moment.
But the reason why I like this one, SLICE,
is that it at
least codifies some
of the reasons why I think
the web is important,
the benefits of the web
that other platforms don't
necessarily have.
So, SLICE is kind of simple. You're secure. The idea is that we've got an overly restrictive permissions model and a security model where everything is sandboxed.
In the past, we've
had some issues.
But the idea behind this
is you don't automatically
get access to everything
on the user's device.
You have to kind of do it
kind of-- if you want access
to the camera, you have
to ask the user for access
to the camera.
Like, those types of things.
So it's secure.
It's sandboxed.
You can't just go and pull
out data from another website
that the user
might have been to.
Like, the whole
kind of ecosystem
is kind of conscious
about security.
And I think that's a
very kind of cool thing
for our side of things.
The web is linkable.
It's really hard to find a set
of links over the last few days
which haven't been
interesting, I suppose.
But the idea behind it is,
like, we have these links.
Once we have the links, we can
do really interesting things
with them.
Like, we can build
these types of sites.
We can build indexes.
We can build news like
news.ycombinator.com.
Because it's a
link, we can go to
and we can do things with it.
And then once you
think about the things
that you do with it--
like, indexability, right?
That's the heart of Google
from our point of view is,
like, it's indexable.
We can go and archive and
organize and aggregate
the world's information,
provide-- actually,
I don't know.
Does anyone else know what
the mission statement is?
Sorry, I'm pointing
to my boss here.
No?
Cool.
AUDIENCE: [INAUDIBLE]
PAUL KINLAN: That's the one.
But that's the
whole point, right,
is like, we can go out and discover the data in an easy, parsable manner-- sorry about that, by the way-- that we can start to understand, and then we can do interesting things because it was indexable and because it was linkable.
And then the idea behind the next bit is, like-- we know this from kind of the whole start of the AJAX era-- it's composable. We can take JavaScript from somewhere. We can take an iframe. I know Paul didn't like the iframe thing before.
But we can start to
mash together and build
interesting applications
just off the fact
that other interesting
applications and components
exist on the web.
I think that's
incredibly powerful.
And then I think, like, the whole idea behind ephemerality-- this is The Guardian's mobile-- we were out in one of the breakouts before. The Guardian's mobile apps experiment, where it'll deliver you news via notifications.
You go in.
You install it.
You forget about the web page.
You never have to go
back to the web page
to start experiencing
these applications.
Normally, the web lives and dies
when the browser tab closes.
Service worker changes
that a little bit.
But like, these types of
experiences we can build.
Where we say, like, I'm
going to use it once.
And in this case, I was using
it-- this wasn't for Brexit,
but like, it was on for Brexit.
And we got to it.
I fell asleep.
I got all my notifications.
I saw kind of Brexit play
out via notifications.
Once I'd kind of cried a little
bit and closed everything,
it was closed.
It was gone.
I never received another
notification again.
And I think that's a
very powerful model
for the web is we don't have to
think about these experiences
where you have to go off,
install it, and start to use it
just to get some
experience out of it.
Like, it can live and die with
kind of how you want it to.
But the thing is, like,
SLICE is just a model, right?
It doesn't cover all
the other benefits
that we know of the web, right?
It's accessible.
It should be available for everyone to work on and use irrespective
of kind of whether they
can actually see it,
whether they can hear the
kind of experience from it,
or even actually
interact with it.
It's installable. Like, it's updateable. It's deployable. Like, it's composable-- I've said composable once before-- but like, there's lots of different kinds of properties that we know the web to have that just don't make this acronym make a lot of sense.
Like, if you think
about it, we've
got this idea of huge amounts
of different properties.
This is the thing
that-- actually, I
was speaking to one of
the PMs the other day.
Like, we've got this
massive ecosystem, right?
And the thing I liked about the way he was phrasing this is, like, it's a massive ecosystem. You pull in all these tools from around the web. Lots of other web developers are building on it.
If one of those kind of
industries goes away,
it's fine, right?
Because more people kind of
come back in and likewise
there's no one owner
for it as well.
You get to the
point where there's
no one owner for the web.
It means you've not got a gatekeeper.
You're not controlled ultimately
by their whims at that point.
We can go out and deploy it.
And as long as you give
the person the link,
they can access that link.
They can start to
experience your experiences.
And I think that's
incredibly powerful.
So for me, one of the things
I was trying to think about
is, like, if it's not
just about a feature race,
what is it about?
Well, we've been
doing a lot of work.
And I think over the last two days, we've seen some of it from Rick Byers and everyone as well: we want to smooth out the platform.
We do definitely want to
reduce the feature gap.
But we want to do
it in a way that
enables brand new
styles of content
and new levels of
interaction that we're never
going to see from any other
platform unless it's the web.
So one of the things I was thinking about, the smoothing out of the platform--
this is the second image
I drew with the iPad.
I was actually
quite proud of it.
Everyone else hates it.
But the idea is like,
you have this kind
of level of lumpiness, right?
Like, the web is not even.
Not every single
browser implements
every single feature.
And as web developers,
we quite frequently
find that really,
really frustrating.
But the interesting thing for
us is, like, there are really
big things.
And some of these big things
I'm going to talk about today.
Like, things like
Bluetooth or ES6,
like, you know it's not there
but you can kind of see it,
right?
So you can kind of ignore
it and go around it
and say when that
becomes ubiquitous,
I'm going to start to use it.
But then there's the
really frustrating things
like Flex Box where there was
two different implementations
of Flex Box and it was
really hard to work out
which individual
browser supported
which individual version of
Flex Box and which syntax.
Like, those types of
frustrations really
kind of-- well, they
frustrate developers.
It means that you can't
build great experiences
for your users that are
responsive and accessible
for everyone.
So one of the things that
we've been trying to do
is, like, smooth out some
of those rough edges.
And the first one, and it's one of the most recent, is, like, position: sticky-- it's one of the ones developers have always wanted, right? They've wanted the ability to anchor an element to the top of the viewport. And we had it in Chrome.
And everyone was like,
that is great, right?
Apple have got it.
Chrome's got it.
I think Firefox
had it at the time.
And we were like, yeah, it's not that performant, so we're going to remove it. It might have been the right thing to do ultimately, but by removing it at that point, we got to the point where it wasn't compatible, right?
So people couldn't use it.
You couldn't rely on
it so you couldn't
build the types of
experiences that-- you know,
like, this is not a
great experience of where
you might want to use it.
But you couldn't do it without
JavaScript at that point.
You'd either have to include
it or not include it.
And for developers,
that is actually
a really frustrating part
of the experience for them.
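For reference, the feature itself is a single declaration; the selector and offset here are illustrative:

```css
/* Anchor the element to the top of the viewport once scrolling reaches it. */
.section-header {
  position: -webkit-sticky; /* older WebKit builds */
  position: sticky;
  top: 0;
}
```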
And then we get
this idea of things
like Intersection
Observer, right?
We know that the web is slow
when you scroll and you're
trying to kind of keep
something in the view
or know when something
has gone into the view.
Now, this isn't necessarily
about bringing ubiquity
to the platform
because I think Chrome
is the only one that
implements Intersection
Observer at the moment.
But the idea behind
Intersection Observer
is, like, we want to kind
of provide a level playing
field for performance as well.
So you can start to
understand when elements
come into the view port and
when they leave the view port so
that then you can do
kind of-- your room
or whatever you want to do
[INAUDIBLE], whether it's ads
or whether it's some
other types of logic
as well, which I think is very
cool because then when you
start to think about the next
part of the future-- and, like,
this is one of those ones where
this is really hard to actually
see in terms of the
code and there's not
a lot of detail in this.
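One common use is lazy-loading images as they approach the viewport; a sketch, where the data-src convention and the 200px margin are assumptions:

```javascript
// Pure helper: pick out the elements that just became visible.
function visibleTargets(entries) {
  return entries.filter((entry) => entry.isIntersecting).map((entry) => entry.target);
}

if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries) => {
    for (const img of visibleTargets(entries)) {
      img.src = img.dataset.src; // swap in the real image
      observer.unobserve(img);   // each image only needs to load once
    }
  }, { rootMargin: '200px' });   // start loading slightly before visible

  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```

The point is that the browser tells you about visibility changes, rather than your scroll handler polling layout and janking the page.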
I stole this from Paul Lewis' Polymer Summit talk, which is actually a really good talk.
But the idea behind it is, like,
custom elements for a long time
have been kind of talked about.
They've been deployed
in some browsers.
We didn't deploy it completely because we had a V0 and now a V1.
I think now is the point on the web where developers have been really frustrated that they couldn't do these types of experiences.
It was completely--
uneven is the easiest
way of saying it at the moment.
And it's great to see that this
has come to a lot more browsers
at the moment.
It's definitely in Chrome. The latest versions of Safari definitely have the template syntax. And now they've got custom elements as well and the shadow DOM. So that whole part of the ecosystem has all started to play out.
And it's great to see that
a lot of the browser vendors
are all starting to
work together on,
we know that these are
the important APIs that
need to get done.
Developers have been
saying that we need
to get these APIs completed.
And finally, we're starting to
kind of get a rounder picture.
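A minimal v1 custom element sketch-- the tag name, class, and greeting are invented for illustration:

```javascript
// Pure helper: the text our element will render. Illustrative only.
function greeting(name) {
  return `Hello, ${name || 'world'}!`;
}

if (typeof customElements !== 'undefined') {
  class HelloCard extends HTMLElement {
    connectedCallback() {
      // Shadow DOM keeps the element's markup and styles encapsulated.
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.textContent = greeting(this.getAttribute('name'));
    }
  }
  // Custom element names must contain a hyphen.
  customElements.define('hello-card', HelloCard);
  // Then in markup: <hello-card name="Paul"></hello-card>
}
```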
And on that subject is another one: we on the Chrome team, we made this decision to not support pointer events. And I think about two or three years ago, we had said, we don't want to introduce multiple pointer models or multiple interaction models to the web. We don't think developers want it. Microsoft was like, yeah, no. Developers do want it. And now we've got this experience.
They want to have
one unified model
of interacting with
things like touch
or interacting with things
like the mouse pointer.
They don't want to
have to deal with all
the different ways of doing it.
And so developers shouted a lot.
And Rick Byers, who
was on yesterday,
was one of the engineers who
started to kind of implement
that and flesh that out.
And now, pointer
events is in Chrome.
So we're trying to start
to bring more compatibility
to the web to, like, level up
that part of the playing field
so for you as a developer,
it makes it 10 times easier
to work out what
you should support
and how you should support it.
I think that's
pretty interesting.
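The unified model is the whole point: one listener covers mouse, touch, and pen. A sketch, where the helper's output format is an assumption:

```javascript
// Pure helper: read the fields a unified input handler typically cares
// about. pointerType is 'mouse', 'touch', or 'pen'.
function describePointer(event) {
  return `${event.pointerType} at (${event.clientX}, ${event.clientY})`;
}

if (typeof document !== 'undefined') {
  // One listener instead of separate mousedown/touchstart handlers.
  document.addEventListener('pointerdown', (event) => {
    console.log(describePointer(event));
  });
}
```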
And then also, like,
Darren mentioned
this in the keynote
yesterday is that we've
been pushing progressive web
apps for a long, long time,
right?
And we've been
saying-- well, I'll
say about the last year
and a half, maybe year.
Now, the whole idea behind it
is we want your applications,
if you want them to and
the user wants them,
to act and feel like a
native-like experience.
If it's installed on the device,
it should appear everywhere.
And if you've actually ever
installed one of these,
yes, you can get
on the home screen.
You can launch here and
it's in the tab switcher.
But that's when
kind of the illusion
breaks down after that, right?
Like, we've got a
nice model but there's
a massive uncanny
valley at the point of,
like, these aren't actually
native applications
on the system.
They don't live in the app view.
And there's a whole bunch of other kind of edge cases that every single developer who has implemented a progressive web app, either with push notifications or not, has been complaining about.
So what Darren was saying
is, like, we actually
want these experiences to look
and feel like they're native.
And so this is the flow
that we've got, I think.
This is the Add to Homescreen
flow normally-- or not the Add
to Homescreen flow normally.
This is the new kind
of install flow.
So we've taken the
whole idea of Add
to Homescreen, which
essentially was
a bookmark on the Homescreen
with a special parameter
that Chrome knew how to launch
the screen, into a fully kind
of native model.
So the application is
downloaded, installed.
It's still a progressive web application at the time, pulled from the web. It's not anything kind of packaged up or anything.
But it's a native application
on the user's system
at that point.
And I think that is
incredibly powerful.
Now, you can experiment
with this today
and I'll show you how
to do it in a minute.
But the idea behind
it is, like, we
want to experiment with this.
It's a flow that we
think is going to work.
But we do need a
lot more feedback.
But once you actually get
these applications installed,
it's really good.
So one of the kind of
things that we've seen
is that we've got-- developers
wanted-- or users at least as
well-- wanted their
applications to appear
in the app drawer and other
elements of the system UI
as well.
So it now appears in the app drawer.
You can actually go and inspect the storage model of it as well. So you can see the storage. Storage will be allocated to the application, not just to Chrome as a whole.
And you can do a bunch
of other stuff as well.
So you can force stop
it, uninstall it.
You get access to the,
like, the battery profile
and a bunch of other stuff.
So your application is
ultimately accountable
versus just being accountable
to the browser at that point.
We also get deep
integration with links.
So if you own, like, in
this case, airhorner.com
and the user clicks on
the link to airhorner.com,
rather than going
to the website,
you'll go directly into your installed progressive web application at that point.
And I think that's pretty cool
because all you have to do
is update the manifest
to actually say
how it should actually be
intercepted on the user's
system.
And likewise for notifications: you click on a notification-- like, the bugs that we had in the system were, you click on the notification and you go to the actual web app versus the thing that was installed on the home screen, which are essentially the same thing. We just didn't know, and we couldn't launch the application at all, because we didn't know it was installed on the home screen at that point.
So we kind of leveled out that
part of the playing field.
So it's a lot more
compelling and a lot more
natively integrated
at that point.
Hey, thank you.
The other thing as well is we do
continue to respect the launch
information as well.
And the interesting thing
about the launch information
is that obviously when you
click on the Home screen or you
click on the link or you
click on the notification,
you do want it to
launch in portrait.
Or if it's a game,
you might want
to launch in landscape--
like, that type of thing.
We've had a lot of trouble
actually making sure
that was synchronized
across the entire device.
The cool thing for me, at least, is that we can keep the application name and launch profile in the manifest-- all that information-- up to date as well.
So the good thing is, like, if you update your manifest right now, because it's just a bookmark, we don't know whether your application has updated.
We don't know whether you've changed your name or the icon's changed a little bit, those types of things.
We now have the
ability to actually say
we know it's changed.
We know that the user
has got it installed
and we can update
it across the device
as well, which I think is
actually pretty powerful.
And the great thing is, like, if you're already building a progressive web application, you don't really have to do anything.
You have an optional
scope attribute
and that's pretty much it.
The scope attribute just
says this is the URL string.
But if the user
clicks on it, it will
cause my native,
or my web-- I keep
saying native application-- my
progressive web app to open up.
I think that's really cool.
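As a sketch, a web app manifest using the optional scope member might look like this. The name and paths here are just illustrative, loosely based on the airhorner.com example above:

```json
{
  "name": "Air Horner",
  "short_name": "Airhorner",
  "start_url": "/",
  "scope": "/",
  "display": "standalone"
}
```

Everything inside that URL scope is then treated as part of the installed app when a link is intercepted.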
So it's experimental today.
If you just go to Chrome flags and search for "enable improved Add to Homescreen", you'll be able to get it, and it's actually really interesting.
But the thing I would say is
we do want a lot of feedback
around this because we want to
make sure that the model works
for users, works for developers,
and we can go more from there.
But I did say that was kind
of smoothing out the platform.
I think a lot of the things
that we've been talking about
is just making developers'
lives a little bit easier.
I do want to talk briefly about
kind of decreasing the feature
gap because this
is where, for me,
some of the showbizzy
things come in.
But the interesting
thing about this
is that we're in
this weird tension
where there's a lot of new
APIs coming to the platform.
Some of them are not completely specified yet.
Like, in the past, you'd go through Chrome flags and enable a flag to test the API.
But that's actually really hard for doing what, in this case, Alex was describing as doing science on the web at scale.
Like if you want to know
that an API works with all
your user base and how it works
and how users interact with it,
you somehow have to get that
out onto a stable channel
somewhere.
But if it gets onto the stable channel and then developers start relying on it-- like the old WebKit prefix model-- that causes a lot of problems in the long term for developers.
And we don't want that to happen.
We want to be responsible about how new features and new APIs are designed and tested at scale.
So I definitely
encourage everyone
to look at Alex Russell's post on this
because he gives a lot of
insight about how we're
thinking about this model.
But the name-- and Alex alluded
to this in the panel session--
is Origin Trials.
Now, the idea
behind Origin Trials
is that we sit
there and go, well,
we think that this API
is going to be important.
Like, in the case of Web Bluetooth or Web USB or the persistent storage that Drew worked on,
we know that this is
like an important piece
of the overall kind
of API ecosystem.
It's not fully
specified just yet,
but we want to
get it tested out.
You have to sign up for the API.
There's basically a
link that you can go to
on any of these pages.
You sign up for the API.
You drop it inside
your web page.
In this case, it's a meta tag.
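The token you get from signing up goes into your page as a meta tag, something like this. The content value here is a placeholder; real tokens are issued per origin by the sign-up form:

```html
<!-- hypothetical token value; real tokens come from the Origin Trials sign-up -->
<meta http-equiv="origin-trial" content="AnExampleTokenString==">
```

The token is tied to your origin and to an expiry date, which is how the time limit described below is enforced.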
And the whole thing is designed to-- I don't want to say fail, but it's designed to only run for a certain amount of time.
So this is like--
as a developer,
you know that you're opting
into this experience.
You know that at some
point, the API will change.
It might change significantly.
It might actually get pulled out once we know that developers and users don't want to see it shipped on the web.
But the point is,
these Origin Trials
allow you to have
that flexibility
to experiment with the API,
give us a lot of feedback.
And then we can actually kind of
help the specification process
move along a little
bit more effectively.
And one API behind an Origin Trial that I want to talk about, because it's quite close to my heart, is the Web Share API.
Like, I used to work
on the Web Intents API.
And the whole idea
behind that model
was to say the user should be
in control of the applications
that they use to
perform common tasks.
So if you want to
edit an image, you
would use the image
editing application
that was on your site or
inside your native application
or inside your device
at that moment.
The problem with it
was it was broad.
We learned a lot about building ecosystems and building APIs where there's an undefined scope and an undefined range for how big the thing should be.
We got a lot of feedback from developers going, well, I don't want a separate edit and a separate save; I'll do an edit-and-save intent at the same time.
And it got to a point
where we couldn't feasibly
deploy this API at scale.
So we said we should go back to the drawing board and design smaller chunks: try and solve the sharing intent, try and solve the different aspects of what the original vision was going to do, but do it in an isolated sense.
So this is-- sorry, this is the Web Share API at the moment.
It's an Origin Trial inside Chrome.
You have to sign up for it.
We're testing it out.
We want a lot of
feedback around this.
But it's a simple API.
Cool, it's all right and
it works pretty well.
But the idea behind it is
you just share some data.
And then that will be passed to the underlying sharing infrastructure.
Like, in the case of Android, it will just fire an intent-- basically an Android share intent.
And then the application
picker will pick it up
and then you'll be able
to share the data to it.
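A minimal sketch of that share call-- the title, text, and URL here are placeholders, and the call has to come from a user gesture like a click:

```javascript
// Web Share API: hand some data to the system share picker.
// Must be triggered by a user gesture, e.g. a button click handler.
function shareArticle() {
  return navigator.share({
    title: 'Web Share demo',
    text: 'Sharing from the web',
    url: location.href
  });
}

// e.g. shareButton.addEventListener('click', shareArticle);
```

The returned promise resolves once the user has completed the share, or rejects if they dismiss the picker.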
Like, it's got some
problems still.
We need to flesh out images
and a bunch of other things.
But the capability
is there and we've
got the ability to test this
on-site with every single user
who, say, visits my rather low
traction blog at the moment.
But I think it's a powerful API.
But that's going from
web to native, right?
And what we're saying is we
want the web to be across all
the user's ecosystem.
So in this case-- and this isn't actually ready yet, we're still trying to work this out-- the Web Share Target API.
The idea is, like, your web applications should appear in the native picker.
Now we're trying to do
this via the web app
manifest and then also the
service worker as well.
But this is one of those things
where the intent is clear,
right?
We want to make sure web applications, if the user installs them, act as first-class citizens on the device.
That thing is pretty powerful.
There's also a whole bunch of
media improvements as well.
And this is where things
get kind of a little bit
more interesting at least.
The whole media team
have been working
on this idea of-- developers
don't have to do everything.
Like, we can provide a lot
of integrated experiences
with the user's device.
So the first thing that we did, and this was about a year ago-- who's got an Android Wear device?
That's more than I thought, actually.
Cool.
Normally, no one puts their hands up.
But if you're playing some media, that notification will get generated on the user's device and passed across to your watch.
And then you control it from there.
The developer doesn't have to do anything.
And that's actually pretty cool.
You get this kind of thing for free.
Again, kind of just making
the platform a little
bit richer for web developers.
We've also got the ability to
do things like background play.
So you can take, like, a movie file or an audio file, close the tab down or turn your phone off so the screen goes dark, and then you can still control the web experience.
I think that's actually
quite powerful.
Like, you can start to think
about podcast applications
or music applications
which you can just
run in the background
continuously still
but have the ability to
control them from the web,
from the user's device.
And then if you move a
little bit further on,
some of the rest of the work
that the media team are doing--
and this is one of my favorite,
because I did a little demo
on which I quite liked.
But anyway, the idea is,
like, capture stream, right?
You want to record
something from a canvas
and actually record it
into a movie file, right?
Like, a lot of people have been
doing this to try and generate
animated gifs and a bunch
of other stuff and movies.
Like, there is a dedicated API now-- canvas.captureStream().
It's behind Chrome flags at the moment.
It's in Canary now.
Anyway, you basically
get the canvas.
You say, I want to capture
at 25 frames per second.
And in this case, I'm
just going to attach it
to a video element.
It's probably not
the best use case
to do anything with
that video element.
But it's a stream.
You can put it onto something
that can read streams.
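The basic shape of what's described above, as a small sketch-- the canvas and video elements are assumed to already be on the page:

```javascript
// Preview a canvas as live video: capture it as a MediaStream at 25fps
// and attach it to a <video> element via srcObject.
function previewCanvas(canvas, video) {
  const stream = canvas.captureStream(25); // frames per second
  video.srcObject = stream;
  return video.play(); // returns a Promise
}
```

Because the result is a plain MediaStream, anything that consumes streams-- a video element, a WebRTC peer connection, a recorder-- can take it.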
And once you put it into something that can read streams, you can do things like, well, I'm going to put it on a WebRTC connection and send it out to someone in Australia.
And we're going to
kind of-- they're
going to be able to see what
I'm doing on the screen,
like, inside my kind of
webGL 3D game, which I think
is pretty powerful, right?
Like, it's very hard to do
these types of experiences
on any other platform.
On the web, it's now three or four lines of code and you can start to stream your experiences to your friends and family.
And one of the
things I do like is
that you can then think about,
well, I've got the stream.
I actually want to record it
and actually kind of save it.
I can persist it to disk.
So this is using
the media recorder
API, which takes the stream
from the camera at this point.
And then when the data
kind of comes through,
you append it to a blob.
And then you just
start recording.
And then once it's
completed, you get the blob.
And in this case, this
is a demo that I wrote.
It's a little bookmarklet that I wrote.
It finds a canvas on
the page, records it,
stops after 10 seconds, and
then downloads it as a webm file
to your hard drive
at that point.
Like, it's 20 lines of code
and you can get this experience
where I've not actually seen
this type of thing on the web
before.
Record a webGL game, kind
of throw it up to YouTube.
It's pretty cool and
pretty powerful, I think,
at that point.
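The bookmarklet flow described above-- find a canvas, record for ten seconds, download a webm-- can be sketched roughly like this, with MediaRecorder doing the heavy lifting:

```javascript
// Record a canvas stream for N seconds and download the result as a .webm file.
function recordCanvas(canvas, seconds = 10) {
  const stream = canvas.captureStream(25);
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data); // data comes through in chunks
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'capture.webm'; // saved to the user's hard drive
    a.click();
  };
  recorder.start();
  setTimeout(() => recorder.stop(), seconds * 1000);
}
```

A usage line like `recordCanvas(document.querySelector('canvas'))` is essentially the whole bookmarklet.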
But once you kind of have the camera-- and this is the thing that most people don't know-- like, you have the streams coming in.
You've got WebRTC.
You can send the video stream.
Now you can send the canvas
directly to the user.
One thing everyone says is, like, we can do a lot with the user's camera right now.
We've got getUserMedia, which gets that stream from the camera.
But we only found this out maybe about six months ago: if you actually capture a frame from the getUserMedia API, it's only, like, 1080p.
It's not a raw, full dump of the entire camera frame at that point.
Now, the thing is, we've
got the image capture API.
Again, it's in
Canary at the moment.
But the idea is you can pull
in a getUserMedia stream.
You say, I want to take a photo, and it will give you the photo-- like, the blob of the photo at that point.
So if you've got a 21
megapixel camera, in theory,
you'll get a 21
megapixel image, which
I think is pretty powerful.
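A short sketch of that flow-- grab a camera track with getUserMedia, hand it to ImageCapture, and take a full-resolution photo:

```javascript
// Image Capture API: grab a full-resolution photo from a camera track,
// rather than a frame capped at the video stream's resolution.
async function takePhoto() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const track = stream.getVideoTracks()[0];
  const imageCapture = new ImageCapture(track);
  return imageCapture.takePhoto(); // resolves with a Blob of the photo
}
```

The resulting Blob can go straight into an `<img>` via `URL.createObjectURL`, or be uploaded as-is.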
The more important thing
is that you actually
get to understand the settings
and capabilities of the camera.
We haven't had
this before, right?
We can take the media stream and
say, what can this camera do?
Well, it can zoom in.
You can control the ISO.
You can auto-focus.
You can do all these
other things with it.
We now get that
piece of information.
We can get that back.
And once you can
get the information,
the next thing to do is,
can I do something with it?
Well, the answer is yes, roughly.
The idea behind this is, if you know the range for zoom, you can say, well, I want to do a double zoom.
And the idea here is that you
will obviously do the zoom.
And this is the video, at least, anyway, where I was trying to record the slides and it didn't quite work.
But the idea is you
have the camera.
You change the properties
and it updates in real time.
And then when you
take a photo, it
will use those
properties as well again.
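One way that "double zoom" idea can be sketched-- this assumes a video track from getUserMedia, and the exact capabilities/constraints surface has shifted between spec drafts:

```javascript
// Compute the requested zoom value, clamped to the supported range.
function doubledZoom(min, max) {
  return Math.min(max, min * 2);
}

// Read the camera's zoom range from its capabilities, then apply the new
// zoom as a constraint; the live preview updates in real time.
async function applyDoubleZoom(track) {
  const capabilities = track.getCapabilities();
  if (!('zoom' in capabilities)) return; // this camera can't zoom
  const { min, max } = capabilities.zoom;
  await track.applyConstraints({ advanced: [{ zoom: doubledZoom(min, max) }] });
}
```

Photos taken afterwards through the same track pick up the applied settings.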
I think that's pretty powerful.
We can build kind of full
on camera applications,
not that we need to.
But we can help build full on
camera applications on the web.
And I think that's
pretty powerful.
And then one of the other ones--
and this came in last night.
So this is one of
those ones where
I was speaking to one of the
engineers on this, Miguel.
And he was like, Paul,
I've got this API for you.
Can you talk about it tomorrow?
I said, what's the API?
Because I'm kind of running over time-- and I've run way over time now already.
He said, I can detect faces.
I've got an object
detection API.
In the future,
it'll do QR codes.
It'll do bar codes.
Right now, it does faces.
I was like, nah,
that's not true.
And he showed me.
This is the code, right?
You basically create a FaceDetector.
You detect the
faces with the image
that you've just captured
from the image capture API.
And then you can
start-- you can pass it
to the underlying kind of
system behind the scenes.
And then it will find the faces
and you get the information
that comes off the back.
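Roughly, the code looks like this-- FaceDetector is experimental, and the image here is assumed to be an ImageBitmap derived from a capture like the one above:

```javascript
// Shape Detection API sketch: find faces in an image you've just captured,
// using the platform's native detector behind the scenes.
async function findFaces(imageBitmap) {
  const detector = new FaceDetector();
  const faces = await detector.detect(imageBitmap);
  return faces.map(f => f.boundingBox); // x/y/width/height per detected face
}
```

The detection runs on the underlying system, which is what makes 60fps-style use cases feasible compared to doing it all in JavaScript.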
And I think that's actually
pretty powerful, right?
It's like, I built a QR
scanner a couple of years ago.
And to get it running
at 60 frames a second,
it's an absolute
nightmare to do.
And if I have one API that
lets me do that, like,
that is actually a really great
thing for me at that point.
So that kind of brings
me on to the next bit.
It's like, we have
this idea of sensors
behind the scenes in terms of
like-- face detector sensor is
not really a sensor,
but it's a thing
that pulls out data from
the underlying device.
Now, the generic sensor API is an interesting one, because the idea behind the generic sensor is it basically provides a common-- I need to get this right-- a common abstraction for how to access hardware consistently inside the browser.
The browser vendors
have a way of saying,
we've got all these
different APIs.
We've got like a gyroscope.
We've got an accelerometer.
We've got all these
different ones.
How do I kind of access
them consistently
in a relatively sane
and equal way across all
the different sensors?
This has been kind
of in edit mode
for I think about
a year and a half.
It's only recently that
we've started to actually put
this inside the browser.
And it's on.
There we go.
I'm really proud of
that demo because I
was like, that's going
to annoy so many people.
But the idea behind
it is you can
have a sensor that is like,
what, the ambient light sensor
in this case.
It landed in Chrome in
Canary the other day.
And the idea behind it is it
just reads the light values
from the image sensor
or whatever kind of sensor you've got which can actually detect light levels at that point.
You get hold of the ambient light sensor.
You put an onchange handler on it.
And then you start it, and it will deliver changes regularly-- at a specified frequency if you want-- to your onchange handler.
You just put your
application logic in there
and you can kind of
do whatever you want.
The interesting
thing about this API
is that you can
also poll as well.
So if you don't want to
have, like, an on change
handler always
firing but you only
want to do it kind of
synchronized to a frame,
you can actually say, well, what
is the data from this sensor
or what is the value
of this sensor?
And it'll return the last value.
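Both models-- push via the change handler, and pull by reading the latest value-- can be sketched like this. This follows the early draft of the spec; the event and property names have shifted between revisions:

```javascript
// Generic Sensor API sketch (early draft; names have changed across revisions).
function watchLight(onLux) {
  const sensor = new AmbientLightSensor();
  // Push model: the handler fires as new readings arrive.
  sensor.onchange = () => onLux(sensor.illuminance);
  sensor.start();
  // Pull model: callers can also read sensor.illuminance whenever they like,
  // e.g. once per requestAnimationFrame; it returns the last delivered value.
  return sensor;
}
```

The same start/read pattern applies to the other sensors built on the generic sensor base.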
And I think that's
quite interesting.
Ambient light, I don't
know how much use there is.
You might put a dark mode
inside your application
or you might do
something super annoying.
But it gets a little
bit more interesting
when you think about
like a compass, right?
A compass needs-- actually, I didn't realize this.
I just thought it was, like, the alpha component of the orientation.
It actually needs
multiple sensors
to be able to build a
compelling compass for the web.
And Kenneth from Intel gave me this demo, which I'm grateful for.
I think he's a-- there he is.
Hello.
But the idea behind this,
there's multiple sensors.
You need the accelerometer
and the gyroscope
to actually start to think
about how you can actually
kind of get the
proper compass values.
And at this point,
like, it's quite simple.
You start both the sensors up.
And then you kind
of get the changes.
Then, you store the changes
in some global state.
And then you update when you
need to render at that point.
It's just quite simple.
The logic behind
this is quite-- like,
it was harder than I thought.
It was using [INAUDIBLE] and
a whole bunch of other stuff.
But like, that whole
point is like, you've
got two or three
sensors on the device.
You can start to do really
interesting, compelling things
with them once you start
to get that data through.
And you don't necessarily have to rely on a browser vendor making a compass API just to actually solve those problems.
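The start-both-sensors, stash-state, render-later pattern described above looks roughly like this. It's a hedged sketch: the actual sensor-fusion math for a proper heading is the hard part and is omitted here:

```javascript
// Start the accelerometer and gyroscope, stash the latest readings in shared
// state, and consume them in a render loop.
function startCompassSensors(state) {
  const accel = new Accelerometer({ frequency: 60 });
  const gyro = new Gyroscope({ frequency: 60 });
  accel.onchange = () => { state.accel = { x: accel.x, y: accel.y, z: accel.z }; };
  gyro.onchange = () => { state.gyro = { x: gyro.x, y: gyro.y, z: gyro.z }; };
  accel.start();
  gyro.start();
}

function renderLoop(state) {
  // Fuse state.accel and state.gyro into a heading here, then draw it.
  requestAnimationFrame(() => renderLoop(state));
}
```

Decoupling sensor delivery from rendering like this keeps the draw step synchronized to frames rather than to sensor frequency.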
And I think that's
actually pretty cool.
Actually, did I just talk
about the wrong slide?
I didn't play the video.
I'm sorry about that.
So that's kind of like the
newer APIs coming through.
I think some of them
are pretty cool.
Like, some are very hardware
driven at that point.
And the one thing I do want
to try and get across at least
is that I want these web APIs
to kind of-- I say mimic native; that's the wrong way of saying it.
I want all the capabilities
of the native platforms
to be available
to web developers.
But I don't want us
to kind of like lose
our soul at the point
of saying that we
must have an exact
parity with those APIs.
There are very webby
things that we can do
that no other platform can do.
And, like, that's
one of the things
that I think is pretty
cool, especially
on the whole ephemeral
aspect, right?
Is like for very short,
lightweight experiences,
whether it might be a marketing campaign that a lot of people get asked to build or even things like election results,
like, you don't want to have
to build a native application,
deploy it through the stores.
You just want someone
to go to a URL
and start interacting
with the experience
and then once something happens
be able to respond to it.
I think that's a very
powerful thing to do.
And I think if you look at
things like the physical web--
has anyone interacted with
any of the physical web
beacons today?
That's cool, quite a few people.
Like, we had Polymon on the Physical Web, broadcasting the URL.
Your phone picks it up-- or any device that can actually pick up the beacon signal at that point.
It understands what URL is being broadcast and presents you with some metadata in the user interface.
And then you can
click on it and start
to interact with
that experience.
That's super lightweight.
Like, no one's ever going to build or install an application which is just there to interact with a TV, say, at a conference-- those types of things.
Like, the lightweight kind
of like ephemeral nature
of these experiences,
especially through physical web,
are really powerful.
But the really interesting
thing for me is like, yes,
we can discover
like a beacon, which
is kind of cool, that
points to a web experience.
But actually, sometimes we want to take the web experience, like the URL that's been presented, and actually connect it to a physical device, right?
Like, I know we were talking
about the internet of things
before.
But like, this is where you can
start to see kind of the tie-in
with web Bluetooth.
And Web Bluetooth-- again, we talked about this last year.
But it's at the point now where it's an Origin Trial.
I think it's an Origin Trial.
Is it still an Origin Trial?
Yeah, it's still an Origin Trial, man.
So you have to enable it if you're going to use it, which is cool.
It's fine.
The API still might
change at this point.
But you can start to build
really compelling experiences.
You can have a
piece of hardware.
This is the PLAYBULB candle.
Vincent has been
walking around the venue
a couple times with
the actual PLAYBULBs.
And we've had a code lab
as well where you can
go and start to play with it.
But the idea behind it is you
don't need a native application
to start to interact
with that experience.
It's literally a link to a website, which then connects through to the-- not the beacon, but the Bluetooth device.
And you can start to interact with it.
And then you can walk away.
You don't have to install it.
This could be an added-to-the-home-screen progressive web app, but once you've interacted with it, you don't have to install it to use it again.
I think that's
incredibly powerful--
super lightweight experiences
that we can do a lot with.
And it's kind of interesting.
Like, I'm not going to get too
much into the whole Bluetooth
space.
Because if you're not going to
build things with Bluetooth,
you probably don't have
to understand it too much.
But it's like, you have
this idea of the BLE beacon
or the BLE device broadcasting
a whole bunch of attributes.
Those attributes-- well, you broadcast a whole bunch of attributes through the GATT server.
You have this idea of services.
Like, your device can have multiple kinds of capabilities.
Like, it could be a battery service.
It could be, in this case, the candle service at that point.
You connect to the
service and then
you can get different
attributes off the back of it.
So a service might have
multiple attributes.
Like in this case,
the battery service,
you probably only want to
ever read the battery level.
But you can kind of get that
and start to read from the data.
And then you can also get notified of changes to that data.
And it's actually a really
simple or relatively simple
API.
Once you understand how to
actually start interacting
with the device and you know
what data you need to send it
and how you should connect to
it, it's relatively simple.
And it gets even
simpler when you
start to think about the
async await syntax as well.
Like, you're not having
to chain promises together
to the next and the
next and the next.
It's actually pretty simple.
But in this case, the discovery phase is you just basically call navigator.bluetooth.requestDevice().
You tell it the type of service that you want to connect to.
And then you'll get
prompted to say,
well, we know that
there's the device here.
Do you want to access it?
Once you get access, you
can physically connect.
You get the access
to the service.
You can try and get access to the service-- in this case, for a heart rate monitor.
You say, I want the heart rate service.
And then you can say,
well, I got the service.
I need to get regular data
from it at that point.
So I'm going to get the
heart rate measurement.
And in this case, I want to
be notified whenever the heart
rate measurement changes.
And I think that's a very
kind of relatively easy flow
just to start getting some
lightweight interactions
with the device at that point.
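Put together with async/await, the whole discover-connect-subscribe flow looks roughly like this. The measurement parsing is a simplified reading of the standard heart rate characteristic layout:

```javascript
// Web Bluetooth with async/await: discover a heart rate monitor, connect,
// and subscribe to measurement notifications.
async function watchHeartRate(onBpm) {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['heart_rate'] }] // triggers the device chooser prompt
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('heart_rate');
  const characteristic = await service.getCharacteristic('heart_rate_measurement');
  characteristic.addEventListener('characteristicvaluechanged', e => {
    onBpm(parseHeartRate(e.target.value));
  });
  await characteristic.startNotifications();
}

// Simplified parsing: bit 0 of the flags byte says whether the rate is
// a uint8 or a little-endian uint16 starting at the next byte.
function parseHeartRate(dataView) {
  const is16Bit = dataView.getUint8(0) & 0x1;
  return is16Bit ? dataView.getUint16(1, /* littleEndian= */ true)
                 : dataView.getUint8(1);
}
```

No promise chains of `.then().then().then()`-- each await reads top to bottom.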
It gets a little
bit more complex
when you think about
things like Web USB.
And Web USB is-- it's
an interesting API.
And again, this is
a demo from Kenneth.
But the idea is that
any web page could
connect to the USB device.
So this is kind of interesting.
So you send some data
through-- slow typer.
Press send.
Then it appears on the device.
So you connect it to
the device and you've
sent some data to it.
But the interesting thing is, like, the first thing people say is, I don't want a web page accessing my USB devices.
And there's a very good Medium document by Reilly Grant, an engineer on this project, basically describing the security model of WebUSB.
And it gets to the whole point, which is that not every single site will be able to get access to any USB device-- specifically, only sites white-listed by the device.
So the device has to say, this
site can access my-- like,
can actually connect
to the device.
And then only when the user has
actually opted in and granted
it will the connection be made.
So the idea is that you can
get the USB-based experiences.
You can plug it in
and, like, the owner
of the piece of hardware
will be able to say,
yes, I'm going to build
the web-based user
interface for this experience.
A lot of other random sites
won't be able to do that.
I think that's quite a
powerful security model
for the web at this point.
And again, the API is very
similar to the Bluetooth API.
Rather than navigator.bluetooth.requestDevice, you do the same thing but with navigator.usb.requestDevice.
You as the hardware
vendor know your vendor
ID and all that type stuff
so you can connect to it.
The user grants access.
You get the callback through.
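The "send some data to the device" demo above can be sketched like this. The vendor ID, configuration, interface, and endpoint numbers are illustrative-- they depend entirely on the actual hardware:

```javascript
// WebUSB sketch: request a device by vendor ID, open and claim it,
// then write some bytes to an OUT endpoint.
async function sendText(text) {
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x2341 }] // e.g. an Arduino vendor ID
  });
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(2);
  const data = new TextEncoder().encode(text);
  return device.transferOut(4, data); // write to endpoint 4
}
```

The chooser prompt plus the device-side allow-listing described above both gate that `requestDevice` call before any transfer can happen.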
And then it actually gets
really complex, right?
I remember a couple
of years ago,
we tried to make an Xbox Kinect thing for Chrome apps.
And it gets really
complex when you
have to deal with the types of
control methods and the data
transfer.
Like, if you're into
USB or hardware,
you probably understand this.
I don't particularly understand this because I don't really build hardware interactions.
But you do get to choose, like,
the control method and the data
transport mechanisms as well.
So you get a lot of control
over the device at that point.
And then we also kind of start
to think about the new types
of experiences.
Like, those two
experiences in theory
have been quite
lightweight, right?
You can have the device
or a thing around.
You can connect to it, start experimenting with it, leave, and then it's fine, right?
You've not installed
a new device.
The WebVR experience, I think, is an interesting space to be in because it is quite-- immature is the wrong word.
But it's quite nascent at the moment.
Things are changing.
Everyone is trying
to explore what
to do in the space of WebVR.
Like, has anyone
got PlayStation VR?
One person, two.
I've got one.
They're pretty cool.
But like, we don't know how to
use these experiences properly.
We don't know how to build
them properly as well.
So it's a very kind
of emergent market.
But the Chrome
team in particular
have been working on
kind of making sure
that you can start to
build web-based experiences
that are kind of powered by the VR subsystem, at least anyway.
And it's in Chrome
56 at the moment.
Again, it's behind
an Origin Trial.
And I think the
thing about Web VR
for me is not that it's
going to like take over
the world and everyone
is going to use it.
But it is uniquely
positioned to be
able to provide compelling
experiences that
are very web-like.
You don't have to
install a whole bunch
of native applications just to
experience some web-based VR
content.
And the interesting thing-- if you've used the Chrome Dev Summit site-- is that we believe at this point, like, progressive enhancement is key to this, right?
That you can build experiences
that live on the web.
They're there for people to
interact with irrespective
of whether they have
the piece of hardware
that they need to
experience a VR system.
So that's pretty cool
on that side of things.
The way that we implemented
it-- and we're trying
to think about how you
implement some of these early VR
experiences.
Like, we're not saying right now that you go out and build a whole bunch of these AAA-class games to actually take advantage of WebVR.
It's a much more incremental approach.
So on the Chrome
dev summit site,
we had the plain old image.
You got the kind of
picture of this venue.
Then, you had like
this 2D immersive view.
So if you had a device that had
WebGL, you could click on it.
And then you could kind of,
like, drag your mouse around
and scroll around the page.
Then, you had this
kind of AR view.
If you had an iPad or an iPhone
or any device with a gyroscope
on, you'd be able to
kind of like basically
move your device around.
It's not complete AR.
It's like faux-AR at this point.
But then if you have a headset--
and I think the headset got
launched today.
If you have the headset,
you can plop your phone in
and experience the web VR
experience first class.
So this is the experience that we've got on the Chrome Dev Summit site.
This is like, we know that this device doesn't have WebVR, but we can provide this immersive experience because we have WebGL.
And I think that's
pretty cool because this
is the model, right?
The plain image view,
the immersive 2D view--
so this is where I'm kind
of-- I've got my phone
and I can move it around.
Like, not every experience
is going to be like this.
But it's quite powerful
that you can do that.
And then we've got this idea
of the full immersion, right?
And this is rendered using WebGL, using the VR View library that Boris Smus on the Chrome team-- or in the Google team at least now-- wrote.
You can basically plop your phone in.
It will know that you've
connected your phone
to the hardware at least.
And then you move around
and it automatically
moves into this model.
I think that's actually-- it's
actually pretty cool, right?
Because you get to this
point of every single user
can experience your site.
You're not building
an application
for this experience.
You're not having to get
people to go install it.
If they have the VR
kind of capabilities,
they can start to take
advantage of it pretty quickly
and you can do this for videos.
You can do this
for images-- like,
a really nice way of doing it.
Now, the thing I would say is, like, we want to ultimately get to the point of building these AAA-class types of games.
I personally don't know whether we're there on the web just yet.
But I think we're getting
into a good place.
And the final thing
I would say is
that I want to get
to this point where
we have a common understanding.
And it could be SLICE.
It could be any other
kind of model going.
We have a common
understanding for how
we want to deploy these
experiences on the web.
The web has properties that no other platform has, specifically around ephemerality, linkability, and indexability.
You can give a link to anyone.
You can start to use that
experience and go anywhere.
The last thing I would say is if
you're interested in obviously
the progressive web app space
and the future developments
from Chrome and these new APIs,
we do have our developer portal
on developers.google.com/web.
If you go to /web/updates,
you will get all the new APIs
as they come through Chrome.
But our guidance and our focus
is-- the new and shiny stuff
is great, right?
It gets people
excited and it gets
you inspired to actually build
the next generation things.
developers.google.com/web is our place to give you practical guidance for all the technologies that are available today.
So we're much more focused on responsive design, performance, service workers, and progressive web applications-- and obviously developer feedback as well.
So with that, I know I've
run completely over time.
But I would like
to thank everyone.
This rehearsal,
right, by the way--
SPEAKER 1: You're still talking.
PAUL KINLAN: Yes.
So
SPEAKER 2: Did they
say keep going?
SPEAKER 1: Are you trying to run
straight into Chrome dev summit
2017?
PAUL KINLAN: Yes.
OK.
SPEAKER 1: Ladies and
gentlemen, Paul Kinlan.
PAUL KINLAN: Always.
SPEAKER 1: Not for me.
You can take it.
