[MUSIC PLAYING]
[APPLAUSE]
JAKE ARCHIBALD: So, I realize
that you're all probably quite
sick of me now.
I've kind of been
around all day.
This is actually
only the second talk
I've given at Chrome Dev Summit.
And the other one was the
very first Chrome Dev Summit
back in 2013, and it went
a little bit like this.
And the new thing is
the service worker.
Actually, I think this
is the first talk on it.
There's nothing to play
with in the browser, yet.
So, this was before anyone had
ever written a service worker.
There was nothing in
the browser at all.
But now, we have two fully
independent implementations
in Chrome and Firefox, and that
means we get the other Chromium
browsers to come along for
the ride, things like Opera,
and Samsung
Internet, and others.
Microsoft, they're working
on their implementation, now.
It's a high priority,
and bits and pieces
are starting to land in
their Insider builds as well.
Safari still haven't
made a public commitment,
but they have been given
implementation feedback
on the specs.
So, they've been looking
at it in a lot of detail,
and they've been implementing
the Fetch API as well,
which is a big part of it.
It's a prerequisite
if you're going
to implement service workers.
But thanks to
progressive enhancement,
we've gone from having nothing
in any browser to hundreds
of millions of
page loads handled
by a service worker every day,
and that's just in Chrome.
And I'm not talking about
service workers are just
there for push
messages and things,
because there's loads
more of those as well.
I'm talking about
service workers
that are actually handling
fetch events like page loads.
So, that means that today,
which I couldn't back in 2013,
I can stand here and talk
about actual shipped things.
Because in 2013, I basically
made stuff up for 30 minutes.
This slide in particular
is a total work of fiction.
It's great.
[LAUGHTER]
Look how happy I look
there not wearing a suit.
Thanks, everyone.
Oh, to anyone
who's watching this
in the video in the
future, they voted
that I had to wear a suit
for this, and it's horrible.
Thank you, everyone.
Anyway.
But this talk, I
enjoyed this talk.
It was a bit of a laugh, so
I'm going to do it again.
Because there's a lot of stuff
we're starting to implement,
or starting to think about,
in service worker land.
And I'd like to share it, and
see what you think about it.
Which things you want
in a browser right now,
and which things you're not
all that bothered about.
I probably should have called
this talk seven things that
don't so much exist right now,
but I'm pretty excited about,
and you might be as well.
It's going to be a
journey to the future.
This is a real FAQ page for
a train company in Wales,
and it's just this
one question that
says, can I buy train
tickets for future travel.
[LAUGHTER]
To which their answer is, yes.
Just that.
I've been to Wales
before, and it definitely
feels like time travel.
Maybe not forwards.
So, what have we got coming up?
OK.
So, we've got streams.
I love streams.
And there are a lot of streams
already in the browser.
You can fetch a URL just
using fetch and await,
like we saw before.
Get a reader for
the readable stream,
and then we can set
up an infinite loop,
and we can call
read on the reader.
And this gives us
an object back,
which is very similar to
what iterators return.
There's two properties.
It's done and value.
If done is true, we're done, and
otherwise, we've got the value.
And I think this
code could be nicer.
I always get very nervous
about while true code.
This works, but I don't
know, it makes me nervous.
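Here's a minimal sketch of that reader loop. To keep it runnable anywhere web streams are available (Node 18+, or a modern browser), it uses a locally constructed ReadableStream in place of a network fetch; the function names and the chunk contents are my own stand-ins, not from the slides.

```javascript
// Stand-in for a fetched body: a stream of two byte chunks.
function chunkStream() {
  const chunks = [new Uint8Array([72, 105]), new Uint8Array([33])];
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(chunk);
      controller.close();
    }
  });
}

// The while-true pattern: read() resolves with { done, value },
// just like an iterator result.
async function readAll(stream) {
  const reader = stream.getReader();
  const received = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) return received;
    received.push(value);
  }
}
```

`await readAll(chunkStream())` hands back the two Uint8Array chunks in order.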
And that brings me to
the first future feature
that I want to talk
about, async iterators.
Now, I have learned from
my mistakes in 2013.
So, this is the vagueness graph.
And I'd say async iterators
are about this vague, but do
bear in mind that this graph
is itself about this vague.
And that's quite vague.
I hope that clears
everything up.
Async iterators, they're
being specced right now.
They're at stage 3 of
the TC39 process,
so we can expect some
implementations pretty soon.
So, how do they actually work?
Well, instead of this while
loop and getting a reader,
we can just do this.
It's much simpler: for
await value of stream.
And it works just the same
way that the while true loop
worked before.
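A runnable sketch of that for-await loop, assuming the stream implements async iteration (Node's web ReadableStream does; browser support was still being specced at the time of this talk). The stream and its values here are my own example, not from the slides.

```javascript
// Stand-in stream producing two values.
function letterStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue('a');
      controller.enqueue('b');
      controller.close();
    }
  });
}

// for await replaces the reader + while (true) boilerplate.
async function collect(stream) {
  const received = [];
  for await (const value of stream) received.push(value);
  return received;
}
```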
And when these
land in JavaScript,
we'll start to see DOM
APIs updated to use them.
So, thinking about things
like the cache API,
you'll have an iterator to
go over caches or over items
in caches as well.
I'd love to see this
added to IndexedDB cursors
for going through
an entire data set.
If you want to know more
about async iterators,
there it is on the
tc39 GitHub page.
I will tweet out all of the
links I show in the talk.
But if you can't
wait for that, you
can play with them
today using Babel.
This is it running
here in the Babel REPL.
I'm only showing
you this, because I
have an excuse to say Babel
REPL, which is very satisfying.
Babel REPL.
I actually really love the way
we name things in the industry.
We just don't care.
Look at this.
This is a totally legitimate
sentence in our industry.
My tiny Yelp clone,
built with Redux,
is now up on Ember Twiddle.
My tiny Yelp clone is
now up on Ember Twiddle.
I love it!
I should've put it on Babel
REPL and completed the set.
So, when you stream
values from fetch,
each value is a
Uint8Array of bytes.
But often you don't want bytes.
You want some other
format like text.
And you can actually do this
today using TextDecoder,
so I'm going to create a new
TextDecoder and loop over the stream.
But this time I'm going
to pass every value
through decoder.decode.
Now, instead of logging bytes,
it's going to log strings.
But having to call decode
on each value, I don't know,
it's a bit of a pain.
It'd be nice just to
have a stream of text.
And that's going to be
a lot simpler thanks
to the next feature,
transform streams.
Transform streams,
I'd say they're
about as vague as
async iterators,
maybe a little less vague.
They're still being specced.
There is a sort of JavaScript
implementation of proof
of concept, and
some implementation
is happening in
Chrome right now.
So before we
introduced a decoder,
we were streaming stuff from the
network straight into our log.
No?
OK.
[LAUGHTER]
Thank you.
Transform streams
become this little bit
that sits in the middle
that takes stuff in and puts
something else out.
In terms of code, they look
like this, new transform stream.
And then you pass in
an object of methods,
like start, called
straight away, transform,
and that's called every
time a chunk is received,
and then flush, which is when
the incoming stream has ended.
And what you get
back is an object
with two properties,
which are a writable stream
and a readable stream:
the input and the output.
And this works really
well, because you
can pass just one of those
bits to another piece of code
without passing on the
whole transform stream.
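A sketch of that shape: start, transform, and flush callbacks, with the readable and writable halves available separately. The upper-casing logic is just a stand-in transform of my own, assuming a platform where TransformStream is global (Node 18+ or a modern browser).

```javascript
const upperCaser = new TransformStream({
  start(controller) {
    // called straight away, before any chunks arrive
  },
  transform(chunk, controller) {
    // called every time a chunk is received on the writable side
    controller.enqueue(chunk.toUpperCase());
  },
  flush(controller) {
    // called once the incoming stream has ended
    controller.enqueue('!');
  }
});

async function run() {
  // One piece of code can hold the writable...
  const writer = upperCaser.writable.getWriter();
  writer.write('hello');
  writer.close();

  // ...while another consumes the readable.
  let result = '';
  for await (const chunk of upperCaser.readable) result += chunk;
  return result;
}
```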
So, say we want to
create this text
decoder as a transform stream.
We'd start off by
creating a function that's
going to return it,
set up our decoder,
the internal
implementation, and return
a fancy new transform stream.
We only need the
transform function,
and in there just every
time we get a chunk,
we're going to do
controller.enqueue, which
is passing a chunk
out, and we're
going to call textDecoder.decode
and pass that chunk through.
So, if we go back
to our fetch code
from before that was
logging out bytes,
we can change this round about
here and take our stream,
and we pipe it through the
decoder we just created.
And this, the pipeThrough,
takes the readable
it has, pipes it
into the writable of the transform,
and returns the readable
of the transform.
So, now all the logs will
be text at this point.
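Putting those two steps together, here's a runnable sketch of the decoder transform and the pipeThrough call. The byte stream is constructed locally rather than coming from fetch, and I've added `{ stream: true }` to decode, which the slide version omitted; it's what handles a character split across two chunks.

```javascript
// A TextDecoder wrapped up as a transform stream.
function textDecoderTransform() {
  const textDecoder = new TextDecoder();
  return new TransformStream({
    transform(chunk, controller) {
      // { stream: true } copes with multi-byte characters
      // that straddle chunk boundaries.
      controller.enqueue(textDecoder.decode(chunk, { stream: true }));
    }
  });
}

// Stand-in for a fetched body: "Hello" split across two chunks.
function byteStream() {
  return new ReadableStream({
    start(controller) {
      controller.enqueue(new Uint8Array([72, 101, 108]));
      controller.enqueue(new Uint8Array([108, 111]));
      controller.close();
    }
  });
}

async function collectText(stream) {
  let text = '';
  for await (const part of stream) text += part;
  return text;
}
```

`byteStream().pipeThrough(textDecoderTransform())` now yields text instead of bytes.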
Now like async iterators, once
this lands in the browser,
we'll start to see
them appear in the DOM
as well, so the APIs
will be changed.
Things like compression
and decompression,
there's a lot of that in the
browser already [INAUDIBLE],
et cetera.
Image encoding and decoding,
they already exist too.
They're just not exposed
to developers very well,
and they'd be perfect
for transform streams.
But the first DOM
API that is going
to become a transform
stream, and we've
wasted our time by recreating
it, is going to be TextDecoder.
So that's going to be changed
in a backwards compatible way
to be a native transform stream.
And once that happens, you'll
get a stream of text out of it.
So, if you want to dig
into streams a bit more,
check out the spec,
and that's where
you'll find the
JavaScript implementation.
I'm really excited about
streams landing in JavaScript,
in case you can't tell, because
I think it's about time,
because streams have been
behind the scenes of the browser
for like 20 years.
If a page is well built, you'll
see it rendered gradually,
and this is because the
browser streams the content
from the network and passes it
through the HTML parser, which
supports streams.
It can process it
as it's arriving.
Wiki Offline, this
is a Wikipedia PWA,
and it makes good use
of this ancient browser
feature on a low end
device on a 3G connection
or emulated in Chrome, anyway.
With an empty cache, the
HTML takes around about just
under five seconds to download.
But all the while
that's happening,
the parser is processing
what it receives,
and that means we get a first
render in less than half
a second.
I think Chrome's throttling
is actually quite kind here.
On a real device it
would be a bit later
than that due to SSL setup.
So, at this point,
we're just displaying
the top banner, the title.
We haven't got the full
page of content yet,
but at least the user feels
like something's happening.
And then at 1.8 seconds,
we get the first page
of content rendered,
and rendering continues
as more stuff is received.
As an experiment, I
also built Wiki Offline
as a single page app, which is a
popular pattern with JavaScript
frameworks.
So, here I'm going to return
this little bit of HTML
and then let JavaScript
handle the rest.
This actually changes
the story quite a lot.
The HTML fetching and
parsing is way quicker,
because there's not a lot of it.
And then here we get the
first render, just the shell.
So, at this point
performance is neck and neck.
But while this is happening,
JavaScript is downloading,
and that needs to
execute, and then it
fetches the actual content
it needs for the page
and inserts it.
Now, we get to content render,
almost two seconds later
than the server
rendered version.
And I'm being kind
here, I think.
We regularly see
single page apps
taking a lot longer than this
to get content on screen.
It's a little bit of
a misleading graph
because it looks like the single
page app completes everything
a lot sooner.
The reason for this is in
the server rendered version,
as it's downloading the
HTML, it discovers things.
It discovers things like
style sheets, images, fonts,
all of that stuff,
and it starts going,
oh, actually, some of this
is important for the top
of the page, so I'm
going to devote bandwidth
to dealing with that.
In the single page app
version, none of that
can happen until that
content is parsed
and that happens right at
the end at that render there.
So it's loads slower.
What can we do about
this performance problem?
Well, we can bring
in a service worker,
and we can store the
actual page in the cache,
and so that makes that
a little bit shorter.
That download time goes away.
We can do the same with
the script as well.
But the page content still
comes from the network.
We can't cache all of Wikipedia.
The problem we have here is
that JavaScript initiates
the content download, so we
have to wait for the JavaScript
to run before we can start
fetching the content.
We can avoid this using link rel
preload, which we saw earlier.
So, doing this means we can run
those two things in parallel.
But, so what?
All of that optimization
later, a service worker,
preloading, caching, we're still
slower than the empty cache
server render.
Just an update for everyone,
the screen of my notes
just went off for three seconds.
This could be happening again.
But we're still slower than
the empty cache render there,
and that's because we're
spending all this time
downloading content and then not
doing anything with it until we
have all of it.
So, we've traded this
gradual rendering model here
for one where we
just display nothing
until we have everything.
And this is because
there's no API that
can take a stream of HTML
and inject it into the page,
and we really need that.
I hope we get that one day.
But until then, we shouldn't
be breaking performance
by using a single
page app then just
trying to limit the damage.
We should be taking the well
performing server render
and then making
that even better.
And streams combined with
service workers let us do this.
So, like we saw
before, this streams.
The same is true if we put a
service worker in the middle.
It doesn't really
change anything.
If the content is
coming from a cache
it will also stream,
which is still important
if it's a large video file.
You still want that
to stream from disk.
But ideally, we want a mixture.
So, we want to serve a single
HTML response, where parts come
from the cache, the static
parts like the header,
but the dynamic parts
come from network.
And you can already
do this in Chrome.
In a service worker
fetch event, I'm
going to get three
parts of the page.
I'm going to get the start from
the cache, the middle
from the network, like
a sort of include,
and then I'm going to get the
end from the cache as well.
Then I'm going to get
readers for all of those,
because I'm going to
process those streams.
I'm going to create my
own readable stream,
and I'm going to make
a response using it,
so I can just pass the
readable into new response
and off it goes.
Unfortunately, populating
that stream is not so easy.
It's like this.
It's a big bit of code.
I'm not going to
talk through it.
It's quite ugly, and it
involves passing every chunk
through JavaScript
and dealing with it
and processing all of
those streams in order.
This is actually going to
get a whole lot easier thanks
to identity streams, which is
the next of the 2017 features
I want to look at.
I would say these are more vague
than transform streams, mostly
because the API changed less
than two weeks ago, so things
are moving around, but I
think it's pretty stable now.
To use this in
your service worker
fetch events, just
as before, I'm
going to get those three parts
that I'm going to display,
but this time I'm going to
create an identity stream.
An identity stream is just
a transform stream that
doesn't do any transforming.
The input just
goes to the output.
So, I'm going to respond
with that readable part
of the transform, but
then, before I do that,
this is how we deal
with the writeable.
I'm just going to do
something asynchronously,
so I'm going to have a self
invoking async function there.
For each of the responses,
promises that we have,
I'm going to pipe the body
to the writable.
And I'm going to say prevent
close here, which is just
saying, hey, once
all of this stream
has gone into that stream,
don't close the other stream,
because we've got more to do.
So, we're going to do
it for each stream,
and then we can close it out.
And that's it.
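Here's a runnable sketch of that stitching pattern. A real service worker would pipe cached and network responses into the identity transform and respond with its readable; outside a service worker we can show the same mechanics with local streams. The helper names are my own.

```javascript
// Stand-in for a cached or fetched body part.
function partStream(text) {
  return new ReadableStream({
    start(controller) {
      controller.enqueue(text);
      controller.close();
    }
  });
}

// Pipe each part into one identity transform, keeping the
// writable open between parts with preventClose, then close it
// once everything has gone through.
async function stitch(parts) {
  const identity = new TransformStream(); // no transformer: input goes to output
  const producer = (async () => {
    for (const part of parts) {
      await part.pipeTo(identity.writable, { preventClose: true });
    }
    await identity.writable.getWriter().close();
  })();

  // In a service worker you'd hand identity.readable to
  // new Response(...); here we just consume it.
  let out = '';
  for await (const chunk of identity.readable) out += chunk;
  await producer; // surface any piping errors
  return out;
}
```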
And not only is
this code simpler,
it's also faster, because we're
no longer passing every chunk
through JavaScript.
Because the browser
can go, oh, hang on,
the stream that we're receiving
is from behind the scenes.
It's either coming from
the network or the cache.
And then the thing
receiving the stream
is the HTML parser, which
is also behind the scenes,
and it can just do the whole
thing in the background
and save a whole lot
of processing time.
So, now, we're getting
the best of both.
We're responding
quickly from the cache
but streaming the rest of
the data from the network.
And the result of that-- So,
here's where we were before.
We can optimize our
server rendered version
with the service
worker in streams.
The parse starts
earlier, because it
receives that big
lump of content right
at the start from the cache.
And this means our
first render
happens much sooner,
but the important bit
is the content
happens way sooner.
So, we get that quick offline
first cache render, but still
the benefit of the
streaming render
for the uncached content.
So, it's now over a
second quicker for content
than the hacky single page app.
And with a model like
this, I'm actually
kind of happy with
full page reloads when
it comes to navigating around.
So, on the left here I
have a single page app,
so every time I click
a link, JavaScript
is going to fetch the data
and put it on the page.
On the right, it's
just a web page.
You click a link, and
it's going to reload,
and it's going to
load that data.
So, I set them off
at the same time.
You see that with
all the complexity
I added, with making
this a single page app
and using pushState
et cetera, it's
still slower than
full page reloads,
especially when they're super
charged by a streaming service
worker.
Your mileage may vary.
It can depend on the amount
of content you've got,
but I'm not making
this up though.
Although this is
a demo, I actually
got hit by a real world case of
this only a couple of days ago.
On Monday, I was
at Heathrow Airport
browsing GitHub on airport
Wi-Fi which is not so great.
Now GitHub will use
pushState, and it
will use JavaScript for
all of its navigations,
unless you're in a new tab.
Then it will do a server render.
So, what I'm going
to do here is click a link
on the left here,
and then I'm going to paste the
same link into an empty tab.
So here I go.
Click the link,
paste it, off we go.
And we can see that the server
render wins by a country mile.
It's way faster.
And this is not throttled
or anything, well
not artificially.
This is just airport Wi-Fi.
And this is because,
on the left,
it has to download
anything before it
can show-- it has to
download everything
before it can show anything.
At GitHub here, they've
written a lot of JavaScript
to make this quite slow.
[LAUGHTER]
Unfortunately, all
too often I hear
people say that a progressive
web app must be a single page
app, and I am not so sure.
You might not need
a single page app.
A single page app can end up
being a lot of work and slower.
There's a lot of
cargo-culting
around single page
apps, and I know
what happens when you just copy
someone else without really
understanding the situation.
You see, I went out for
a meal with Paul Irish.
Yeah, that's right.
I've had a meal with Paul Irish.
He wants to touch me.
Anyway, I watched
Paul taste some wine,
and this was amazing.
He swirled around in the glass,
and he took this huge sniff
like, [SNIFFING], huge sniff.
And then just took a sip, and I
thought, wow, Paul is so cool.
[LAUGHTER]
He really knows what he's doing.
This is amazing.
Anyway, a couple of months
later, I was back in England
out with some friends, and
we were at a restaurant,
and we had some wine.
And I thought, I've got this.
I know what to do here.
I've seen this done.
So I took the wine, I swirled
it, and I took a big old sniff.
But I tipped the wine glass
just a little bit too far
and dipped my nose in it.
[LAUGHTER]
I don't know if you've
ever snorted wine before.
It is not pleasant.
I just sneezed it
out everywhere,
and my friends were just staring
at me covered in a wine mist.
And they were like,
Jake why didn't you
just drink it with your mouth?
It would have been
so much easier.
The moral of the story is you
might not need a single page
app when it comes to--
[LAUGHTER]
There's a link there.
The server render might be
enough, especially when you've
involved the service worker.
And of course, if you're
using a client side framework,
server rendering is
an absolute must.
I mean React, Ember, and,
you know, web components,
they all let you get something
on screen in a streaming manner
before JavaScript fetches.
Just make sure that you're not
displaying things that should
be interactive but aren't.
So, things are
looking pretty good.
However, Facebook, they've been
prototyping with this stream
stuff and identified a problem.
If you're serving
from a service worker,
there is the start up time with
the service worker to consider.
And that's zero if
it's already running,
but the service
worker shuts down
if it hasn't done anything for
30 seconds, to preserve memory.
Depending on the user's device
or other things going on,
that start up can add, in the
worst cases, a few hundred
milliseconds.
And that delays the content
fetch just by a little bit.
And we are looking to
reduce that start up time,
but it's always going
to be more than zero
if your service worker
isn't already running.
Are we just going
to live with that?
Not over my tiny Yelp clone, we're not.
So, we're going to introduce
navigation preload.
Now, I would say this
is a little higher
on the vagueness scale.
We have an implementation
in progress,
but the spec is still
moving around a little bit,
so take this with
a grain of salt.
Our goal here is to
start the HTML fetch
in parallel with the
service worker start up,
which you can enable just
using this one line here.
You can do that
whenever you want,
but the service worker activate
event is a pretty good place
to do it.
And this means for
navigation requests,
the browser will make the
request of the network
while the service
worker is booting up,
and that response appears
on the fetch event
as a preload response.
And that's a promise.
And that will resolve
with undefined
if it's not a navigation or
if the feature isn't enabled.
So, it's always worth
looking, and if it's undefined,
you just do a normal fetch if
that's what you're wanting.
Now, what you do with
this is up to you.
You could respond from the cache
and fall back to the network,
but given that this preload
can happen pretty early,
it becomes realistic that the
network may beat the cache API.
So why not race the
two of them and see
which one comes back first?
I want to pick up on a point
someone said yesterday.
He was very right that
Promise.race is not your friend
for doing this at all.
When you give Promise.race
an array of promises,
it takes the result of
whichever one ends first,
not whichever one
succeeds first.
Take this race.
There's a race.
I'd say this race
was in progress,
because no one has won yet.
Promise.race, on the
other hand, would
say, [GASP] she fell over.
[LAUGHTER]
Don't care about anything else.
The whole race was a
failure because of her.
[LAUGHTER]
Promise.race is a dick.
So, you will need to write
your own racing function here.
You want the value
of the first promise
to resolve with a
truthy value.
It's a few lines, but
that's what you need.
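A minimal sketch of those few lines: a race that settles with the first promise to fulfill with a truthy value, rather than the first to settle at all like Promise.race. The function name and the fallback-to-undefined behavior are my own choices.

```javascript
// Resolves with the first truthy fulfillment; rejections and
// falsy values don't end the race. If nothing truthy arrives,
// resolves with undefined once everything has settled.
function firstTruthy(promises) {
  return new Promise((resolve) => {
    let pending = promises.length;
    if (pending === 0) resolve(undefined);
    for (const promise of promises) {
      Promise.resolve(promise)
        .then((value) => { if (value) resolve(value); })
        .catch(() => {}) // one failure shouldn't lose the race
        .finally(() => { if (--pending === 0) resolve(undefined); });
    }
  });
}
```

So racing the preload response against the cache becomes `firstTruthy([preloadPromise, cachePromise])`: a network failure just lets the cache win.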
But what about our
streaming code from before?
A straight up preload
wouldn't work here,
because we're not fetching
the same thing that
would be fetched if the
service worker wasn't there.
Because we just want
the middle of the page,
just that middle bit,
because we've already
got the top and the
bottom in the cache.
Thankfully, this
is not a problem,
because those
preload requests are
sent with a special
header, this header here.
And if your server
sees that, you can go,
oh, OK, I'm just going to serve
the middle bit, because this
is going to go through
the service worker,
and it knows how
to deal with it.
So, back in our code,
we can deal with that.
Just, right here, use
the preload response,
if it's there, otherwise
falling back to fetch.
That means for
navigation requests
that will happen
at the same time
as the service
worker is booting up.
And this is something that
we can improve on even more.
With this feature, we can
potentially look at doing it
as the browser is
booting up, which
is particularly good
for progressive web
apps added to the home screen.
We hope we can get there
as well: as soon as
the user presses the icon, just
as the browser's booting up,
we can have that request
started well early.
If you want to dig into
this a little bit more,
there's a huge thread
on GitHub about it.
As I said, I'll post
the links up later on.
What else have we got?
So, the current way the
service worker works
is that requests from your
page will go via the service
worker, your service worker.
And that happens
even if the request
is to a completely different
origin, like a font service.
Your service worker
decides what to do.
And this is by design, because
it means you can cache things,
like images and fonts, even if
the destination server hasn't
even thought about how that
would work or how to do that.
Downside to that is
many sites may end up
with similar logic for
font caching or analytics
and can end up storing the
same thing independently.
And in the future,
we could look at ways
of deduplicating that
inside the browser,
but the logic is still
being duplicated.
So, to the rescue here
comes foreign fetch.
So, I would say, this is
a little vaguer still,
only because I'm pretty
certain parts of this API
are going to change.
But there is a version
of it in Chrome Canary
already, which you can
actually test with real users.
I'll put a link up on how
to do that in a minute.
So, what is it?
With foreign fetch,
the font service
has its own service
worker and storage,
and if you make a request
of the font service,
it first goes to
your service worker,
you get the first
shout of what to do.
But if you send the request
on to the font service,
it goes to its service worker,
and they get to decide what to do,
which could be to get the
stuff out of the cache
and send it back.
So, that means, now, if another
website makes the same request
to the font service, it can
get that caching benefit,
the same resource that the
font service has cached.
So, if you want
to do this, if you
wanted to be the font
service and make this work,
in your service worker, you
listen for this new event,
like foreign fetch,
and this will
be triggered when another
origin requests something
from your origin.
And from there on,
it's a little familiar.
Respond with what you're
going to respond with,
however you want.
Let's look to see if there's
something in the cache,
otherwise fall back
to the network,
and then, you
return the response.
And this is where things
get a little bit different.
Rather than just returning
the response, or a promise
for the response,
you return an object,
which has a response property.
Now, when you do this,
the requesting page
will not have scripting
access to the content
of that response.
It won't be able to
get the text of it,
but it will be able to
include it as a script tag
or as an image element
or something like that.
The same way CORS works today.
This is like a no-cors
response.
It just won't be able to get
at the text or the pixel data
of the image.
If you want the other
server to have that access,
you add the origin
property, and you set it
to the origin you
want to have access.
So, here I'm just passing
through event.origin,
so I'm saying, if I have
visibility to this resource,
I want them to have
visibility to it
as well, which you think
carefully if that's
what you actually want.
Otherwise, you can
set up some kind
of white list or something.
You could even get this
information from IndexedDB.
You're coding.
You can do what you want.
So, this is a representation
of CORS, but with JavaScript.
So, you can do a lot of
different things.
We talked about fonts,
images, and analytics,
but you can use this to
create whole REST like APIs
that work entirely offline.
One detail is missing, though.
How do we get this
service worker installed
on the user's machine?
Because if it's a REST
API or a font service,
like where the fonts come
from, the user's very rarely
going to actually
go there, and visiting is
when a service
worker would normally get installed.
So, to fix this,
when you actually
give a resource to
a page, you can also
serve it with this
special header, which
tells the browser about the
service worker you have,
and it will then
go and install it.
If you're keen on
foreign fetch, here's
an article by Jeff, who
was speaking earlier.
He covers it, and he also
covers how you can actually
use it on websites today
as part of an origin trial.
Oh, yeah.
So, earlier on,
background sync was
mentioned, which is a feature
we shipped many months ago.
It allows you to defer single
tasks until the user regains
connectivity.
So, say the user updated
some setting in their profile
or sent a chat message when
they had no connection,
background sync lets
you queue that work,
and now the user
can navigate away.
They can close the browser,
and later once they
have connectivity,
the service worker
can wake up and send
that stuff to the server.
And this is shipped in Chrome.
It's done, and it's great
for small bits of data,
like profile updates,
sending a chat
message, that kind of thing.
The problem here is,
while the sync happens,
the service worker has to
be awake the whole time,
and that's bad for privacy
and bad for battery.
So, we're not going to do that.
What we do now is if a
sync runs for too long,
we just kill the process.
But for large uploads
and downloads,
we're working on something
else, background fetch.
Now, it's quite early days for
this one so it's pretty vague.
Vaguer than the vague graph
itself, so it's quite vague.
All we have right
now is an API sketch,
and we're starting
to explore the issues
and get a feel for
how it can work.
It's a cross-browser effort.
So, here's the idea.
From your page or
your service worker,
whichever, you get a
hold of the registration
and then call
backgroundFetch.fetch,
give it an ID, and then
give it some requests.
So, for a movie, this could
include the video resource
but also, some metadata or
something, poster image,
or whatever.
And that's it.
That fetch will now
happen in the background,
even if the user closes the
page or the browser on mobile.
And once the fetch completes,
you get an event for it,
and that will give you
information about it.
You can start having a look.
What's the tag?
Yeah, I'm going to
actually cache this stuff,
so I'm going to open the cache.
And then event.fetches will be
a JavaScript map of requests
and responses that
arrived, so you
can do what you want with that.
Of course, if you're uploading
photos, you don't want to cache
the result. You'll just
maybe show a notification,
so you've got the freedom there.
And during the
fetch, the user will
see a notification
that will show
the progress of the download.
And because of this
high visibility
and it being easily
cancelable, we're
hoping that we can deliver this
feature without any permission
prompts or something like that.
We just need to make sure that
the privacy aspect is correct
and make sure it
isn't too abusable.
If this is something
you're interested in,
you can take part on GitHub.
I will move that repo somewhere
a little bit more neutral,
like the WICG standards thing.
Oh, yeah!
Earlier on, I showed
you this thing here.
The full page navigations
being significantly faster.
But I know why people
go down the SPA route.
It's because they want the
ability to do a nice transition
from one state to the other.
It makes me sad, because I've
seen developers introduce
large frameworks just for
basic transitions, which
is a little bit of
a shame, especially
to have to reimplement the
entire navigation stack
just because you want a nice
fade from one thing to another.
And that's why we are going to
take another look at navigation
transitions.
I mentioned them yesterday.
I really want us to have a
good plan for this in 2017,
but right now, the idea
is very, very vague.
In fact, we have to scale
the whole graph down just
to see the top of it.
So, take this with
a big bag of salt.
[LAUGHTER]
And it's not the first time
we've looked into this either.
Internet Explorer 5, you
could use this meta tag
to specify an enter or
exit transition from a set
of configurable presets.
So, with this page in
Internet Explorer 5,
the user would click the link,
and Internet Explorer would
crash, is what it usually did.
[LAUGHTER]
Well, that was my
experience anyway.
But then in 2014
Chrome Dev Summit,
we pitched this transitions
idea, we showed demos.
It didn't really pan out.
Mozilla had a proposal, as well.
But they're both solutions
that live in CSS,
and they're limited by what you
can declaratively say up front.
I don't think they're
expressive enough.
Stuff like this
should be possible,
and that would be
a full page reload
utilizing the full navigation
stack of the browser
and the streaming HTML parser.
Because when you do this,
you get the back and forward
buttons working for free.
If we actually
take a closer look
at this transition,
the first part,
we can do that without
any additional data.
We already have the image,
we know where it's going,
and we have that
title already stored.
We can do that bit, and we
can improve the perception
of performance by doing this
bit while the actual fetch is
happening, and then we can bring
in the content once it arrives.
And if it arrives while
we're transitioning,
we can bring it in
earlier and make it part
of that sliding transition.
The transition out is
a little bit different,
and we actually need more
data to do that transition.
Because we need to know where
we're sending the clock back
to, which depends
on layout, but also
scroll position,
because when you
use the back and
forward buttons,
it will try and restore
the scroll position.
I really think we need an API
that allows this, something
like a navigate event that
fires when this page is
going to be changed.
And you can say, hey, I'm about
to do a transition so keep
this document around for a bit.
And at this point, you can
start doing the very first part
of the transition.
You're getting
everything into place,
where you think things
are going to be.
Get a hold of the new
window object, which
will represent the
page that's coming in,
and that will resolve
as undefined if it's
a cross-origin navigation.
I would like us to look at
cross-origin navigations
as well, but they have
to be pretty restricted
for security reasons.
But once you've got
this new window,
you've got scripting
access to it.
You can start doing
what you want.
By default, I think the new
window will draw on top,
but the transparent parts
will show the page underneath.
So, here, you can start
looking at where elements are,
what the scroll position is.
Here, I'm just going to
set the opacity to zero
of the new document, wait
for document.interactive,
and then fade that document in.
So, that's a simple
fading animation.
This is a simple example,
but it's as complex
as you want to make it.
So, with this, you'll be able to
do these expressive animations
but retain all of the features
that the browser gives you
for free in navigations.
If that's interesting to you,
the details are on GitHub.
Once again, I intend to
move this repo somewhere
a little bit more neutral.
The term progressive web
app is just over a year old,
but the work has been happening
for years on this stuff,
and we're not done.
I think you've heard over
the past couple of days
how much we love the web
and where we want it to go,
but now it is over to you.
We want your feedback
on this stuff,
be it in GitHub at
the very early stages
or playing with this
stuff in Chrome Canary.
So, come and talk
to us about it.
Basically, I can't put it better
than this shop window sign.
We're you're not
till not happy--
[LAUGHTER]
Wait.
We're not happy till
you're not happy.
No, That's not it either.
[LAUGHTER]
Till-- Oh, I don't know.
Anyway, thank you very much.
[APPLAUSE]
[MUSIC PLAYING]
