PAUL LEWIS: So yes, we're going
to talk about performance.
We're going to be talking about
both page load performance,
so getting things
to load quickly.
And then we're going to
be talking about 60 frames
a second kind of stuff as well,
the runtime aspects of this.
So introducing the panel then.
So at the far right
we have Pat Meenan,
Software Engineer on Chrome.
If you've ever used
something like WebPagetest,
you've used one of the
finest performance tools
that we have available today.
And if you haven't, go to
webpagetest.org and shove
in your site and find
out how you're doing.
It really is that good.
To his left, my right, that's Paul Irish, who I think most of us know, Product Manager on Chrome. We know him from things like jQuery, Modernizr, and of course Chrome DevTools. He's been involved in that for a good long time.
To my right we've got
Siddharth Vijayakrishnan.
And he's a product manager
focusing on Chrome's network
stack, ensuring that Chrome
loads in the best way possible.
And he's also responsible for
the Chrome data compression
proxy on Android, as well.
To my left it's Nat Duca.
He's a software
engineer on Chrome.
And if you've heard
the word jank,
then essentially that's
down to Nat, pretty much.
He's the person behind Chrome's trace-viewer and GPU rasterization. Basically, I discovered pretty early on in my career at Google that most graphics roads kind of lead back to him, for good or ill.
To his left it's Jan. He's a software engineer who works on things like mod_pagespeed and ngx_pagespeed.
So if you never tried those,
you absolutely should.
They can make a world of
difference to your site.
They optimize your content
automatically on the server
side, so you don't have to think
quite as hard about all that.
And then at the far
left, a late addition
and a very kind and
willing person to join,
is Alex Russell.
He's going to hopefully help
us out a little bit with things
about ServiceWorker because
it's going to come up.
Let's be honest, it's been on
the tip of everybody's tongue
for the last day or so.
So all right.
So we'll get started.
Question from a guy called Paul.
I like how this starts already.
Oh yeah, oh yeah.
Paul for performance,
performance with Pauls.
It's good.
"Performance is
a massive area--"
it's a fair question-- "so where
should I invest my efforts?
On the network or on the
client and rendering pipeline?
And if I can't do
all of it, what
do I spend the bulk
of my time on?"
Who wants to go?
Pat.
PAT MEENAN: I think
it's largely going
to depend on what kind
of app and experience
you're delivering.
If you're delivering sort of
more of a content experience,
you're probably going to want
to focus more on the loading.
If you're delivering sort
of a rich app-interactive
experience, you can afford to
spend more time on the loading
to get all of the experience
there ahead of time,
and have the rich interactions
and really smooth interactivity
with the user.
PAUL LEWIS: Does
that sort of bank
on having ServiceWorker there?
Are you sort of saying,
well, you know what?
Actually we just figure
that that first load will
take the hit, and then
we hope that things will be better for future loads?
PAT MEENAN: I mean just look
at how your users are going
to interact with
your content, right?
If they're going to
interact with it as an app,
focus on their
actual interactions.
If they're going to
go to it to consume
content and sort of jump
away, get the content to them
as quickly as you can.
PAUL LEWIS: Nat, do
you feel the same?
NAT DUCA: Oh, yeah, totally.
I mean, this all begins
with a model of your users
and who you're trying to attract
to make money from or exploit
in some way.
Right?
Clearly.
So if you have
terrible jank, it's
going to drop them right out.
So there's sort of
this baseline you
need to achieve
across the board.
If you have a 10
second page load time,
you're going to lose people,
and you need not to do that.
Once you get down into sort of
the baseline of you are mostly
smooth and you are
mostly quickly loading,
then you get into this nuance.
So first of all, get that
baseline established.
And then from there
you want to decide,
are people returning
to your site enough
to start benefiting from
cacheability and things
on that side.
And the cool thing here is
there's this trade off of,
especially if you could
start assuming ServiceWorker
or cacheability.
You start being able to
simplify your loading process
and have a little more free time
to do great, smooth UI effects.
But you really have to do both.
You have to plan to figure
out how to afford to do both.
And then if you can't
afford to do both,
you're going to
have to figure out
which side to simplify
in order to get
to the end of your launch.
PAUL IRISH: I was going
to add in one thing, which
is think about it from
the user perspective.
The page load is like
your introduction to them.
And if you are just like
stuttering while you're
introducing yourself
to someone, it's
not going to really
work out so well.
So there's the introduction.
And then there's
where you actually
demonstrate your value.
And that's more of like
the application performing,
and things are smooth, and not
breaking user expectations.
And so obviously we have
to do both of these things.
It comes down a lot to knowing what your users' pain points are.
And so getting good
feedback from them
on like it feels
weird, it feels slow.
And then tell me more
about the slow thing.
Well, I come to it
every single day
and it takes so long for
it to get up and running.
Then you understand that it's
all about that initial load.
So you have to do both.
And just identify,
work with your user
to identify what
is really impacting
their perception
of the experience.
PAUL LEWIS: OK, fair enough.
I guess related to that,
there's another question
from another Paul.
I fully approve of all the Pauls asking the questions, incidentally.
"If I was a developer in India
or another emerging economy,
where would you now tell
them to focus their efforts?"
Is that the same, or do we
think that it changes matters?
PAT MEENAN: First of all, sort
of one thing that really grinds
me about the question
is, you don't
need to be a developer
in an emerging market
to target an emerging market.
So, let's face it.
The emerging
markets are actually
going to have a lot
more mobile users
than the non-emerging markets.
So hopefully we're all
targeting all of them.
And the things that you do
for the emerging markets,
if it's a situation where
bandwidth is severely limited
or network connectivity is
limited, a lot of the things
that we've been talking
about here for offline
are going to be huge.
Like ServiceWorker
is going to be huge.
Reducing the amount of stuff
you're shoving down to them
is going to be huge, especially
on the initial load and stuff
like that.
But everything
you do for them is
going to be beneficial for
all of your user base anyway.
PAUL LEWIS: So let
me put it this way.
According to the HTTP Archive
I checked, 58% of the web
has pages that are one
megabyte or larger.
That was as of the
start of this month.
And certainly we're pushing--
PAT MEENAN: Actually, it's
close to two megabytes, I think.
PAUL LEWIS: Yeah.
PAT MEENAN: Which is
scaring the crap out of me.
PAUL LEWIS: Yeah, right?
And so we're kind of pushing to
have these richer experiences,
which kind of implies more
JavaScript, more resources,
just more, more,
more of everything.
And even without ServiceWorker,
if you're talking about, say,
somewhere where ServiceWorker
hasn't shipped that much.
Should we be looking
at say, lo-fi versions
of sites and apps?
Where is that dividing line?
Do we sort of go, well
actually, I look at this market
and maybe it doesn't
do so well here.
And I want users here, therefore
I'm going to scale things back?
What do I do for the people
who have good connections?
Lots of things in there.
Who wants to tackle that?
JAN MAESSEN: Bad connections.
I think a lot of us
sitting in this room--
at least, if we're developers
in the US, like I am-- sort of
don't realize just how bad bad
connectivity really can be.
So when it takes 10 seconds to get the first 15 KB of your page downloaded, and 30 seconds to
just get the rest of the HTML
down, the other megabyte
of image resources
just isn't even on the map yet.
So the only way to address that
kind of market is to simplify.
And that might mean you should
be simplifying your entire app
experience.
PAUL LEWIS: So what do we do, then, to make that a real consideration for developers?
Like you say, it's
very easy to forget.
And unless a developer
is very willing
to go and, say, spend
time with WebPagetest
and manually set
up all these tests,
how can it be more
front of mind?
ALEX RUSSELL: I think the
answer is user research.
You need to be talking to users
and studying them, and studying
their behavior, and
studying their actions.
I know that it's very
difficult to classify users
on poor connections,
in many cases.
You can sort of understand the total RTT from your first payload, and then
understand maybe something
about their latency.
And I know that
that's difficult.
We have looked, at Google, at systems that will allow us to classify users based on those sorts of signals that we get from the network.
But those aren't available
to everybody yet.
So I think there's a tools
gap-- understanding who
your users are, where
they're coming from,
what sorts of experiences
they're having.
Real user metrics are
an important component.
And then doing science
about the performance,
as opposed to guessing.
Like talking to users, looking
at the actual performance,
looking at the
actual engagement,
and then running
tests, A/B tests.
You've got to test.
Like introduce
artificial latency.
See how it affects
usage of your site.
Come back and see what happens
if you reduce the latency.
If you do a performance
pass, what does it improve?
Does it improve engagement?
Does it improve responsiveness?
You'll figure out
where to invest only
by looking at the problem
with an open mind.
PAUL IRISH: I was going to add one thing that is pretty concrete here, which is that there are a lot of APIs available for identifying what the network story is and collecting it from the wild. So with Navigation Timing, you can reconstruct the entire network waterfall for every single page load, for every single user.
Send that back to the server,
and then take a look at it
in aggregate, see
what's happening.
Even in Google Analytics, you
can get the page load timings
for absolutely all your users.
Segment them out.
You can get an idea of how long
things are taking right now.
So that's definitely
something to look into.
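A minimal sketch of that kind of collection, assuming a Navigation Timing Level 1 `performance.timing` object; the function name and the `/rum` beacon endpoint are hypothetical, not anything the panel prescribes:

```javascript
// Derive phase durations from a Navigation Timing (Level 1) object.
// Field names follow the spec; `navigationPhases` and `/rum` are made up.
function navigationPhases(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,   // DNS lookup
    connect: t.connectEnd - t.connectStart,          // TCP (and TLS) connect
    ttfb: t.responseStart - t.requestStart,          // time to first byte
    download: t.responseEnd - t.responseStart,       // response body transfer
    load: t.loadEventEnd - t.navigationStart,        // full page load
  };
}

// In a page you might then send it back for aggregation, e.g.:
// navigator.sendBeacon('/rum', JSON.stringify(navigationPhases(performance.timing)));
```

Segmenting those numbers server-side by device and geography is what surfaces the slow cohorts.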
NAT DUCA: One thing that I'd
love everybody's feedback
on here is, some
peeps at Facebook
have been throwing around
this idea of device classes
and network classes, which
is separate from like
whether you have Wi-Fi.
It's more about, OK,
you got good pings,
terrible pings, low
pings, whatever.
And then the same thing,
you have a 2010 device,
you have a 2011-era device.
We obviously don't
have this on the web.
It strikes me that this
is kind of interesting.
Please reach out
to us if you think
that's something we
should be pursuing.
It seems interesting.
One other comment there-- keep
in mind that like 640 by 480
is back when you go to
the developing world.
So you don't need a megabyte
of JPEG for 640 by 480.
Or 480 by 600, I guess,
because they make it vertical.
SIDDHARTH
VIJAYAKRISHNAN: In terms
of reducing the bytes
transferred over the wire,
with the Chrome data
compression proxy,
we found that a few simple
things actually go a long way.
So the proxy actually does some
very simple transformations.
It applies gzip.
So in 2014, it is surprising the
number of websites and servers
that don't use gzip,
even though it's
just one line in the
Apache config file.
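That one line is roughly the following; a sketch for Apache with mod_deflate enabled (the exact content types to list depend on your site):

```apache
# Compress text resources on the fly (requires mod_deflate).
AddOutputFilterByType DEFLATE text/html text/css application/javascript application/json
```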
Transcoding images to WebP is a huge gain.
CSS minification, stripping
white space from JavaScript.
These are small things, but
in aggregate, the Chrome data
compression proxy achieves
about 50% reduction
in the bytes transferred.
So even those simple
things can go a long way
in actually reducing
the number of bytes
that you have to send
out to the client.
JAN MAESSEN: And I'll point out that, in fact, mod_pagespeed can do exactly that for your website.
The Chrome data
compression proxy
is doing for users what
we do for site owners.
So if you're not serving
WebPs, for example, to Chrome
enabled devices,
you should think
about serving WebPs to
Chrome enabled devices.
And we can do that for you,
but there are other techniques
out there to do the same thing.
And that's a huge
savings in bandwidth
for devices that support that.
PAUL LEWIS: All right,
so let's move on.
"With the shift to
HTTP/2 and SPDY with SSL,
is it better to have
multiple smaller files
or fewer larger files?"
We're going straight
into the technical stuff.
And that was from Matt.
It doesn't look like a Paul to
me, but let's ask it anyway.
SIDDHARTH
VIJAYAKRISHNAN: I think
the short answer there is yes.
HTTP/2 has features like
multiplexing, priorities,
stream dependencies,
all of which
work much better when you
actually have multiple smaller
files.
The one caveat that I would add is about domain sharding, which is something people used in the past to get around the limit of two connections per host. That is not going to work with HTTP/2; domain sharding is not something you should be doing for HTTP/2. Today, if your sharded hosts resolve to the same IP, then Chrome will sort of transparently unshard this for you. But that's not something that is recommended for HTTP/2 going forward.
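For what it's worth, turning this on is mostly server configuration. A hypothetical nginx sketch, with placeholder hostnames and certificate paths; the `http2` listen parameter needs a recent nginx, and older versions offered `spdy` in the same position instead:

```nginx
server {
    # Browsers only speak HTTP/2 (and SPDY before it) over TLS.
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}
```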
PAUL LEWIS: Are there other, let's say, best practices that we've baked into developers' heads over the past years about how to do things with HTTP as it is today that actually now break-- for want of a better word-- when we hit HTTP/2?
And how do they migrate
from one to the other?
And is there a
seamless path for them?
Or is it just--
SIDDHARTH VIJAYAKRISHNAN:
For the most part
the transformations--
PAUL LEWIS: [INAUDIBLE].
SIDDHARTH VIJAYAKRISHNAN: Yeah. For the most part, the transformations should be handled by the server itself.
So for things like
priorities and push,
the servers are typically
configured to do that.
So the developers don't really
have to do a lot of work.
But you have to actually go and make sure that the version of the server that you're running, whether it's Nginx or Apache, has actually implemented the features correctly.
Because when we did some
analysis in the past,
we found that some
versions of Nginx,
for example, got
prioritization wrong.
I think that's been
fixed now, but you
have to make sure that the
version that you're actually
running is something that
supports all of these.
PAUL LEWIS: And is there going to be some kind of server logic there that kind of goes, hey, if this is HTTP/2, it just spits out all the files? Or if it's HTTP/1, actually I want you to serve the concatenated version of all my resources, like old school?
PAT MEENAN: Yes.
So there are some
old best practices
that are no longer
applicable-- or necessary
I guess is probably a
better way to put it.
Spriting is no longer
really something
that you have to worry about.
Concatenating the
files together,
also not something you
really have to worry about.
Being able to version
the individual resources
and change just the one file
that changed without having
to push down your whole package
is one of those huge things.
Sharding, also.
Serving content off
of a static domain
instead of your main domain.
All of these things that
sort of require another DNS
lookup or another socket
connect sort of hurt you.
So if you can serve as much
as possible off of your base
domain, you're going to get
as much benefit as possible.
So domain sharding
was a big deal
when there were only two
connections per host.
It's been a long time
since that's been the case.
So that's no longer something that you even have to care about, whether it's a new browser or an old browser.
If someone is still
using IE 6, OK, they'll
have a slightly
slower experience
because you're not
domain sharding.
Yeah, they'll get over it.
Or hopefully--
PAUL LEWIS: Kind of feels like
they might have other problems
if their browser is
that old at this point.
PAT MEENAN: And we're
actually at the point
now where all of
the modern browsers
support SPDY or HTTP/2.
It's going to be kind of
a little period in time
where you're going to
have to sort of think
about supporting both.
IE 11 is sort of the
one edge case, where
I guess on Windows 7 it doesn't
support it, but on Windows 8
it does.
But Safari rolled it out with
iOS 8, Firefox has had it,
Chrome has had it.
Look at your market
share for your user base,
but you're probably at the
point where you can just shift.
It degrades gracefully.
If someone happens to be on
one of the older browsers,
they'll get a slightly
slower experience.
But it saves you a whole lot of effort.
And everyone on
the newer browsers
gets the really
great experience.
And, you know, while you're doing all of this, you also sort of set yourself up to be able to do all of the cool new stuff, like ServiceWorker, which needs TLS.
And if you want
fast TLS experience,
you're going to want
to use SPDY or HTTP/2.
So sort of do all of this
to get prepared to use
all of the cool new stuff.
PAUL LEWIS: All right,
so let's move on.
A question from somebody who labels themselves as Time Hat.
I'm not quite sure what
that is, but anyway.
"For larger web apps, what have you found to be the best balance between serving all resources needed upfront, with a larger initial delay, and providing resources needed for a set of functionality on demand, just holding off? And how does that change with ServiceWorker, if at all?"
PAT MEENAN: Just really quickly.
For the love of
God, please don't
try to load something
the minute or the instant
a user is trying
to do an action.
You can trickle stuff in during idle cycles if you don't want to sort of pay the upfront cost.
But you really don't want to get
in the way of their activity.
So if you're going
to delay something,
don't delay it until they
actually go to use it.
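One way to sketch that idle-time trickling; the function and parameter names are made up, and the scheduler and fetcher are injected so the queueing logic stands alone. In a page you would pass `window.requestIdleCallback` and `window.fetch`:

```javascript
// Prefetch likely-needed resources during idle periods instead of
// fetching them at the moment of user interaction.
function idlePrefetch(urls, scheduleIdle, fetchFn) {
  const queue = urls.slice();
  function pump(deadline) {
    // Fetch as many queued URLs as the idle budget allows.
    while (queue.length && (!deadline || deadline.timeRemaining() > 0)) {
      fetchFn(queue.shift()); // warm the cache; the response itself is ignored
    }
    // Anything left over waits for the next idle period.
    if (queue.length) scheduleIdle(pump);
  }
  scheduleIdle(pump);
}
```

The point is simply that by the time the user acts, the resource is already local.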
JAN MAESSEN: A second-order concern here is to beware of radio shutdown on mobile devices. Once the radio's been idle for long enough, it's going to take you longer to spin the radio up and start loading the resources than it would have taken to load the resources you needed upfront while the radio was still running.
And it's going to eat more
of their battery too, which
is going to piss them off.
PAUL LEWIS: So since
there's no radio API,
how should a developer attack
that particular problem?
Should they just kind
of go, you know what?
I'm just going to do it all up
front and hope for the best.
What if, then, we're talking about one, two, three megs of stuff?
ALEX RUSSELL: So
we actually sort of
see these problems show up
in large scale Google Apps.
So Google Docs and
Gmail have this problem.
They're putting megabytes of JavaScript on the wire to help you get through your day.
And in Gmail's
case, what you see
is that they will load
the whole package up front
to optimize for
interaction latency.
And Google Docs takes
a different approach,
which is interesting, because
they've sort of constructed
their applications so that
you load the initial content
and you get the document.
And then they load in editors.
You load in these packages of
things, which sort of decorate
the UI and make it
more interesting.
The thing you can do with ServiceWorker that will allow you to maybe do better in the future is, if you structure your application in that way, where you load the initial experience and then decorate it later, you can make sure that those things start to install and get cached alongside your initial page when you install and register your ServiceWorker.
That activation and
installation phase
will happen pretty
much while you've
got the radio warmed
up the first time.
And then the next time you load the page, the ServiceWorker can know that it's got those caches filled, it's in control, and you can serve the full experience.
So again, it's progressive enhancement at the network layer. And it's going to be available in Chrome Beta for Android in a couple of weeks, and more widely very soon, towards the end of the year. So you can start taking advantage of that now.
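A sketch of that install-time shell caching; the cache name and asset paths are hypothetical, and the handler logic is factored into a plain function so it can be exercised outside a real ServiceWorker:

```javascript
// Hypothetical app-shell cache name and asset list.
const SHELL_CACHE = 'app-shell-v1';
const SHELL_ASSETS = ['/', '/styles/app.css', '/scripts/core.js'];

// Open (or create) the shell cache and add every shell asset to it.
// `cacheStorage` is the global `caches` object inside a ServiceWorker.
function cacheShell(cacheStorage) {
  return cacheStorage.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS));
}

// Inside the ServiceWorker itself you would wire it up roughly like:
// self.addEventListener('install', (event) => event.waitUntil(cacheShell(caches)));
```

`waitUntil` keeps the worker in its installing phase until the shell is fully cached, which is what lets the download ride the already-warm radio from the first visit.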
NAT DUCA: Please keep an
eye on your sprite sheets.
We went through this lovely experience recently, where a well-meaning app had a 26-megabyte sprite sheet for all the icons in three different pixel densities.
And that was unfortunate.
So we've seen people putting their eyes towards SVG, using srcset, and the picture element.
And try to really use the
[? swish ?] screen if you can,
because that combined
with HTTP/2--
we're trying to articulate
a more sane way to do this.
The reality of supporting
older browsers is still there.
But say what you mean
is a really good thing
to keep trying to do.
Because 26 megabytes
is rough for us.
PAUL LEWIS: Yeah,
that's sizable.
That's something else, isn't it?
NAT DUCA: Yeah.
PAUL LEWIS: OK.
NAT DUCA: It was a good day.
PAUL LEWIS: Yeah, I bet it was.
But on the ServiceWorker thing.
I get that it's the first load.
You still have to care.
And you still have to kind
of do a good first loading
experience.
And for the second load and beyond,
does that just solve all
network performance issues now?
Are we done?
Can we just kind
of say that's it?
Now it's solved.
Or is there anything that
ServiceWorker doesn't actually
fix for us on the
networking side?
ALEX RUSSELL: Aside from
actually making your breakfast
and letting you ride
to work on a unicorn--
PAUL LEWIS: Hey, I'm not
trying to dig at you.
I'm trying to just make sure.
Because we've all been
like, ServiceWorker, woohoo!
And I'm just trying to make
sure is there any gotcha here.
That's like--
ALEX RUSSELL: So
it's worth noting
that there are going to be
many users in the next couple
of years who don't have
ServiceWorker-enabled browsers.
And it's going to create
tension in your design.
It's going to create a
difference that you're
going to have to reckon with.
And again, that first
load experience matters.
And it's also important to note that if you're going to be caching 26-megabyte sprite sheets into a ServiceWorker-- I don't know that that makes it saner, but you could-- you still have to download the 26 megabytes. And if someone is paying per kilobyte for a connection, that's still a pretty crappy thing to do to a person.
And in fact, it could be
worse than today's experience.
Because you're not going to see
the loading bar necessarily.
And these may be bits of
UI that don't reflect stuff
that you're
currently displaying.
So it's going to be
worth paying attention
to the overall budget.
Again, we're going to
make the navigation timing
and the resource
timing APIs available
from inside the ServiceWorker at
some point in the near future,
which will allow us to let you
have a handle on what you're
doing and what the network looks
like from that perspective.
But you are going
to have to stitch
those experiences
together yourself.
And it does create tension.
It is going to
create opportunities,
but they're
opportunities that we
expect that you're going to
have to be careful about.
PAUL LEWIS: Great.
Just on ServiceWorker,
one final one.
I think it's quite
good, this one.
"Where is the line between
relying on the browser's
historical caching mechanism,
like cache-control headers
and so forth, and implementing
your own caching and saying,
I will deal with this?
Especially for static or less frequently changing content, where you're not quite sure: should I go to the network, should I not?"
ALEX RUSSELL: So the mental model that you can adopt when you're thinking about the caches API inside a ServiceWorker is that it's like reference counting, if you're familiar with that. The HTTP cache sits logically behind the ServiceWorker. So the document consults the ServiceWorker, which consults the cache, which consults the network.
So there are now several layers of faulting when you make a request. You can get it maybe from the ServiceWorker and its caches.
Or you can get it from
the local HTTP cache,
if it's not expired
or evicted there.
And maybe you can get it from
the network, if none of those
have joy for you.
And so what that means is
that the ServiceWorker's
primary advantage
in terms of caching
is knowing what's there,
not necessarily anything
to do with eviction.
Caches are coherent. That is to say, if you evict a cache, it's all gone.
And that's their primary virtue.
You otherwise don't know
the state of the HTTP cache.
ServiceWorkers let you hold on to something longer than your cache headers would otherwise allow you to.
So the primary benefit
is that now you
can understand what's
there and hold onto it
in a way that is more reliable.
The browser, when it goes to
throw away older resources,
won't throw yours away.
But if both the HTTP cache would still have had it and your ServiceWorker is holding onto it, well, it's still going to be there.
And when you're populating your ServiceWorker, all the caching headers that you set for the HTTP cache still take effect. That is to say, if you set a long expires header on some resource that you are then going to hold onto from inside your ServiceWorker, we don't have to go back to the network to get it when you start populating those caches. And so it's still a useful thing to set the expiry some distance forward in the future for resources that you're going to put in a ServiceWorker cache.
PAUL LEWIS: Don't forget you
can ask questions in person.
You can also tweet @ChromiumDev.
And you can ask your questions
and I will find them.
PAT MEENAN: And really quick. I think along those lines, one of the things that has me most excited about ServiceWorker is being able to stomp on the cache expiry of third-party resources.
So I can do things like
stale-while-revalidate
in ServiceWorker for the
ads.js, where I can say, hey,
I'll always serve
whatever I have,
then go out and fetch
whatever the latest
version is on the network.
If the network's
not there, great, it
doesn't fail my page at all.
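A sketch of the stale-while-revalidate pattern Pat describes; the function name is made up, and the cache and fetcher are passed in so the logic can stand on its own outside a real ServiceWorker, where you would use the `caches` API and `fetch` inside a fetch-event handler:

```javascript
// Serve the cached copy immediately, refresh the cache in the background.
async function staleWhileRevalidate(request, cache, fetchFn) {
  const cached = await cache.match(request);
  // Kick off a background refresh regardless; a network failure must
  // never break the page, so swallow it.
  const network = fetchFn(request)
    .then((response) => {
      // Real Responses must be cloned before caching; mocks may not clone.
      cache.put(request, typeof response.clone === 'function' ? response.clone() : response);
      return response;
    })
    .catch(() => undefined);
  // Serve whatever we already have; fall back to the network on a miss.
  return cached !== undefined ? cached : network;
}
```

For something like ads.js, the user always gets an instant (possibly slightly stale) copy, and the next visit picks up whatever the background fetch brought in.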
ALEX RUSSELL: Yeah.
One of the other things that we've seen teams here talking about with ServiceWorker-- they're starting to investigate it-- is that many of our teams have very large JavaScript payloads, and they'd like to only serve a delta.
But if you don't
know what's in cache,
then you can't figure
out which delta to send
and you don't know
how to apply it.
The ServiceWorker
would let you do
all of that programmatically.
You could do it yourself
in code because you
have all of the control.
You have the ability
to start implementing
truly exotic
strategies for reducing
not just that upfront burden
and that second time burden,
but the ongoing upgrade
costs of your application.
NAT DUCA: Alex, in
a previous panel,
mentioned this concept
of an app shell.
And I think it's really important to take that to heart when running with your caching decisions.
This notion that the most
important thing for you to do--
and this just falls out
of building a native app,
for example-- is to get your
app up, get a spinner up
if you have to,
but get content up.
And go to server to
get more content.
Or get your interface up, and then show it, and then allow clicking. Make sure you've got enough of your shell up that you can provide visual feedback that you're getting something.
So if you tap down, if you
don't have your data back yet,
give them some visual bling
that says, yo, I'm working.
This is a much, much, much
better experience for people.
And so if you start with that
idea of building a shell that
then goes to the network,
you're going to end up, I think, in a better place in the long run than if you start bottom-up and think, I'm still shooting a page across the wire in its entirety.
SIDDHARTH VIJAYAKRISHNAN:
One note of caution
about HTTP caches though,
is that you should never
assume that reading
from the cache
is always faster than
going out to the network.
We have seen cases where they
take about the same time.
This is because disk reads on mobile can sometimes be really painfully slow.
ALEX RUSSELL: And it
gets worse on desktop
where you've got anti-virus
software, which is, again,
written primarily as a
practical joke on users.
But disk reads can
be very, very bad.
I'm not just up here to try
to sell you ServiceWorkers,
but you should get some.
But ServiceWorkers will also allow you to race the cache against the network. They'll let you race the disk and the network and respond with whichever one comes back first.
You have the control
to do that yourself.
Which is, again, a new thing
you couldn't do before.
So I'm very excited about that.
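That race can be sketched roughly like this; the function name is hypothetical, and the cache and fetcher are injected so the logic stands alone (in a ServiceWorker they would be `caches` and `fetch`):

```javascript
// Race the local cache against the network; respond with whichever
// source produces a result first.
function raceCacheAndNetwork(request, cache, fetchFn) {
  const fromCache = cache.match(request).then((response) => {
    if (!response) throw new Error('cache miss'); // let the network win instead
    return response;
  });
  const fromNetwork = fetchFn(request);
  // Promise.any resolves with the first promise to *fulfill*, so a fast
  // cache miss does not beat a slower but successful network fetch.
  return Promise.any([fromCache, fromNetwork]);
}
```

The key design choice is `Promise.any` rather than `Promise.race`: a cache miss settles almost instantly, and racing on settlement would make every miss fail the request even when the network would have succeeded.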
PAUL LEWIS: I'm pretty sure
Jake's Trained to Thrill
sample does that exact model.
So if you're like,
how would I do that?
There you go.
ALEX RUSSELL: Yeah.
And in terms of building
a shell and populating it.
I, again, can't recommend
Jake's Trained to Thrill code
base enough.
It sort of has eked
out every last ounce
of network performance there.
Getting you out of the gates
quickly the first time.
And then building
the app in a way that
lets you be as fast
as you can locally.
And then loading
new content later.
It's a beautiful
exposition of how
to architect your
app with the network.
PAUL LEWIS: All right.
So we're going to switch
gears up to the bit
after it's loaded.
60 frames a second.
This is a question from your
lovely, warm-hearted moderator.
60 frames a second seems
so hard to get on mobile,
often requiring hacks
and code contortions.
What's being done about this?
And can we ever
realistically expect
60 frames a second on
mobile to be the norm?
NAT DUCA: Yes.
PAUL LEWIS: And what's being
done to make that a reality?
NAT DUCA: He's such a
warm-hearted moderator.
PAUL LEWIS: I know.
NAT DUCA: So Ryan
was up yesterday
and gave this overview of how
we're just seriously focused
on being wicked fast.
We've done a huge amount
of work over the last year,
not just looking
at 60 FPS, but just
all the little details of that.
Hundreds and hundreds
of bug fixes.
60 or so major projects that have gotten Chrome to that cited number of 129.
Not quite 128, which
would've been pleasing.
PAUL LEWIS: Very pleasing.
NAT DUCA: Very pleasing in
[? a monk ?] kind of way.
PAUL LEWIS: I just
love all that.
NAT DUCA: So we've
done that, but we're
going to keep doing that.
And so for example, the
GPU Rasterization project.
We have it on some smaller
set of devices now.
We're going to try to
get that everywhere.
And then we're really looking
at all the secondary things
on the main thread that happen.
So for example, we're looking at whether GC is hitting at the right time, with the right scheduling.
We're looking at how
input is delivered
and trying to figure
out ways to coordinate
your scroll with your scroll
handler, for instance.
What else are we doing, Paul?
PAUL IRISH: Slimming Paint.
NAT DUCA: Slimming Paint.
We're re-architecting the entire Blink rendering engine, from the way it was done since it was WebKit to this whole new thing that we think will give us about 2x more performance on paint.
Which is pretty cool.
PAUL IRISH: Yeah, it's great.
NAT DUCA: Or more.
Five sometimes.
PAUL IRISH: Yeah, it's a lot.
It's interesting because a lot of times you'll hear something like, Paul Lewis will give a talk and tell you, these are the things to follow, these are your tips, and you should go and write your app like this. And that's good stuff. Yeah, certainly.
And at the same time, we don't just let Paul do the work and then say that Chrome is done.
We are very invested
in making sure
that we can give you
that awesome performance
that we're all chasing
after without you
having to do all those hacks.
So that's why a lot of
these projects like Ganesh,
like Slimming Paint, a
lot of input latency work
is all about what are
the fundamental platform
improvements that we can
make to get you there,
so you don't have to do things.
PAUL LEWIS: One of the things
I often stand up and say
is like, transform and opacity.
Because these are the
compositor-only things,
and everything else
will possibly cause
huge jank problems.
And so, do these
improvements actually
mean that you could transition
width, height, left, top
without causing a major problem?
PAUL IRISH: So yes.
So right now, transform
and opacity: fast.
Fast across all
browsers, every browser.
This is just something
that all browsers
do through the GPU.
When you animate, pretty
much every other attribute,
it requires a lot more work.
Things are slower.
So that's why animate
left and top, or animate
in height, not so great.
It's painful, it's not good.
A lot of the effects that
you see that are coming out
are very dependent on things
like a height animation.
You have a hero image and
you have things fading out.
That should look and work great.
And right now the
best you can do
is basically just like try
and fake it with transforms.
And it doesn't really work.
And we don't think
you should have
to go through those sorts of
contortions as a developer
to get the sorts of effects.
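The "fake it with transforms" contortion described here is usually done with the FLIP technique (First, Last, Invert, Play): measure the element before and after the layout change, then animate a compositor-friendly transform between the two states. A minimal sketch of just the invert math, using plain rect objects in place of real `getBoundingClientRect` results (the function name is made up):

```javascript
// FLIP: measure the element before ("first") and after ("last") the
// layout change, then compute the transform that makes the "last"
// position look like the "first" one. Animating that transform back
// to identity runs on the compositor only.
function computeInvert(first, last) {
  return {
    translateX: first.left - last.left,
    translateY: first.top - last.top,
    scaleX: first.width / last.width,
    scaleY: first.height / last.height,
  };
}

// Example: a 100x100 card at (0, 0) grows to fill a 400x400 area
// anchored at (50, 50).
const invert = computeInvert(
  { left: 0, top: 0, width: 100, height: 100 },
  { left: 50, top: 50, width: 400, height: 400 }
);
// invert.translateX === -50, invert.scaleX === 0.25
```

In a browser you would apply the result as `translate(...) scale(...)` on the element and transition it back to `none`, which is exactly the "fake a height animation" workaround being discussed.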
NAT DUCA: I like asking people
how many properties there
are in CSS.
And actually it's kind
of hard to answer it,
which is very telling.
But it's something like
115, if you're conservative,
or 300 or 400, if you start
looking at the real world.
It's a lot.
Two are fast.
Maybe three.
PAUL IRISH: Yeah, two.
NAT DUCA: There are two
directions that this could go.
And where this heads is partly
a technical problem and partly
audience participation
with other vendors.
One direction is
that the browser
starts making painting
properties fast.
So look at csstriggers.com.
PAUL LEWIS: Yeah.
NAT DUCA: Painting properties,
like border color--
this doesn't affect
where things are,
it just changes the
visual appearance.
We can make those
really, really fast.
But if you look at all the
effects you see on mobile,
things move around, right?
It's nice for things
to change colors,
but what really makes something
compelling is things move.
And that's CSS layout,
and layout should be fast.
The data show, when we
really, really dig into it,
that we can run layout at 60
FPS without breaking a sweat.
It's just that nobody
ever measured carefully
and determined that.
So where we think we
should go in the platform
is to actually enable 60 FPS
mutation of any of your CSS
properties, because that's
the natural way to do it.
But that is an opinion held
by a lot of Chrome engineers.
And we kind of need to
hear the audience's voice
about whether that's
what you want too.
Or whether you'd like us to--
[APPLAUSE]
PAUL LEWIS: That
sounds like a yes.
It's hard to discern, I think
they need to clap louder.
No it's--
NAT DUCA: Go over to Cupertino.
PAUL LEWIS: Just
to be super clear,
you're saying that
you think it's
possible to reach
60 frames a second,
no matter which CSS
properties you change.
NAT DUCA: Yes.
PAUL LEWIS: I like that.
PAT MEENAN: What about the "or"?
PAUL LEWIS: Yeah, what was
part two of this one, Nat?
NAT DUCA: Well, so
the hard part here
is that requires some
really hard examination
of the fundamentals
of web rendering.
You have to really
say, we're
going to optimize the
core of the renderer,
rather than bolt on compositing
magic on the outside.
And so what leads us to the
point of being able to say,
we think we can
do this, is a year
worth of major re-architecture.
And another year already
planned of even more
major re-architecture.
That's a lot of work.
We can do this because Chrome
and Google were determined
to move the web forward.
And so we think we can do this.
But this is a tough order for
our friends at other vendors.
And so the more
pragmatic thing here
is to say, oh, that's
going to be really hard.
Let's make background
color easy to animate.
So it's a tough space.
We sort of recognize
the tension here.
I still think CSS is a
frigging awesome thing,
and it's a shame to
have to throw it out when
you want to grow
something taller,
and have to do
so by bypassing it.
PAUL LEWIS: So in the
interim, while not everything
is 60 FPS, 60 frames a second,
if, say, 30 frames a second
looks less janky than
something variable,
are we saying to developers,
just kind of still go for 60,
and don't worry
about anything less?
Or are there ways in which
we'd say, well, actually you
should throttle back?
Are we looking at ways that
we'd suggest people throttle?
Or are we just sort of saying,
go for the best you can.
And we'll try and do the rest.
ALEX RUSSELL: So the web
platform proper has traditionally
had a series of problems.
You heard about web components
yesterday and Polymer.
And that came from
some of us asking
a question like, why can't
I do what the browser does?
Why can't I run script to
do what it's clearly doing?
It's parsing a thing.
It's got a tag name.
It looks it up in the table.
It creates an instance of
that thing and spits it back.
Why can't I be part
of that conversation?
It's doing that work, why
can't I be part of it?
Implicit in this question is
the reality that today in CSS,
you can't be part of it.
There's just no way that
you could figure out
how style recalc is happening,
or when a frame is generated,
and control that in
any meaningful way.
We have requestAnimationFrame,
which is effectively
a bolt on that
tells you when we're
going to swap buffers
or something like that.
And we've got CSS animations.
But there's no API
that connects them.
This is one of the fundamental
challenges of the way
that we have to design new APIs
for the web platform, which
is to draw these
connections together.
And it's something that we,
frankly, on the API design
side, have traditionally
done a very poor job of.
Because it's much easier
to design a new feature
that solves a problem.
That someone says, hey,
I've got this problem.
I want to make things animated.
OK, we'll design some CSS.
Hey, you can animate something.
But I can't animate everything.
But I want control.
How do I do it?
What we didn't do
was say, well here's
an API that gives
you control that
lets you do all this stuff.
We have, I think
collectively, made
a shift on the back of the
extensible web manifesto,
and some of that other thinking
over the last couple of years,
about how to deliver
new features to you.
And I think the thing you
can continue to demand of us
is why aren't those
two things connected?
Tell me how we connect those.
If I want to do it at
30 frames a second,
what is the API that I
use to make that happen?
Tell me what it is.
And today, talking
to us, the answer
is we haven't given it
to you, and that's on us.
Our bad.
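Until such an API exists, the common workaround for targeting 30 frames a second is to skip alternate `requestAnimationFrame` callbacks yourself. The gating decision is pure and can be sketched without the DOM (the helper name is made up):

```javascript
// A frame gate: render only when a full target interval has elapsed
// since the last rendered frame. At 30 FPS on a 60 Hz display this
// skips every other vsync.
function makeFrameGate(targetFps) {
  const interval = 1000 / targetFps;
  let last = -Infinity;
  return function shouldRender(now) {
    if (now - last < interval) return false;
    last = now;
    return true;
  };
}

// Simulated 60 Hz timestamps (~16.67 ms apart) with a 30 FPS gate:
const gate = makeFrameGate(30);
const rendered = [0, 16.67, 33.34, 50.01, 66.68].filter(gate);
// rendered → [0, 33.34, 66.68]: every other vsync is skipped
```

In a browser, `shouldRender(timestamp)` would be called at the top of each `requestAnimationFrame` callback, bailing out early on skipped frames.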
NAT DUCA: I feel like I want
a bug report for this one.
PAUL LEWIS: Yeah.
That sounds good.
NAT DUCA: The web needs
something like this.
We do have the frame timing
API, I guess, coming.
And that's going
to be pretty cool.
And this gets back to some
of those device classes.
Like, hey, yo, this is
like a really cheap device.
Maybe I should show
the non-animated UI.
So for example, we've
been working very closely
with-- in L, so in
Lollipop-- you'll
see a new look to Google search.
And that switches between a
"full material design with lots
of animations" version and
a "it's still the same look,
but there's no animations
on lower-end end devices."
And that's a pretty
good technique.
Right now, we do that
by user agent sniffing.
[SIGHS]
PAUL LEWIS: See, they sigh,
but really they enjoy that.
NAT DUCA: But we do this.
Yes, I know.
PAUL LEWIS: They go actually,
I'm going to do that.
NAT DUCA: It's this thing.
We might want to do better.
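As a rough illustration of the device-class technique described here, a user-agent check for "show the non-animated UI" might look like the following. The cutoff and the regex are illustrative assumptions, not what Google search actually ships:

```javascript
// Crude device-class detection via user agent sniffing: treat old
// Android releases (before 4.4) as low-end and disable animations.
// The pattern and cutoff are illustrative only.
function useReducedAnimations(userAgent) {
  const m = /Android (\d+)\.(\d+)/.exec(userAgent);
  if (m) {
    const major = +m[1];
    const minor = +m[2];
    if (major < 4 || (major === 4 && minor < 4)) return true;
  }
  return false;
}

useReducedAnimations('Mozilla/5.0 (Linux; Android 2.3.6; ...)'); // true
useReducedAnimations('Mozilla/5.0 (Linux; Android 5.0; ...)');   // false
```

The sigh is warranted: sniffing is brittle, which is why the panel immediately says "we might want to do better" — capability or timing signals (like the frame timing API discussed later) are the direction they want instead.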
PAUL LEWIS: Going
back to the point
you made about the
extensible web and so on,
and explaining the platform.
But there's been a rise.
There's Famous.
There is React.
They sort of
virtualize the platform
and explain things
in a different way,
and then sort of
bake out to screen.
And they certainly promise
breakthroughs, right?
Achieving 60 frames
a second for web apps,
mobile apps.
And they've introduced
crazy mixed DOM-WebGL modes.
What are the thoughts on this?
Like should we be looking
at this kind of thing?
Are there any
drawbacks that we see
when people go in
this direction?
Is that something you
guys want to talk about?
NAT DUCA: I'm looking at
Paul, but I can attempt to--
PAUL IRISH: Give it a shot.
Give it a shot.
NAT DUCA: OK.
Famous is leading the way
in a lot of regards on this.
And there are a couple
other frameworks
that are just awesome.
We want to look at those
effects and make sure
that everybody can get to
them without a framework.
And this is somewhat like the
story with Web Components,
where we go, OK, you
can build a framework.
But this is sometimes a
symptom of some deeper
problem in the platform.
If you have to do and
adopt an entire framework
to move something from the
left side of the screen
to the right side of the
screen, something's wrong.
And we should fix that.
Now--
[APPLAUSE]
I'll stop there.
Ranting to the-- yeah.
PAUL IRISH: Well, so,
I kind of disagree.
NAT DUCA: Great.
PAUL IRISH: In that--
PAUL LEWIS: Yay.
Conflict, go!
Sorry, I was not
fighting for that at all.
PAUL IRISH: Well,
basically I feel
as though pulling off
a fantastic 60 FPS UI
is really hard.
And there's not a
huge amount of people
that like know enough
to really do it.
And so in many ways
for that experience,
to see it scale
across all developers,
we need frameworks to exist that
have that expertise behind it.
This is just a matter of
distributing expertise
and making sure that it's there.
So the things that
Famous is doing,
and things that Polymer
and Ionic are pursuing,
it's all kind of
like giving everyone
the ability to pull
this stuff off.
And it's really important,
I think, as a developer,
to target the sort of
experience that you want
and chase that down.
And maybe you go through a
framework, and that's good.
On the platform, we
want to make sure
that that effect is absolutely
available for everyone.
But anyways, the sort
of things-- so Famous
and what they're doing on
the UI is very inspiring.
The sort of virtual DOM
stuff that React has done
and now Ember is adopting
is really exciting.
And it drives a lot of
the conversations that
happen on the web
platform on how
we can make sure that the sorts
of things that they're chasing,
we just deliver
right out of the box.
NAT DUCA: As always, Dr. Irish
is completely right here.
So, you know, there's
the long-term,
which I'm speaking
to, of like we
want to bring this to everybody.
But the reality is this is
darn tough right now.
One thing that's really
cool, by the way,
is this Virtual DOM stuff.
One of the fundamental
things we don't think we
can make all that
much faster-- by which
I mean I think we've got
another 5x in us--
but recalc style is hard.
It's computationally evil.
Somebody just did a
proof of how bad it is.
It's terrible.
PAUL LEWIS: How terrible?
NAT DUCA: [? Runo ?] did this.
He has this selector
that's like a comma
a star comma a
twiddle star star.
It's this horrible thing.
PAUL LEWIS: That sounds
exactly like the CSS I'd write.
NAT DUCA: It has to
recalc the entire world.
This is not going to
get that much faster.
Therefore, keep the
number of DOM elements
that you have under control.
And we've actually
been muttering
about saying, keep to about
1,000 on mobile, not more.
And then if you have to
have it more than 1,000
due to scrolling, virtualize,
virtualize, defer,
so on and so forth.
So keep an eye on that.
And Famous and React and a
bunch of people, you know,
all the virtual lists.
They're all doing this.
This is super important
for performance.
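The virtualization being recommended boils down to windowing: compute which slice of the list actually intersects the viewport and only give those items DOM nodes, so the element count stays bounded no matter how long the list is. A sketch of the arithmetic for fixed-height items (the function name and overscan default are assumptions):

```javascript
// Virtual-list windowing: given a fixed item height, compute the index
// range of items that need real DOM nodes for the current scroll
// position, plus a little overscan on each side to hide pop-in.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan
  );
  return { first, last };
}

// 10,000 items, 50px tall, 600px viewport, scrolled to 25,000px:
const r = visibleRange(25000, 600, 50, 10000);
// r → { first: 497, last: 515 }: only ~19 DOM nodes, not 10,000
```

This is the core of what Famous, React's virtual lists, and similar frameworks do, and it keeps you comfortably under the "about 1,000 elements on mobile" guideline mentioned above.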
PAUL IRISH: Man.
Descendant selectors.
NAT DUCA: Yes.
PAUL IRISH: It's a--
I've done presentations
where I've been like,
guys, don't worry about it.
Descendant selectors are fine.
They're fast.
Turns out--
PAUL LEWIS: Is this is
an admission, my friend?
PAUL IRISH: It is.
NAT DUCA: Twiddle
is worse, though.
PAUL IRISH: Back in
2008, I was like, they're slow.
And then I'm like, they're fast.
But it turns out they're
very fast to evaluate.
But, this is that
recalc style thing.
Descendant selector
in the recalc style
introduces a lot more cost.
And so staying specific
with class names--
the sort of BEM style
class name architecture
that people using these days
actually works incredibly well.
And so keeping
your classes,
very localized to
your component,
is just a good way
to go going forward.
And reduces this
cost of recalc style.
PAUL LEWIS: To clarify
the recalc style bit.
Does that mean that
during an animation--
say I'm transitioning
something in an animation.
Do we not figure out
that you're still
talking about the same element,
and the recalc style just
applies to that same element
that it did last frame?
Or are you saying that we have
to kind of compute to the class
name every single
time, and we never
remember it between frames?
Does that make sense?
PAUL IRISH: Yeah, yeah.
I think, uh--
PAUL LEWIS: It just seems like,
it feels like if it's expensive
we should do it
once and cache that.
NAT DUCA: He's now gone
off in outer space.
PAUL LEWIS: Sorry, man.
I'm just thinking out loud.
NAT DUCA: CSS is
computational evil.
And here's the problem.
Any time you do anything
at the sort of basic level,
any time you mutate a class,
any time you mutate a style,
technically we have to go
recompute the universe.
Like every single element.
So we learn Big O notation
in undergrad, right?
Or somebody beats us over the
head with it, and we're like,
screw this.
But the fundamental
thing here is
that CSS can get as bad as all
of the elements in your DOM
times all of your selectors.
And like this whole,
is descendant selector
fast, is the ancestor
selector fast,
that's actually just
a secondary sideshow.
The fundamental thing is that CSS is slow.
And then we have these magical,
horrible, really painful
to understand tricks
for certain selectors.
That instead of having to check
all of the universe every time
you do anything, we check
some of the universe.
And so direct
descendant selectors
are really the only ones that
we can do faster, or just class.
So the BEM style happens to make
the computational complexity
of this closer to the amount
of change you've done.
PAUL LEWIS: Now,
earlier you said--
NAT DUCA: Then I just
went off into outer space.
PAUL LEWIS: I know,
but I like it.
You can join me in the universe.
NAT DUCA: We'll try
to explain this over,
and over, and over, because
this is one of the true evils.
PAUL LEWIS: Earlier
you were saying
that you'd be ashamed
to throw out CSS.
But at the same time,
CSS is horrible and slow
and we want people to make
fast performing sites and apps.
NAT DUCA: Yeah.
We need to make an effigy of it.
PAUL IRISH: I think part of it
is just that, the issue here
is that the algorithm and the
cost for calculating styles
is it's specified.
It is a thing that
all browsers agree on.
And so it's not like something
that we can just make faster.
ALEX RUSSELL: CSS--
by the way, this
is one of the best
things about Shadow DOM--
is that it finally lets
you encapsulate right.
CSS works when it's small.
CSS is fine when
you've got like five
elements and a couple of rules.
CSS scales like a
Buick off a cliff.
It is terrible.
PAUL LEWIS: That's bad, right,
for anybody who's like me
and doesn't know what a Buick is.
NAT DUCA: Although if
you have one element,
it scales like a
Buick off the cliff.
But if you have two elements
in your DOM, just two,
and no selectors.
We have to check 82 different
permutations just for two.
So this is just to
give you an idea
of the scope of this thing.
It's pretty terrible.
And then you have three,
and it's 82 times 3.
It's bad.
PAUL IRISH: And just
to scope all this,
we're merely talking
about style calculation.
And so this is recalc style,
in purple, in the Timeline,
which is a fraction of
everything that's happening.
ALEX RUSSELL: Yeah.
So we've paid a lot of attention
to this particular problem.
And it's a small portion
of your frames today.
You're going to
see a lot more time
in well-architected
apps in Paint today.
And so that's why we're
focusing in the next year
on Slimming Paint.
PAUL LEWIS: OK, as you-- Oh,
Jan, did you have something?
OK.
I just wanted to move us on.
So "Chrome has landed a
lot of ES6 features lately.
But they're often so slow
that they're not usable.
Can you make them faster?"
Now I think yes, OK.
But I think the
question is really,
how do we prioritize our
performance workload?
And how are we deciding
what needs optimizing when,
and we make sure that
it fits developer needs.
I think that what that's
really driving at.
Are we all happy?
Yeah.
OK.
Who wants to go?
Who decides the workload?
ALEX RUSSELL: So I just want to
give a little insight into how
new language features get built.
So one of the things
that I have done
is to participate
in the TC39 process
for defining new
standards for JavaScript,
new features for JavaScript.
And so defining a new feature
actually for the web platform
requires iteration.
It requires us to turn the
crank a couple times to decide
that something that looks like
a real use case has a solution,
and that we like the
solution, and that users
like the solution, and
that we can make it fast.
That's an iterative process.
That's a process that
requires us to do experiments.
See how they go.
See how it feels.
Write transpilers
like Traceur
that will allow us to use
a feature early and then
figure out how it goes.
One of the things you
can always count on
is that if a feature
gets a lot of use,
we're going to make it fast,
as fast as we possibly can.
That's a thing we do.
That is one of the best
things that we could possibly
do for you.
And so we're going to
continue to focus on that.
If there's a new ES6
feature, the reason
that it might be slow
is that it's new.
Now would you rather have a new
feature that isn't completely
optimized all the way
through [INAUDIBLE] yet?
Or would you rather not
have the new feature?
And if your answer is I would
rather not have the new feature,
I think you're not
with it entirely.
Because let's talk
about slow and fast.
Slow and fast in
JavaScript today is,
slow is 1,000 times
faster than fast
was when I was writing
JavaScript and making
libraries.
Like we don't have
an interpreter in V8.
We only JIT.
OK, that's slow?
OK, let's talk about your
workload and what you're doing.
So these are relative questions.
And today we're going to
introduce new features.
We need to get new features
out faster, I agree with that.
And those features need to get
faster, I agree with that too.
Please form your question--
Linus Upson said this.
Please phrase your
question in the form
of a benchmark and we
will pay attention to it.
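A "question in the form of a benchmark" can be as small as timing a candidate feature against a baseline doing the same work. A rough sketch of such a harness (the names are made up, `Date.now` is coarse, and real harnesses like Benchmark.js do much more careful statistics):

```javascript
// A minimal micro-benchmark: warm the function up so the JIT gets a
// chance to optimize it, then time a fixed number of iterations.
function bench(label, fn, iterations = 1e5) {
  for (let i = 0; i < 1000; i++) fn(); // warm-up runs
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return { label, ms: Date.now() - start };
}

const data = Array.from({ length: 100 }, (_, i) => i);

// Baseline: a classic indexed for loop.
let a = 0;
const classic = bench('for loop', () => {
  for (let i = 0; i < data.length; i++) a += data[i];
});

// Candidate: the ES6 for-of loop over the same data.
let b = 0;
const es6 = bench('for-of', () => {
  for (const x of data) b += x;
});
// Compare classic.ms vs es6.ms, and file the result with a bug report.
```

Both loops must compute the same answer, otherwise the comparison is meaningless; that sameness is the part worth asserting before you trust the timings.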
PAUL LEWIS: We've
got a live question.
Go for it.
AUDIENCE: Sure.
You talked a lot
about recalc style.
And you mentioned virtual
lists and virtual DOMs.
There's a lot of
frameworks kicking around.
There's a lot of
performance optimizations
that we are trying
to do at the moment
around batching expensive layout
triggering DOM operations.
Most of that is a lot
of mental overhead
and quite often very hacky.
Are there ways that
the browser will
be able to learn to take
care of this stuff in a more
performant, more
intelligent way?
And is that something
you're going to work on?
NAT DUCA: One of
the things that I'm
really excited about for this
coming year is a measure API.
Finally, element.measure.
And the reason I'm
excited about this--
and you can measure an element
without even attaching it
to the DOM by giving it
some initial constraints.
It's going to be
frigging awesome.
The reason this
is really neat is
because right now,
all of this-- there
are a couple of great
libraries out there
that are sort of mutate
measure, measure mutate, right?
That are meant to
avoid style thrashing.
That's sort of the way to cope
with the web as it is today.
But it's all a Bandaid
around the fact
that any time you
attach to the DOM,
there are these
global operations
that happen that are
proportional to the size
of the universe.
And so when you want to measure
a little thing that's just
the size of a card, and you
want to know, how wide is this?
How wide does it want to be?
Currently, we
actually recalculate
the size of the
universe and recalculate
all of your elements.
And so this is why
it's actually slow.
Measure will not
have this property.
So it'll be considerably
less the suck.
So that's coming.
And I think we should
be super excited.
PAUL LEWIS: So what's the
difference between getting
a measurement-- like saying,
give me all the things
you know about this-- and
then setting width and height.
What are we anticipating
developers are really
going to use the measure
API for that they
couldn't do otherwise?
NAT DUCA: Well actually,
on the Dev Summit site,
we saw in Paul's
talk, you want
to move a card from
where it is in your UI.
You have a button
and you want it
to grow to take over the screen.
But you want it to
take it over the screen
and maybe you want to center
it or something, right?
So to do this, you have
to set up an animation
to scale from one to the other.
So that means you need
to know from and to.
And typically, the
way you do that is you
use getComputedSize.
Or you do any number
of these, right?
getClientBoundingRect, that's
the one that I'm trying to say.
PAUL LEWIS: I love that.
Oh, yeah.
It's my new best friend, BFFs.
NAT DUCA: I mean there is
getComputedStyle.width,
which is funky.
So there are all these
things that people do now.
But those trigger
world-sized calculations.
And so that's really the
thing we're trying to fix.
PAUL IRISH: And do they have to?
NAT DUCA: Yes.
I mean, we can fix it.
Like, we're continually
trying to make it better.
But this gets to, is the
platform predictable?
Like one of the nice things
about transform and opacity,
as a developer, I think, is
that when you do figure out
how to make your effect work
with the transform animation,
it's going to be pretty fast.
So there's this mental
model that you can form it
in your head about if I do
it this way, it will be fast.
When you do get computed client
bounding wrecked square thingy.
PAUL LEWIS: That's
its actual name.
It's only aliased to--
[INTERPOSING VOICES]
NAT DUCA: Thingy suffixes
to go with browser prefixes.
When you do that, it
sometimes is fast.
And then sometimes it's
catastrophically slow.
That's fine for like
getting by, but I
don't think that's what we want
to hang a web platform from.
PAUL IRISH: I will point out, as
part of the render performance
work that's happened recently,
a lot of these issues
around layout thrashing
have gotten better.
So if you go and look at like
those blog posts around layout
thrashing and see the demo
where it's like, try this,
it's terrible and slow.
Try this, it's fast.
Slow is actually not as
bad as it was before.
So there's been
platform improvements
to just make that faster.
And we're a lot more
knowledgeable now on the Chrome
side about when we
invalidate something,
when we say that the geometry
or the pixels on that screen
are old.
And we have a lot more insight
into how we do that, and make
sure that we don't
over invalidate.
So that means we can
reduce the amount of work
that we end up doing.
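The batching the questioner asked about is typically done fastdom-style: queue reads and writes separately and flush them in two phases, so measurements never interleave with mutations (the interleaving is what forces repeated synchronous layouts). A minimal sketch of the scheduler logic, with the flush driven manually instead of by `requestAnimationFrame`:

```javascript
// Fastdom-style read/write batching: all queued measurements run first
// against a consistent layout, then all mutations run, invalidating
// layout only once per flush instead of once per operation.
const reads = [];
const writes = [];

function measure(fn) { reads.push(fn); }
function mutate(fn) { writes.push(fn); }

function flush() {
  // Phase 1: all reads, no layout invalidation between them.
  while (reads.length) reads.shift()();
  // Phase 2: all writes, batched into a single invalidation.
  while (writes.length) writes.shift()();
}

// In a browser, flush() would be scheduled with requestAnimationFrame;
// here we flush manually just to show the reordering.
const order = [];
mutate(() => order.push('write A'));
measure(() => order.push('read A'));
mutate(() => order.push('write B'));
measure(() => order.push('read B'));
flush();
// order → ['read A', 'read B', 'write A', 'write B']
```

Libraries like fastdom wrap exactly this pattern; the panel's point is that the pattern is a bandaid the platform (via things like a measure API) should eventually make unnecessary.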
PAUL LEWIS: Question on tooling.
"PageSpeed Insights
and WebPagetest
are both really useful to
help improve loading times.
Are there any plans to
extend either or both
to measure runtime concerns,
such as idle FPS, scrolling
FPS, memory usage, long paint
times, and the prevalence
of layout thrashing,
just as examples?"
Pat?
PAT MEENAN: Oh, is that all?
PAUL LEWIS: Yeah, you know.
PAT MEENAN: Yeah.
I mean, so
traditionally it's been
about figuring out where
people's pain points are,
and trying to expose that
as easily as possible.
Sort of the raw,
easy version of that
was just to implement
timeline and tracing.
So you can get that
from both of them.
One of the things
that's been on my list
for probably the last year and
I just haven't had time to do
yet is, to do a scroll test.
After your page loads, go
ahead and fling the page,
and do the same jank kind of
stuff that Telemetry does.
It'd be interesting to get some
feedback from people on sort
of what their pain points are,
and what they want measured.
I kind of like being
able to provide
sort of low-level information
and not try and round
everything up to a,
yeah, you're doing great
or you're not doing so great.
It's like, OK, when you get
halfway through your page,
you start to jank or
something like that.
So that's why I've
sort of leaned
on making timeline
and tracing available,
because you have like so
much data in there already
and great presentation
in those tools.
It's kind of one of
those fine lines to walk.
JAN MAESSEN: And I know
the page speed insights
team has been looking a
lot at mobile usability.
And one of the aspects of
that is UI responsiveness.
So far they've been
looking mostly at,
can you use the UI at
all on a mobile device?
Because that's a really
important question.
That's sort of the
first order question
when you're visiting on mobile.
But I think sort of
the next big question
is, now that you know
the UI is usable,
you can touch the buttons.
Are things actually
responding well?
PAUL IRISH: I'm just going
to add a quick thing.
The Chrome developer tools, you
can go in the Network panel,
see the network activity.
There is a waterfall.
But for professionals that are
trying to understand the user
experience of page loading,
pretty much everyone
heads over to WebPagetest, where
they get a lot of information.
On DevTools, we're
really inspired by this.
The sorts of things that you see
in the filmstrip view, and
understanding Speed Index,
really give a whole insight
into what the critical path is.
And so we are looking to
do more work around this.
So that you get insight into
what is between the network
request of the HTML and
users seeing the page
loaded on the screen,
and have that information
available to you right
inside DevTools.
So a lot coming there.
JAN MAESSEN: Yeah.
I think it's really
important to look
at stuff like the visual
completeness metrics.
Because it's really easy to
get caught up in the numbers
that the browser can
send back to you,
and forget what this
really means for your users
in terms of what
they're actually
experiencing on the screen.
And getting a feel for
tying those two together
is really important.
PAUL IRISH: Can
I ask a question?
PAUL LEWIS: No.
I ask the questions around here.
When you're the
moderator, you get to.
Yeah, that's fine, go.
PAUL IRISH: OK.
Speed Index is probably one
of the best metrics-- best
generalized metrics--
for evaluating
the user-perceived
load of the page.
And I guess you invented it?
Yeah, OK, cool.
Awesome.
So you can get it, it's great.
You get it through
WebPagetest, and you can get it
through Chromium's Telemetry.
There are a lot of things
kind of adopting Speed
Index as something
that's visible.
I was just asking about the
work to have it available
through the browser.
And I know that's something
you were involved in.
What's the story?
PAT MEENAN: So the
current story is
I tried to implement
it once in Chrome.
And then that changed the
entire rendering architecture
and broke it.
PAUL LEWIS: As you do.
It was a morning's
work, you know.
PAT MEENAN: But no.
More recently, we've
come up with sort
of a RUM version of it that's
almost entirely JavaScript
based.
It's based on resource timing
that works surprisingly well.
It correlates 90 plus percent
with the video version of it.
So I want to try and
get a little feedback
on using that in the
field, see how it goes.
There's some privacy concerns
and some performance concerns
with exposing too much low-level
stuff from the renderer
out to the DOM for
the pages to consume.
So we have to kind of figure out
where that line is going to be.
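The idea behind a RUM-flavored Speed Index: build a visual-completeness curve over time (in the Resource Timing approach, roughly by weighting image resources by area as they finish loading) and integrate the area above that curve. Deriving the curve is the approximate part; given the curve, the integral itself is simple, and can be sketched on plain sample data:

```javascript
// Speed Index is the area above the visual-completeness curve:
// sum of (1 - completeness) over time, treating the samples as a
// step function that ends at full completeness.
function speedIndex(samples) {
  // samples: [{ time, complete }] sorted by time, complete in [0, 1].
  let si = 0;
  for (let i = 1; i < samples.length; i++) {
    si += (samples[i].time - samples[i - 1].time) * (1 - samples[i - 1].complete);
  }
  return si;
}

// A page 50% visually complete at 500 ms and fully complete at 1,000 ms:
const si = speedIndex([
  { time: 0, complete: 0 },
  { time: 500, complete: 0.5 },
  { time: 1000, complete: 1 },
]);
// si → 500 * 1 + 500 * 0.5 = 750
```

The "correlates 90-plus percent" claim above is about how well a completeness curve estimated from Resource Timing tracks the one measured from video frames; the integral is the same in both cases.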
NAT DUCA: One thing
that I'd just add.
So sorry.
This battle for FPS is hard
to balance against monitoring.
One thing just
keep in mind, when
you think about all
your page load times,
also think about whether you're
leaving the thread responsive
after or during the page load.
Because the first
thing people do
when they see your page
get visually complete
is they're going to scroll
or they're going to tap.
And if you're offloading
2 and 1/2 megabytes more
of JavaScript and
parsing and executing it,
they're not going
to have a good time.
So you need to start
thinking about that too.
This is a lot of
constraints, but this
is sort of the emerging
side of the things.
People load, and then they tap.
And we don't have a great
experience there on the web.
PAT MEENAN: And test that
on actual mobile devices.
Because the CPUs
and memory on mobile
is like orders of magnitude
slower than your desktop.
In WebPagetest, for example, we
have the timeline little flame
chart below the waterfall.
And on desktop, it
was almost never
worth even paying attention to,
because stuff happened so fast.
But on mobile sites,
we actually end up
seeing a lot of cases
where you're actually
CPU constrained, even with
slow network conditions.
And where the main
thread is like
locked during the entire load
of your page and for some period
after.
PAUL LEWIS: I'm totally going
to promote the frame timing
API at this point.
Because even for devices
that you don't have,
we're able to ship
the frame timing API,
and that is open for your
feedback on the GitHub repo
/w3c/frame-timing.
Wow, it's like I memorized it.
That would allow you
to start figuring out
this kind of stuff.
You'll see the CPU
time in your frames
just after your page load.
And go, you know what, I
have a real problem here.
This is costing me users.
They're leaving straight after
I do this horrible thing.
So I think that's a good idea
that we should have that.
But maybe that's just me.
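Once per-frame timings are observable in the field (which is what the proposed Frame Timing API would enable), summarizing them is straightforward: count the frames that blow the 60 FPS budget, since those are the ones users perceive as jank. A sketch of that summary step on plain duration data (the function name is made up):

```javascript
// Count "long" frames: anything over the 60 FPS budget of ~16.7 ms.
// Frame durations would come from the (proposed) Frame Timing API;
// here they are just numbers in milliseconds.
function countLongFrames(frameDurationsMs, budgetMs = 1000 / 60) {
  return frameDurationsMs.filter(d => d > budgetMs).length;
}

// Five frames just after load, two of them blown way past budget:
countLongFrames([16, 15, 120, 16, 300]); // → 2
```

That single number is enough to flag the "main thread locked after load" problem Pat described, and to correlate it with users leaving.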
OK we, I'm afraid,
are out of time.
So, I know.
This was fun, wasn't it?
[APPLAUSE]
