RYAN SCHOEN: Good
morning, everyone.
My name is Ryan Schoen,
and I'm the product manager
working on Chrome performance.
As Darren told you,
the Chrome team
is dedicated to making a
first class user and developer
experience on the web.
And we think that a
very large part of that
is performance,
enabling you to make
those really efficient,
really performant, really
smooth mobile web applications,
so-- I should probably put that
up.
What I'm going to do
in the next 20 minutes
is tell you a
little bit about how
we're so committed to this goal.
We have teams
dedicated to making
that goal a reality for
all developers in a very
natural way.
And I want to tell
you a little bit
about what they've
been working on.
So I'm going to
start out and talk
about traditional performance,
in the sense of JavaScript
operations, and
DOM manipulations.
And then I want to
spend a lot of time
talking about rendering
performance, and that 60 frames
per second goal
that you're going
to hear a lot about today.
I want to touch on
energy efficiency,
and what we're doing to
improve your battery life.
And then, finally, talk a little
bit about perceived speed,
because that's
really the experience
that your users are
going to end up getting.
So I said I wanted to talk
about traditional performance.
And usually, when you have
somebody from the Chrome team
on a stage talking
about performance,
we talk about JavaScript
operations, DOM manipulation,
and synthetic benchmarks--
things like Octane, and Kraken.
And we do want to be
really, really good at that.
We want to do it really fast.
But any synthetic
benchmark always
has pieces of it that
aren't really realistic.
It's not the experience
that the user gets.
And earlier this
year, you may have
heard about a new
benchmark that was
released called Speedometer.
And what Speedometer does is
it takes a real world app,
like a to-do list, writes it
up in a bunch of frameworks--
so Angular, and jQuery, and
React, and a couple others.
And then runs through it--
adds items to the to-do list,
checks them off,
removes them-- and tries
to do that as fast as possible.
Like I said, all
benchmarks are synthetic,
but there are parts of this
benchmark that we really liked.
It was a real app that was
going to be in front of users,
and so we wanted it to be fast.
And when we took a look at it,
it was a little embarrassing,
because Chrome
really wasn't good.
And you may have
seen some numbers
in the press that hinted at that.
And so we put a team on
it, and we dedicated work
to that over the
past six months.
And obviously, they're allowing
me to be up here talking
about it, so there's
some good news.
I'm going to put up a
graph of Speedometer.
And on the y-axis,
here, you have the time
that it takes to
complete the benchmark.
And then on the x-axis is
Chromium revision, essentially,
as it progresses through time.
You can see that about
six months ago, we
were up at-- what is that--
14,000 milliseconds, 14 seconds
to complete the benchmark.
And through a couple
of pretty large drops,
we're now at around
6,000 milliseconds,
six seconds to complete
that benchmark--
an improvement of over 2x.
I think this is on a Mac, but
you have the same performance
improvement across
every platform.
And so we're really
excited about this,
because if your
JavaScript is anything
like the type of
JavaScript you see
in those frameworks--
or better yet,
you take advantage
of one of those
frameworks-- you're going
to get that performance
boost for free.
Your users are going to get
that performance boost for free.
So we think that's really great.
But as I hinted at earlier,
this is only really
one part of the
performance story.
And from talking
to a lot of you,
we know that what
you really care about
is that 60 frames per second.
I think Darren mentioned it,
I'm going to mention it a bunch.
You're going to keep hearing it
throughout today and tomorrow.
And I want to be really
precise about that.
Because while 60 frames
per second is a number,
it's not always clear
what web content we're
talking about when we say 60
frames per second, and doing
what operations, and
on what mobile devices,
or what devices at all.
Reading Wikipedia
on a Nexus 6 is
going to be a very
different experience
from playing a 3D
game on something
like a Galaxy Nexus
or a lower-end device.
And so to be
explicit, our goal is
to make 60 frames per second
a reality on all content
and on all devices.
And that's a pretty
large goal for Chrome.
We're on a lot of devices,
with a lot of different GPUs,
and a lot of
different processors.
And so we know that
that's an ambitious goal,
and I hope that
you'll excuse me when
I say that we're not there yet.
But we are making pretty
incredible progress.
So to make this problem a little
easier to frame for Chrome
developers-- the
engineers who work
on Chrome-- we flip it around.
And we don't say 60
frames per second,
but we talk about frame
time-- how much time
does it take the CPU
to generate everything
that it has to do for a frame
and get it out onto the screen?
And so 60 frames
per second-- you
do the math-- that works
out to 16.6 milliseconds.
That's the budget that
we have to do whatever
we have to do to get that
next frame on screen.
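That back-of-the-envelope math is simple enough to write down (a quick sketch; the function name is just illustrative):

```javascript
// At 60 frames per second, each frame's share of the second
// is 1000 ms / 60, about 16.67 ms -- the budget for script,
// layout, paint, and everything else before the frame is due.
function frameBudgetMs(fps) {
  return 1000 / fps;
}

const budget = frameBudgetMs(60); // roughly 16.67 ms
```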
So using this way of
thinking about it,
we collected 27
different websites,
different effects that
we wanted to make sure
were really good
on the mobile web,
but that traditionally
had been very, very
hard to do in a performant way.
These are things
like a slide drawer
that sticks to your
thumb, or pull to refresh.
These are effects that we know
that you, as developers, like
to do, and want to
put in your apps,
and we want to see that happen.
And so with the
goal that in order
to be at 60 frames
per second, we
have to be under
16.6 milliseconds,
we ran through all
27 of these effects
on a Nexus 5, a relatively
powerful device,
and came to 129 milliseconds
on average to generate a frame.
Luckily, I'm not fired yet.
This is January.
So we ran this again,
after all the improvements
that we've been making,
and I'm happy to report
that as of this week, we are
finally at 16 milliseconds.
[APPLAUSE]
16 milliseconds for these
really, really tough effects
that we wanted to make
sure were enabled.
And so this is a
pretty big jump.
This is 10X.
And so you're
probably wondering,
how is that even possible?
So I told you at the
beginning of the talk
that we had several
teams that were dedicated
to making this a reasonable
thing that could be done.
And one of those teams was a
task force called Project Silk.
It was a cross-cutting team
between Chrome and Blink,
and was really enabled
by the tight integration
between those two.
They worked on an
immense amount of things.
I'm going to throw a bunch of
words up here on the slide.
I think Darren had a
slide like this too.
And I don't expect you to
understand all these things--
I'm certainly not going
to talk about them all.
But I just want to
demonstrate the real breadth
of what they were working on.
To highlight a few,
Darren this morning
mentioned the
will-change attribute.
And we think of this as
really, really powerful.
It enables you, as a developer,
who probably knows more
about your app than Chrome
does, to label certain content,
and say, this will change.
This will animate in some way.
That lets us do a lot of
the prep work ahead of time,
so that as soon as
you start to animate,
we don't have to do anything.
It's going to be
responsive, and it's
going to be smooth right away.
All the prep work has been done.
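A rough sketch of how a page might use this hint (the class name and the animated property are illustrative, not from the talk):

```css
/* Tell the browser ahead of time that this element is
   going to animate its transform, so layer promotion and
   other prep work can happen before the animation starts. */
.slide-drawer {
  will-change: transform;
}

/* Drop the hint once the element is at rest, so the
   browser can reclaim whatever it set aside. */
.slide-drawer.at-rest {
  will-change: auto;
}
```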
Another project is garbage
collection during idle time.
In JavaScript, or in several
programming languages,
garbage collection
is the process
of reclaiming memory
that's not used anymore,
so that it can be useful again.
And historically, this
has been totally unlinked
from the Chrome
rendering engine.
So if you're trying to get
that 16 milliseconds, 60
frames per second, and then
Chrome was going to say, nah,
it's time to do a
garbage collection,
you're kind of screwed.
And your animation will
just screech to a halt.
So we've hooked it up
with the rendering system.
So now, if you take a look
at that frame budget, that 16
milliseconds, maybe there's
six milliseconds left at the end
that you have time
to do things with.
And so we can fit in
garbage collection there,
and so you can keep running
at that 60 frames per second,
but still get the
useful memory reclaimed.
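The scheduling idea can be sketched in plain JavaScript (the function and its shape are illustrative; Chrome's actual idle-time garbage collector lives inside V8 and the scheduler, not in any web-facing API):

```javascript
// Sketch: spend only the unused part of the frame budget on
// deferrable work, like incremental garbage-collection steps.
const FRAME_BUDGET_MS = 1000 / 60; // about 16.67 ms per frame

function runIdleTasks(tasks, msUsedByFrame, now = Date.now) {
  // Whatever the frame didn't use is the idle budget.
  const deadline = now() + Math.max(0, FRAME_BUDGET_MS - msUsedByFrame);
  const deferred = [];
  for (const task of tasks) {
    if (now() < deadline) {
      task(); // budget remains: do the work now
    } else {
      deferred.push(task); // budget gone: retry next idle period
    }
  }
  return deferred; // tasks that must wait for the next frame
}
```

A frame that already spent its whole budget defers everything; a mostly idle frame runs the queued work without pushing past 60 frames per second.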
And then another
project that I want
to talk about, that Darren
also touched on this morning,
is a project called hardware
accelerated rasterization.
And that's a lot of
words that I don't
want to have to say
over and over again,
so I'm going to call it
by our code word, Ganesh.
And before I go
into what Ganesh is,
I want to explain
a little bit more
about the Chrome
rendering pipeline.
If you've ever profiled your
website in Chrome dev tools,
you've probably seen little
chunks of time called painting.
And part of painting is a
process called rasterization.
And all that means is that
we're taking the draw commands--
like put 12-point text here,
draw a rectangle that's
200 pixels by 300 pixels and
put it here-- and turn that
into actual pixels that are
going to appear on your screen.
So historically, that process
has looked something like this.
This is a vast simplification,
but the draw commands
go into the CPU, where that
rasterization takes place,
and actual pixels-- the pixels
in the form of a texture--
are uploaded to
the GPU, which then
puts those pixels
onto the screen.
It's pretty simple, but there
are a couple issues with it,
as well.
As Darren said this
morning, the CPU
is not really optimized
for that type of work.
The GPU, after all,
is for graphics.
And so it would be much
more efficient to do it
on the GPU, which is tuned
for those sort of operations.
There's also that
really expensive upload,
of the texture from
the CPU to the GPU.
And if you're doing
that every single frame,
that's really, really
going to slow things down.
In addition, we're not just
doing this process for what's
on screen on your
mobile device, we're
doing it for everything
that's around.
We're rasterizing
everything around there,
so that when the
user scrolls, they're
going to get those
pixels right away.
We're not confident in our own
rasterization ability to say,
yeah, we can get it
up there on time.
So we do it ahead of time.
So that's really an
extra burden on the CPU,
and it costs a lot of memory.
Even if you didn't
know how this works,
you've probably
felt the effects,
if you've ever tried to do
a JavaScript-based animation
with something like
requestAnimationFrame.
There are these
weird rules like, oh,
yeah, opacity,
you can change that.
But left and top-- like,
no, you can't touch those.
That'll just ruin
your animation.
And that's ridiculous, right?
We don't believe that
you as developers should
have to have in your head these
rules that don't make sense,
but it's just the way
it is, because that's
how the rendering engine works.
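Those rules come down to which properties the compositor can animate without re-rasterizing. A sketch of the two cases (the selectors and values are illustrative):

```css
/* Animating left/top forces re-layout and re-raster on the
   CPU every frame -- the case that historically janked. */
.panel-slow {
  animation: slide-slow 300ms ease-out;
}
@keyframes slide-slow {
  from { left: -200px; }
  to   { left: 0; }
}

/* Animating transform and opacity can run on the GPU
   compositor without re-rasterizing each frame. */
.panel-fast {
  animation: slide-fast 300ms ease-out;
}
@keyframes slide-fast {
  from { transform: translateX(-200px); opacity: 0; }
  to   { transform: translateX(0);      opacity: 1; }
}
```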
And so that's where
Ganesh comes in.
We can largely throw
away this pipeline
that I was talking about-- and
again, vast simplification.
But the draw commands can
come in, for the most part,
skip the CPU, and
all the work
can be done in hardware on the GPU.
We're working hard to
enable this kind of pipeline
for all content and
for all devices.
Right now, it's just a
handful of modern devices,
and about 15% of web content.
Because it's actually really
hard, with the many GPUs
that we have, and the
many platforms that
we're working on,
to get this going.
But it's really, really
a huge improvement.
And to motivate that, I
want to show a quick video.
So this is The Verge's website.
On the left, you have Chrome 32.
And on the right, you have
the latest Canary, Chrome 41,
and that has Ganesh enabled.
And in a minute, I'm
going to start the video.
And simultaneously,
I'll fling the two sites
to scroll through
The Verge's website.
A very, very fast scroll.
So fast that I slow down
the video by four times,
so you can see what's going on.
And you'll notice two things.
Even though I scroll both
of them at the same time,
Chrome 32 takes a very
long time to start,
because it's thinking
really hard at the beginning
of that animation.
And the second thing you'll
see is that without Ganesh,
the rasterization
just cannot keep up.
And it'll be very apparent
once I start the video.
So let's take a look.
There's the fling.
41 is off and running.
And there's the scroll on
32, can't even keep up.
But 41 rasters all the
content, and gets it
on screen in that super
fast scroll, which
on here is four times slower.
So we think this is
incredible progress.
[APPLAUSE] Thank you.
And just because I want to end
this section on a low note,
we do recognize that even
16 milliseconds is not
the gold standard.
We still have a
lot of work to do,
because if there's anything
else happening on your phone--
or heaven forbid,
you as a developer
want to do something besides
just rendering frames-- that's
going to blow that
frame budget right away.
And so we're going to continue
working on Project Silk,
and I expect we'll have some
more good results for you soon.
OK.
So far I've talked about
speed, and that's great.
But there's another
part of performance
that we're starting to
realize is equally important,
and that's efficiency.
Specifically, energy
efficiency and battery life.
When we started
thinking about-- when
I started thinking
about energy efficiency,
I had sort of this very
naive approach to it.
It was like, well, we're
already optimizing CPU and GPU,
so let's just do
less work on those,
and then we'll use less
energy, and we'll just
get the energy
efficiency for free.
Why do we even need to think
about energy efficiency?
And that's partially true.
That does improve
energy efficiency.
But there's a lot more
than we can be doing there,
and it's not always intuitive.
And to motivate this
with a concrete example,
this was what I thought
energy usage looked like.
When you're doing work on
the CPU, you're using energy.
When you're not doing work,
you're not using energy.
So again, just do what
you have to do, and do it
as fast as possible,
and get back to idle.
This is not quite how it is.
And this is still an
oversimplification,
but you have a little bit
of a warm up and a cool down
period any time that you use the
CPU, where you're using energy,
but you're not getting
useful work done,
you're just burning energy.
And so in this
contrived example,
you're going to use about
two times as much energy
as you would in
my naive approach.
And so what you want to do
instead is take this work
and nudge it by
just microseconds,
and coalesce it all
together, so that you
have a single warm up and
a single cool down period.
And you get an energy
savings of about 2x
here, again, in this example.
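The contrived example can be put in numbers (a sketch; the warm-up and cool-down costs are made up purely to illustrate the roughly 2x figure):

```javascript
// Energy model: every burst of CPU activity pays a fixed
// warm-up + cool-down overhead on top of the useful work.
function energyUsed(bursts, { warmup, cooldown }) {
  return bursts.reduce(
    (total, work) => total + warmup + work + cooldown, 0);
}

const overhead = { warmup: 2, cooldown: 2 }; // arbitrary units

// Four separate 1-unit tasks: 4 * (2 + 1 + 2) = 20 units.
const scattered = energyUsed([1, 1, 1, 1], overhead);

// The same work nudged together into one burst:
// 2 + 4 + 2 = 8 units -- better than a 2x saving here.
const coalesced = energyUsed([4], overhead);
```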
And so I don't want to
imply that we're always
going to wait as long as
we can to do this work,
but it becomes an engineering
trade-off of how much energy
savings you can squeeze
out of your work
just by those little nudges
that aren't going to,
in reality, affect the
user-perceived performance
of your app.
So we're taking
learnings like this,
and going back, and taking a
look at Chrome's energy usage.
And we started with
the worst offender
of this, which was
Chrome for Windows.
And I'm happy to report
that at idle-- so
when you're reading
a Wikipedia page,
or when you're just
reading a news article,
or something--
Chrome for Windows
is now using only a
quarter of the energy
that it was six months ago.
So we think that's a
fantastic improvement,
and we're really looking forward
to taking those same learnings
and applying them to
other platforms.
OK.
The last thing I want
to talk about today
is perception of speed.
Because at the end
of the day, that's
really what you guys
probably care about:
the experience that you
can deliver to your users.
And so I'm not going to talk
about graphs, and tech specs,
and percentage
improvements anymore.
If you've been following the
release of Android Lollipop,
you may have heard about
an API that they're
working on called
activity transitions.
And it enables what you're
going to see behind me here.
When you transition between two
activities in Android, which
is essentially a page
navigation,
it lets you transition
smoothly, and keep some elements
on the screen.
So you can see in this example,
you click on the album art,
smoothly transitions over
to the next activity.
And there was no break
in the user experience.
Not only is this a smooth and
beautiful user experience,
but it also hides
any latency that's
involved in creating
that new activity.
Imagine an A/B test
with users: on one hand,
you throw the activity up on
screen as soon as it's ready,
with no transition, and
on the other, you do this
smooth animation to
the new activity.
Even if the clock time
is exactly the same
between the two,
users will consistently
say that the one with
the smooth transition
felt faster.
So this is the kind of thing
that we want to do on the web.
But today, if you do
absolutely nothing,
your navigation is going to have
that horrid white flash that
is sort of like the signature
of the web today, that reminds
the user that
they're on the web.
If you have something like
a single page web app,
you can make this work, right?
You can do Ajax calls, you
can manage your URL state,
so that this is at least
possible-- though certainly
not easy.
And if you want to
transition between origins,
then we need to start
talking about iframes,
and things just
get really complex.
And that's not how
it should be, right?
You're just
transitioning an element.
You shouldn't have to jump
through all these hoops
to avoid this legacy
piece of the browser.
And so we've been working
on this experiment.
It's a new prototype API
that we've been calling
Navigation Transitions, that
enables this sort
of thing on the web.
And so I want to show
you how that would work.
If we can switch to
the podium camera.
Please?
All right.
Cool.
So I have here a prototype
that is Google web
search-- obviously, not live.
And what you're going
to see in a second
is I'm going to click
on the Search button.
And we're going to navigate
to the Search Results page,
but we're going
to keep the Google
logo, we're going to
keep the search bar,
because those elements are also
present on the Search Results
page.
So am I here?
Yep.
You can click,
smooth navigation,
and the results just fly in.
There was no break in the user
experience. [APPLAUSE] Thanks.
So if we can switch back
to the slides for a second.
Like I said, this is
still an experimental API,
so everything is
likely to change,
and I don't want to go into
the specifics of how it works.
But I do want to call your
attention to two things.
First one is that
this is pretty simple.
We have a meta tag
that identifies
the elements you want to
transition, a link header that
tells the browser where the
CSS animations are, and then
whatever CSS animations you
want for those elements.
Really dead simple to implement.
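The talk stops short of showing the real syntax, but based purely on the description above, a page might sketch out something like this. Every tag name and attribute here is hypothetical; the experimental API was expected to change:

```html
<!-- Hypothetical sketch only -- not the real experimental
     syntax, which the talk deliberately doesn't show. -->
<meta name="transition-elements" content="#logo, #search-bar">

<!-- In the talk's description this arrives as a Link
     response header pointing at the transition CSS;
     shown inline here just for readability. -->
<link rel="transition-stylesheet" href="transitions.css">
```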
The second thing I want
to call your attention to
is that in a browser that
chooses to implement this,
in whatever final
form it ends up
looking like, you'll get
those great transitions.
And if you have a
legacy browser that
doesn't understand
this stuff, you're
going to have the same
transitions that you would have
anyway.
So it's fully
backward compatible,
and navigations will continue
to work everywhere else.
So I started out motivating
this with the Android activity
transitions API,
which I said lets
you do these smooth transitions
between native Android apps.
Chrome on Android is just
a native app on Android.
So what happens if we hook
up activity transitions API
with the navigation
transitions API?
So let me show you
another demo, if we
can go back to
the podium camera.
So I have here-- it's a little
washed out, but it's all right,
you'll still get it--
this is just a web page.
And it has some thumbnails on it.
And I also have
this toy native app
that we just have
for an example.
And it has some
pictures, and you'll
see that some of the
pictures are the same,
and then there's
some text there.
If I open up my web app, and I
want to now smoothly transition
into that native app, all I
do is click on one of the images,
and you're inside the
native app. [APPLAUSE]
So as you can see,
this really starts
to blur the line-- if we
can go back to the slides--
really starts to blur the line
between native apps and web
apps, and we think
that that's great.
We want you as developers to use
whatever development experience
you think is right for you
and your users and your app,
and really deliver that
great experience to the user.
The user doesn't
need to be concerned
with what tool chain you used
to get that experience in front
of them.
They can just enjoy and be
immersed in your experience.
So we think that
that's really great.
So that's all I have
prepared to talk about today.
Just to quickly recap, I touched
on traditional performance,
and the work that
we've been doing there.
I spent a lot more time going
into rendering performance,
and all the projects that we've
been working on there to get
a great, nearly 10x improvement
on some of those tough cases.
Talked a bit about battery
life and energy efficiency,
and how we're thinking
about optimization there.
And then finally, talked
about perceived speed,
and this new prototype API
that we've been working on,
that we think can really
make a large difference
for navigations on the web.
But if nothing else, what I want
you to take away from this talk
is what I said
at the beginning,
which is: we care.
We get it.
We know that performance
on the web is painful,
and we're doing
everything that we
can to make that a
better situation.
We've come a long way with
some of the improvements
that I talked about today, but
there's still a lot more to do.
And so if I have my way, I'll be
back up here on stage one year
from today with as much
good news, if not more.
Thank you very much.
[APPLAUSE]
