[MUSIC PLAYING]
ELIZABETH SWEENY: Well, we are
excited because nobody likes
to wait, and we want to
talk to you about all
of the optimizations
and measurement tools
that we have been working
on and can provide to you.
And we're going to
start with going over
an overview of user metrics,
kind of what we care about
and why, then looking at
the latest developments
in Lighthouse, the Chrome
UX Report, or CrUX,
and talk about how
we're unifying tooling
across the board.
PAUL IRISH: All right, so when
it comes to web performance,
one thing you could
say is you can't
improve what you don't measure.
ELIZABETH SWEENY: This
was Peter Drucker, right?
He was a management guy?
PAUL IRISH: This
is actually true.
Peter Drucker-- really,
really well known kind
of business management guru.
To be honest, I
think he might have
been a front-end
developer, as well,
but this is just
absolutely true.
If you want to make something
better, step one is measure it.
And let's get into that
with web performance.
To measure, we need to
look at some metrics.
And it's really
important to make sure
that your metrics are user
centric and really focused
on the user.
We heard a little bit ago about
pending wait time and some
of those custom
metrics, but we'd
like to have metrics
that really focus
on how the user experience is.
So let's take a page load,
break it down, and look
at a few key metrics in here.
So a little film strip here.
We're loading a
search result page.
And so we just have a little
progression of a few images,
getting to our final result or
where the page is done loading.
Now, underneath the things
that we see visually,
a few things are happening.
There's the main thread
and network requests,
and these are really
important, too, because they
weigh heavily on the actual
user experience.
So I want to point a few things
out here, the first of which
is this point right here.
This is the first time that
text shows up on the page.
It's the first time the
content is there.
So we call this, the
duration from the navigation
to the point at which text shows
up, the first contentful paint.
Easy enough-- I think
we know this one.
A little bit later
though, you can
see right after the main thread
kind of quiets down a bit,
this right here is
an important moment.
All the long tasks in the
main thread are done,
so the main thread has kind
of quieted down and allowed
the page to now be
responsive to users
once they choose to
interact with it.
Also, the network
is quiet, so we
know there's no one big
massive script hanging out
ready to run and take up
time in the main thread.
So the duration from
navigation to this point,
we call this time
to interactive.
All right, and there's
one more key metric
I want to cover real quick.
Now, in this page load,
a user could really
touch the screen at any point.
They could interact with
it here once there's
a paint on screen or
a little bit later,
but let's just say for
the purposes of this,
they tap the screen
at this point.
Now, the thing is, if they
tap the screen at this point,
take a look at what's
happening on the main thread--
a lot of things.
We're in the middle
of a big long task.
And that means the
page is not going to be
able to respond to the user.
So we have to wait a little
bit until the page can actually
respond.
So this duration, the
time from the input
until the end of the long
task that we're dealing with,
we call this the
first input delay.
This is an important
metric, and I
want to spend a little
bit more time on it.
If this is a main
thread, well, it's
a very open and
available main thread.
Really, nothing happening.
So let's say if the user
had some input, well, piece
of cake, we can just
reply to it immediately.
Your event handler is going to
run, touch, start, or click,
or you're going to
do style, and layout,
and paint, and ship a frame.
They're going to see something.
So if, let's say, they're
tapping on the Menu icon
and then the menu slides
out, so we're good.
But if there is one task
sitting on the main thread,
well, we're just
going to have to wait.
So we always, yes, we'll be
doing the event handling,
but this time between when
the input's first received
and when the events
will be dispatched,
that is the input delay.
First input delay is just
the first input of the page.
It's just the first time
that a user touches the page.
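To make that concrete, here is a minimal sketch of how first input delay falls out of event timing numbers. The entry shape here mirrors what a first-input performance entry looks like (`startTime` is when the input arrived, `processingStart` is when handlers actually began to run); the sample values are made up for illustration.

```javascript
// First input delay is the gap between when the browser received the
// input and when the main thread was free to start running handlers.
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Hypothetical entry: the input arrived 1200ms into the load, but a
// long task held the main thread until 1450ms, so the user waited.
const entry = { startTime: 1200, processingStart: 1450 };
console.log(firstInputDelay(entry)); // 250
```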
And the important thing here
is that first input delay
is a field metric.
It really only makes sense
to gather in the field.
Time to interactive is a great
and really powerful metric,
but it makes sense mostly in
the lab, in a lab scenario.
And we've recognized this, that
basically time to interactive
out in the field where real
users are tapping on the screen
as the page is loading,
it really kind of messes
with this metric.
So there's a few
metrics on the screen
here basically just
outlining which
make sense to
gather in the lab,
and then some that are
exclusive to the field.
So I want to point out TTI and
First Input Delay or FID are
our interactivity metrics--
really key for understanding
how available the main thread is
to the user.
ELIZABETH SWEENY: OK,
so all of these metrics
are awesome, obviously,
but where can
we actually find them?
So all three of
these metrics are
readily available in their
respective lab and field
environments.
So as Paul was
saying, because FCP
can be measured in both
the lab and in the field
with real users, it's
available across the board.
So that's in Lighthouse, in
the Chrome User Experience
Report or CrUX, and
there's a web perf API.
TTI is only
available in the lab,
and so can only be accessed
via Lighthouse and Page Speed
Insights.
Now, FID requires real
user input to measure,
so it's available in CrUX.
And FID is exciting
because it's actually
going to be coming to
Chrome in Q4 or early Q1
as a web perf API.
So it should be
able to give you a--
you can view it in a
Performance Observer,
just as you get FCP today,
which is kind of cool.
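As a sketch of what that looks like for FCP today with the standard PerformanceObserver API (the planned first-input entries should follow the same observer pattern once they ship), here is one way to pick the metric out of the paint entries; the helper name is our own.

```javascript
// Pure helper: find the first-contentful-paint entry in a list of
// paint entries, so the lookup can be reasoned about on its own.
function findFCP(entries) {
  return entries.find((e) => e.name === 'first-contentful-paint');
}

// In the browser, observe paint entries as they arrive.
if (typeof PerformanceObserver !== 'undefined') {
  const observer = new PerformanceObserver((list) => {
    const fcp = findFCP(list.getEntries());
    if (fcp) console.log('FCP at', fcp.startTime, 'ms');
  });
  observer.observe({ entryTypes: ['paint'] });
}
```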
PAUL IRISH: Super cool.
ELIZABETH SWEENY: Yeah.
PAUL IRISH: Yeah, excited about
standardization of this stuff.
It's really exciting.
ELIZABETH SWEENY: So
for those of you who
aren't familiar with Lighthouse,
and we know that it's awesome
and a lot of people
know, but Lighthouse
is an open source automated
tool for improving
the quality of web pages.
So you can run it
against any web page,
and that's either public or
requiring authentication,
and it has audits for
performance, accessibility,
PWA, and more.
I'm excited to tell
you about some things
that we've been doing
with Lighthouse.
So one of those things
is a PWA refactor.
So currently, there
is a broad spectrum
of PWA definitions
in the wild that
can make it difficult to
identify whether or not,
definitively, you are a PWA.
And while our PWA checklist
is absolutely wonderful,
and it gives you helpful
guidance towards what a PWA is,
we want a machine verifiable
way to say yes or no.
So today, we're launching
the new Lighthouse UI
with a more binary badging
system for the PWA category.
And the badge groupings
reflect that we want everybody
to be able to achieve the
fast and reliable badge.
All experiences should
be fast and reliable,
whether or not
you're installable.
In order to actually become a
full PWA and get that badge,
you have to successfully fulfill
all audits in the categories.
PAUL IRISH: Yeah.
There's a few more things
that we've been doing.
And in the new 4.0
alpha that's
coming out in Lighthouse,
there's a few nice changes
that we made.
So one of the things that
we've been working on
is reducing the
amount of time that it
takes to run Lighthouse.
Nobody wants to sit around
waiting for a long time.
So we're happy to report
that the median runtime
of all Lighthouse runs that
we're aware of has dropped down
about 50%, and at
the 90th percentile,
we've dropped
it down about 66%.
So we're really
jazzed about this.
We want to make
sure that it's not
a long wait for you to get the
insights that are available.
A few more changes--
we've changed how scores
are kind of represented.
So if you've seen kind of at
the top of a Lighthouse report
these score gauges,
right beneath them
is this little scale, right?
So this is just how the numeric
scale is mapped to a color.
We made a change here.
And I just want
to point out, none
of the numerical scores
and those calculations
have changed in this new update.
It's just deciding
which color is applied.
So this is basically the change.
We've just adjusted how the
various numerical scores
map to these colors.
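As we understand the new bands, 90 and above maps to green, 50 through 89 to orange, and below 50 to red; a sketch of that mapping, not the exact Lighthouse implementation:

```javascript
// Sketch of the adjusted score-to-color mapping. The numeric score
// itself is unchanged; only which band each color covers has moved.
function scoreColor(score) {
  if (score >= 90) return 'green';
  if (score >= 50) return 'orange';
  return 'red';
}

console.log(scoreColor(95)); // green
```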
ELIZABETH SWEENY: Yeah,
so basically, we're
raising the bar about
what our expectations are
for a performance site.
But if you're in the green,
you should feel really good
about it.
PAUL IRISH: Yeah, it's good.
I know a lot of--
yeah, it's nice
to go for the 100.
I love the 100.
I'm excited about it.
But, yeah, if you're in
the green, you're good,
so I just want to
make that clear.
All right, sweet.
Now, there's a few more changes,
and this one about throttling--
when it comes to throttling,
a good mobile-throttling preset
shouldn't necessarily map
to the particular conditions
of a telecommunications
system and its specification.
A good preset maps to
what real users feel.
And so really,
what we want to do
is we want to capture the
latency and throughput
at the 80th percentile,
the frustrating experiences
that you oftentimes experience.
And we want to keep pace
with this measurement
as our global telecommunications
infrastructure gets upgraded.
A lot of people are
moving from 3G to 4G,
and we want to make sure
that we capture that.
And so we're making a change,
but actually not in the latency
and throughput numbers.
This is actually just
a labeling change.
So wherever you
see fast 3G today,
you'll be seeing slow 4G.
And it's actually because the
preset that we use actually
captures a 4G experience
more than a 3G experience.
So just FYI-- same stuff,
different label now.
It's all good.
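For reference, the numbers behind that preset look roughly like this; these are Lighthouse's documented mobile throttling defaults as we understand them, shown here as a plain object rather than any official config format.

```javascript
// The preset formerly labeled "fast 3G", now "slow 4G": same values,
// targeting roughly the 80th-percentile mobile experience.
const slow4G = {
  rttMs: 150,               // round-trip latency
  throughputKbps: 1638.4,   // ~1.6 Mbps downlink
  cpuSlowdownMultiplier: 4, // simulate a mid-tier mobile CPU
};

console.log(slow4G.throughputKbps / 1024, 'Mbps'); // 1.6 Mbps
```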
What's next?
Oh, yeah, so there's a
few other things going on
with Lighthouse, really nice
projects making use of it.
First up, check out
some of the projects
on GitHub taking
advantage of Lighthouse,
some of the dependent projects.
Really cool stuff in here.
A lot happening in
the recent months.
Many projects building systems
around using Lighthouse
in a continuous
integration experience
so that on every commit,
you run Lighthouse, store
all that data, get graphs--
some really cool stuff happening
in here, so take a look.
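A minimal sketch of the kind of check those CI integrations run on every commit, assuming a Lighthouse result (the LHR JSON) is already available; the budget numbers and function name are made up for illustration.

```javascript
// Given a Lighthouse result (LHR) object, collect every category whose
// score falls below its budget. Category scores in the LHR are 0-1.
function checkBudgets(lhr, budgets) {
  const failures = [];
  for (const [category, minScore] of Object.entries(budgets)) {
    const actual = lhr.categories[category].score;
    if (actual < minScore) {
      failures.push(`${category}: ${actual} < ${minScore}`);
    }
  }
  return failures; // an empty array means the commit passes
}

// Hypothetical run: performance regressed below a 0.9 budget.
const lhr = { categories: { performance: { score: 0.84 } } };
console.log(checkBudgets(lhr, { performance: 0.9 }));
```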
Lighthouse is also
available in a number
of different commercial
products as well.
First is Calibre--
fantastic stuff here.
Treo is another one.
I think this is my
site, which is doing OK.
Accessibility actually
does need some work,
but there's some nice stuff.
And the last is
SpeedCurve, which
actually just added
support for Lighthouse
just a few months ago.
So we're excited
to see Lighthouse
becoming part of the production
monitoring kind of ecosystem.
ELIZABETH SWEENY: And
even internally, we're
excited to see where
Lighthouse is being integrated.
So one of those examples, as
was announced in the keynote,
is the new site web.dev.
And it's exciting
to be integrating it
with really prescriptive,
actionable guidance.
And you can run
Lighthouse with any URL,
and it will provide you with
a prioritized to-do list
with that guidance
and interactive code
labs for the specific things
that you need to work on.
What's so exciting about this
is that for the first time,
tooling is directly integrated
with the documentation.
PAUL IRISH: Yeah,
that's pretty hot.
ELIZABETH SWEENY: And we
wanted to also call out
another wonderful
partner who has done
a good job of using Lighthouse.
So Squarespace was able to use
Lighthouse as an out-of-the-box
auditing and reporting
system to build on top of.
And it allowed them to
improve their 50th percentile
and 95th percentile TTI
by over three times,
so we were super
excited by that.
They used it to generate
traces and dig that deep
into specific problems
as they happened,
as opposed to post-regression.
So now, we are going to talk
a little bit about the Chrome
User Experience
Report, or as it says,
CrUX, as I've already
said, I think, three times.
CrUX actually provides
user experience metrics
for how real world Chrome users
experience popular destinations
on the web.
So it's a data set that
is powered by real user
measurement by key user
metrics across the public web,
and it's aggregated anonymously
from users who have opted in.
We're excited to talk
about some of the updates
that we've done here.
One of the things,
and it was actually
featured in [? Aanchal's ?] talk
earlier, is regional analysis.
So we heard loud and
clear from developers
that we needed to be
able to break down
this data set in a
country specific way,
and now you can do that.
So via BigQuery, which is
where you can interact and play
with this data set, you can now
get separate country-specific
data sets to pull it apart.
PAUL IRISH: And,
yeah, so this is
just-- this is how
you've been interacting
with the Chrome UX Report
in the past-- working
with BigQuery.
But I heard that there's like
a nice, new, shiny thing?
ELIZABETH SWEENY: Yeah.
You can get it way easier now.
So the brand new
CrUX dashboard, which
was announced just
a bit ago, it allows
you to understand how an
origin's performance evolves
over time.
And so it's built
on Data Studio.
It's much more
easily accessible,
and it can be easily
customized and shared
with everyone on your team.
And it doesn't require you
to write your own script
on BigQuery to access it.
And it's automatically synced
with all the latest data
sets, so you're good to go.
Also, to ensure consistency
across all of our tooling,
as we've mentioned,
that's a huge goal for us,
FID is now launched as an
experimental metric in CrUX.
So when we announced last
year, the data set only
had 10,000 origins, and now
we are at over 4 million.
And if you are excited
to see your websites--
PAUL IRISH: I am excited to see
paulirish.com in this data set.
ELIZABETH SWEENY: Yeah.
PAUL IRISH: Because it's
not, and it would be great.
ELIZABETH SWEENY: Yeah,
we're working hard
to improve it, and
expand quickly,
and so if you're
excited, check in
soon because we are
working hard to move fast.
PAUL IRISH: Awesome.
All right, so one of the things
that's really important to us
is to have a unified story
between our performance tools.
So, OK, so hand raising time.
Raise your hand if
you've used Lighthouse.
All right, yes.
Raise your hand if you've
used Page Speed Insights.
Yes, of course.
Raise your hand
if you've noticed
that what you're seeing
in Lighthouse and Page
Speed Insights
isn't necessarily
telling the same story.
Yeah, I'm there with you too.
Now, we saw this was
a bit of an issue,
and we wanted to improve it
because we don't want advice
from two different tools
that Google provides
that is kind of conflicting.
So we've been working
hard and collaborating
with the Search team on this.
And so today, we're
excited to announce
that there's a brand new
next generation of Page Speed
Insights now powered
by Lighthouse.
And this is really
exciting stuff.
So now, if you use
Page Speed Insights,
all of the data that you've
been seeing in Lighthouse when
it comes to performance
is now in the report.
All of the metrics, and
opportunities, and diagnostics,
all right there.
You also see the top
score that you have been
seeing in Page Speed Insights.
That score is the Lighthouse
performance category score,
so kind of speaking
the same language.
And still, if you've really
enjoyed kind of the Chrome UX
Report data that has
been available inside
of Page Speed Insights,
that's there too.
I'm going to play a quick little
screencast of how this looks.
So let's take a
look at Chrome.com
in Page Speed Insights.
We're going.
Come on.
Yeah.
Great, good, awesome.
This is in real time.
I did not speed
anything up, so we've
got to wait for the latency.
So, yeah, so here's--
this should look
very fairly familiar
if you've used Lighthouse.
But up at the top,
we have field data.
And by default,
Page Speed Insights
runs the analysis on both
mobile and desktop at the same
time and delivers you the
results simultaneously.
So you can check that out.
So this is live today,
so go check it out.
Take a look.
Give us your feedback.
I'm excited to have this
out there.
AUDIENCE: Woo!
PAUL IRISH: All right, thanks.
Oh, yeah, I mean, you
can clap if you want.
I mean, that's cool.
[APPLAUSE]
All right, and if you've
ever actually opened up--
I don't know.
I have a tendency of opening
up the dev tools on basically
every site that I do.
You know, it's just a habit.
So I opened up the dev tools
on Page Speed Insights, and lo
and behold, it's just
a thin web app that
makes a call to an
API, a RESTful API,
and it's actually the
Page Speed Insights API.
So we were like, well,
this kind of means
that in order to
do this, then we're
going to have to have
all the Lighthouse
data available over the API.
So that's what we have.
The new Page Speed API V5, we
consider it the Lighthouse API
V1.
All the same Lighthouse data,
including all categories, not
just performance,
but all of them.
And all the work
is done for you.
No waiting for your own Chrome
to reload and do the analysis.
So we'll do the work for you.
And the Chrome UX Report data,
that summary is still in--
it's added into the response.
That's the word.
Basic usage-- I don't know if
you'd use it from fetch client
side, but if you did, it would
look something like this.
Well, just pass the URL.
There's a few other
parameters to customize things.
Get back the result.
Looks a little like this.
There's a Lighthouse result full
of the exact same Lighthouse
data that you'd be getting
by running Lighthouse
anywhere else.
And inside that loading
experience property,
that is the Chrome
UX Report stuff.
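Putting that together, a client-side sketch against the v5 endpoint (`https://www.googleapis.com/pagespeedonline/v5/runPagespeed`); the field names mirror the response structure just described, and the helper names are our own.

```javascript
// Build the v5 request URL; extra params like `strategy` are optional
// query parameters the API accepts.
function psiUrl(pageUrl, params = {}) {
  const endpoint =
    'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const query = new URLSearchParams({ url: pageUrl, ...params });
  return `${endpoint}?${query}`;
}

// In the browser (or Node 18+), fetch and pull out both halves:
// lab data (lighthouseResult) and field data (loadingExperience,
// i.e. the Chrome UX Report summary).
async function runPagespeed(pageUrl) {
  const res = await fetch(psiUrl(pageUrl, { strategy: 'mobile' }));
  const data = await res.json();
  return {
    labScore: data.lighthouseResult.categories.performance.score,
    fieldFCP: data.loadingExperience.metrics.FIRST_CONTENTFUL_PAINT_MS,
  };
}
```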
So really cool, check it out.
Details and documentations,
reference guides here--
Page Speed Insights V5.
All right.
ELIZABETH SWEENY: And
so this is really cool.
PAUL IRISH: It's really cool.
ELIZABETH SWEENY: Yeah, it
means unified analysis,
and it's the same
whether you're measuring,
you're optimizing,
or you're monitoring.
If you want to start making
changes and testing things out,
there's a place for
you to go for that.
So that's great, and we have
all of these things aligned,
but where do you go for what?
And when should you go there?
If you need a snapshot
of a page's performance,
as Paul said earlier,
Page Speed Insights
is a good default to go
to because it provides you
with both the field and the lab
and gives you a good benchmark.
If you want to make
changes, test, and iterate,
and really have
that fast feedback,
then the Chrome extension,
the audits panel,
or operating within the
command line interface
is going to be a
good place to go.
And finally, if you want to set
up production monitoring or set
budgets, then the API is
going to be fantastic.
But across the entire
development lifecycle,
you now are completely
powered by Lighthouse, which
we're super excited about.
PAUL IRISH: Yeah.
So to wrap up, well,
I guess if there's
one thing or four things that
you take away from this--
first up, measure
well, measure often.
You can't improve what
you don't measure.
ELIZABETH SWEENY: Yep.
You can now use the
Page Speed Insights
for quick Lighthouse analysis.
PAUL IRISH: The CrUX
real world data really
helps round out
your view of what's
happening with your users
and really understand
different percentiles
where users are
feeling pain and frustration.
ELIZABETH SWEENY: And finally,
to evaluate performance
at every stage, which is
really important to us,
you can now check out
the API, so go use it.
PAUL IRISH: All right,
I think that's it.
Thank you guys very much.
ELIZABETH SWEENY: Thank you.
[MUSIC PLAYING]
