
[MUSIC PLAYING]

DION ALMAER: Hey there. Welcome to web.dev LIVE. I'm Dion Almaer, and I work on the web developer ecosystem at Google, and I'm delighted to kick off our online event. First, though, I want to acknowledge the times we're in. We're dealing with a global pandemic that has taken a huge toll on us all. And most recently, we witnessed events which have once again surfaced the systemic racism in our society, racism that we must do everything we can to eradicate. These events have been really humbling.

They're showing us how much work we have ahead of us. But they also show us the power of community. So we join you today, and over the next three days, in the spirit of being together and helping each other, because we were upset when we had to cancel Google I/O. And I kept thinking about an empty Shoreline Amphitheater on the days that some of us would have congregated. Web developers reached out sharing these same feelings, wishing we could be discussing ideas and enjoying the hallway track. Whether you're joining us from your couch, kitchen, or hammock, we hope you're safe and ready to kick web.dev LIVE into gear.

Now, we'll be coming to you in different time zones each day, reaching you no matter where you are on the globe. We'll be bringing you content from across our teams, as well as members from the web community at large. Each day, you'll have Googlers on standby to answer your questions in real time. So as you're watching the sessions, simply head over to the live chat on web.dev/live or on YouTube, and just ask away.

Now, when coronavirus became global, we really felt the need to stabilize. This resulted in us pausing Chrome releases and temporarily rolling back the SameSite cookie changes. We wanted to track Chrome usage and see what changed, to make sure that we could be on top of any ecosystem changes too. You probably won't be surprised that we saw surges in usage of media APIs as video chat and streaming really soared. Some types of content also saw large traffic surges, such as food, commerce, entertainment, health, and science, and many developers were focusing on making sure these sites were as resilient as possible. That's when we gathered our best practices and made them available on web.dev/covid19. We saw a lot of developers scramble to make changes to their websites, and many created new ones. Governments had to jump on this to make sure that people had all of the critical information that was changing rapidly.

I remember seeing Alex Russell tweet about one of these government sites from the state of California. We were really inspired by their work and wanted to ask them about their experience, and they kindly agreed to join us. So let's welcome Aaron Hans, the engineering tech lead on the project.

AARON HANS: Thank you. Great to be here.

DION ALMAER: So, Aaron, I'm really curious about how this site even all came together.

AARON HANS: The alpha.ca.gov team was formed in December of 2019 by Angelica Quirarte to bring human-centered design processes to the state of California and improve their online services. We built a lot of prototypes for things like how to help people review the safety of their tap water, see if they're eligible for subsidized phone services, and prepare for wildfires. Then, when the pandemic hit the state, we were asked to stand up the public response site.

DION ALMAER: Got it. So when a government team has to build something like this, how do you go about it? What are your core principles?

AARON HANS: The number one goal is to make something that works well for everybody.
The technical considerations are passing accessibility audits, making sure the site works with keyboard navigation and with screen readers, and making sure it has a smooth experience on low-end hardware. We used the cheapest phone you can get from the local Cricket Wireless as our test device. And the non-technical considerations are readability: what's the grade level of all the content, and are we really building something that users need and iterating based on their feedback?

DION ALMAER: Got it. Now, I've been trying to picture the time pressure that you had to get this site out. Can you tell us a little bit about how you actually built the website and how you managed the trade-offs between quality and that timeline?

AARON HANS: Sure. It was definitely an accelerated timeline. We put the site up in four days. Then the governor announced a statewide lockdown and we had millions of visitors. We're really happy that we chose a static site generator for that, because it helped us weather the traffic smoothly. We chose Eleventy for the static site generator,
and we augment that with web components and serverless APIs built on Node.js.

DION ALMAER: Got it. We're actually fans of Eleventy too. We use it on web.dev and really like it. Was this kind of a new setup for the team to build a website like this, or have you been doing this for a while?

AARON HANS: I remember reading the article about how web.dev was built and being really happy that we're using some of the same tools. We started using Eleventy at the end of last year, just to use it for a blog on news.alpha.ca.gov. And when we used it for the COVID-19 site, it's built on content authored in a WordPress environment. We then consume that with the WordPress API, and use GitHub Actions for the Eleventy production build.

DION ALMAER: Got it. So now you've got the site out there. I'm curious, what's next for you and the team?

AARON HANS: Next up for the team is continuing to respond to the pandemic. Then we're going to be getting back to helping improve other online services. And we're hiring: check us out at news.alpha.ca.gov if you want to help out.
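As a rough sketch of the pipeline Aaron describes, here is what an Eleventy global data file pulling content from a WordPress backend at build time could look like. This is an illustration, not the state's actual code: the backend URL is a made-up placeholder, and only the route and field names follow the standard WordPress REST API shape.

```javascript
// Hypothetical Eleventy global data file (e.g. _data/posts.js).
// Eleventy runs the exported function at build time and exposes the
// result to templates, so the pages ship as fully static HTML.

// Keep only the fields the templates need from a raw WordPress
// REST API post object.
function simplifyPost(raw) {
  return {
    title: raw.title.rendered,
    body: raw.content.rendered,
    date: raw.date,
  };
}

async function getPosts() {
  // Placeholder backend; /wp-json/wp/v2/posts is the standard
  // WordPress REST API route for posts.
  const res = await fetch(
    "https://cms.example.gov/wp-json/wp/v2/posts?per_page=100"
  );
  if (!res.ok) throw new Error(`WordPress API returned ${res.status}`);
  return (await res.json()).map(simplifyPost);
}

// In a real data file you would hand this to Eleventy with:
// module.exports = getPosts;
```

A GitHub Actions workflow would then run the Eleventy production build whenever content changes, matching the setup described above.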

AARON HANS: We're talking about Eleventy and the other tools that we're using, and I wanted to mention that Lighthouse is an incredibly important tool for us, because performance is such a paramount concern. And I love the way that it gamifies web development. You can get the rest of your teammates to challenge each other and say, 'Who can put up some more points today? We really need to get that score up.'

DION ALMAER: Nice. I'm curious who's winning in the points race. And I'm really impressed by how you think of performance being a key part of the accessibility story in general. Well, Aaron, it's been really inspiring to see the work that you and the team did here, again, at an incredibly stressful time. Thank you so much for coming on and sharing the story with us.

AARON HANS: Thank you.

DION ALMAER: Now, it's been great to see developers like Aaron focus on accessibility, resilience, and performance. And we've made some announcements over the last month about a program that brings this all together under the umbrella of Web Vitals. To hear more, let's welcome Elizabeth, a PM on the Chrome team, to explain.

ELIZABETH SWEENY: Thanks, Dion.

Yeah, there have been a lot of product updates and releases, and I'm really excited to go over them with you.

DION ALMAER: Yeah, it's been particularly busy here over the past couple of months. So it would be great to have you get us up to speed on Web Vitals and what developers should really be considering here.

ELIZABETH SWEENY: Yep, that's great. Let's dive in. First off, what are Core Web Vitals? They are a set of user-centric metrics and thresholds that apply to all web pages, across all industry verticals and all types of experiences on the web. They are signals to developers and business stakeholders about the basic health of your site, and as such, they should be measured by everybody.

But OK, I jumped straight into definitions. Let's take a step back. Why did we introduce Core Web Vitals as a thing? There are already tons of metrics and lots of guidance about how to measure your site's performance. How do Core Web Vitals help us? Well, let's go back to our foundational goal: we want to create outlandishly phenomenal experiences for all of our users.

And it's not just out of the goodness of our hearts, either. We know that every time we have a rage clicker on our site, we lose out on a reader, a customer, or a client. Also, we want the money pug. So there is this mythical, absolutely fabulous experience that we've set our sights on creating. It seems easy, until you realize that the unicorn horn requires both loading and interactivity performance measurement. And the rainbow? Well, the rainbow requires an entire RUM setup for each color.

So there you are, watching your flying unishund (unicorn dachshund), and you realize that you have this: it's gorged on a bit too much JavaScript, it doesn't respond when you're issuing it commands, and that's upsetting. And it's going to take quite a bit to get this to this. So the question is, where do you start? Well, in order to know if you've improved, we need to know what to measure. To know what to measure, we need to define our goals.

So, put another way, what makes a web experience shine? This is where the core dimensions of quality come in. There are foundational elements of a user experience that make a unishund shine above the competition. Content needs to load quickly. We've all been there: the longer we have to wait, the more likely we are to bail. So your pages have to load fast. Interactivity is just as important. You're clicking and nothing is happening? No fun. You don't just need content to be visible, you need it to be available for use. Lastly, we want a page to be stable and predictable. Just a few pixels moving around can make a huge difference. These core dimensions of quality reflect user-centric signals that have long been mission critical for you and your site's success.

So we are closer to defining quality. But how do we measure these quality dimensions? That's where representative metrics come in. To represent fast loading, we have Largest Contentful Paint, or LCP. It provides insight into how quickly
a user is able to see the meat of what they are expecting and wanting out of a page. For responsive interactivity, we have First Input Delay, or FID. This metric has been a critical signal for developers for some time to understand how long a page takes to respond to a user's initial input. And finally, to represent visual stability, we have Cumulative Layout Shift, or CLS. CLS measures the amount that the elements within the viewport move around during load time.

OK, so we know how to measure our core quality dimensions. Let's say my LCP is three seconds. Do I celebrate? Wait, I don't actually have any idea whether or not that's good. So I need to evaluate my performance on a spectrum for each metric, which is where the final element of Core Web Vitals comes in: our thresholds. For each representative metric, we have clear goalposts around what constitutes a good experience, one that needs improvement, and one that's poor. So, for instance, for LCP, anything
that is 2.5 seconds or less is on its way to being a unishund, anything between 2.5 and 4 seconds needs some work, and anything above 4 seconds needs quite a bit of love.

So, to finish up our definition of what Core Web Vitals are, the initiative is a combination of three things. First is user-centric quality dimensions. Then we have representative metrics of those dimensions. And finally, thresholds to help you evaluate whether or not your performance is good against any given metric.

But there is one more piece of really important information: we need to know how many page loads need to hit the thresholds for the Core Web Vitals metrics to constitute a good experience. So say we have 100 users. If only one of them has an LCP below 2.5 seconds, do I pass Core Web Vitals? The answer is no. Core Web Vitals uses the 75th percentile value of all page views in the field to evaluate against the thresholds.
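That evaluation can be sketched in a few lines of JavaScript. This is a hedged illustration: the LCP samples are invented, and the percentile uses one common method (nearest rank), but the 2.5-second and 4-second goalposts and the 75th-percentile rule are the ones just described.

```javascript
// Classify one LCP value (milliseconds) against the Core Web Vitals
// thresholds described above: good <= 2.5 s, poor > 4 s.
function classifyLCP(ms) {
  if (ms <= 2500) return "good";
  if (ms <= 4000) return "needs improvement";
  return "poor";
}

// 75th percentile of field samples, using the nearest-rank method.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}

// Invented per-page-view LCP samples (ms) from "the field".
const lcpSamples = [1800, 2100, 2300, 2400, 2600, 2900, 3200, 5400];

const value = p75(lcpSamples); // 2900
console.log(`p75 LCP: ${value} ms (${classifyLCP(value)})`);
```

The same pattern applies to FID and CLS with their own goalposts: compute the 75th percentile of the field samples, then compare it against that metric's thresholds.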

In other words, if at least 75% of page views to a site meet the 'good' threshold, the site is classified as having a good performance for that metric. And this applies to all three metrics: LCP, FID, and CLS. The 75th percentile is used to evaluate all three.

Core Web Vitals is a holistic package of everything you need to create the foundation of a healthy site. They are valuable because they show you exactly where to start to set yourself up for success. If 75% of your users are getting fast, interactive, stable content, it's cause for celebration. But as we know, there are other dimensions of quality that are extremely important: accessibility, security, mobile friendliness. There are a lot of dimensions that make a basic unishund even more fabulous and are important to your site's success. So don't stop measuring these if you already are. And if you aren't already, once you've optimized your Core Web Vitals, you can begin to venture into measuring and benchmarking against other important vitals that
are relevant to your business and your users. Core Web Vitals are, just as the name indicates, core, and they provide you with a solid foundation upon which to further optimize. Given how important it is to quantify a user's experience accurately in order to be successful on the web, we are constantly working to find ways to better measure all quality dimensions. What this evolution has often meant in the past is a stream of new metrics, tweaks to existing metrics, and new guidance, many times at an unpredictable cadence. We know how difficult this can be when trying to set goals, align roadmaps, and get organizational buy-in. Because of this, we want to set a predictable cadence of updates to Core Web Vitals. They will be refreshed once a year, around the time of Google I/O, to ensure that they reflect the latest in our learnings, and this includes adjustments to the set of metrics as well as the thresholds. Looking ahead towards 2021, we will be providing regular updates on future metric candidates, motivations, and implementation status.

OK, so this is all fine and good.

But how do I get started? To know what's optimized, you have to measure first. And Core Web Vitals are now in all of your favorite developer tools, and there are more than what is listed here, including a new web-vitals library and a bunch of ecosystem tools that have already adopted them. As you can see, Core Web Vitals are available across the board. You're able to measure them for a specific page, for your origin, locally in the lab, and from real users in the field.

Remember that First Input Delay is only measurable in the field, so you have to have a real user clicking on your page in order to measure it. But that doesn't mean you can't use lab tools to help you improve it. Total Blocking Time, TBT, is a proxy lab metric for FID that allows you to debug and improve your interactivity in the lab before your users ever have to experience a bad FID.

The next obvious question is: again, this is great, but where do I start? What tools should I use? I'm so glad you asked. Each tool has its own strength. For example, PSI is one of the only places
you can see your lab and field data in one place, and Search Console is critical for identifying page types that need improvement. As I mentioned earlier, we're seeing so many great ecosystem players and production monitoring solutions already implementing support for Core Web Vitals, and we're really delighted.

But again, you ask: 'You've shown me the magical unishund, and now you've given me a palette of tools to choose from. That's amazing. But tell me what to do first.' OK, two things. First, go to PageSpeed Insights. That will give you a pulse of your Core Web Vitals performance in both the field and the lab. From CrUX, you'll be able to see whether or not 75% of your loads are hitting the Core Web Vitals thresholds for both your page and your origin in the field. Then you can take a look at your lab data from Lighthouse to see whether or not you are hitting the Core Web Vitals thresholds for each metric in a synthetic testing environment. This helps to guide you towards actionable opportunities to improve your page's performance. Second, check out some more in-depth talks later today that go into detail about measuring and optimizing against your Core Web Vitals.

English: 
And with that, I'm going
to pass it back to Dion.
Thank you so much.
DION ALMAER: Great.
Yeah, thanks for
showing us the context
and all of the information
across the whole slew of tools
there, Elizabeth.
ELIZABETH SWEENY: My pleasure.
DION ALMAER: One of the critical
steps in modern web development
with a lot of influence over
your vitals is your build step.
That's where your CSS modules
are turned into real CSS,
your bundler analyzes
your module graph,
and optimization is
going to really kick in.
We wanted to go deeper here
to understand the popular
bundlers, how they work,
what they can and cannot do,
and how to set them
up for success.
Let's welcome Surma
to tell us more.
Hey, Surma.
SURMA: Hey, Dion.
DION ALMAER: So there
are many best practices
to follow in web development.
Knowing them is one thing,
but getting your build system
to follow them as well
is kind of another beast.
So do you have anything to
maybe report on that front?
SURMA: So there are two bits
on this side of things.
On the one hand, there
are many developers
who want to know what build
tool they should learn and use
for their next project.
And on the other hand,
there are many projects
that already have a build
tool set up, but are looking

English: 
And with that, I'm going to pass it back to Dion.
Thank you so much.
Great. Yeah. Thanks for showing us the context and all of
the information across the whole slew of tools there,
Elizabeth.
My pleasure.
One of the critical steps in modern web development with a
lot of influence over your vitals is your build step.
That's where your CSS modules are turned into real CSS,
your bundler analyzes your module graph, and optimizations
can really kick in.
We wanted to go deeper here to understand the popular
bundlers, how they work, what they can and cannot do,
and how to set them up for success.
Let's welcome Surma to tell us more.
Hey, Surma.
Hey, Dion.
So there are many best practices to follow in web
development. Knowing them is one thing, but getting your
build system to follow them as well is kind of another
beast. So do you have anything to maybe report
on that front?
So there's two bits on this side of things.
On the one hand, there are many developers who want to know
what build tool they should learn and use for their next
project. And on the other hand, there are many projects
that already have a build tool set up, but are looking to

English: 
to improve their output.
To tackle both of
these problems,
we built Tooling Report.
Tooling Report is a website
that you can actually go
to right now, Tooling.Report.
We created an extensive list
of best practices in web
development, took what we think
are the four most popular build
tools, and checked for each
build tool if it allows you
to follow that best practice.
And each tool gets a point
for each test that it passes.
We chose to start this project
with Browserify, Parcel,
Rollup, and Webpack.
Now, Browserify might
be surprising to some.
But the data indicates that
there are still many sites
out there that use Browserify.
And we want to
help those projects
improve their sites as well.
Of course, we have been working
with the core teams of all
these tools to make sure
that we not only use
the tool correctly, but
also represent them fairly.
The tests are subdivided
into categories.
And in the overview, you can
get a quick sense of which tool
is excelling at what category.
You can get more information
on a test in the overview
and learn more about it.

English: 
improve their output.
To tackle both of these problems, we built tooling.report.
Tooling.report is a website that you can actually go to
right now: tooling.report.
We created an extensive list of best practices in web
development, took what we think are the 4 most popular
build tools, and checked for each build tool if it allows
you to follow that best practice. And, each tool gets a
point for each test that it passes.
We chose to start this project with browserify, parcel,
rollup, and webpack.
Now, browserify might be surprising to some, but the data
indicates that there are still many sites out there that
use browserify and we want to help those projects improve
their sites as well.
Of course, we have been working with the core teams of all
these tools to make sure that we not only use the tool
correctly, but also represent them fairly.
The tests are subdivided into categories and in the
overview you can get a quick sense of which tool is
excelling at what category.
You can get more information on a test in the overview and
learn more about it.

English: 
And now this is where I think tooling.report gets really
interesting! Each test has a dedicated page
where you can compare how the tools score on this specific
test. There is an in-depth explanation on why
this test is important and how it relates to best practices
in web development.
We also explain how we codified the best practice and
what the expected outcome is.
And finally, at the bottom, you can find an explanation for
each tool and why it passed or why it might
have failed this test.
If a tool is not passing a test, we will also link to bug
reports on the tool's issue tracker.
Many of them we have actually filed ourselves while
building tooling.report.
We also link to a minimal NPM project that we
use to determine the tool's behavior.
This way tooling.report not only tells you what a tool can
and cannot do, but you can also look at the configuration
files and plugins to see how you can follow a best
practice with this tool.
This way, the site will function as a source of
documentation. The entire site is open source
on GitHub and we'd love the community to help us come up

English: 
And now this is where I
think Tooling Report gets
really interesting.
Each test has a
dedicated page where
you can compare how the tools
score on this specific test.
There is an in-depth explanation
on why this test is important,
and how it relates to best
practices in web development.
We also explain how we
codify the best practice
and what the
expected outcome is.
And finally, at
the bottom, you can
find an explanation for each
tool and why it passed
or why it might have
failed this test.
If a tool is not
passing a test, we
will also link to bug reports
on the tool's issue tracker.
Many of them, we have
actually filed ourselves
while building Tooling Report.
We also link to a
minimal NPM project
that we use to determine
the tool's behavior.
This way Tooling Report not
only tells you what a tool can
and cannot do, but you can also
look at the configuration files
and plugins to see how you can
follow a best practice with
this tool.
This way the site
doubles
as a source of documentation.
The entire site is
open source on GitHub.

English: 
with more tests and help us add more tools over time.
So you can check this out now on
tooling.report. Thanks for joining me, Surma.
Cheers.
Now we're all becoming more aware of the importance of
security and privacy.
Chrome believes in an open web that's respectful of users
privacy and maintains key use cases that keeps the Web
working for everyone.
I'd love to welcome Rowan to have a chat and kind of share
some of what's new here.
Hey there. Thanks, Dion. My name's Rowan.
And I look after Web DevRel for Security,
Privacy, Payments, and Identity or SPPI
for short. Now, while that's a cute internal
name, we are part of the wider Trust and Safety team
within Chrome.
Great. So why don't we start with SameSite cookies and the
temporary rollback that kind of kicked into gear for us
when COVID kind of really started to hit globally.
Can you kind of share what the latest news is there?
Sure, yeah. So hopefully, as a lot of you are aware,

English: 
And we'd love the community to
help us come up with more tests
and help us add more
tools over time.
DION ALMAER: So you can check
this out now on Tooling.Report.
Thanks for joining me, Surma.
SURMA: Cheers.
DION ALMAER: Now
we're all becoming
more aware of the importance
of security and privacy.
Chrome believes in
an open web that's
respectful of users'
privacy and maintains
key use cases that keeps the
web working for everyone.
I'd love to welcome Rowan to
have a chat and share some
of what's new here.
ROWAN MEREWOOD: Hey, there.
Thanks, Dion.
My name's Rowan, and
I look after Web Dev
Rel for security, privacy,
payments, and identity,
or SPPI for short.
Now while that's a
cute internal name,
we are part of the wider Trust
and Safety team within Chrome.
DION ALMAER: Great.
So why don't we start
with same site cookies
and the temporary rollback that
kind of kicked into gear for us
when COVID kind of really
started to hit globally.
Can you kind of share what
the latest news is there?
ROWAN MEREWOOD: Sure.
Yeah.
So hopefully as a
lot of you are aware,

English: 
there's an update to the
cookie standard that's
being adopted across Chrome,
Firefox, Edge, and others
to restrict cookies to
first party by default
along with requiring
explicitly marking cookies
for third party contexts.
Now that's all configured
via the same site attribute,
hence, same site cookies.
We were rolling this
out to stable Chrome,
but decided to reverse
this at the start of April,
because the COVID situation
saw a huge jump in demand
for online services, but also
a huge shift in developers
being at home without
their equipment
or looking after their families.
We made the call that it
was important to prioritize
stability at that moment.
Now these changes are intended
to make the web a safer place,
protecting against
cross-site request forgery
and trying to minimize the
surface for covert tracking.
Sadly, during a crisis when
people are most vulnerable,
you see these kind of
scams and attacks jump too.
So with the Chrome
84 Stable release,

English: 
there's an update to the cookie standard that's being
adopted across Chrome, Firefox, Edge and others
to restrict cookies to first-party by default,
along with requiring explicitly marking cookies
for third-party context.
Now, that's all configured via the SameSite attribute,
hence - SameSite cookies.
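As a rough sketch of what that explicit marking looks like, here is a hypothetical helper that builds Set-Cookie header values under the updated rules; the names, values, and helper itself are made up for illustration:

```javascript
// Sketch: cross-site (third-party) cookies must now opt in with
// SameSite=None and must be Secure; otherwise SameSite=Lax is the
// safe choice (and the new default when the attribute is omitted).
function buildCookie(name, value, { crossSite = false } = {}) {
  const attrs = crossSite
    ? "SameSite=None; Secure"
    : "SameSite=Lax; Secure; HttpOnly";
  return `${name}=${value}; ${attrs}`;
}

console.log(buildCookie("session", "abc123"));
// session=abc123; SameSite=Lax; Secure; HttpOnly
console.log(buildCookie("tracker", "xyz789", { crossSite: true }));
// tracker=xyz789; SameSite=None; Secure
```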
We were rolling this out to stable Chrome, but decided to
reverse this at the start of April because the COVID
situation saw a huge jump in demand for online
services, but also a huge shift in developers
being at home without their equipment or looking after
their families.
We made the call that it was important to prioritize
stability at that moment.
Now, these changes are intended to make the web a safer
place - protecting against cross-site request
forgery and trying to minimize the surface for covert
tracking.
Sadly, during a crisis when people are most vulnerable,
you see these kind of scams and attacks jump, too.
So with the Chrome 84 stable release, which

English: 
which is mid-July or
about two weeks from now
if you're watching
the stream, we
are going to start
rolling this out again
across all Chrome versions.
DION ALMAER: Got it.
So what I'm hearing here is that
if you haven't tested your site
yet, if you haven't made
changes to kind of make sure
that everything works
well, now is actually
the time to get going?
ROWAN MEREWOOD: Absolutely.
So we have documentation
and examples and samples
out there right
now for same site
on web.dev as well
as on Chromium.org.
And we'll be covering
implementation and debugging
in our segment on day three.
DION ALMAER: Okey-doke.
So we all love cookies.
But I'm assuming there's
going to be a few more
things that we're
going to be talking
about in the kind of general
view of trust and safety?
ROWAN MEREWOOD: I'll
be honest with you.
I am going to talk
about cookies a lot.
But the rest of
the team does have
a healthier range of interests.
DION ALMAER: Got it.
OK, sounds good.
So we're going to
cover things like--
back in 2018, Spectre
kind of raised its head.
And we as a web
community started

English: 
is mid-July or about two weeks from now, if
you're watching the stream, we are going to start rolling
this out again across all Chrome versions.
So what I'm hearing here is that if you haven't tested your
site yet, if you haven't made changes to kind of make sure
that everything works well, now is actually the time to get
going.
Absolutely. So we have documentation and examples
and samples out there right now for SameSite on
web.dev, as well as on chromium.org, and we'll be covering
implementation and debugging in our segment on
day three.
Okey-doke. So we all love cookies, but I'm assuming there's
going to be a few more things that we're going to be
talking about in the kind of general view of
trust and safety.
I'll be honest with you.
I am going to talk about cookies a lot, but the
rest of the team does have a healthier range of interests.
Okay, sounds good.
So are we going to cover things like, you know, back in
2018, Spectre kind of raised its head
and we as a web community started to really look at

English: 
to really look at what can
we do to help make sure
that our users are secure.
Are there going to be kind
of those type of aspects
that we'll be covering too?
ROWAN MEREWOOD: For sure.
Yeah.
So Eiji is going to be
taking us through some
of the new cross origin
opener and embedder policies,
or COOP and COEP, for short.
So like you were saying,
Spectre was a vulnerability
that in a super
short summary, meant
that malicious code running
in one browser process
might be able to read any data
associated with that process,
even if it's from
a different origin.
And that is super bad.
Now one of the
mitigations for that
is site isolation
or putting each site
into a separate process.
Eiji is going to be running
through how the headers allow
sites to opt into that, along
with a bunch of other benefits
that it brings as well.
DION ALMAER: Got
it, got it, got it.
OK.
So we've got restricting
cross-site cookies.
And then we've got isolating
sites to individual processes.
We've got this
interesting evolution.
So I'm sensing there's kind
of a bit of a theme here.
ROWAN MEREWOOD: Yeah, there
is definitely a theme.

English: 
what can we do to help make sure that our users are secure?
Are there going to be kind of those type of aspects that
we'll be covering, too?
For sure. Yeah. So Eiji is going to be taking us through
some of the new cross origin opener and embedder policies
or COOP and COEP, for short.
So like you were saying, Spectre was a vulnerability that
in a super short summary meant that malicious
code running in one browser process might be able to read
any data associated with that process, even
if it's from a different origin - and that is super bad.
Now, one of the mitigations for that is site
isolation - or putting each site into a separate
process. Eiji is going to be running through how the
headers allow sites to opt-in to that, along
with a bunch of other benefits that it brings as well.
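The opt-in mentioned here comes down to two response headers; this tiny sketch shows the values involved (the helper name is ours for illustration, not a Chrome or browser API):

```javascript
// Sketch: the two response headers a site sends to opt in to
// cross-origin isolation (COOP + COEP).
function isolationHeaders() {
  return {
    // COOP: detach the page from cross-origin openers so it gets
    // its own browsing context group (and, with site isolation,
    // its own process).
    "Cross-Origin-Opener-Policy": "same-origin",
    // COEP: only load subresources that explicitly opt in
    // (via CORS or Cross-Origin-Resource-Policy).
    "Cross-Origin-Embedder-Policy": "require-corp",
  };
}
```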
Got it. Got it. Got it. Okay. So we've got restricting
cross-site cookies and then we've got isolating sites to
individual processes. We've got this interesting evolution.
So I'm sensing there's kind of a bit of a theme here.
Yeah, there is definitely a theme.

English: 
So we've also got Sam
and Maud on the team.
And they're going to kick off
our little segment to explain
the link between these.
And really, it comes
down to the web today
is seeing this evolution of
expectations regarding privacy.
That includes users expecting
more transparency and control
over their online data, and
new regulations impacting how
data can be used and collected.
Now at Google, we
believe in an open web
that's respectful of
the user's privacy,
whilst also maintaining
a healthy ecosystem.
So under the banner of
the privacy sandbox,
we're introducing a number
of standards proposals that
aim to support
the use cases that
let people make their living
off creating web content,
but do that in a way that
better respects user privacy.
We're also actively seeking
feedback on these proposals.
We're participating in
all the open forums at W3C
to discuss our proposals,
as well as those
submitted by other parties too.
DION ALMAER: OK.
So the web's evolving.

English: 
So we've also got Sam and Maud on the team, and they're
going to kick off our little segment to explain the link
between these.
And it really comes down to the web today is seeing this
evolution of expectations regarding privacy
that includes users expecting more transparency and
control over their online data, and new regulations
impacting how data can be used and collected.
Now, at Google, we believe in an open web that's respectful
of the users' privacy whilst also maintaining a healthy
ecosystem. So under the banner of the Privacy
Sandbox, we're introducing a number of standards proposals
that aim to support the use cases that let people
make their living off creating web content, but
do that in a way that better respects user privacy.
We are also actively seeking feedback on these proposals,
we're participating in all the open forums
of W3C to discuss our proposals, as well
as those submitted by other parties, too.
Okay. So the web's evolving and we're getting new privacy

English: 
preserving APIs coming in and
we're getting rid of the old cross-site data leaking APIs.
So they're kind of moving out.
Exactly. And one of the ways I like to think about it with
our team as well is that we're kind of all about the
places where you create relationships on the
web. So people should feel in control of their
data when they browse around the web with a clear choice
about what and where they share things
and when they do want to create a relationship - like
signing into a site or making a purchase - that
should be simple, secure and only share what's
needed.
Awesome. Thanks so much for the breakdown on what we're
thinking about here in the realm of trust and safety,
Rowan. And I'm really excited to see the content that is
coming later on the stream from the team where we
can kind of go into more of a deep dive.
Cool. Thanks. And I'll see you around.
Now, the web has a great history as a content platform with
its roots in hyperlink documents, but digital content
has gotten richer and richer.

English: 
And we're getting new privacy
preserving APIs coming in.
And we're getting rid of the old
cross-site data-leaking APIs.
So they're kind of moving out.
ROWAN MEREWOOD: Exactly.
And one of the ways I like to
think about it with our team
as well is that we're kind
of all about the places
where you create
relationships on the web.
So people should feel
in control of their data
when they browse around the web,
with a clear choice about what
and where they share things.
And when they do want to
create a relationship,
like signing into
a site or making
a purchase, that should
be simple, secure,
and only share what's needed.
DION ALMAER: Awesome.
Thanks so much
for the brain dump
on what we're thinking about
here in the realm of trust
and safety, Rowan.
I'm really excited to see the
content that's coming later
on the stream from
the team, where we can
go into more of a deep dive.
ROWAN MEREWOOD: Cool.
Thanks.
And I'll see you around.
DION ALMAER: Now the
web has a great history
as a content platform, with its
roots in hyperlink documents.
But digital content has
gotten richer and richer.

English: 
We think the web has a
great role to play here too.
I'd like to invite Paul Bakaus
to talk about a new content
type that we're really excited
about called Web Stories.
PAUL BAKAUS: Hey, Dion.
DION ALMAER: Hey, Paul.
So what are these Web Stories?
And why are we working on them?
PAUL BAKAUS: My
team and I have been
hard at work working
on Web Stories,
and I'm excited to share
some updates with you.
And yes, I'm talking about
these kind of stories.
You know, full screen,
portrait, tap to advance, swipe
to move on.
And if you're like,
wait a second.
Aren't you a little
late to the show?
Then you'd be right.
But these are not your
standard walled garden stories.
Current implementations focus
on ephemerality and ultra
low barrier to creation.
But our bet is that
the Stories format
works beyond the
ephemeral use case,
and can become its own pillar
in the open web media landscape
and that's because
they're really cheaper
to make than video and more
engaging than a text article.
And really important,
Web Stories
are different to other stories
in many important ways.
Just like a regular web page,
you own them, you host them,
and very important, you
get the money from the ads,
not the platform
serving the stories.

English: 
We think the web has a great role to play here, too.
I'd like to invite Paul Bakaus to talk about a new content
type that we're really excited about called Web Stories.
Hey, Dion.
Hey, Paul. So what are these Web Stories, and
why are we working on them?
My team and I have been hard at work working on Web
Stories, and I'm very excited to share some updates with
you. And yes, I'm talking about these kind of stories.
You know, full screen, portrait, tap to advance,
swipe to move-on.
And if you're like, "Wait a second.
Aren't you a little late to the show?"
Then you'd be right. But these are not your standard walled
garden stories.
Current implementations focus on ephemerality and ultra
low barrier to creation.
But our bet is that the Stories format works beyond the
ephemeral use case and can become its own pillar in the
open web media landscape.
And that's because they're really cheaper to make than
video and more engaging than a text article.
And really important, Web stories are different to
walled-off stories in many important ways.
Just like a regular webpage. You own them.
You host them. And very important, you get the money from
the ads, not the platform serving the stories.

English: 
Because Stories are
really a visual format,
my friends at Google
Search and Discover
are showcasing them
in really cool ways,
telling me that many
more integrations are
coming later this year.
We think these can be
a great net new traffic
source for web creators.
DION ALMAER: These stories look
visually really compelling.
But how hard is it
actually to create them?
PAUL BAKAUS: If we
want the web to be
able to compete with the
closed platforms out there,
story creation needs to
be as intuitive and fast
for all content creators.
Now, lots of people are working
on making Web Stories a thing.
But one of the things
my own team is doing
is bringing story
creation to WordPress,
the most used CMS in the world,
in the form of a visual editor
coming to you very soon.
Find out more about the
beta at goo.gle/storyeditor.
You'll hopefully see all
the basic editing features
you would expect, like
smooth image and video
handling, text controls,
shape masking, and so on.
But we're also working on
some you might not expect,
like this one we
call text magic,
running in real time against
images from the Unsplash API here.
Toggled on, this feature makes
it so that the editor always

English: 
Because stories are really a visual format, my friends at
Google Search and Discover are showcasing them in really cool
ways, telling me that many more integrations are coming
later this year. We think these can be a great net-new
traffic source for web creators.
These stories look visually really compelling.
But how hard is it actually to create them?
If you want the web to be able to compete with the closed
platforms out there, story creation needs to be as
intuitive and fast for all content creators.
Now, lots of people are working on making Web Stories a
thing. But one of the things my own team is doing is
bringing story creation to WordPress, the most used CMS
in the world, in the form of a visual editor coming to you
very soon. Find out more about the beta at
goo.gle/storyeditor.
You'll hopefully see all the basic editing features you
would expect, like smooth image and video handling, text
controls, shape masking and so on.
But we're also working on some you might not expect like
this one we call text magic, running in real time against
images from the Unsplash API here.
Toggled on, this feature makes it so that the editor always

English: 
ensures text is readable, making dynamic decisions about
the background, line height, and so on.
I hope you like it as much as I do.
Yeah, it looks really cool.
You know, I can't wait to read some of these stories on the
web. Thanks so much for sharing there, Paul.
And thanks again to Paul and everyone who took the time to
join me as we kick off the event today.
I'm really excited about the upcoming sessions, starting
with the focus on how to make your website hit its Vitals
and discovery through Search.
Now, please enjoy the show.
Remember, the whole team is here to chat with you on
web.dev/live and via YouTube.
I hope we'll see you there today.
And I'll be back tomorrow morning for the day two kick off.
Hello again, everybody!

English: 
ensures text is readable,
making dynamic decisions
about the background,
line height, and so on.
I hope you like it
as much as I do.
DION ALMAER: Yeah,
it looks really cool.
You know, I can't wait to
read some of these stories
on the web.
Thanks so much for
sharing that, Paul.
And thanks again to
Paul and everyone
who took the time to join me
as we kick off the event today.
Yeah, I'm really excited about
the upcoming sessions, starting
with the focus on how to make
your website hit its vitals,
and discovery through search.
Now please enjoy the show.
Remember, the whole team
is here to chat with you
on web.dev/live and via YouTube.
I hope we'll see
you there today.
And I'll be back tomorrow
morning for the day two kickoff.
[MUSIC PLAYING]
ELIZABETH SWEENY:
Hello again, everybody.

English: 
For those of you who don't know me yet, my name is
Elizabeth Sweeny. I'm a product manager on the Web Platform
team at Chrome.
I'm excited to talk with you all today about the latest and
greatest in our speed tooling.
I'll be sharing some updates as far as how we think
about measuring user experience, including metrics, updates
and our new Core Web Vitals initiative, as well as making
sure that we're privy to all of the newest features,
products, and updates to our developer tooling as far as
speed measurement is concerned.
So let's dive in. While I know we've heard it before, it is
worth reiterating why metrics change.
Well, ultimately, it's because our understanding
of how to best measure user experience evolves over time
as we learn more and work through technical hurdles.
We need to make sure that our metrics and tooling are
updated to reflect the latest in our learnings.
Fundamentally, we view it as mission critical to give you
the most accurate and effective mechanisms by which to
optimize your site's experience and help you achieve your
goals.
And that doesn't just mean for one of your users or a few.

English: 
For those of you who
don't know me yet,
my name is Elizabeth Sweeny.
I'm a product manager on the
Web Platform team in Chrome.
I'm excited to talk
with you all today
about the latest and greatest
in our Speed Tooling.
I'll be sharing some
updates as far as
how we think about measuring
user experience, including
metrics updates and our new
Core Web Vitals initiative,
as well as making
sure that we're
privy to all of the newest
features, products, and updates
to our developer tooling, as
far as speed measurement is
concerned.
So let's dive in.
While I know we've
heard it before,
it is worth reiterating
why metrics change.
Well, ultimately, it's
because our understanding
of how to best measure user
experience evolves over time
as we learn more and work
through technical hurdles.
We need to make sure that
our metrics and tooling are
updated to reflect the
latest in our learnings.
Fundamentally, we
view it as mission
critical to give you the
most accurate and effective
mechanisms by which to optimize
your site's experience,
and help you achieve your goals.
And that doesn't just mean for
one of your users or a few.

English: 
We want to make sure that
as many users as possible,
regardless of what
network they are on
or what hardware
they're using, are
in the bucket of users that
want to come back to your site
again and again.
And that brings us to the
impetus behind Core Web Vitals.
We have long been espousing
performance and user experience
quality because we believe that
good site performance leads
to better outcomes for users,
businesses, developers,
and for the web in general.
The Core Web Vitals
initiative aims
to bring together a more
cohesive picture of web
performance so that
there is a better
shared understanding of what
should be prioritized first.
Let's take a moment to review
the metrics themselves.
Largest Contentful Paint, LCP,
is a measurement of perceived
loading experience.
It marks the point
during page load when
the primary or largest
content element has loaded
and is visible to the
user within the viewport.
It's an important complement
to First Contentful Paint, FCP,
which only captures the very
beginning of the loading
experience.
LCP provides a signal about
how quickly a user is actually

English: 
We want to make sure that as many users as possible,
regardless of what network they are on or what hardware
they're using, are in the bucket of users that want
to come back to your site again and again.
And that brings us to the impetus behind Core Web Vitals.
We have long been espousing performance and user experience
quality because we believe that good site performance leads
to better outcomes for users, businesses, developers
and for the web in general.
The Core Web Vitals initiative aims to bring together a
more cohesive picture of web performance so that there is
a better shared understanding of what should be prioritized
first. Let's take a moment to review the metrics
themselves.
Largest Contentful Paint, LCP, is a measurement of
perceived loading experience.
It marks the point during page load when the primary or
largest content element has loaded and is visible
to the user within the viewport.
It's an important complement to First Contentful Paint,
FCP, which only captures the very beginning of the loading
experience. LCP provides a signal about how
quickly a user is actually able to see the content

English: 
of the page.
To provide a good user experience, sites
should strive to have Largest Contentful Paint occur within
the first 2.5 seconds of the page starting to load.
To ensure you're hitting this target for most of your
users, a good threshold to measure is the 75th percentile
of page loads, segmented across mobile and desktop devices.
First Input Delay, FID, measures the time
from when a user first interacts with the page, so they're
clicking on something, tapping a button, that
kind of thing, to the time when the browser is actually
able to respond to that interaction.
To provide a good user experience for FID, sites should
strive to have a First Input Delay of less than
100 milliseconds.
To ensure you're hitting this target for most of your
users, a good threshold to measure again is the
75th percentile of page loads. Given that FID can
only be measured in the field with real users, we want
to make sure that you have a way to locally debug and

English: 
able to see the
content of the page.
To provide a good
user experience,
sites should strive to have
Largest Contentful Paint occur
within the first 2.5
seconds of the page starting
to load.
To ensure you're
hitting this target for most
of your users, a good
threshold to measure
is the 75th percentile
of page loads,
segmented across mobile
and desktop devices.
First Input Delay, FID, measures
the time from when a user first
interacts with the page-- so
they're clicking on something,
or tapping a button,
that kind of thing--
to the time when the
browser is actually
able to respond to
that interaction.
To provide a good user
experience for FID,
sites should strive to
have a First Input Delay
of less than 100 milliseconds.
To ensure you're hitting this
target for most of your users,
a good threshold
to measure again,
is the 75th percentile
of page loads.
Given that FID can
only be measured
in the field with
real users, we want
to make sure that you have a way
to locally debug and optimize

English: 
optimize FID in the lab.
That's where Total Blocking Time, TBT, comes in.
TBT quantifies load responsiveness, measuring the total
amount of time when the main thread is blocked long enough
to prevent input responsiveness.
TBT is measured over the window between First Contentful
Paint and Time to Interactive.
So in short, you should definitely make sure
that you're leveraging the signals that you're getting from
TBT in the lab to optimize for FID in the field.
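Conceptually, TBT adds up the portion of each long main-thread task beyond 50 milliseconds. A minimal sketch of that arithmetic, with invented task durations:

```javascript
// A task is "long" if it runs over 50 ms; only the excess over
// 50 ms counts as blocking time.
const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs
    .map((duration) => Math.max(0, duration - BLOCKING_THRESHOLD_MS))
    .reduce((sum, blocking) => sum + blocking, 0);
}

// Hypothetical main-thread task durations (ms) between FCP and TTI.
// The 30 ms task is not a long task, so it contributes nothing.
console.log(totalBlockingTime([30, 120, 55, 250])); // 70 + 5 + 200 = 275
```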
Cumulative Layout Shift, CLS, is a measurement of visual
stability. It quantifies how much a page's content
visually shifts around.
A low CLS score is a signal to developers that their users
aren't experiencing undue content shifts.
A CLS score below 0.1 is considered good.
CLS in a lab environment is measured through the end of a
page load, whereas in the field you can measure CLS
up to the first user interaction or including all user
input.
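Each individual shift is scored as an impact fraction (how much of the viewport the unstable elements touch) multiplied by a distance fraction (how far they moved relative to the viewport), and CLS sums those scores. A simplified sketch with invented numbers:

```javascript
// Score for a single layout shift: impact fraction × distance fraction.
// impactFraction: fraction of the viewport touched by unstable elements.
// moveDistancePx: greatest distance any unstable element moved.
function layoutShiftScore(impactFraction, moveDistancePx, viewportHeightPx) {
  const distanceFraction = moveDistancePx / viewportHeightPx;
  return impactFraction * distanceFraction;
}

// CLS is the sum of the individual shift scores (shifts caused by
// recent user input are excluded from the sum).
const shifts = [
  layoutShiftScore(0.5, 200, 800), // 0.5 * 0.25 = 0.125
  layoutShiftScore(0.2, 80, 800),  // 0.2 * 0.1  = 0.02
];
const cls = shifts.reduce((sum, score) => sum + score, 0);
console.log(cls.toFixed(3)); // 0.145 — above the 0.1 "good" threshold
```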
So that was a quick overview, but it's important to
remember that our goal is to have the vast majority of our
users served with fast, interactive, stable experiences.
To that end, Core Web Vitals uses the 75th percentile
value of all page views in the field to evaluate against
these thresholds. So in other words, if at least
75% of page views to a site meet
the 'good' threshold, then the site is classified as having
a good performance for that metric.
And this applies to all three of the Core Web Vitals: LCP,
FID, and CLS.
The 75th percentile is used to evaluate all of them.
As I mentioned before, our ability to measure user
experience quality is always improving.
We expect to update Core Web Vitals on an annual basis
and provide regular updates on the future candidates,
motivation, and implementation status.
Looking ahead toward 2021, the Core Web Vitals
will be refreshed to ensure that it reflects the latest in
our learnings. And this includes adjustments to the set of
metrics as well as the thresholds.

Let's do a quick refresher on the value of combining both
lab and field signals together to diagnose, optimize,
and monitor your site's performance.
Lab data, which is synthetically collected in a testing
environment, is critical for tracking down bugs and
diagnosing issues because it is reproducible and has an
immediate feedback loop.
Field data allows you to understand what real world users
are experiencing, conditions that are impossible to
simulate in the lab. The real world is messy.
I mean, there are permutations of devices, network
configurations, cache conditions.
The list is long.
Either set of metrics taken in isolation isn't nearly as
powerful as when they're combined.
And that's why we try to provide you with ample coverage
for both lab and field tools.
We have the tools that focus on providing you with
information about what real users are experiencing, field
tools, such as the Chrome User Experience Report, Search
Console, and the new Web Vitals extension.
And then we have our lab tools as well coming in to provide
you with mechanisms to see what needs improvement before
a user ever even sees your page.

And it gives you a reproducible environment to debug and
optimize.
Those are tools like Chrome DevTools and Lighthouse.
PageSpeed Insights is a great place to start to give you a
pulse on your Core Web Vitals performance in both the field
and in the lab because it leverages CrUX and Lighthouse
under the hood.
Given that the Core Web Vitals initiative aims to help
folks know what should be prioritized first, we wanted
to make sure you had full support and tooling coverage for
LCP, FID, and CLS.
Core Web Vitals are now in all of your favorite developer
tools, and there are more than what is even listed
here. And that includes a new Web Vitals library and a
bunch of ecosystem tools that have already adopted them.
You're able to measure your Core Web Vitals for a specific
page, for your origin, locally in the lab, and
from real users in the field.
And as I mentioned before, Total Blocking Time, TBT, is a
proxy metric for FID that allows you to debug and improve
your interactivity in the lab, which is why it's listed
here in the FID column.

Before we go over all of the latest updates in each tool,
I wanted to make sure that you had all of our tools mapped
in a workflow for Core Web Vitals.
Which tools do what? Where do I go first?
As I said before, a good place to start to get a general
pulse is PageSpeed Insights.
But all of our tools have a really critical role to play.
Using Search Console allows you to see across your entire
site and identify which types of pages need improvement.
Then you can diagnose and optimize locally with Lighthouse
and Chrome DevTools. We have some great new capabilities,
by the way, that I'm excited to share with you in a moment.
And then you can prevent regressions with Lighthouse CI
and create a custom dashboard to monitor your site with
CrUX. Along the entire journey, you can turn to
web.dev for guidance.
All right. Let's get into the tool updates themselves.
Lighthouse just announced V6 last month, which has new
metrics, including Core Web Vitals, new audits,
and a new performance score. Let's start with the updates
to the perf score.
On a high level, we want to make sure that you can get a
sense of your loading performance, interactivity, and
layout predictability.
The metrics and the weights of those metrics that formulate
the top level score are intended to give you a balanced
view of your user experience against
critical dimensions of quality.
While three new metrics have been added, the Core Web
Vitals metrics, three old ones have been removed:
First Meaningful Paint, First CPU Idle, and Max Potential
FID. These removals are due to considerations
like metric variability, as well as simply having newer
metrics that offer better reflections of the part of the
user experience that we're trying to measure with that
metric. There are also improvements to the weights
based on user feedback.
For instance, the reduction of Time to Interactive's weight
in the final scoring calculation is in direct response to
user feedback about its variability, and about metric
optimizations not consistently correlating with
improvements to the user experience.
However, it is still a valuable signal to understand when a
page is fully interactive.
That's why we still keep it.
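For illustration, the top-level score is a weighted average of the per-metric scores. The weights below match the published v6 weighting (FCP and Speed Index 15% each, LCP and TBT 25% each, TTI 15%, CLS 5%), but treat this as a sketch of the idea rather than the exact Lighthouse implementation, which first maps raw metric values onto 0-1 scores via log-normal curves:

```javascript
// Lighthouse v6 performance score: weighted average of per-metric
// scores, each already normalized to the 0-1 range.
const V6_WEIGHTS = {
  fcp: 0.15,
  speedIndex: 0.15,
  lcp: 0.25,
  tti: 0.15,
  tbt: 0.25,
  cls: 0.05,
};

function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(V6_WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100); // Lighthouse reports 0-100
}

// Hypothetical per-metric scores for a page.
console.log(performanceScore({
  fcp: 0.9, speedIndex: 0.8, lcp: 0.7, tti: 0.95, tbt: 0.6, cls: 1.0,
}));
```

The scoring calculator mentioned above is the authoritative place to explore how the weights interact.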

TBT serves as a nice complement to TTI
so that together you're able to more effectively optimize
for user interactivity.
There's also a super nifty scoring calculator to help you
explore the performance score. The calculator gives you a
comparison between V5 and V6 scores as well.
It's not shown here, but it's in the tool.
And when you run an audit with Lighthouse 6.0, the report
comes with a link to the calculator with your results
pre-populated. So I highly recommend you check it out.
Lighthouse V6 also offers quite a few new audits.
These focus on JavaScript analysis and accessibility.
You can now easily trace how much unused code is being
shipped with your application, and there are new audits to
check that screen readers and other assistive technologies
have all of the information they need about the behavior
and purpose of controls on your web page to serve users
well.
All of the products that Lighthouse powers are updated to
reflect the latest version, including Lighthouse CI,
which now enables you to easily measure your Core Web
Vitals on pull requests before they're merged and deployed.
PageSpeed Insights, PSI, reports on the lab and field
performance of a page on both mobile and desktop devices.
The tool provides an overview of how real world users
are experiencing the page, that's powered by CrUX, and
a set of actionable recommendations on how a site owner
can improve page experience.
And that's provided by Lighthouse.
PageSpeed Insights and the PSI API have also
been upgraded to use Lighthouse 6.0 under the hood and now
support measuring Core Web Vitals in both the lab and field
sections of the report.
So Core Web Vitals are annotated with the blue ribbon that
you see here. From the CrUX dataset, you'll be able to see
whether or not 75% of your loads are hitting the Core Web
Vitals thresholds for each metric in the field, for both
your page and for your origin.
Then you can take a look at your lab data from Lighthouse
to see whether or not you are hitting the Core Web Vitals
thresholds for each metric in a synthetic testing
environment. This helps to guide you towards actionable
opportunities to improve your page's performance.
Now the new Core Web Vitals report in Search Console helps
you to identify groups of pages across your site that
require attention. And this is also based on real-world
field data from CrUX.
URL performance is grouped by status, metric type, and URL
group, which is basically groups of similar web pages.
The report is based on the three Core Web Vitals metrics,
and it's a great way to identify pages that need attention
on your site.
There are many, many cool new things in DevTools, but I'm
going to focus on just two of them right now that are
related to Core Web Vitals support.
First is the capacity to now debug interaction readiness
with Total Blocking Time in the footer.
The Total Blocking Time (TBT) metric, again the proxy
for First Input Delay, is now shown in the footer of the
Chrome DevTools Performance panel when you measure page
performance. The Performance panel has a new Experience
section that can help you detect unexpected layout shifts.

This is helpful for finding and fixing visual instability
issues on your page that contribute to Cumulative Layout
Shift. So you select a layout shift to view its details in
the Summary tab. And to visualize where the shift itself
occurred, hover over the Moved From and Moved To fields.
And for more information on everything that's new in
DevTools, see the What's New in DevTools (Chrome 84)
link that's here.
The Chrome UX report, CrUX, is a public data set
of real user experience data on millions of websites.
We just hit over 7 million, so that's awesome.
It measures field versions of all of the Core Web Vitals.
Even if you don't have RUM on your site, CrUX can provide a
quick and easy way to assess your Core Web Vitals.
The newly redesigned CrUX dashboard allows you to easily
track an origin's performance over time, and now you can
use it to monitor the distributions of all of your Core Web
Vitals metrics. To get started with the dashboard, you can
check out the tutorial on web.dev.
We've also introduced this new Core Web Vitals landing page
to make it even easier to see how your site is performing
at a glance. There is also a new CrUX API for you to use,
built from the ground up to provide developers with simple,
fast, and comprehensive access to field-based experience
data. Developers can query for an origin or a URL
and segment results based on different form factors.
The API updates daily and summarizes the previous 28
days worth of data, including your Core Web Vitals
performance. We're excited to integrate more features over
time to enable new ways to explore the data and discover
insights about the state of user experiences.
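As a sketch, a CrUX API call is a POST to the records:queryRecord endpoint with an origin (or a url) and an optional form factor. The endpoint and body shape are the documented ones; the API key and origin below are placeholders:

```javascript
// Build a CrUX API query body for an origin, optionally segmented
// by form factor. The API also accepts a "url" field instead of
// "origin" for page-level data.
function cruxQuery(origin, formFactor) {
  const body = { origin };
  if (formFactor) body.formFactor = formFactor; // 'PHONE' or 'DESKTOP'
  return body;
}

// Hypothetical usage (requires a real API key):
async function fetchCruxRecord(apiKey, origin) {
  const resp = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(cruxQuery(origin, 'PHONE')),
    }
  );
  // The response includes metric distributions such as
  // largest_contentful_paint, first_input_delay, and
  // cumulative_layout_shift.
  return resp.json();
}
```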
web.dev is your go-to place for guidance on web
development. It also now supports the canonical page
for information about Web Vitals.
The web.dev measure tool also allows you to measure the
performance of your page over time.
And it provides a prioritized list of guides and codelabs
on how to improve. Its measurement is powered by PageSpeed
Insights, which has Lighthouse 6.0 under the hood
and fully supports the Core Web Vitals metrics, as you can
see here. There are also a slew of other amazing tools to
help you with measuring, optimizing, and monitoring your
Core Web Vitals.

The Web Vitals extension measures the three Core Web Vitals
metrics in real time for desktop in Google Chrome.
This is helpful for catching issues early on during your
development workflow and as a diagnostic tool to assess
performance of Core Web Vitals as you browse the web.
The extension is now available to install from the Chrome
Web Store. The web-vitals library is a tiny modular
library for measuring Web Vitals metrics on real users
in a way that accurately matches how they're measured from
Chrome and reported to other Google tools.
The library supports all of the Core Web Vitals, as well as
other field vitals.
Site Kit, Google's official WordPress plugin, allows
you to get insights about how people find and use your
site and how to improve and monetize your content, directly
in your WordPress dashboard.
It's also just been updated to ensure that you know how
you're performing against Core Web Vitals.
As I mentioned earlier too, we're so excited to have so
many amazing ecosystem players and production
monitoring solutions already implementing support for Core
Web Vitals. Honestly, we're delighted.
And thank you so much for your amazing work.
It's really cool.
And this is a long list of links, but I'll make sure to
tweet them as well so that you can click through them more
easily. There are a bunch of goodies in here.
And with that, I'm just going to give you a huge thank you.
Really appreciate your time.
ADDY OSMANI: Hey, folks! My name is Addy Osmani, and welcome to
Optimizing for Core Web Vitals.
So today we're going to talk about optimizing user
experiences on the web with a case study on
French luxury fashion house Chloé.
Chloé have recently been taking a fresh look at web
performance, and I'm really excited to share their
learnings with you. Now, you may have seen Google Search
announce an upcoming search ranking change recently
that incorporates page experience metrics.

Now, these metrics include the Core Web Vitals, which,
together with a few other signals, paint a pretty holistic
picture about the quality of user experiences on
a page. But what are the Core Web Vitals
and how do you go about optimizing for them?
Well, Core Web Vitals are a set of metrics related
to speed, responsiveness, and visual
stability. Now these three aspects of user experience
are measured using three metrics.
So first of all, we have Largest Contentful Paint,
which measures loading performance.
Next up, we have First Input Delay, which measures
interactivity.
And last, we've got Cumulative Layout Shift, which measures
layout stability.
Let's kick things off by talking about
Cumulative Layout Shift or CLS.
Now, CLS is a pretty important metric for measuring visual
stability because it helps quantify all those times
when we see really surprising shifts in the content
on page.
It helps make sure that the page is as delightful as
possible. Have you ever been reading an article online
when all of a sudden something changes on the page, and
without warning the text moves and you've lost your place?
That's literally what happens.
A giant chicken kicks your content away.
And he has no regrets. Look at him.
He's basically CLS.
So what causes poor CLS?
Well, first of all, we've got images without dimensions,
ads, embeds or iframes without dimensions,
dynamically injected content and web fonts
that might cause a flash of unstyled content.
Now, as I mentioned, Chloé is a French luxury
fashion house, and it's become a bit of a go-to brand, not
just for luxury apparel, but also handbags and
fragrances and things like that.
And they have recently been focused on improving

English: 
on improving Cumulative Layout Shift on all their main pages: their home page, their product listings page, and their product details page. Through a bunch of work, they've been able to reduce their CLS all the way down to zero, which is about as perfect as you can get.

So how did they get here? This is the before view of the Chloé home page, where we can observe a number of surprising layout shifts due to elements on the page not following CLS best practices. So let's dive into a few tips that worked well here.

First off, always include width and height attributes on your image and video elements. Alternatively, you can reserve the required space with CSS aspect ratio boxes. But in general, this approach just makes sure that the browser can allocate the correct amount of space in the document while the image is loading.
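The tip above can be sketched in markup like this (the file name and dimensions are placeholders):

```html
<!-- Without dimensions: the browser can't reserve space,
     so content below jumps once the image arrives. -->
<img src="dress.jpg" alt="Dress">

<!-- With dimensions: space is allocated before the download
     starts, so nothing shifts when the image renders. -->
<img src="dress.jpg" alt="Dress" width="640" height="360">
```

In a responsive layout you'd typically pair this with a rule like img { max-width: 100%; height: auto; } so the image can still scale down while keeping its aspect ratio.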
So here's a demo of this in action. These are some images that don't have width and height specified, and what you see happening is that they're pushing content in the page all the way down.

This is something that's reflected in our tools, like Lighthouse. And I've got a little bit of a clip out here: you can see the Lighthouse report, where CLS is in the red and not quite where we want it to be.

So how do we address this? Well, in the early days of the web, developers would add width and height attributes all over the place. They'd add them to their image tags, and they'd make sure enough space stayed allocated on the page before browsers would start fetching images. That was great, because it would minimize reflow and re-layout.

Now, when responsive web design was introduced, developers began to omit these width and height attributes, and they started to use CSS to resize their images instead. One of the downsides of this approach is that space could only be allocated for an image once it began to download, because only at that point could the browser determine its dimensions. As images loaded in, in that old world, the page would reflow as
each image appears on the screen, and a lot of us got used to our text suddenly popping down the screen, which wasn't a very great user experience.

And this is where aspect ratio comes in. The aspect ratio of an image is the ratio of its width to its height. It's pretty common to see this expressed as two numbers separated by a colon, for example 16:9 or 4:3. For an x:y aspect ratio, the image is x units wide and y units high. What that means is that if we know one of the dimensions, the other one can be determined. So for a 16:9 aspect ratio, if dress.jpg has a 360px height, the width is 360 multiplied by (16/9), which gives us 640px.
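That arithmetic can be written out as a tiny helper (the numbers are the dress.jpg example from the talk):

```javascript
// Derive the missing dimension from an aspect ratio.
// Equivalent to height * (ratioW / ratioH), computed as
// (height * ratioW) / ratioH so it stays exact for these inputs.
function widthFromHeight(height, ratioW, ratioH) {
  return (height * ratioW) / ratioH;
}

// dress.jpg: 16:9 aspect ratio and 360px tall gives 640px wide.
console.log(widthFromHeight(360, 16, 9)); // 640
```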
Whew. I'm not very good at math, so hopefully that was helpful. Now, modern browsers set
the default aspect ratio of images based on an image's width and height attributes, so it's really valuable to set them if you want to avoid those layout shifts. This is a change in modern browsers, and it's all thanks to the CSS Working Group. They've done some work that basically allows us to just set width and height as normal, and this calculates an aspect ratio based on the width and height attributes before the image is loaded.

So what we're seeing on screen here is something that's added to the default style sheet of all browsers, and it calculates aspect ratio based on the element's width and height attributes. So as long as you're providing widths and heights, the aspect ratio can be calculated, and we'll hopefully avoid layout shifts. So this is a great best practice to be following.
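The rule being described on screen looks roughly like this; the exact selector list varies by browser, so treat it as a sketch rather than the verbatim UA stylesheet:

```css
/* Added to browser default (UA) stylesheets: derive an aspect
   ratio from the HTML width and height attributes, so space can
   be reserved before the image loads. */
img {
  aspect-ratio: attr(width) / attr(height);
}
```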
This is also something that works well with responsive images. So with srcset, you're generally defining images that you want to allow the browser to select between.

You can define sizes for those images. To make sure that your image's width and height attributes can still be set, just make sure that each image is using the same aspect ratio.

And here's that demo once again, with width and height attributes added. Notice that in a modern browser you won't see any layout shifts there, and the user will get a much more pleasant experience. So another reminder: set those width and height attributes as much as you can.
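Here's what that looks like with srcset; the file names are placeholders, and the key point is that every candidate shares the same 16:9 aspect ratio, so one width/height pair describes them all:

```html
<!-- All candidates are 16:9, so the single width/height pair
     gives the browser the right aspect ratio at every size. -->
<img
  src="dress-640.jpg"
  srcset="dress-640.jpg 640w, dress-1280.jpg 1280w, dress-1920.jpg 1920w"
  sizes="(max-width: 640px) 100vw, 640px"
  width="640" height="360"
  alt="Dress">
```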
Here's the impact that this change has in Lighthouse. As we can see, we went from a CLS of 0.36, so we were in the red, all the way back to something that's a little bit better. There are one or two other things in this page that could have been improved, but on the whole, we've had a relatively significant impact on reducing layout shift.

You may be wondering: how can I figure out what elements on my page are contributing to CLS? We've got you covered. So in Lighthouse, we have an 'Avoid large layout shifts' audit that highlights the top DOM elements contributing
most of the CLS to the page. So check out that audit.

In DevTools, we also have a good story here. If you're using the DevTools Performance panel, it has an Experience section that can help you detect unexpected layout shifts. It's super helpful for finding and fixing visual instability issues. They get highlighted in this Experience section with some kind of reddish-pinkish Layout Shift records, and if you click on one of those records, you'll be able to get more details: what was the score, and where did this element move to and from? Really great diagnostics to help you nail down how to fix your CLS.
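You can also watch these same records from code. As a simplified sketch of how the metric aggregates (the session windowing added to CLS later is omitted here), CLS sums the layout-shift scores that weren't caused by recent user input. In the browser the entries would come from a PerformanceObserver watching "layout-shift" entries; here they're mocked:

```javascript
// Sum layout-shift scores, ignoring shifts that follow recent
// user input, which don't count toward CLS.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}

// Mock entries standing in for PerformanceObserver records.
const mockEntries = [
  { value: 0.12, hadRecentInput: false },
  { value: 0.3, hadRecentInput: true }, // user-initiated, ignored
  { value: 0.24, hadRecentInput: false },
];
console.log(cumulativeLayoutShift(mockEntries)); // ≈ 0.36
```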
So Chloé's approach to image loading is that they use a skeleton pattern, with a Sass mixin called 'bruschetta loading'. Bruschetta is one of those things that's a little bit of a luxury to me during quarantine; it's right up there with toilet paper and antibacterial soap. But let's stick with bruschetta loading.

So this is Chloé's approach to image loading. They have a parent container with a color similar to the final image that's being loaded. Now, lazy loading strategies like this, where you have a little bit of a preview of what's finally going to be shown, are sometimes referred to as low-quality image placeholders. You can use a predominant color from the final image, or you can use a low-resolution image. Sometimes people will use a 1px by 1px image, or something like 10px by 10px, something very low resolution that just gives you a preview of what's finally going to be displayed.
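Chloé's actual Sass mixin isn't shown in the talk, but a minimal CSS sketch of the placeholder idea (the class name, ratio, and color here are hypothetical) might look like:

```css
/* Skeleton / low-quality image placeholder: the wrapper reserves
   the image's space and shows an approximating color until the
   real image loads. */
.product-image {
  aspect-ratio: 4 / 5;       /* reserve space before the image loads */
  background-color: #d8cfc5; /* predominant color of the final image */
}
.product-image img {
  width: 100%;
  height: 100%;
  object-fit: cover;
}
```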
Now, lazy loading strategies like this, which use either a color or that kind of placeholder, don't strictly improve Largest Contentful Paint, but they do improve perceived performance, so they can still be pretty good for the user experience. Now, what Chloé did here, in addition to using this skeleton loading approach, was to use responsive images and to make sure that
they're setting dimensions on their images as well, to avoid CLS.

Let's shift things up. Let's go on to the next tip: reserve enough space for any of your dynamic content, things like ads or promos. Ideally, you want to make sure that you're giving any of that content a container that it's not going to just bounce out of and suddenly cause shifts in the page. A related tip is to avoid inserting new content above existing content, unless it's in reaction to a user interaction. You want to make sure that any layout shifts in your page are ones you've made a conscious decision around, and that occur as expected.

So let's try to visualize this. Here's an example of a promo that's dynamically injected into the page. We haven't reserved space, and it's just pushed everything all the way down. We can see this reflected in our Lighthouse call-out at the bottom of the screen.

Now, this is something that very typically happens with ads, iframes, and promos, and these types of assets can sometimes be the largest contributors to layout shifts on the web. Many ad networks and publishers support dynamic ad sizes, and dynamic ad sizes can sometimes increase revenue, because you're giving people a lot of flexibility around what can go inside your ad slots. But they can also negatively impact the user experience by pushing things down, so that's something you want to avoid.

So how do we approach this? Well, one solution to the problem is statically reserving space for the slot. You can make sure that you're defining a container for these ads or embedded frames so that, regardless of what goes inside, you're not shifting the content of the page around. So here I've got a container where I've set my width and my
height. I've set a background color, but I've also set it to overflow: hidden, just in case anything dynamic is a little bit taller than the container; I still don't want it to be able to break out. Ideally, the content fits inside of our container, like our iframes or whatever else we might inject in there.
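That container can be sketched like this (the class name and dimensions are placeholders):

```css
/* Statically reserve space for a dynamically filled slot. */
.ad-slot {
  width: 300px;              /* sized for the expected creative */
  height: 250px;
  background-color: #f0f0f0; /* visible placeholder while loading */
  overflow: hidden;          /* excess content is clipped instead
                                of growing the box */
}
```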
And if you're somebody who has lots of dynamic content that gets injected into your page, you can take a look at your data: look at the median or the 95th-percentile widths and heights for this dynamic content, and size your container accordingly. That just means you have the best chance of still being able to present that content to users without negatively impacting the rest of the user experience.

So here's what it looks like with my pattern in place. I've reserved enough space, and that content pops in, but there are no layout shifts in the page. So I'm really happy about that. Slightly better is my baseline for everything in life at the moment.

So, yeah, this is the Lighthouse 6.0 impact. We can see that we reduced our layout shifts from 0.24 all the way down to about zero. I'm going to give myself 'about zero'. It's in the green, so that's great.

So let's talk about a production example of something like this on Chloé. Chloé had a promotion banner for shipping at the top of their product listings page, and you'll see this free standard shipping promotion listed at the very top. But this wasn't always there. There was a time when this product listings page had a CLS of 0.4, which is really not great, because of two things: the way they approached their dynamic promo banner, and the way they approached filters. Let's talk about the banner first. Now, this banner used to be positioned inline, underneath the main page header.

And as you can see here, it looks kind of harmless. But what's the impact of having a dynamically sized banner on the user experience? Well, we have a video here; let's take a look. As we can see, once the content is fetched and rendered for this banner, it pushes the content for the rest of the page all the way down, and that's not very ideal.

So how did Chloé go about fixing this? Well, they reserved space for this banner. The content for this banner was also coming from a client-side request, so its messages were causing a pretty visible layout shift a few seconds into page load. They moved this API call to the server, and they made sure to reserve enough space for the banner with a simple height setting. As a part of this work, they moved the position of the banner up a little bit. But altogether, moving more work to the server (always a good idea) and making sure they reserved space made a bit of a difference.
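A minimal sketch of that fix (the markup, class name, and height here are hypothetical): the message arrives already server-rendered, and the banner's height is reserved up front so later content can't move anything below it.

```html
<!-- The banner text is rendered on the server, so no client-side
     request is needed, and the fixed height is reserved at first
     paint, so the page below it never shifts. -->
<div class="promo-banner">Free standard shipping</div>

<style>
  .promo-banner {
    height: 40px;
    line-height: 40px;
    text-align: center;
    overflow: hidden; /* clip rather than grow if content is long */
  }
</style>
```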

So here's the after view. Here you can see the impact on their product listings pages after these changes were made. It's a lot less shifty, so I'm happy about that.

So we talked about their promo banner. The other big CLS issue for product listings pages was that Chloé had a filters widget for filtering products. Now, this would rehydrate to become dynamic once it booted up, and so on the client it was pending XHR calls for data, and it was waiting on session state based on filter choices, in order to be able to finally render this thing on the screen. So this is what it basically looked like: we'd wait for the filter widget to be sent down, we'd wait for hydration, and it would still push content on the screen all the way down. What they ended up doing here was that they adapted this widget to contain more of the information needed
to render the filter widget server-side, so they'd render it with better defaults. This helped avoid those layout shifts.

And I just want to give a call-out here: to the right of the screen, we can see the Web Vitals Chrome extension. This gives you a real-time view of all of your Vitals metrics, and it can be helpful as you're building your sites locally, or when you're just browsing the web and want to get a sense of the performance of different sites that you check out on the regular.

And here's what things look like after their rehydration fix for filters. As you can see, CLS reduced by a decent amount, looking at the before and after. It was just another case of: pay attention to the little things in your pages that might, in aggregate, be causing lots of things to be pushed down. Every little CLS fix helps.

And here's the overall impact of these changes on desktop. We can see that the above-the-fold content is relatively
stable and offers a much better user experience on the whole. And this is also reflected in Lighthouse (gotta give Lighthouse a shout-out): as we can see here, Cumulative Layout Shift is in the green. We've hit zero, so it's in a really solid place.

So, to improve CLS, Chloé acted on a number of different elements; it wasn't just one thing. They reserved space for the promo content in terms of its ratio. They made sure to set width and height dimensions on their images, and they adopted a skeleton pattern to improve perceived performance. They reserved space for their promo banner's requests before receiving messages, and they also reserved space for the filters' dynamic component, as well as making a few other optimizations to just help with rendering. So, on the whole, it was definitely worth it.

All right. So I have a big surprise for you: we've got more metrics to talk about. I put a lot of work into this slide.

Historically, it's been a bit of a challenge for web developers to measure just how quickly the main content of a web page loads and is visible to users. Thankfully, we now have metrics like Largest Contentful Paint, which reports the render time of the largest content element visible within the viewport.

Now, you might be wondering what causes a poor LCP. Well, there are lots of things. Slow server response times are a big one: this could be your backend infrastructure, unoptimized database queries, or API responses that just take a while to resolve. It could be render-blocking JavaScript and CSS. Slow resource load times are another big one: you could have unoptimized images slowing down your LCP. And then there's client-side rendering. There's a whole class of problems where those of us who love working in JavaScript and using modern libraries and frameworks and bundlers can sometimes get
into a place where we have our requests for assets like images, in particular hero images, behind JavaScript fetches. So the browser first has to fetch your JavaScript, then it has to parse and process that JavaScript to fetch your image, and that whole process can take so long that you delay showing meaningful content to your user. So those are things you should keep an eye on, and there are plenty of tools that can help diagnose these issues.

So let's take a look at some real-world production challenges around LCP and how to work around them. Chloé started off with an LCP of about 10 or 11 seconds. In this view, we can see that their primary hero image content wasn't getting fetched and rendered until about 11 seconds into our trace. Their home page suffered from a few different things: it had heavy full-screen image downloads, poorly optimized images, and some images that were requested late in the network chain.
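The late-request problem can be illustrated like this (the file names are placeholders): an image referenced only from JavaScript isn't discovered until the script downloads and runs, while one present in the markup is picked up by the browser's preload scanner while the HTML is still streaming in.

```html
<!-- Discovered late: the browser must fetch, parse, and execute
     app.js before it learns that hero.jpg exists.
     app.js might eventually do:
       document.querySelector('.hero').style.backgroundImage =
         'url(hero.jpg)'; -->
<script src="app.js"></script>

<!-- Discovered early: visible to the preload scanner immediately. -->
<img src="hero.jpg" width="1280" height="720" alt="Hero">
```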

And these are very common issues.
There is nothing here that they're doing wildly wrong;
it's just worth being aware
of some of the things that impact LCP.
So things that impact LCP are image elements,
image elements that might be inside of an SVG element,
video elements, block level elements containing
text nodes.
And so let's talk about images first, because they're
pretty often a cause for poor LCP.
So for many sites, images are the largest
element in view when the page is finished loading,
especially as UX patterns have
shifted towards using more hero images in
our pages. So it's very important to optimize
our images, especially anything that's visible within the
initial viewport.
Now, there are a few techniques that you can use here.
You can consider not having an image
in the first place.

If it's not that relevant, maybe remove it.
Compress those images; there are
plenty of image optimization tools out there.
Maybe consider converting them to more efficient
modern formats.
Use responsive images.
And you can also consider using an image CDN.
I'm seeing an increasing number of sites leveraging
image CDNs just to give them the ability
to tweak parameters in a URL for an
image and change what format gets served down
or what quality you get.
Using an image CDN can be a really good way
of staying on top of modern best practices, because
even web perf enthusiasts like us sometimes
have a hard time staying on top of
everything happening in the image optimization world.
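As a quick sketch of the modern-formats advice, serving a WebP version with a fallback might look like this (filenames illustrative):

```html
<!-- Browsers that support WebP use the first source;
     others fall back to the JPEG. Filenames are illustrative. -->
<picture>
  <source type="image/webp" srcset="product.webp">
  <img src="product.jpg" alt="Product photo">
</picture>
```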
Now, you might be wondering, how can I identify the
element that is my LCP?
Thankfully, we've got some

solutions here. In DevTools, in the Performance
panel, if you record a trace and you go
to Timings, you should find a record for LCP.
Click on that record and you'll get the Summary pane
showing up. That includes things like the size of the image
and, more importantly, the related node.
So if you hover over that related node, it'll highlight
what in your page was considered LCP.
I personally find this really valuable as
a stepping stone to deciding where I should be spending my
time optimizing.
So check that out, if you use the performance panel.
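You can also surface the LCP candidate from your own code using the standard largest-contentful-paint entry type. This is a minimal sketch; the helper name is mine, not an official API:

```javascript
// Sketch: find out which element is currently being reported as LCP.
function latestLcp(entries) {
  // The most recent entry is the current LCP candidate; earlier
  // entries were superseded as larger content painted.
  return entries.length ? entries[entries.length - 1] : null;
}

// Browser-only wiring (guarded so the logic above is testable anywhere).
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    const entry = latestLcp(list.getEntries());
    if (entry) {
      // entry.element is the DOM node that was painted as the LCP.
      console.log('LCP at', entry.startTime, 'ms:', entry.element);
    }
  }).observe({type: 'largest-contentful-paint', buffered: true});
}
```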
This is also something that we try to capture in
Lighthouse. So Lighthouse has got a 'Largest
Contentful Paint element' audit.
And we try to highlight what element was responsible
here, too. So if you use Lighthouse, check that
out. So back to Chloé: they discovered
that they were delivering very high resolution
images - even very high resolution for retina
screens - because there is a bit of a cutoff point

where if you're serving kind of
2x, 3x images, the human eye is not
going to be able to perceive large
amounts of difference there.
And there are diminishing returns
that you get out of serving very, very high
resolution images.
Now, in this case, we're in DevTools, in the Elements
panel, looking at a specific image.
And what we see is that the maximum width
of images being served down is
1920px. That's pretty large.
So one of the things that Chloé actually decided to do was
change things up here.
They resized their images to be no more than two times
the image viewport size, so they removed
srcset entries over
828px wide to cap images at a maximum size that they were
comfortable with, and that actually ended up looking pretty
fine on retina devices as well.
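That sizing rule can be sketched as a tiny function. The 828px ceiling comes from Chloé's setup above; the function itself is illustrative rather than their actual code:

```javascript
// Sketch of the sizing rule: serve no more than 2x the CSS width of
// the image, and never wider than the largest srcset entry kept.
// The 828px default mirrors the cap described in the talk.
function targetImageWidth(cssWidth, devicePixelRatio, maxWidth = 828) {
  const wanted = cssWidth * Math.min(devicePixelRatio, 2); // cap at 2x
  return Math.min(wanted, maxWidth);
}
```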
So it was this nice tradeoff of how do we deliver

rich imagery without negatively impacting the user
experience?
Now, by doing this work on an
iPhone X or a Pixel 2 XL
that was previously seeing anywhere up to
245KB worth of image bytes being downloaded, they were able
to reduce it down to 125KB.
That's huge. That's like a
51% decrease in image bytes being served down
with no noticeable difference.
So optimize your images, people.
The next thing to talk about is some of the other
image optimizations that they performed.
So on the product listings page, Chloé used image
lazy loading, which is a relatively popular
pattern. What they discovered was that there were four
primary images being loaded above the fold.
However, there was one off-screen image that
seemed to be tripping up their lazy loading heuristics
and was still being fetched.
Now, this particular image happened to be

248KB in size. And this was negatively
impacting the user experience.
They wanted to try improving this.
Now on the whole, there are a number of things Chloé did.
They were able to bring down their above
the fold image download size all the way to 14.5KB.
They were able to tune their lazy loading heuristics
so that off-screen images like the one I was just talking
about were no longer a problem.
They adopted an image CDN, served WebP
by default, and improved their image resizing
strategy.
And the results of this - outside of just having a nice
Lighthouse report with lots of green -
is that each product page now weighs
57% less than it did before, which is a really nice outcome
to have as a result of optimizing your images.
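If you want behavior along these lines without maintaining your own lazy-loading heuristics, native lazy loading is one option. Chloé's actual implementation may differ, and the filenames here are illustrative:

```html
<!-- Above-the-fold hero: leave it eager so it can paint early. -->
<img src="hero.jpg" alt="Hero">
<!-- Below-the-fold product shots: defer until near the viewport. -->
<img src="product-42.jpg" loading="lazy" alt="Product 42">
```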
Taking a step back, here's what the home

page LCP looked like after these changes.
We can see that again, previously, those hero
images were not rendering until about 11 seconds
in. Now, LCP happens at about 4
seconds into the process and it's complete just
a few seconds later. The request time for our LCP
related node, our hero images, is about
1.3 seconds in. And so on the whole, this is really great.
There's still work they could do here, but on the whole,
this is fantastic to see.
So let's switch things up to our next tip.
Defer any non-critical JavaScript and CSS
to speed up loading the main content of your page.
This is guidance that is not new.
It's been around for a few years, but for anyone that's not
familiar with this guidance, I'll give you a very quick
recap of it.
Now, before a browser can render any content,
it needs to parse HTML markup into a DOM tree.
The parser needs to pause if it encounters any

external stylesheets or synchronous scripts.
And scripts and stylesheets can both
be render-blocking resources,
which can delay your
First Contentful Paint and, consequently, your Largest
Contentful Paint as well. And so what we tell people to do
is defer any of your non-critical scripts and stylesheets
to speed up load.
So let's take a look once again at the product listings
page for Chloé.
As we can see, this is a trace independent of their image
optimizations. And as we can see here, Lighthouse
highlights that there are a few render blocking stylesheets
that are delaying early paints on the product listings
page. Now, this manifests in
just how much white we're seeing in our
filmstrip. So one approach to addressing this problem is by
inlining your critical CSS and deferring
the load of non-critical styles.
We often call this technique critical CSS.
So critical CSS is all about extracting
CSS for above the fold content,

ideally across a number of different breakpoints,
making sure that you can render
the above-the-fold content as quickly as possible in the
first few RTTs, and deferring the load
of the rest of your stylesheets for the page, for
things below the fold, until afterwards.
So how did Chloé do this?
Well, they built some tooling.
They implemented critical CSS in their Sass build process
and they constructed a syntax allowing their developers
to specify for each widget what
part of the CSS code goes into their critical CSS.
This is highlighted using the critical keywords you see
on the screen right now.
Now at build time, they're able to build both the critical
CSS and the non-critical CSS so
that every single build is consistent with both.
There are many ways you can approach critical CSS.
I've contributed to some tooling on this topic in the

past. And you can automate
it, you can go very custom.
I see some teams that will just have a critical.css
file that they manually curate.
And regardless of the approach that
you take, what's key is just making sure that you're
delivering important content to the user as
quickly as possible.
So we talked about the need for loading in the
other stylesheets for the page.
Well, what Chloé do is store their non-critical CSS
stylesheets in an array,
pointing to references to them on their servers.
Those are injected with a deferred script
so that they're hopefully not render-blocking, but still
loaded with a relatively high priority that isn't
going to interfere with the HTML parser.
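A rough sketch of that pattern, with illustrative URLs and a helper name that is mine rather than Chloé's actual code:

```javascript
// Sketch: non-critical stylesheet URLs kept in an array and injected
// after parsing, so they don't block the initial render.
const nonCriticalStyles = ['/css/below-the-fold.css', '/css/footer.css'];

function loadDeferredStyles(urls, doc) {
  return urls.map((href) => {
    const link = doc.createElement('link');
    link.rel = 'stylesheet';
    link.href = href;
    doc.head.appendChild(link);
    return link;
  });
}

// Run this from a script tagged with `defer`, so the HTML parser has
// already finished by the time the links are appended.
if (typeof document !== 'undefined') {
  loadDeferredStyles(nonCriticalStyles, document);
}
```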
So what was the impact of optimizing their critical CSS?
Well, the answer's pretty large.
They were able to bring down their First Contentful Paint
from 2.1 seconds to about 1.1,

and their LCP from 2.9 seconds to about
1.5.
Now, this is really great work.
Optimizing your critical CSS can be a bit of a time
investment, but it's something that can just make sure that
your page is getting styled as soon as possible.
So let's talk about another tip.
I mentioned slow server response times when we were
discussing what impacts LCP.
Now, the longer that it takes a browser to receive
content from the server, the longer that it takes
to render anything on the screen.
The faster a server can respond,
the better every single page load metric will be,
including LCP.
So you might be wondering, how can I tell if I have a slow
server response time? Lighthouse has you covered.
In Lighthouse, we have an audit called 'Reduce initial
server response time'.
And if you see this, it's a good hint
to spend more time diagnosing the problem and
causes of the problem.
As I mentioned earlier, it can be plenty of things on your

backend to look at when we're trying to
optimize our server response times.
There's plenty that we can do in terms of
optimizing our DNS, our preconnects,
all of those types of things.
But there are also things that we can do to optimize
loading priority.
This is where techniques like
server push can come into play.
Now, if you're new to server push, I'll give you a quick
summary of it. To improve latency,
HTTP/2 introduced this idea of server push,
which basically allows a server to push resources
to the browser before they're explicitly requested.
Now, you and I as developers - as well as
anyone else watching; you're all awesome, too -
often know what the most important resources
are on a page.
And so we can start pushing those as soon as
the server responds with the initial

requests. This allows the server to fully utilize
what's otherwise an idle network to improve
page load times.
Now, server push is not without its
nuance.
This is one of those optimizations where you need to be
careful.
It's possible to over push.
So server push is not HTTP cache
aware. So I could push something for a particular
page, the user could come back to another related page, and
the server would push those exact same resources again.
The way to avoid that is by either using
cookies or a service worker to
avoid those re-fetches for those types of resources
and track what's in the cache, but it does involve a little
bit more work. In general, server push is an optimization
that can have a big impact, but just be aware of some of
that nuance. It's not always quite as simple as just turning
it on.
Now, Chloé use automatic

server push, which is an implementation provided by Akamai.
It uses data to decide when
to push critical CSS, fonts, and scripts.
And if you're manually using server push
yourself, you might end up looking at syntax that looks a
little bit like this.
What we see here is the link HTTP header.
This is actually the preload resource hint in action,
and it's a separate but distinct optimization
from server push.
But in reality, most HTTP/2
implementations will push an asset that's specified
in a link header containing a preload resource hint, so
you can use this syntax in order to enable
server pushes for a page.
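Concretely, such a response header might look something like this (the paths are illustrative); many HTTP/2 servers will push assets declared this way unless configured otherwise:

```
Link: </css/critical.css>; rel=preload; as=style
Link: </fonts/brand.woff2>; rel=preload; as=font; crossorigin
```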
So what was the impact of this optimization?
Without server push, Chloé were finding in their lab
tests that LCP was closer to 4 seconds,
but with it, it was closer to 2.5 seconds.
Which is a huge amount of impact.

Onscreen, at the moment, we've been verifying that using
Lighthouse, but you can also tell if individual
requests were server pushed
using things like DevTools and
WebPageTest's network waterfall view.
Both are very, very handy.
Now we're on to our very last metric.
Hurray!
Chloé didn't optimize for First Input Delay, but
I did want to very quickly cover it.
Now, First Input Delay measures the time from when
a user first interacts with a page -
that moment when they start to click
on a button or tap some UI,
some JavaScript-powered control - to the time that the
browser is actually able to respond to that interaction.
Now, there are many things that cause a poor First
Input Delay. There can be long tasks
on the main thread or heavy JavaScript
execution. Large JavaScript bundles can
delay how soon script can be processed

by the browser, and it can have an impact here.
And then you have things like render-blocking script.
Now, in general, I would strongly recommend
using Lighthouse and using DevTools because they do try to
point out areas where you might have long tasks or heavy
script execution.
Very often the solution is to just break up this work,
serve what the user needs when they need it, and
try to look at opportunities for
minimizing main thread work as much as possible.
Sometimes people will contextualize this in terms
of shifting some of that work, some of the
logic, to a Web Worker.
But regardless of the path you want to take there,
the end goal is essentially just making sure
that the main thread isn't busy and that user interactions
are not delayed.
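One common way to break up long tasks, sketched here with an illustrative helper, is to process work in small chunks and yield back to the main thread between them:

```javascript
// Sketch: process a large list in small chunks, yielding between
// chunks so the main thread can handle any pending user input.
async function processInChunks(items, processItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(processItem(item));
    }
    // Yield to the event loop before starting the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```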
So we're almost at the end of our journey with Chloé.
Here, we can take a look at Chloé's overall Web Vitals
in the lab. Thanks to their investments in performance

and user experience, they were able to reduce their
Cumulative Layout Shift down to zero and their LCP by
almost half. This is mind-blowingly
awesome. This is really, really cool.
As you've seen, all of this work is the
culmination of a number of smaller optimizations
that, when added up, actually make a pretty significant
impact to your end user experience.
And we don't have to just look at data in the lab.
We can look at the field as well.
Here is Chrome User Experience Report data
for Chloé. And as we can see, our Core Web Vitals
metrics for LCP and CLS are trending
in the right direction.
CLS went from 0.85 down to
0 in the latest data set.
And on the whole, this is tremendous work.
It's really great to see.
And I know that Chloé are happy to continue building
on this work in the future as well.
Now, if you're interested in building dashboards like this

for your own team, measuring the Core Web Vitals,
you might be interested in checking out the Chrome User
Experience Report dashboard.
This is a great solution that just allows you
to drop in a URL and very quickly get
access to field data and distributions
for the different Core Web Vitals.
It also summarizes the metrics, so if you're trying
to share around this report with other people on your team,
they will hopefully be able to also get
some familiarity with the Core Web Vitals too.
We also recently shipped a new
Chrome User Experience Report API, the CrUX API.
This is great for programmatically being able to build out
your own dashboards.
Very similar to what we were just taking a look at.
So check that out too.
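For a rough idea of what a CrUX API call looks like, here's a sketch against the public queryRecord endpoint. You'd supply your own API key, and the origin and metric list shown are just examples:

```javascript
// Sketch: query the CrUX API for an origin's field data.
// The origin and metric list below are illustrative.
function buildCruxQuery(origin, metrics) {
  return {origin, metrics};
}

async function queryCrux(apiKey, origin) {
  const url = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord' +
      `?key=${apiKey}`;
  const res = await fetch(url, {
    method: 'POST',
    headers: {'Content-Type': 'application/json'},
    body: JSON.stringify(
        buildCruxQuery(origin, ['largest_contentful_paint'])),
  });
  return res.json(); // per-metric histograms and percentiles
}
```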
And that's it.
I hope that you found this talk useful.
Go and optimize your Web Vitals.
There are plenty of docs over on web.dev that
cover the methodology, the tools, as well
as the best practices that you can use to get fast

and stay fast. My name is Addy Osmani.
I hope this has been useful.
Thank you.
RICK VISCOMI: Hi, everyone, thanks for joining me.
My name is Rick Viscomi. I'm an engineer and developer
advocate on Web Transparency projects at Google, including
the Chrome User Experience Report, or CrUX for short.
As you may know, CrUX is a powerful data set containing
insights about how real users experience the web.
And this dataset goes all the way back to late 2017 and
includes data from over 18 million websites.
This will be a somewhat advanced presentation, so if you
want to brush up on the basics, you can visit the CrUX docs
at bit.ly/chrome-ux-report to learn about
things like metrics, dimensions, best practices and more.
What I'll be sharing with you today are a few pro tips for
mining the low level data set on BigQuery for insights
about how users are experiencing the web.
So by now, I'm sure you've heard of Core Web Vitals.
They are the most important UX metrics we think you should
be looking at in 2020.
The list includes LCP, FID and CLS.
In fact, CrUX supports all three of these metrics and has
months of data across millions of websites.
So let's head over to BigQuery to see what we can find.
Here, I'm querying the metrics summary table, which
is a really quick and easy way to get high level stats
about a website's Core Web Vitals.
You can see here that we're extracting the percent of user
experiences that meet the 'good' thresholds for
LCP, FID and CLS, as well as these
metrics' 75th percentiles.
All of these stats are precomputed for you, so you can
spend more time finding insights and less time writing
queries.
This summary table's also much smaller and more efficient.
You can see it processes only about 100 megabytes, so
you shouldn't have any concerns about exceeding your one TB
of free monthly quota.
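The query being described can be sketched like this. The `chrome-ux-report.materialized.metrics_summary` table is the real public dataset; the exact column names (`p75_lcp` and friends) are assumptions here, so check the table schema before running it.

```javascript
// Builds the metrics_summary query described above: 75th percentiles
// for an origin, for all 2020 releases.
// Column names are assumptions; verify against the table schema.
function buildSummaryQuery(origin) {
  return `
SELECT
  date,
  p75_lcp, p75_fid, p75_cls
FROM \`chrome-ux-report.materialized.metrics_summary\`
WHERE origin = '${origin}'
  AND date >= '2020-01-01'
ORDER BY date`;
}

// Run the result in the BigQuery console or with the bq CLI:
// console.log(buildSummaryQuery('https://web.dev'));
```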
The raw data still exists, if you need access to specific
histogram bins, but almost everything you need is here in
the materialized data set.
If you've ever queried the raw data, you'll know that there
are several useful dimensions that you can drill down on
like, month, device type, and country.
So let's look at a few examples of doing that efficiently
with the summary tables.
The first thing we'll do is modify this query to see how
the Core Web Vitals have changed in recent months.
To do that, we need to change our WHERE clause to include
all releases in 2020 by setting the condition to
'date >= 2020-01-01' or January 2020.
Next, we include the year and month of the release in the
SELECT clause so we can see it in the output.
The difference between year, month, and date is that the
tables are partitioned by date, while the year and month
correspond to the table names in the raw data set.
And finally, we can sort the results chronologically and
run the query.
You can see from the results that web.dev has had
relatively stable and good user experience this year.
But what if we want to break this down by desktop and phone
experiences? For that, all we need to do is change over to
the device summary table.
We'll restrict the results to only desktop and phone
results. Now, tablet is available, but it's less
interesting.
Next, we'll add the device name to the SELECT clause and
secondary sort by it to keep the ordering of the results
consistent.
I'm going to run this query, but there's one thing I wanted
to show you in the results.
These percentages are out of all user experiences on the
origin, not just the percent of desktop experiences,
or the percent of phone experiences for boring technical
reasons. So one last thing we need to do is normalize these
distributions so it doesn't matter that desktop is more
popular than phone.
To do that, we just divide the metric by the total.
Now we have comparable results between devices and we can
see that desktop actually trends slightly better than
phone.
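That divide-by-the-total step can be sketched as follows; the row shape (a per-device "good" share plus the device's share of all experiences on the origin) is an assumption standing in for the device_summary columns.

```javascript
// Normalizes per-device "good" shares that are expressed as a fraction of
// ALL experiences on the origin, by dividing by each device's share of the
// total. The row shape is illustrative, not the actual device_summary schema.
function normalizeByDevice(rows) {
  return rows.map((row) => ({
    device: row.device,
    goodShare: row.goodOfAll / row.shareOfAll,
  }));
}
```

For example, if desktop accounts for 60% of all traffic and 45% of all experiences are desktop-and-good, desktop's normalized good share is 0.75.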
And finally, what if we want to break this down even
further by users' countries?
For that, we can change over to the country_summary table.
For demonstration purposes, let's restrict the results to
two countries with very different experiences, Korea and
Nigeria, and focus only on desktop.
Now, we could write the country code to the results, but I
wanted to show you one other cool trick.
The CrUX dataset includes an experimental function to map
country codes to full names.
And the last thing we'll do before running the query is to
sort by country rather than device.
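As a sketch, the country query might look like this; the country_summary table and the experimental GET_COUNTRY function are part of the CrUX dataset, while the column names are again assumptions.

```javascript
// Country breakdown described above: desktop-only, Korea vs. Nigeria, with
// country codes mapped to full names. Column names are assumptions.
function buildCountryQuery(origin) {
  return `
SELECT
  \`chrome-ux-report\`.experimental.GET_COUNTRY(country_code) AS country,
  date,
  p75_lcp, p75_fid, p75_cls
FROM \`chrome-ux-report.materialized.country_summary\`
WHERE origin = '${origin}'
  AND country_code IN ('kr', 'ng')
  AND device = 'desktop'
ORDER BY country, date`;
}
```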
The results tell a really interesting story about the
disparity in user experience by country, and BigQuery was
able to analyze this in only a couple of seconds and using
only about a gigabyte of data.
So that's it. These are just a few quick examples of the
power of the BigQuery dataset, and it doesn't have to be
mysterious or expensive.
I hope you start exploring the dataset and finding insights
about the state of the web.
You can find links to all the resources and queries we
discussed in the description and comments of this YouTube
video.
If you have any questions at all, we have a whole support
network set up for you.
You can follow me on Twitter at @rick_viscomi and I also
tweet from @ChromeUXReport.
We have announcement and discussion groups for important
product updates and support.
We have the CrUX cookbook on GitHub where you can find
example queries for common problems.
And finally, we have CrUX office hours where we can meet
virtually and get your questions answered.
I hope you found this useful. Please give the thumbs up if
you did. Thanks for watching, everyone.
HOUSSEIN DJIRDEH: Hi, everyone. Hope you're all staying safe.
My name's Houssein Djirdeh, and I'm a developer advocate on
the Web team at Google.
For this segment of web.dev LIVE, we're going to talk about
different ways to explore and analyze JavaScript bundles
on a web page.
Analyzing bundles is a good first step to optimizing
the amount of JavaScript shipped to the browser,
which can improve page load times and directly result
in better
Largest Contentful Paint and First Input Delay. JavaScript
bundling is a term
commonly used to describe the approach many websites
take to group multiple JavaScript files or modules
into a single file or "bundle".
Many tools that bundle JavaScript code for the browser
usually include a number of different optimization steps,
such as minification and scope hoisting.
This is a good thing because code written across multiple
files and modules can be combined into a single
optimized bundle.
Although this might be useful from a developer and user
experience standpoint, this process usually obfuscates
JavaScript code to the extent that it can't easily be read
and analyzed without the help of additional tooling.
Let's take a look at some examples to get a better idea.
If you're using Chrome, the Network panel in the DevTools
is the easiest way to look at all the JavaScript downloaded
on a page. Open DevTools by pressing Control+Shift+J
or Command+Option+J on a Mac and click
the Network tab to open the Network panel.
To take a look at all the network activity during
page load, reload the page while DevTools is still open.
Click the JavaScript button to filter requests by
JavaScript. And click any URL to view
the response body.
The format button can make a minified file more readable.
Notice how with this simple static site, there's only a
single JavaScript file, and although minified,
it's easily human readable.
If we do the same for a site that bundles the JavaScript
code, it gets harder trying to understand exactly
what lives in the bundle.
This is an example of a site that bundles many third party
libraries and hundreds of first party modules into just
a few discrete bundles.
So let's take a look at some ways to analyze this
code. The Coverage tab can show you how much
JavaScript code is unused in any of your files or bundles
directly in DevTools.
Open the command menu with Control+Shift+P, or
Command+Shift+P for Mac, type 'coverage'
and select the Show Coverage command.
Click the reload button to reload the page while capturing
coverage. And in the drop down menu, select JavaScript.
In the table, the Unused Bytes field shows exactly
how much JavaScript is unused for each file.
Click any URL to see a line by line breakdown.
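The same unused-bytes number can also be computed outside DevTools, for example from the entries returned by Puppeteer's `page.coverage.stopJSCoverage()`; the helper below is a sketch over that entry shape (`text` plus used-byte `ranges`).

```javascript
// Computes used/unused bytes for one coverage entry, shaped like Puppeteer's
// JS coverage output: { url, text, ranges: [{ start, end }, ...] }.
// Assumes the ranges are non-overlapping, as the coverage APIs report them.
function unusedBytes(entry) {
  const total = entry.text.length;
  const used = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
  return {
    total,
    used,
    unused: total - used,
    pctUnused: total === 0 ? 0 : ((total - used) / total) * 100,
  };
}
```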
So although the Coverage tab gives us a lens on how
much code is being used on a page, it still
isn't easy to identify which modules
make up the bundle.
Now, there are other tools out there to make this possible.
If you're already bundling code for your site, chances are
you're using a module bundler like webpack or Rollup.
And many of these module bundlers provide either
first-class or third-party tooling that you can use
to visualize and map your bundles.
Let's go over an example.
If you use webpack, you can generate a stats.json file
that contains statistics about all bundled modules.
A single CLI command emits the file.
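That command is `webpack --profile --json > stats.json`. Even without a visualizer, a few lines of Node can rank the modules in that file; this sketch assumes the standard stats shape (a top-level `modules` array with `name` and `size`).

```javascript
// Ranks the largest modules in a webpack stats.json object.
// Assumes the standard stats shape: { modules: [{ name, size }, ...] }.
function largestModules(stats, topN = 5) {
  return [...stats.modules]
    .sort((a, b) => b.size - a.size)
    .slice(0, topN)
    .map((m) => ({ name: m.name, size: m.size }));
}

// Usage:
// const stats = JSON.parse(require('fs').readFileSync('stats.json', 'utf8'));
// console.table(largestModules(stats));
```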
Although reading this file yourself can give some
information about what modules live in the bundle, there
are community-built libraries that can consume this file
and display a more useful visualization.
One such library is called 'webpack-bundle-analyzer', and
it works by parsing the bundles generated by webpack
and then mapping them to the module names in the stats.json
file. By doing this, it creates an interactive
treemap visualization of an entire bundle,
showing the sizes of each module as well as the relation
to each other.
Gzip and parsed sizes are also displayed to give
you a better idea of how large each of the modules
are.
Bundler-specific visualization tools are great.
They make it easier to see what makes up each of your
bundles. But they are bundler-specific.
For any site, regardless of whether a specific module
bundler is used or not, source maps
are a way to map the original written code to the transformed
output. This is useful because it can allow
us to continue to obfuscate and transform our code
during the build process but still have a means
to map it back to its original form.
JavaScript files that have been transformed due to
minification or other bundling optimizations
need to point to the location of a source map file
with a sourceMappingURL comment or a SourceMap
HTTP header.
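Concretely, the pointer is just a trailing comment in the generated file (or an equivalent response header); the file names here are made up for illustration.

```javascript
// A transformed file ends with a comment pointing at its source map:
const generated = 'console.log("app")\n//# sourceMappingURL=app.min.js.map';

// Alternatively, the server can send a response header instead of the comment:
//   SourceMap: /static/app.min.js.map
```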
All newer browsers support source maps.
And with Chrome, you can enable it in the DevTools
by opening up Settings and checking the "enable JavaScript
source maps" option.
When Chrome can detect that the source map is available,
it'll show a message and we're able to open
and debug these separate associated files as regular
JavaScript files.
'source-map-explorer' is a library that you can use to see
a treemap visualization of the bundle.
This visualization is an example of using
source-map-explorer with a production build.
Just by looking at this, we can identify a few issues
already. A few CommonJS modules here, moment.js
and lodash, are already larger than they need
to be. If they were switched to use ES modules, they
could be smaller and more optimized.
There are duplicate copies of React.
And code needed for multiple different routes all
live in this bundle, and they could easily be lazy-loaded
into their own separate bundles.
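The route-splitting fix mentioned here boils down to only invoking a route's loader on first visit; in a real app the loader would be a dynamic `import()`, which bundlers split into a separate chunk. Everything below (route names, loaders) is illustrative.

```javascript
// Minimal lazy route registry: a route's loader runs only on first visit.
// In a real app the loader would be () => import('./routes/settings.js'),
// which webpack or Rollup turns into its own chunk.
const routes = new Map();

function registerRoute(name, loader) {
  routes.set(name, { loader, module: null });
}

async function visit(name) {
  const route = routes.get(name);
  if (!route.module) route.module = await route.loader(); // first visit only
  return route.module;
}
```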
These are all common issues that many sites run into,
and we can spot them by using a visualization tool
like 'source-map-explorer'.
Other tooling that you may already be familiar with are
also starting to consume source maps in different ways
that can be useful.
Lighthouse, an open-source website auditing tool, is
currently experimenting with source-map support for some
of its audits. With source maps, the unused JavaScript
audit can show how much unused code and potential
savings live in bundled modules.
There is also a new legacy JavaScript audit being developed
that takes advantage of source maps to show legacy
code within the bundle that contains polyfills that newer
browsers don't need.
And there we have it! We just went over a number of
different techniques to analyze bundled JavaScript
code. To recap: the Network panel
in DevTools is the easiest way to start seeing how much
JavaScript code is being downloaded.
The Coverage tab can show you how much JavaScript is
actually used.
Many module bundlers have supporting tooling that makes it
easier to visualize bundles.
If you use webpack, for example, you can emit a stats.json
file and use 'webpack-bundle-analyzer'.
Consider enabling source maps on your site and use
'source-map-explorer' to visualize your bundles.
If you'd prefer not to emit source maps on production, you
can set it up as part of your build process so that it's
only generated during development.
And Lighthouse is also working on collecting source maps to
display more useful audit recommendations.
These changes will land in a future version, so keep an
eye out.
So, analyzing your bundles and limiting the amount
of JavaScript on a web page reduces the amount
of time the browser needs to spend parsing, compiling, and
executing JavaScript code.
This speeds up how fast the browser can begin
to respond to any user interactions, improving First Input
Delay, and results in a faster render, improving
Largest Contentful Paint.
Thanks for watching. I hope you found this screencast
super useful.
PAUL LEWIS: Hi, everybody! I'm Paul Lewis.
PHILIP WALTON: And I'm Philip Walton.
PAUL LEWIS: OK, so we thought today what we'd do is we would talk
about the Core Web Vitals inside of DevTools.
Now, I know about the DevTools side.
In fact, I implemented some of the Core Web Vitals
inside of DevTools. But Phil, you're more of the
person that knows about the actual metrics, where they came
from, and that kind of stuff, right?
PHILIP WALTON: That's right. I know a lot about the metrics.
I work on the Chrome team, working with some of the people
that were helping to define the metrics and standardize
them in browsers. But I don't really know much about how
they work in DevTools. So, Paul, you're a great person for
me to talk to here. Let's dive in and see what we
can find out.
PAUL LEWIS: OK, so I guess our plan is to have a bit of a
conversation, to go back and forth.
We'll be diving in and out of DevTools, having a bit of a
discussion about these metrics,
and just trying to kind of explore, understand, and
share what's kind of going on there.
So I guess the first one that I was kind of thinking
about when we were discussing this was LCP
and FCP.
So I guess the first thing to kind of talk about is,
what are they? Where do they come from?
PHILIP WALTON: Well, these are both paint metrics.
So FCP is First
Contentful Paint. It represents the first point
in time that the browser is able to paint any content on
the screen. And LCP is Largest Contentful
Paint. And that represents the largest single
text node or image element on the page.
And the idea behind these two is that FCP represents,
like, the first time the user sees something, and
LCP represents when, you know, the main content
of the page has painted.
I mean, in general, whatever, the largest thing, the
largest image or text node, on the screen is generally the
thing that the user is going to notice.
And so that kind of represents once the page is really
loaded.
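What's just been described can be watched live with the real PerformanceObserver API (run in a page, for example from the DevTools console, not in Node); the callback wiring here is just a sketch.

```javascript
// Logs each LCP candidate as the browser finds a new largest element.
// The final candidate before user input is the page's reported LCP.
function observeLCP(onCandidate) {
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      onCandidate(entry.startTime, entry.element);
    }
  });
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}

// Usage (in a browser):
// observeLCP((time, el) => console.log('LCP candidate:', time, el));
```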
PAUL LEWIS: So I guess for a lot of people then, the first thing
they're going to think of, certainly for the Largest
Contentful Paint, would be something like a hero
element or something like that, right?
PHILIP WALTON: Yeah.
PAUL LEWIS: Kind of a big image at the top of the page, for example.
PHILIP WALTON: Absolutely.
PAUL LEWIS: OK. Right, but it's not always that, I'm guessing,
because you could be deep linking into some content
like further down the page and everything else.
PHILIP WALTON: Yup, that's absolutely right.
PAUL LEWIS: OK. I'll tell you what we'll do then.
I've got a page here, actually. I've got this page on web.dev,
Performance tab open inside of DevTools.
And I guess the goal here is going to be to show
FCP and LCP in context.

English: 
And LCP represents
when you know,
the main content of
the page has painted.
I mean, in general, whatever the
largest thing on the-- largest
image or text node on the
screen is generally the thing
that the user is
going to notice.
And so that kind of represents
once the page is really loaded.
PAUL LEWIS: So I guess
for a lot of people then,
the first thing they're
going to think of, certainly
for the Largest
Contentful Paint,
would be something like a hero
element or something like that,
right?
They get a big image at the
top of the page, for example.
PHILIP WALTON: Absolutely.
PAUL LEWIS: OK.
Right.
But it's not always
that, I'm guessing,
because you could
be deep linking
into some content like,
further down the page
and everything else.
PHILIP WALTON: Yep,
that's absolutely right.
PAUL LEWIS: So, OK.
I think what we'll do
then-- let's take it--
I've got a page here.
Actually, I've got this page
on Web.Dev, Performance tab
open inside of DevTools.
And I guess the goal
here is going to be
to show FCP and LCP in context.
And I have Web.Dev open here
on a page in the Performance
section around using image
CDNs to optimize images.
So if you've not
seen this content,
definitely worth a look.
PHILIP WALTON: It's
a great article.
PAUL LEWIS: OK?
And we have-- yeah, we have--
I'm going to deep--
See?
I can deep link into
this section, right?
With this?
And so this, I guess, would
become our hero image, right?
PHILIP WALTON: And an
interesting point to make here
is that the hero image
is not necessarily
going to be above the fold.
Like in this case, you're
loading a page halfway down,
half way scrolled down the page.
And so LCP is always--
you know, it's only
going to consider
elements that are actually
visible to the user
on the screen.
PAUL LEWIS: Right.
Great point.
This is what's going
to make this probably
a bit interesting.
So what I'm going to do
is I'm actually going
to go to fast 3G.
So in the Performance tab, you
can open the Capture Settings
here.
I'm going to change
from just 'Online'
over to 'Fast 3G'.
So we're just going to switch
to a slower download performance.
This little exclamation
mark shows up
saying Network
Throttling is Enabled.
I'm actually going to slow
down the CPU just a little bit.
PHILIP WALTON: And
are you doing this--
PAUL LEWIS: --just so
that we can see things--
PHILIP WALTON: You're doing
this to simulate maybe a lower
powered device or something
like that, correct?
PAUL LEWIS: Yeah, I am.
But right now as well,
what I wanted to do
is if I take a recording
with things just slowed
down a little bit, it
might be easier to just
to see what's going on.
Because I happen to be
somewhere in my house
with actually a really
good internet connection.
So I don't particularly
see network latency
quite as much as you
would in other cases,
say, if you were on a
mobile device out and about.
So I just thought, let's just
try this and see what happens.
So I'm going to hit Record.
I hit Command Shift
R to do a reload.
OK?
And I'm going to stop.
And we can discuss what we see.
OK.
Let me just wrap this up here.
Now, the first thing
to notice I suppose,
would be the Timings row
here, to remind ourselves
what these are.
DOMContentLoaded.
This has been around
forever, hasn't it?
PHILIP WALTON: Yeah.
PAUL LEWIS: But there is First
Paint, First Contentful Paint,
First Meaningful Paint, which
we could talk about a little bit,
I suppose, Largest
Contentful Paint.
And you can see
that it's actually
highlighted our screenshot
here, and then the load event.
Now I could use the
keys on the keyboard
to come into a
little bit closer,
zoom in a little bit on this
particular area of interest.
And you see here, I
suppose the First Contentful
Paint is presumably happening.
And then the Largest
Contentful Paint
is happening slightly later.
PHILIP WALTON: That's right.
PAUL LEWIS: Now,
I think we can get
a little bit more info about
this, because First Contentful
Paint is happening, and then the
Largest Contentful Paint, which
implies to me that
the image is coming in
after the initial page content.
So we're drawing something,
we're painting something.
And then we're painting
the image after the fact.
So let's see if we can do
that with Screenshots on.
And we will record again
and see what we get.
OK.
I'll stop there, and
hopefully if I just
lose this a little bit.
And we might see--
OK.
So round about--
I wonder if I can just bring
this in a little bit further--
let me just see if I
can drag that down.
Drag this a little bit.
OK.
That might be as clear as this
is going to get, I wonder.
Yeah, it is.
OK.
I think what we're
going to do, we're
going to make this a
little bit clearer.
Because what's happening
is we're actually
seeing the page content
before I did the refresh,
and then slightly after.
So if I take this and
I go to about:blank--
this is actually a really
interesting way
to do this testing.
If you're ever curious about
it, record it from about:blank
so that you start without
anything on the page.
And that can make it easier
to find your screenshot.
So I'm going to paste
in the URL here,
but not hit Enter--
not go to that yet.
Hit Record and now go there.
OK.
Hopefully that will
make it a little easier
to see what's going on.
OK.
So you can see we've gone from
here into the screenshots.
We see this.
We see the original page
content, the top of the page.
And then we're going down to
our deep link just below that.
So my assumption is if we
bring our zoom in here,
that around about here, in
fact, we can do this here.
Yeah, boop.
You see we're just right
on this line here where
we go from nothing to
something, nothing to something
is exactly the point where we
actually start to see this--
the First Contentful
Paint coming in.
PHILIP WALTON: Yeah.
It's the first thing
that the user sees.
But it's not the
main thing that they
wanted to see when they
were loading the page.
PAUL LEWIS: Yeah.
In fact, it's saying the
Largest Contentful Paint
at this point is actually
this piece of text now.
Let's try it one more time, just
to really, really dial it in.
I want to go for slow 3G.
I'm going to go to
about:blank again.
And I'm going to hit Record.
And I'm going to
see what happens.
I feel like we're going to
see something reasonable here.
Let's process that profile.
OK.
There we go.
This, I think is starting
to make more sense to me
over here.
There we go.
OK.
Wow.
There we are.
The First Contentful
Paint is here, there.
OK?
And then much later, boop.
There comes our image, which
is slightly over to the right
here, there.
So I can select that area.
And based on the
screenshots, roughly there.
And I say that's the
First Contentful Paint.
And then if I select later
on in the screenshots there,
I can see that that's
the Largest Contentful
Paint, which is our image.
OK.
PHILIP WALTON: And it's
nice that DevTools shows you
exactly what element on the
page is the Largest Contentful
Paint.
PAUL LEWIS: Absolutely.
I can't resist.
I know we're going to talk
about layout shifts next.
But why not just jump
the gun a little bit?
We actually have a
Layout Shift showing up
between First Contentful Paint
and Largest Contentful Paint.
And I think based
on this, I think
the reason is because
we're going from no image
to image that's pushing
the content down now.
So I think we're seeing
the page content move,
so my guess is if we were to
go and find this image here
in the Elements
panel, we're going
to see that it doesn't
actually have--
yeah, it doesn't have width
and height attributes set.
And I think that's basically
causing this to happen.
So we will talk about layout
shifts more in a second.
But the reason this
page is shifting
is because we have an image
here that what it loads,
loads asynchronously
essentially.
And when it's loaded, it pushes
the rest of the page content
down.
If we added width and height
attributes to this image,
we wouldn't see
that layout shift.
But as I said, we'll come
back to that more in a moment.
PHILIP WALTON: Yeah, that's
a good general best practice
though, just to
let everybody know.
Always put width and height
attributes on your images.
That way the browser can
render the space that it needs.
It can allocate a space
that it needs to render them
before it actually
finished loading the image
so then you don't get
that layout shift.
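The reason width and height attributes prevent the shift is that the browser can derive an aspect ratio from them and reserve the right amount of vertical space before any pixels arrive. A sketch of that arithmetic (`reservedHeight` is a hypothetical helper, not a browser API):

```javascript
// Given <img src="hero.jpg" width="800" height="400">, the browser
// derives an aspect ratio of 800/400 and reserves matching height
// for whatever width the image is laid out at.
function reservedHeight(attrWidth, attrHeight, displayedWidth) {
  return displayedWidth * (attrHeight / attrWidth);
}

// An 800x400 image laid out 600px wide reserves 300px of height,
// so the text below it never gets pushed down when the image loads.
console.log(reservedHeight(800, 400, 600)); // 300
```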
PAUL LEWIS: Exactly.
The other thing I think
we should talk about
before we move on
is how to optimize
for this particular situation.
So what would you
suggest if somebody
said I need to get First
Contentful Paint and Largest
Contentful Paint nearer the
start, that it's taking too
long to get to these numbers.
These numbers are too high.
Do you have a kind of
go-to list of things
you would say to them?
PHILIP WALTON: Yeah.
Well definitely, one thing that
you don't want to ever block--
I mean, ideally, you
don't ever block painting
on more than kind of
one network request,
that initial network
request that you
make to get the page content.
You want to be able to
paint at that point.
If you have additional requests
like request for fonts or style
sheets or other things
that are preventing
the browser from painting,
that will just delay the time
when that paint can happen.
And so I mean,
sometimes depending
upon the design you're working
with, you don't have a choice.
But in an ideal
world, you would want
to be able to paint right away.
And so it looks like in
this case, on Web.Dev,
we are able to paint
pretty quickly.
And that's why first paint is
happening at the beginning.
And then the browser
is loading this image.
And then Largest
Contentful Paint
happens as soon as that
image gets loaded in.
PAUL LEWIS: Exactly.
Yeah, I think what we're
actually also seeing here is
that app.css, which is the
main style sheet and the fonts
as well--
my guess is that they are
going to be blocking based
on the-- you can see that
when I roll over them,
the Network panel here
is saying highest,
which is the priority that's
been assigned to the CSS.
And the reason I guess,
is because the CSS
is going to be blocking
the render, which
is what you are saying.
So that's why I think some
people would inline that.
But I guess if we go ahead and
take a quick look in our <head>,
and if we can find it--
we could search for it,
but I'm going to find the link rel--
this is the style sheet.
Yeah, you see there's a
style sheet for the fonts
and right below it, app.css.
And so this would be a classic
case of here's a style sheet.
It's going to block render
because the browser--
Chrome is going to take a
look at that and go, well,
I need to wait and see
what the styles are
before I render anything.
Absolutely.
So that can be something
that we can sometimes
take a look at.
Same with blocking
JavaScript, right?
We see that one sometimes
gets in the way, with things
like defer and async.
PHILIP WALTON:
You sometimes hear
this referred to
as critical CSS,
where you identify
just the CSS that
is needed to layout the
page, not necessarily
style all the components
on your entire site.
And so you can inline
just that CSS content
in the head of your document.
And so then you're not blocking
on an additional network
request in order to paint
something on a page.
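A rough sketch of the critical CSS idea Philip describes: a build step inlines the hand-extracted above-the-fold rules into the document head, so first paint doesn't wait on a stylesheet request. The helper name and sample markup here are invented for illustration:

```javascript
// Inline a pre-extracted "critical" subset of CSS into the <head>,
// removing one render-blocking request from the first paint path.
function inlineCriticalCss(html, criticalCss) {
  return html.replace('</head>', `<style>${criticalCss}</style></head>`);
}

const page = '<html><head><title>Demo</title></head><body><p>Hi</p></body></html>';
const critical = 'body{margin:0;font:16px/1.5 sans-serif}';
console.log(inlineCriticalCss(page, critical));
```

The full stylesheet can still be loaded afterwards without blocking render, which is the usual second half of this pattern.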
PAUL LEWIS: Exactly.
Yeah.
Right!
So that was FCP and LCP.
And as I say, you will find
those on the timings track
here in DevTools.
OK.
So next up, layout shifts.
Now we talked about this very
briefly just now with these two
down here.
But where does it come from?
What's the history of the Layout
Shift and Cumulative Layout
Shifting, I think I've
also heard it called?
PHILIP WALTON: Yes, so the
metric name, Cumulative Layout
Shift or CLS for
short, is a metric that
tries to capture the experience
of visual stability on a page.
And everyone's probably
had this experience where
you go to a website and
you go to tap on a button
or something, and right
before you tap on it,
it shifts out from
underneath you.
It's a very
frustrating experience.
Even if you're not
interacting with the page,
you're just reading it,
if you know, some images--
late-loading images pop
in, some ads pop in,
the content changes, like, a
number of things could happen.
And you lose your place
as you're reading.
And it's just not the
greatest experience
as a user from the
user's point of view.
So Cumulative Layout
Shift is a metric
that attempts to
quantify that experience.
And so there's a
couple of pieces there.
But a Layout shift
is anytime an element
on the page between one
frame and the next frame,
its start position changes.
And so this would
happen like in this case
that we just saw,
an image loads in,
and it pushes the
text below it down.
And so the image--
the Layout Shift was
not on the image.
The Layout Shift was on
the text below the image,
that on the previous
frame, it had,
you know, an x and y
position of something.
And then on the next
frame it was pushed lower.
And so its position changed.
So it's a bit tough to explain.
But the CLS is a measure of both
how much of the page content
moved, and also
how far it moved.
And so if the
entire page content
shifts from being fully visible
on the page to not visible
at all, that would
be a CLS of 1.
If that happened 20 times
throughout the page lifecycle,
that would be a CLS of 20.
And then if it moves kind of
half of the screen distance,
and the image itself is only
filling up half the screen,
then that would be
roughly 0.25 CLS.
You can go read more about how
to calculate CLS on Web.Dev.
It's a little bit too
complicated to explain now.
But that gives you a sense.
It's a measure of how much
visible instability there
is on the page.
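The arithmetic Philip walks through can be sketched as follows. This is deliberately simplified — the full definition on web.dev computes the impact fraction from the union of each element's before and after areas, and current CLS also groups shifts into session windows — so treat the helper names as illustrative:

```javascript
// Each layout shift scores as: (fraction of the viewport affected)
// times (how far things moved, as a fraction of the viewport's
// largest dimension).
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Summing the individual shift scores, skipping shifts that follow
// recent user input (those don't count against the page).
function cumulativeLayoutShift(shifts) {
  return shifts
    .filter((s) => !s.hadRecentInput)
    .reduce(
      (sum, s) => sum + layoutShiftScore(s.impactFraction, s.distanceFraction),
      0
    );
}

// The whole page shifting a full viewport scores 1; content filling
// half the screen and moving half the screen scores roughly 0.25.
console.log(layoutShiftScore(1, 1));     // 1
console.log(layoutShiftScore(0.5, 0.5)); // 0.25
```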
PAUL LEWIS: OK.
So as we talked
about before, then
we have this one layout
shift here and so on.
In fact, this is probably
a better one of the two
to actually demonstrate this.
And when you click on this--
and it's in the
Experience Track.
If you don't get this
Experience track in DevTools,
it means that we didn't
detect any layout shifts
in that particular recording.
If you do find that
it's there, then you'll
see that it's populated
with these kind of records.
Now you can click on this,
and it will take you off
to the detailed
information about CLS.
But what we try and do
is we try to give you
a sense of the score and the
cumulative score
about what's going on.
But we also try and
highlight for you--
you see you're going from an
image here that's 11 by 11.
And we show it as this
very small overlay
on the left hand side there,
to a much bigger 801 by 414.
So one of the items that I
actually have to do in this
area-- and you can see we
have a few going on here,
which are probably other images
that are being shifted as we
make our way through--
PHILIP WALTON: And let me just--
PAUL LEWIS: One of the things--
PHILIP WALTON: I
wanted to step back
for a second and just talk about
why somebody would do this.
I mean, typically,
you know, you'll
run Lighthouse on
a page, or you'll
go to Search Console's new Core
Web Vitals report or the Chrome
User Experience report, and
you'll see that you know,
you have layout shifting
happening on your page.
And you might be
wondering to yourself, OK.
But I don't see it when I
visit my page so where is
this layout shifting happening?
And so then DevTools is a
great place to debug that
and to figure out which page on
your site has layout shifting,
and then load it up in DevTools
under the throttling conditions
that Paul showed earlier.
And then look and
see what DevTools
is telling you is
shifting, because that's
how you can figure out what's
causing the layout shift.
And then what you
need to do to fix it.
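One way to do that digging is from the console, with a `layout-shift` observer. In Chromium browsers each entry carries a `sources` list (LayoutShiftAttribution) naming the nodes that moved; `describeShift` is a helper invented here for readability:

```javascript
// Summarize a layout-shift entry: its score, whether recent input
// excuses it, and which DOM nodes actually moved.
function describeShift(entry) {
  return {
    value: entry.value,
    hadRecentInput: entry.hadRecentInput,
    nodes: (entry.sources || []).map((source) => source.node),
  };
}

// Browser-only wiring; logs each shift as it happens.
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('layout shift:', describeShift(entry));
    }
  }).observe({ type: 'layout-shift', buffered: true });
}
```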
PAUL LEWIS: Yeah.
And there's more I have
to do here to be clear.
I think one of the things that
is missing from this, which is
actually available in the data.
I just need to
pull it through is--
which elements are
we talking about?
I can show you that
we've got these areas.
But it does feel like we're
missing a bit of information
about exactly which
element it is.
What we did with the LCP, we
highlight the image that we're
actually referring
to here, we should
be able to do the same here.
So by the time this goes out
and you're watching this,
give it a try in Chrome Canary,
because I might have been
able to land a feature by then.
Not making any promises,
but that would be good,
wouldn't it?
PHILIP WALTON: And just--
yeah, just as a kind
of a quick point,
there's often two
pieces to a layout shift.
There's the element
that shifted,
and then there's the element
that caused it to shift.
And so sometimes you
know, figuring out one
or the other can be
helpful in fixing.
Because it looks
like here that it's
showing the image that came in.
But adding elements to the DOM
doesn't in itself cause shift.
But if adding an
element to the DOM
moves the elements
below it, then that
would cause a layout shift.
PAUL LEWIS: Right.
Because the default size of
this image looks to be 11 by 11
pixels to begin with.
And then when it gets populated
with the actual pixel data,
it pushes down the
rest of the page
content, which I guess justifies
the Layout Shift there.
Yeah.
OK.
So that's that.
And if you got--
like we said earlier, if
you put width and height
on these things, that will help.
But you can also have--
I mean, let me show
you this other one.
Even on the Google homepage,
this privacy reminder down
here, if I take a recording here
and I just refresh this page,
we're going to see
a Layout Shift here.
And similarly,
we've got this here,
which is going from down here.

English: 
And I presume there's some
JavaScript or something
like that that's looking
to see whether the privacy
reminder has been seen.
And if not, it pushes
that content up.
And so again, this is
probably JavaScript-based.
And you're going to know in
your own apps what's going on.
Is it third party content?
Is it your own JavaScript?
Is it your own styles?
And it's a case of
digging into the specifics
of your application
to try and figure out
exactly what's triggering that,
what could be happening there,
in order to figure it out.
So that's just a couple of
examples of the layout shifting
that you could see.
PHILIP WALTON: Yeah.
And just-- well, one
thing to keep in mind
is that in an ideal
world, you would have
no layout shifts on your page.
But sometimes it's unavoidable.
And so the threshold that we
recommend folks stay below
is 0.1.
And so it looks
here that you know,
this layout shift is
quite a bit below that.

English: 
or something like that that's looking to see whether the
privacy reminder has been seen, and if not, it pushes
that content up.
So, again, this is probably JavaScript based.
And you're going to know in your own apps what's going
on. Is it third-party content?
Is it your own JavaScript?
Is it your own styles?
Right, and it's a case of some digging into the specifics
of your application to try and figure out exactly
what's triggering that, like what could be happening there
in order to figure it out. So that's just a couple of
examples of the layout shifting that you could see.
Yeah. And just, you know. Well, one thing to keep in mind
is that in an ideal world, you would have no layout shifts
on your page. But sometimes it's unavoidable.
And so the threshold that we recommend
folks stay below is 0.1.
And so it looks here that, you know, this layout shift is
quite a bit below that.
And so even though, you know, you still want to be at

English: 
And so even though you still
want to be at 0 if you can,
as long as you're below
0.1 for 75% of your users,
you're usually in good shape.
PAUL LEWIS: So you say 0.1.
I guess that's
like for page load,
because that's where all
of these metrics
are aimed at page
load right now, right?
PHILIP WALTON: Yeah.
So that's actually
a really good point.
I'm glad you brought it up.
CLS measures layout
shifts that happened
during the entire
lifecycle of the page,
from when you load the page
until when you unload the page.
Even if you leave the page open
for days or weeks, it does
measure that entire time,
whereas here in DevTools,
you ran a trace.
And you saw the
Layout Shift that
happened during that trace.
And so in this
particular case, CLS
was only measuring those shifts
for a small period of time.
It's important that
developers keep
that in mind, because you know,
the actual metric definition is
for the entire
lifespan of the page.
So if you run a Lighthouse
trace or a WebPageTest trace

English: 
zero if you can, as long as you're below
0.1 for, you know, 75% of your users, you're usually in
good shape.
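The accumulation Philip describes can be sketched in plain JavaScript. This is only an illustrative sketch, not the browser's implementation: the entry objects below are simplified stand-ins for real `layout-shift` performance entries, which on a real page would come from a `PerformanceObserver`.

```javascript
// Sketch: how a CLS value could be accumulated from layout-shift
// entries, the way a PerformanceObserver callback might do it.
const CLS_RECOMMENDED_MAX = 0.1;

function cumulativeLayoutShift(entries) {
  return entries
    // Shifts that happen right after user input don't count against CLS.
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}

// Simulated entries from a page session:
const entries = [
  { value: 0.02, hadRecentInput: false },
  { value: 0.05, hadRecentInput: true }, // ignored: user-initiated
  { value: 0.03, hadRecentInput: false },
];

const cls = cumulativeLayoutShift(entries);
console.log(cls.toFixed(2), cls <= CLS_RECOMMENDED_MAX);
```

On a real page you would feed this from `new PerformanceObserver(cb).observe({type: 'layout-shift', buffered: true})`; and because the metric covers the whole page lifecycle, the score can keep growing long after load.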
So you say 0.1.
I guess that's, like, for page load
because that's where all
of these metrics are aimed at page load right now, right?
Yeah. So that's actually a really good point.
I'm glad you brought it up.
CLS measures layout shifts that happen during the entire
lifecycle of the page. From when you load the page, until
when you unload the page.
Even if you leave the page open for days or weeks,
it does measure that entire time.
Whereas here in DevTools, you ran a trace and
you saw the layout shift that happened during that trace.
And so in this particular case, CLS was only measuring
layout shifts for a small period of time.
It's important that developers keep that in mind because,
you know, the actual metric definition
is for the entire lifespan of the page.
So if you run a Lighthouse trace or a WebPageTest

English: 
trace, or even in DevTools, and you see a certain value
and it's below 0.1, the threshold I just mentioned,
just keep in mind that you have to actually be measuring
it the entire time. You know, that's the
measure that counts, is the entire lifecycle of a page.
Also, I think in this area we should talk about perhaps the
metrics themselves as a bit of
an evolving art. I mean, we have, for example, First
Meaningful Paint up here.
But this isn't one of the metrics that we mentioned in say
something like Core Web Vitals.
And there's also no metric as
far as I'm aware for something like animation performance.
So, I guess my question to you is, what's
going on there? Why have we got a metric here that we
wouldn't refer to? And why do we not yet have a metric for
something that we might be interested in tracking?
What's the kind of history and story there?
Yeah, that's a good question. So FMP, or First Meaningful
Paint, if you remember from a previous

English: 
or even in DevTools, and
you see a certain value,
and it's below 0.1, the
threshold I just mentioned,
just keep in mind that you have
to actually be measuring it
the entire time.
That's the measure
that counts is
the entire lifecycle of a page.
PAUL LEWIS: Also, I
think in this area
we should talk about perhaps
the metrics themselves
as a bit of an evolving art.
I mean, we have for example,
First Meaningful Paint up here.
But this isn't
one of the metrics
that we would mention in say,
something like Core Web Vitals.
And there's also
no metric as far
as I'm aware for something
like animation performance.
So--
PHILIP WALTON: That's true.
PAUL LEWIS: I guess
my question to you
is what's going on there?
Why have we got a metric here
that we wouldn't refer to?
And why do we not yet have
a metric for something
that we might be
interested in tracking?
What's the kind of
history and story there?
PHILIP WALTON: Yeah,
that's a good question.
So FMP, or First
Meaningful Paint,

English: 
if you remember from a previous
you know, trace that you did,
Paul, FMP was right next to FCP.
And then LCP was later
in the page load.
So what actually ended up
happening was that-- oh yeah,
it looks like that's
the case here.
So after a bunch
of testing, I mean,
FMP is essentially
a different metric,
and has a different
meaning than LCP.
And after a bunch
of research, we
found out that FMP
actually wasn't
as accurate at predicting
when the main--
what most people would consider
to be the most important
content of the page, the most
meaningful part of the page--
appeared.
The metric itself has the word
meaningful in the name.
But it turns out that LCP is
actually a better predictor.
And so as we come
up with metrics
that are better at capturing
the user experience,
we'll kind of
deprecate older metrics
and replace them
with newer metrics.
But we do recognize
that that's happened
a bunch over the years.
And I'm sure
developers are getting
tired of hearing new metrics
announced all the time.

English: 
trace that you did Paul.
FMP was right next to
FCP and then LCP was later
in the page load. So what actually ended up happening was
that - Oh yeah, and it looks like that's the case here.
So after a bunch of testing, I mean,
FMP is essentially it's a different metric.
It has a different meaning than LCP.
And after a bunch of research, we found out that FMP
actually wasn't as accurate at predicting when
the main - what most people would consider
to be the most important content of the page - the
most "meaningful" part of the page - appeared.
The metric itself has the word "meaningful" in the name.
But it turns out that LCP is actually a better
predictor. And so, as we come up with metrics that are
better at capturing the user experience, we'll, you know,
kind of deprecate older metrics and replace them
with newer metrics.
But we do recognize that that's happened a bunch over the
years. And I'm sure developers are getting tired of hearing
new metrics announced all the time.
And so one of the things that we did with Core Web Vitals,

English: 
And so one of the things that
we did with Core Web Vitals,
with the Web Vitals
Initiative, and specifically
with Core Web Vitals is we're
committing to only introducing
metrics at most once a year
for the core set of web vitals.
And so if developers
are following along,
that gives them a
little bit of stability
if they're building a
business on these metrics
or predictability, if
they just kind of don't
want to have to always
be following along
with the latest.
And so recently,
we announced LCP
was one of the Core Web
Vitals, and FMP is not
one of the Core Web Vitals.
And over time that will
probably be deprecated.
So you also asked about
animation performance.
This is definitely
a metric that we're
looking at for the future,
maybe in 2021 or 2022.
So we know that the
set of Core Web Vitals
doesn't capture the entire
story of user experience.
And we're hoping that over
time, we can improve it.
And animation performance is
definitely a metric that--
or definitely an
area of performance

English: 
with the Web Vitals initiative, and specifically with
Core Web Vitals, is we're committing to only introducing
metrics at most once a year for the core
set of Web Vitals.
And so if developers are following along,
you know, it gives them a little bit of
stability if they're building a business on these metrics
or, you know, predictability if they just, kind of, don't
want to have to always be following along with the latest.
And so, you know, recently we announced
LCP was one of the Core Web Vitals.
And FMP was not one of the Core Web Vitals and over time
that will probably be deprecated.
So you also asked about animation performance.
This is definitely a metric that we're looking at for the
future, maybe in 2021 or 2022.
So we know that the set of Core Web Vitals doesn't
capture the entire story of user
experience. And we're hoping that over
time we can improve it. And animation performance is

English: 
definitely an area of performance that we're exploring.
I think the lesson we talked about talking about,
if I got that right, I think I did.
I think you did.
Was First Input Delay, which
is not directly shown in
DevTools.
So it's sometimes called FID, right?
What is that and why?
Yeah. So First Input Delay, or FID
for short, represents the time from
when the user, you know, interacts with the page.
So, taps on the screen or you know,
clicks a keyboard key
to the point when the browser is able to respond
to that input event.
So this can - you might think that it's
always going to be instantaneous, like, you know, you click
on the screen and then something will happen.
But as users, we kind of know that that's not the case.
Oftentimes, you know, we've all had the experience of
clicking on something or tapping on something and
not having an instant response.

English: 
that we're exploring.
PAUL LEWIS: I think
the lesson that we
talked about talking about--
if I got that right.
I think I did--
was First Input Delay, which is
not directly shown in DevTools.
So what is-- it's
sometimes called FID, right?
What is that and why?
PHILIP WALTON: Yeah.
So First Input Delay or
FID for short,
represents the time
from when the user
interacts with the page, so taps
on the screen or clicks a key--
keyboard key, to the
point when the browser
is able to respond
to that input event.
So you might think
that it's always going
to be instantaneous, like, you
click on the screen and
then something will happen.
But as users, we kind of know
that that's not the case.
Oftentimes, you know, we've
all had the experience
of clicking on something
or tapping on something,
and not having an
instant response.

English: 
And so that's going to happen if, you know, there's
a bunch of JavaScript running on the page.
Maybe you have a large JavaScript file that the browser is
currently parsing and executing.
And then so, if at that exact time a user taps
on the screen, then the browser has to wait a little bit of
time before it can respond to that input event.
And so, FID quantifies that
duration of time.
And you mentioned that it's not exposed directly
in DevTools. And the reason is because, I'm assuming, you
know, you're the one who helped implement this.
But First Input Delay requires an input.
It requires a user.
And so, you know, in
many lab scenarios, there is no user.
And so you can't always measure First Input Delay that way.
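The delay itself is just a subtraction over a `first-input` performance entry. A minimal sketch, using a simulated entry because, as noted above, lab environments usually have no real user input:

```javascript
// FID is the gap between when the user interacted (startTime) and
// when the browser could begin running event handlers (processingStart).
function firstInputDelay(entry) {
  return entry.processingStart - entry.startTime;
}

// Wiring on a real page (guarded so the sketch also runs outside a browser):
if (typeof PerformanceObserver !== 'undefined') {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('FID:', firstInputDelay(entry), 'ms');
    }
  }).observe({ type: 'first-input', buffered: true });
}

// Simulated entry: tap at 1200 ms, handlers could start at 1250 ms.
console.log(firstInputDelay({ startTime: 1200, processingStart: 1250 })); // 50
```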
But we have another metric called 'Total Blocking Time'
that quantifies -
That we do have.
Yeah, that's great.
And it quantifies, like, how much time the main thread

English: 
And so this can
happen if you know
there's a bunch of JavaScript
running on the page.
Maybe you have a
large JavaScript
file that the
browser is currently
parsing and executing.
And then so if at that exact
time a user taps on the screen,
then the browser has to
wait a little bit of time
before it can respond
to that input event.
And so FID quantifies like,
that duration of time.
And you mentioned that it's not
exposed directly in DevTools.
And the reason is
because I'm assuming,
you know-- you're the one
who helped implement this--
but First Input Delay requires
an input, it requires a user.
And so in many lab
scenarios, there is no user.
And so you can't always measure
First Input Delay that way.
But we have another metric
called Total Blocking
Time that quantifies just how--
PAUL LEWIS: That, we do have.
PHILIP WALTON:
Yeah, that's great.
And that quantifies how often--
how much of the--
like, how much time the
main thread is blocked.

English: 
And a blocked main thread,
as I just mentioned,
contributes to the likelihood
that a user will interact
with a page, but
the browser won't
be able to respond right away.
So you said that Total
Blocking Time is in DevTools.
Can you show me where that is?
PAUL LEWIS: Yes!
PHILIP WALTON: Oh, I see there,
the bottom of the screen.
PAUL LEWIS: I have
long tasks over here.
And yeah, it is down there.
And it currently says
it's unavailable.
And we'll talk about
that more a little bit.
I've been working on that
feature in fact, today.
So I can tell you
a little bit more
about what's going
on there, too.
So what I'll do is
I'll come to Web.Dev.
I've cleared it.
And I'm just going
to hit Record.
And I'm going to hit Refresh.
And I don't expect
here that I'm going
to see any particular
blocking time,
because I've got a fast machine,
I'm on a good connection.
And yeah, you see right
down to the bottom here,
we have Total Blocking
Time and it's currently
set to 0 milliseconds.
So what that roughly
translates to over
here is when we zoom in on
these top level tasks, which

English: 
is blocked. And a blocked main thread, as I just
mentioned, contributes to, you know, the likelihood
that a user will interact with the page, but the browser
won't be able to respond right away.
So you said that Total Blocking Time is in DevTools.
Can you show me where that is?
Yes!
Oh, I see it there at the bottom of the screen.
I have Long Tasks over here.
And yeah, it is down there.
And it currently says it's unavailable, and I will talk
about that more a little bit. I've been working on that
feature, in fact, today.
So I can tell you a little bit more about what's going on
there too. So what I'll do is I've come to
web.dev, and I've cleared it.
And I'm just gonna record, and I'm gonna hit refresh.
And I don't expect here
that I'm gonna see any particular
blocking time because I've got a fast machine.
I'm on a good connection.
And yeah, you see, right down to the bottom here, we have
total blocking time and it's currently set to 0
milliseconds. So what that roughly translates to
over here is when we zoom in on these top-level tasks,

English: 
which are on the main thread, we have
no task that goes over 50 milliseconds.
So 50 milliseconds is our threshold for, "Hey,
this task is long and it's going to contribute
to the blocking time".
Right.
Because what we want to do is we want to keep a track on
tasks that go over 50 milliseconds because they're
the ones that are most likely, were the user to interact,
they're the ones that are most likely to prevent
the browser from being able to respond in an adequate
amount of time.
Right.
So we currently have no tasks -
So blocking time is defined as
any time greater than 50 milliseconds in
a task. So if the task is 49 milliseconds,
there's 0 blocking time.
And if a task is 51 milliseconds, there's 1 millisecond
blocking time. And just out of curiosity, some people ask
why, you know, why 50 milliseconds?
What's the thinking behind that?
Yeah.
So the answer is - you might have
heard of RAIL, the RAIL Performance Model, and you've heard

English: 
are on the main
thread, we have no task
that goes over 50 milliseconds.
So 50 milliseconds is
our threshold for hey,
this task is long.
And it's going to contribute
to the blocking time.
Because what we want
to do is we want
to keep a track on tasks
that go over 50 milliseconds,
because they're the ones
that are most likely--
were the user to
interact-- they're
the ones that are
most likely to prevent
the browser from
being able to respond
in an adequate amount of time.
So we currently have no tasks.
PHILIP WALTON: So blocking time
is defined as any time greater
than 50 milliseconds in a task.
So if a task is 49 milliseconds,
there's 0 blocking time.
And if a task is
51 milliseconds,
there's 1 millisecond
of blocking time.
And just out of
curiosity, some people
asked why 50 milliseconds?
What's the thinking behind that?
And so the answer
is that the idea--
you might have heard of RAIL,
the RAIL Performance Model.
And you've heard
oftentimes, people

English: 
oftentimes people say you should always respond within 100
milliseconds of user input.
And so the question is, why is 50 milliseconds the blocking
time? And the idea there is that if
you keep all of your tasks below 50 milliseconds, then
there's never a situation where two tasks can't both
run within the 100 millisecond threshold.
And so that's kind of, if people are wondering why that 50
millisecond time exists and why we chose that for the magic
number with total blocking time.
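That definition is simple enough to express directly. A sketch (the function name is ours, for illustration) of how Total Blocking Time falls out of a list of main-thread task durations:

```javascript
// Each task contributes only the portion beyond the 50 ms long-task
// threshold; tasks at or under 50 ms contribute nothing.
const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (total, duration) => total + Math.max(0, duration - LONG_TASK_THRESHOLD_MS),
    0
  );
}

console.log(totalBlockingTime([49])); // 0: under the threshold
console.log(totalBlockingTime([51])); // 1: one millisecond over
console.log(totalBlockingTime([30, 120])); // 70: only the 120 ms task contributes
```

DevTools applies this idea to the tasks observed between First Contentful Paint and Time to Interactive.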
Exactly. And, of course, if you were doing an animation,
then your task time really should be under like
10 or 12 milliseconds.
So it's sort of, you got to be context
aware. The 50 milliseconds number is a great number to have
in mind, especially for low performance.
But it does change depending on the context and whether
you're, say, animating or not. Now, as I said, we have
no tasks here that are
running long.
I mean, if I got a trace like this from somebody, I would
be very happy.
Perfect.

English: 
say you should always respond
within 100 milliseconds of user
input.
And so the question is,
why is 50 milliseconds
the blocking time?
And the idea there is
that if you ever have--
if you keep all of your
tasks below 50 milliseconds,
then there's never a situation
where two tasks can't both
run within the 100
millisecond threshold.
And so that's kind of
if people are wondering
why that 50 millisecond
time exists,
and why we chose that
for the magic number
with Total Blocking Time.
PAUL LEWIS: Exactly.
And of course, if you
were doing an animation,
then your task time really
should be under like, 10 or 12
milliseconds.
You've got to be context aware.
The 50 milliseconds number--
it's a great number to
have in mind, especially
for low performance.
But it does change,
depending on the context
and whether you're
say, animating or not.
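The 10-12 millisecond figure comes from frame-budget arithmetic: at 60 frames per second each frame lasts 1000 / 60, roughly 16.7 ms, and the browser needs a slice of that for style, layout, paint, and compositing. A sketch of that arithmetic; the overhead value below is an assumption for illustration, not a measured number:

```javascript
// Per-frame time available for your own JavaScript at a given frame
// rate, after subtracting an assumed browser rendering overhead.
const ASSUMED_BROWSER_OVERHEAD_MS = 5; // illustrative assumption

function frameBudgetMs(fps) {
  return 1000 / fps;
}

function scriptBudgetMs(fps, overheadMs) {
  return frameBudgetMs(fps) - overheadMs;
}

console.log(frameBudgetMs(60).toFixed(1)); // "16.7"
console.log(scriptBudgetMs(60, ASSUMED_BROWSER_OVERHEAD_MS).toFixed(1)); // "11.7"
```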
Now as I said, we have no tasks
here that are running long.
I mean, if I got a trace
like this from somebody,
I would be very happy.

English: 
Yeah, I wouldn't
complain at this at all.
But what I can do is I can
least simulate a slower device.
Like I did before, over
in my Capture settings,
I'm going to go to a
six-times slowdown.
And I'm expecting that
this 25 milliseconds here
is going to run long.
So this is some JavaScript
that's being evaluated.
So I've gone six
times slowdown.
I'm going to hit Record.
And I'm going to refresh again.
OK, I'm going to do two things.
I'm going to stop the recording
a little bit earlier than I
did last time.
But the first thing
to notice here
is tasks are now longer
because of the slowdown.
And if I zoom in on this task,
it's 176.55 milliseconds.
And you said it's qualified
for being a long task by what?
126.55 milliseconds.
OK?
So what we do is, after the 50
millisecond point on this task,
we do this candy striping here
and we also pop a red triangle
up into the top
right hand corner,
so that when you're looking
at a glance like, zoomed out,

English: 
I'd say, yeah I wouldn't complain at this
at all. But what I can do is I can at least simulate a
slower device like I did before
over in my Capture settings. I'm going to go to like a 6x
slowdown. And I'm expecting that this 25 milliseconds
here is going to run long. So this is some JavaScript
that's being evaluated.
So I've gone 6x slowdown.
I'm going to hit record, and I'm going to refresh again.
OK, I want to do two things.
I'm going to stop the recording a little bit earlier than I
did last time.
But the first thing to notice here is our tasks
are now longer because of the slowdown.
And if I zoom in on this task, it's
176.55 milliseconds.
And you said it's qualified for being a long task by
126.55 milliseconds.
OK? So what we do is after the 50 millisecond
point on this task, we do this candy striping here, and
we also pop a red triangle up into
the top right-hand corner. So when you're looking at a
glance like zoomed out, you get a sense of just how

English: 
many of your tasks are getting a bit long.
I think almost universally here, the ones that are running
long are JavaScript based.
If you again are looking at the Chrome User
Experience Report or Search Console's Core Web Vitals
report, and you see that you have a First Input Delay
that's higher than you would have expected for a certain
page. I think this is a great example of how you would go
about debugging that.
So, like, you might be on your fast Macbook
Pro laptop or something and not see any long task, but
if you go into DevTools and you throttle the CPU,
and then you start seeing a bunch of long tasks like shown
here, then that would help explain why.
Because if a user tried to interact with the page during
one of these long tasks, the browser would not be able to
respond and would have to wait until the task completed
before it could run those event handlers.
Yeah so Paul, I'm seeing it's saying 'unavailable' there
in the bottom in DevTools. What does that mean?
Yeah, so sometimes we do say 'unavailable'.
The reason is we wait for Blink to tell us when

English: 
you get a sense of just
how many of your tasks
are getting a bit long.
And I think almost
universally here,
the ones that are running
long are JavaScript-based.
PHILIP WALTON: So if you again,
are looking at the Chrome User
Experience Report or Search
Console's Core Web Vitals
report, and you see that you
have a First Input Delay that's
higher than you would have
expected for a certain page,
I think this is a great
example of how you
would go about debugging that.
So like, you might be
on your fast MacBook Pro
laptop or something, and
not see any long task.
But if you go into DevTools
and you throttle the CPU,
and then you start seeing
a bunch of long tasks
like shown here, then that
would help explain why--
because if a user tried to
interact with the page
during one of these
long tasks, the browser
would not be able to respond.
It would have to wait until
the task completed before it
could run those event handlers.
Yeah, so Paul, I'm seeing
it's saying unavailable there
in the bottom in DevTools.
What does that mean?
PAUL LEWIS: Yeah, so sometimes
we do say unavailable.
The reason is we wait
for Blink to tell us

English: 
when it's happy
for us to declare
the page interactive.
And at that point
it tells us how much
blocking time it measured.
And so sometimes if the
trace isn't long enough,
we don't actually
get that information.
So what I've been working
on actually recently
is adding in an estimate, which
is essentially counting up
the amount of candy
striping that we're
getting in those
top level records
so that we can at least
give you an estimate,
even if Blink hasn't given us
the kind of official answer.
So hopefully, you should see
that in Chrome Canary soon.
PHILIP WALTON: Yeah,
that makes sense.
Because yeah, Total
Blocking Time is--
technically the
definition is the amount
of blocking time between First
Contentful Paint and Time
to Interactive.
And so it makes
sense that DevTools
would wait until the
browser is interactive.
But yeah, that does
seem like a good feature
to just give an unofficial
total when it's not interactive.
PAUL LEWIS: Yeah, exactly.
So now we've talked about
FCP, LCP, layout shifting,

English: 
it's happy for us to declare
the page interactive. And at that point, it tells us how
much blocking time it measured.
And so sometimes if a trace isn't long enough, we don't
actually get that information. So what I've been working
on, actually, recently is adding in an estimate which is
essentially counting up the amount of candy striping that
we're getting in those top-level records so that we can at
least give you an estimate, even if Blink hasn't given us
the, kind of, official answer.
So hopefully you should see that in Chrome Canary soon.
Yeah, that makes sense because, yeah, total blocking
time is, technically, the definition is the amount of
blocking time between First Contentful Paint
and Time to Interactive.
And so it makes sense that DevTools would wait until the
browser is interactive. But yeah, that does seem like a
good feature to just give, like,
an unofficial total when it's not interactive.
Yeah. Exactly.
So now we've talked about FCP, LCP, layout shifting, and
long tasks.
And FID.
Yeah.

English: 
If I was a developer who wanted to know more about these
things, as well as playing with it in DevTools, where would
I go and get more information?
That's a great question. You can go to web.dev/vitals.
And that will
have all the information about the definitions of the
metrics, links to guides, and how to optimize for them,
links to more information about all the tools that support
them and everything like that. So definitely the best place
is to go to web.dev/vitals.

English: 
and long tasks, and FID.
If I was a developer
who wanted to know more
about these things, as well
as playing with it in DevTools,
where would I go and
get more information?
PHILIP WALTON: Well,
that's a great question.
You can go to Web.Dev/vitals.
And that will have
all the information
about the definitions
of the metrics,
links to guides on how
to optimize for them,
links to more information
about all the tools
that support them,
everything like that.
So definitely the best place
is to go to Web.Dev/vitals.
[MUSIC PLAYING]
[AUDIO OUT]
[MUSIC PLAYING]

English: 
Hello everyone, thank you for joining us today.
My name is Sebastian Benz. I'm part of the AMP Developer
Relations Team.
And my name is Naina Raisinghani, and I'm a product manager
on the AMP project.
We want to talk about the work we are doing on AMP to make
web development less painful and developers more
productive.
Yeah, I'm incredibly excited.
So let's dive right into it.
So Naina, we would be remiss if we talked about
AMP and didn't talk about the impact of Google's recent
announcement around the page experience ranking signal.
Absolutely. So even before we can actually start talking
about AMP and page experience, first, let's just talk about
what the announcement is.
In May, the Google Search Team announced that they're going
to measure how the pages experienced by the user, in
addition to prior signals, such as a page's usefulness.
And this whole suite of measurements is called 'page
experience'.
It uses Core Web Vitals, which the Chrome team announced
earlier that month, and adds other preexisting signals,

English: 
SEBASTIAN BENZ: Hello, everyone.
Thank you for joining us today.
My name is Sebastian Benz.
I'm part of the AMP
Developer Relations team.
NAINA RAISINGHANI: And my
name is Naina Raisinghani.
And I'm a product manager
on the AMP Project.
SEBASTIAN BENZ: We want to
talk about the work we're
doing on AMP to make web
development less painful
and developers more productive.
NAINA RAISINGHANI: Yeah.
I'm incredibly excited.
So let's dive right into it.
SEBASTIAN BENZ: So
Naina, we would be remiss
if we talked about
AMP and didn't
talk about the impact of
Google's recent announcement
around the page
experience ranking signal.
NAINA RAISINGHANI: Absolutely.
So even before we
can actually start
talking about AMP
and page experience,
first, let's just talk about
what the announcement is.
In May, the Google
Search Team announced
that they're going to measure
how the page is experienced
by the user, in addition to
prior signals such as a page's
usefulness.
And this whole suite
of measurements
is called Page Experience.
It uses Core Web Vitals, which
the Chrome team announced
earlier that month, and adds
other pre-existing signals,

English: 
such as mobile-friendliness, safe-browsing, and HTTPS on
top of it. And the great thing is that these
metrics line up really well with AMP's design goals
of making sure that users are getting a content forward
experience and are able to consume content
without having to download unnecessary resources or wait
for unnecessary processing.
Okay. So how does AMP do against page experience?
Good catch, actually! We did some analysis and we saw that
a majority of AMP pages actually already do pretty
well against this criteria.
This means that AMP is really living up to the intention of
being a well-lit path to creating a great page experience.
So you said that a majority of pages meet the criteria,
but not all?
Yep. So in the cases where the AMP page doesn't
perform well against the page experience criteria, we saw
that they failed for reasons that were outside of AMP's
control, such as overly large images being served
on mobile devices, or the server response time being too
slow.

English: 
such as mobile
friendliness, safe browsing,
and HTTPS on top of it.
And the great thing is that
these metrics line up really
well with AMP's design
goals of making sure
that users are getting a
content-forward experience
and are able to consume
content without having
to download
unnecessary resources
or wait for
unnecessary processing.
SEBASTIAN BENZ: OK, so how does
AMP do against Page Experience?
NAINA RAISINGHANI:
Good catch, actually.
We did some analysis.
And we saw that a
majority of AMP pages
actually already do pretty
well against this criteria.
This means that AMP
is really living up
to the intention of
being a well-lit path
to creating a great
page experience.
SEBASTIAN BENZ: So you said
that a majority of AMP pages
meet the criteria, but not all?
NAINA RAISINGHANI: Yep.
So in the cases where the
AMP page doesn't perform well
against the Page
Experience criteria,
we saw that they
failed for reasons
that were outside
of AMP's control,
such as overly
large images being
served on mobile devices
or the server response time
being too slow.
SEBASTIAN BENZ: So that's a
really interesting key aspect

English: 
of Page Experience, that the
Core Web Vitals are measured
from real user data.
This means to improve your
Core Web Vitals, for example,
it's a good idea to
use a CDN to ensure
that users around the world get
your content delivered quickly.
NAINA RAISINGHANI: Yeah.
And just like other
libraries and frameworks,
the AMP project will be
monitoring these metrics
closely and continue
investing in AMP's performance
by our performance
working group.
But more generally,
it's really important
to note that AMP
intends to reduce
the ongoing effort needed
to create pages that
offer a great user experience.
And we intend to
do so by helping
offload tasks and worries
such as browser compatibility,
accessibility, JavaScript
budgets, et cetera.
SEBASTIAN BENZ: At its core,
AMP is a UI component library.
Before using AMP, I often
struggled with too much choice
when it came to adding
a new feature to a site.
Having to decide
whether I should build
my own carousel, which
is a bad idea, or finding
a suitable existing
implementation
could take a lot
of time and energy.

English: 
That's a really interesting key aspect of page experience,
that the Core Web Vitals are measured from real user
data. This means to improve your
Core Web Vitals, for example, it's a good idea to use a CDN
to ensure that users around the world get your content
delivered quickly.
Yeah. And just like other libraries and frameworks, the AMP
project will be monitoring these metrics closely and
continue investing in AMP's performance by our Performance
Working Group. But more generally, it's really important to
note that AMP intends to reduce the ongoing
effort needed to create pages that offer a great user
experience. And we intend to do so by helping offload
tasks and worries such as browser compatibility,
accessibility, JavaScript budgets, etc.
At its core, AMP is a UI component library.
Before using AMP, I often struggled with too much
choice when it came to adding a new feature to a site.
Having to decide whether I should build my own carousel,
which is a bad idea, or finding a suitable existing
implementation, could take a lot of time and energy.

English: 
With AMP, you get a flexible, high quality UI
component out of the box.
And you can be sure that these perform well, are
accessible, and play along well with each other.
Recently I talked to a developer from an agency which uses
AMP for building most of their clients' websites.
They told me that one of their design interns had been able
to build a fully interactive website for one of their
clients without any JavaScript knowledge.
I think that's fantastic and a great example for the value
of a good UI component library.
It makes it easy to get started for beginners and allows
experienced developers to focus on creating new
user experiences instead of bikeshedding technical details.
And that's exactly what we're focusing on in 2020.
We want AMP to be a cost effective and simple solution
that allows developers to focus on their product and not
worry about other things like performance, infrastructure,
etc.
And this is an effort that we're calling 'AMP as a
Service'. The idea here is to use

English: 
With AMP, you get a flexible,
high quality component out
of the box.
And you can be sure that these
perform well, are accessible,
and play along well
with each other.
Recently, I talked
to a developer
from an agency which
uses AMP for building
most of their clients' websites.
They told me that one
of their design interns
had been able to build a
fully interactive website for one
of their clients without
any JavaScript knowledge.
I think that's fantastic and
a great example for the value
of a good UI component library.
It makes it easy to get
started for beginners,
and allows
experienced developers
to focus on creating
new user experiences
instead of bikeshedding
technical details.
NAINA RAISINGHANI: And
that's exactly what
we're focusing on in 2020.
We want AMP to be a cost
effective and simple solution
that allows developers
to focus on their product
and not worry about other
things like performance,
infrastructure, et cetera.
And this is an effort that
we're calling AMP as a Service.

English: 
The idea here is to use
AMP as a turnkey solution
to easily create and then
maintain a great page
experience, and make developers
more productive simultaneously.
SEBASTIAN BENZ: So what
exactly do you intend to do?
NAINA RAISINGHANI: So the first
thing that we really want to do
is address the feedback
that AMP developers have.
And some of the top complaints
that we've seen with AMP
is first, the need
for custom JavaScript,
and second, the fact
that the inline CSS limit
is too small at 50 kilobytes.
Now we addressed the need
for custom JavaScript
by adding amp-script,
a component
that allows you to add
custom JavaScript to AMP
to help fulfill any
business specific
need that AMP doesn't solve.
And if you want to hear more
here, you should stay tuned,
because our colleagues Ben
Morss and Crystal Lambert
will be talking you
through this in their talk
titled Workerized JS.
Now with our CSS
limit, the intention
was to promote CSS hygiene.
But we got feedback
that the limit
was too tight at 50 kilobytes.
So we worked with
the AMP community
to understand what a
reasonable CSS limit could be.
And after working with plugin
developers, news publishers,

English: 
AMP as a turnkey solution to easily create
and then maintain a great page experience and make
developers more productive simultaneously.
So what exactly do you intend to do?
So the first thing that we really want to do is address the
feedback that AMP developers have.
And some of the top complaints that we've seen with AMP is,
first, the need for custom JavaScript, and second, the fact
that the inline CSS limit is too small at 50 kilobytes.
Now we addressed the need for custom JavaScript by adding
amp-script, a component that allows you to add custom
JavaScript to AMP to help fulfill any business specific
need that AMP doesn't solve.
And if you want to hear more here, you should stay tuned,
because our colleagues Ben Morss and Crystal Lambert will
be talking you through this in their talk titled
"Workerized JS".
Now, with our CSS limit, the intention was to promote CSS
hygiene. But we got feedback that the limit was too
tight at 50 kilobytes.
So we worked with the AMP community to understand what a
reasonable CSS limit could be.
And after working with plugin developers, news publishers,
and e-commerce site creators, we realized that most

English: 
interactive experiences could actually fit within 75
kilobytes of CSS.
And so that's what we made our new limit.
75 kilobytes.
And this really seems to have hit the sweet spot.
With a 50 kilobyte limit, I heard from many developers that
they've been struggling with keeping their CSS below the
limit, but I have yet to hear from someone struggling
with the 75 kilobyte limit.
Yeah. Fingers crossed that this limit works.
Now aside from addressing feedback, we want to make
developers more productive.
We want to help them create and maintain performant sites
as well.
The problem usually wasn't with AMP itself, but that they
had to maintain two versions of the pages.
The canonical one and an additional AMP one.
Yep.
That's by far the largest problem that AMP developers have.
The problem is even more acute if you have separate teams
that are working on the AMP and mobile web experience,
especially if they're in separate parts of the
organization.
To be honest, the AMP team itself advocated for paired

English: 
and e-commerce site
creators, we realized
that most interactive
experiences could actually
fit within 75
kilobytes of CSS.
And so that's what we
made our new limit:
75 kilobytes.
SEBASTIAN BENZ: And
this really seems
to have hit the sweet spot.
With a 50 kilobyte limit, I
heard from many developers
that they've been
struggling with keeping
their CSS below the limit.
But I have yet to hear
from someone struggling
with the 75 kilobyte limit.
NAINA RAISINGHANI: Yeah.
Fingers crossed that
this limit works.
Now aside from
addressing feedback,
we want to make developers
more productive.
We want to help them create and
maintain performant sites as
well.
SEBASTIAN BENZ: The problem
usually wasn't with AMP itself,
but that they had to
maintain two versions
of the pages: the canonical
one and an additional AMP
one.
NAINA RAISINGHANI: Yep, that's
by far the largest problem
that AMP developers have.
The problem is even more acute
if you have separate teams that
are working on the AMP
and mobile web experience,
especially if they're
in separate parts
of the organization.
To be honest, the AMP
team itself advocated

English: 
for paired AMP experiences
when we got started.
We saw it as an easy way to
create AMP pages with the least
amount of effort.
But talking to
developers over time
has made us realize
the amount of pain that
can be associated with
maintaining this dual code
base, and that this outweighs
the initial gains of actually
creating the AMP page quickly.
And Google's page
experience announcement
is a great move for AMP
developers in this regard.
It allows development
teams to really think
about how they want to continue
investing in AMP going forward.
SEBASTIAN BENZ: OK.
So say I'm publishing
paired AMP pages, because I
want to be in the Google
top stories carousel.
Should I continue doing this?
NAINA RAISINGHANI:
So in that case,
I would ask for you to consider
the maintenance costs that you
are incurring by having
to maintain an AMP
version off your code and an
non-AMP version of your code.
Now that you have the option
to be flexible with your tech
stack, you should
be looking to pick
a setup that allows all
your web developers to be
productive from day one.
SEBASTIAN BENZ:
So you're telling
those who are going the paired
AMP route to completely drop
AMP support?

English: 
AMP experiences when we got started.
We saw it as an easy way to create AMP pages with
the least amount of effort.
But talking to developers over time has made us realize
the amount of pain that can be associated with maintaining
this dual code base and that this outweighs the initial
gains of actually creating the AMP page quickly.
And Google's page experience announcement is a great move
for AMP developers in this regard.
It allows development teams to really think about how they
want to continue investing in AMP going forward.
OK, so say I'm publishing paired AMP pages because I want
to be in the Google Top Stories Carousel.
Should I continue doing this?
So in that case, I would ask for you to consider the
maintenance costs that you're incurring by having to
maintain an AMP version of your code and a non-AMP version of
your code. Now that you have the option to be flexible
with your tech stack, you should be looking to pick a setup
that allows all your web developers to be productive from
day one.
So you're telling those who are going the paired AMP route
to completely drop AMP support?

English: 
No. What we're telling them is to pick something that makes
them the most productive.
And this could be a number of things.
Developers could pick experiences on their site that could
actually benefit from AMP and only invest in AMP for those
experiences. Or they could go fully AMP first
across their whole site.
If they actually believe that AMP is able to meet their
needs. And we've gotten pretty positive feedback from
developers who use AMP as their main library because they
think that AMP makes them more productive.
And this is what we see AMP's future as - a component
library that helps developers be more productive.
And this is why we're investing in allowing everyone to use
AMP components, even outside of AMP pages.
It's an effort that we're calling Bento AMP.
And we look forward to releasing it later this year.
I'm really excited about this.
Focusing on AMP as a UI component library is a
much healthier direction, in my opinion.
And I'm very happy that we're making this move.
Another area where we're taking our learnings from AMP and
are making them available to a wider audience are

English: 
NAINA RAISINGHANI: No.
What we're telling them
is to pick something that
makes them the most productive.
And this could be
a number of things.
Developers could
pick experiences
on their site that could
actually benefit from AMP,
and only invest in AMP
for those experiences.
Or they could go fully AMP
first across their whole site
if they actually
believe that AMP
is able to meet their needs.
And we've gotten pretty
positive feedback
from developers who use
AMP as their main library,
because they think that AMP
makes them more productive.
And this is what we
see AMP's future as,
a component library that helps
developers be more productive.
And this is why we're
investing in allowing
everyone to use AMP components
even outside of AMP pages.
It's an effort that
we're calling Bento AMP.
And we look forward to
releasing it later this year.
SEBASTIAN BENZ: I'm
really excited about this.
Focusing on AMP as a
UI component library
is a much healthier
direction, in my opinion.
And I'm very happy that
we're making this move.
Another area where we are
taking our learnings from AMP
and making them available
to a wider audience,

English: 
are server-side
optimizations for AMP pages.
At the beginning, AMP pages were
mostly served from AMP caches.
And these perform additional
optimizations enabling
AMP's strong user experience.
However, many
developers started using
AMP for building
their whole website.
In these cases, AMP pages
are not served from a cache.
And there's been room for
improving AMP's loading
performance.
To address this, we
created AMP Optimizer,
a tool to bring AMP cache
optimizations to publishers.
For example, we use AMP
Optimizer for the official AMP
website, amp.dev.
And by using AMP Optimizer, we
achieve the same performance
as when the page is
served from an AMP cache.
And what I really
like, AMP Optimizer
fits really well into our
idea of AMP as a Service.
It enables us to automate web
development best practices.
For example, the latest
AMP Optimizer release
added support for
image srcset generation
to make it easier to
serve optimized images.

English: 
server-side optimizations for AMP pages.
At the beginning, AMP pages were mostly served from AMP
caches. And these perform additional optimizations
enabling AMP's strong user experience.
However, many developers started using AMP for building
their whole website.
In these cases, AMP pages are not served from a cache.
And there's been room for improving AMP's loading
performance. To address this, we created
AMP Optimizer, a tool to bring AMP cache optimizations
to publishers.
For example, we use AMP Optimizer for the official AMP
website, amp.dev.
And by using AMP Optimizer, we achieve the same performance
as when the page is served from an AMP cache.
And what I really like, AMP Optimizer fits really well
into our idea of AMP as a service.
It enables us to automate web development best practices.
For example, the latest AMP Optimizer release added support
for image srcset generation to make it easier to serve
optimized images.
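To picture what srcset generation saves you, here is a toy function (an illustration only, not AMP Optimizer's actual API; the "-320w" file-naming convention is made up for this sketch) that derives a srcset value from a base image URL, the kind of markup chore the optimizer automates:

```javascript
// Illustration only: derive a srcset value from a base image URL,
// the kind of repetitive markup AMP Optimizer generates for you.
function buildSrcset(src, widths) {
  const dot = src.lastIndexOf('.');
  return widths
    .map((w) => `${src.slice(0, dot)}-${w}w${src.slice(dot)} ${w}w`)
    .join(', ');
}

console.log(buildSrcset('hero.jpg', [320, 640, 1280]));
// hero-320w.jpg 320w, hero-640w.jpg 640w, hero-1280w.jpg 1280w
```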
Another example is JavaScript modules.

English: 
The AMP project is soon going to start serving the AMP
runtime and components as JavaScript modules.
And if you're using AMP Optimizer, you will automatically
get the benefit of smaller runtime modules once this
becomes available.
That sounds so great.
And I'm really excited about all the improvements that are
coming to Optimizer.
But what's the best way for developers to actually include
AMP Optimizer?
I mean, of course you could include it normally in your
build pipeline or your rendering pipeline.
But ideally, you shouldn't have to think about how to
integrate AMP Optimizer.
Our goal is to make the integration seamless by integrating
AMP Optimizer into existing frameworks and CMSes.
The Next.js integration is a great example for what a good
AMP development experience can look like.
Next.js has a special AMP mode that you enable
via flag. And this will result in the generated
page being valid AMP.
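That flag is a per-page config export. A minimal sketch (the file name and component are hypothetical; the `amp: true` config is the mechanism Next.js documents for its AMP mode):

```jsx
// pages/index.js: a hypothetical page opting into Next.js's AMP mode.
export const config = { amp: true }; // the flag that makes the output valid AMP

export default function Home() {
  // Render as usual; Next.js emits this page as an AMP document.
  return <h1>Hello, AMP!</h1>;
}
```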
The cool thing is that you can start using AMP components
straight out of the box, and you don't need to worry about

English: 
Another example is
JavaScript modules.
The AMP project is
soon going to start
serving the AMP
runtime and components
as JavaScript modules.
And if you're using
AMP Optimizer,
you will automatically get
the benefit of smaller runtime
modules once this
becomes available.
NAINA RAISINGHANI:
That sounds so great.
And I'm really excited
about all the improvements
that are coming to Optimizer.
But what's the best way for
developers to actually include
AMP Optimizer?
SEBASTIAN BENZ: I
mean, of course,
you could include it normally
in your build pipeline
or your rendering pipeline.
But ideally, you
shouldn't have to think
about how to integrate AMP Optimizer.
Our goal is to make the
integration seamless
by integrating AMP
Optimizer into existing
frameworks and CMSes.
The Next.js integration
is a great example
for what a good AMP development
experience can look like.
Next.js has a special AMP
mode that you enable via flag.
And this will result
in the generated page
being valid AMP.
The cool thing is
that you can start
using AMP components
straight out of the box,

English: 
and you don't need to worry
about the AMP boilerplate
or importing AMP components.
All of this is
automatically added
in the background by AMP
Optimizer, which is
integrated tightly into Next.js.
And the resulting editing
experience is really nice.
And it feels like web
development from 30 years ago.
And a great example
for this is Axios.
They recently launched
their new site,
and it's completely built
on AMP using Next.js.
And they've been really
happy with their experience.
Another example for
a CMS that
has these features
integrated is WordPress.
Recently, the official
AMP WordPress plugin
started publishing
optimized AMP by default.
So this means if you build
an AMP page using WordPress,
you'll get the best serving
performance for AMP.
NAINA RAISINGHANI: Wow.
It's really exciting to see
so many new experiences that
are being built using AMP
and in fact, AMP Optimizer.
And I'm really
hoping to see more.
But that's it.
That's our time.
And that's our vision for 2020.

English: 
the AMP boilerplate or importing AMP components.
All of this is automatically added in the background by AMP
Optimizer, which is integrated tightly into
Next.js.
And the resulting editing experience is really nice,
and it feels like web development from 30 years ago.
And a great example for this is Axios.
They recently launched their new site and it's
completely built on AMP using Next.js.
And they've been really happy with the experience.
Another example for a CMS that has these features
integrated is WordPress.
Recently, the official AMP WordPress plugin started
publishing optimized AMP by default.
So this means if you build an AMP page using WordPress,
you get the best serving performance for AMP.
Wow. It's really exciting to see so many new
experiences that are being built using AMP, and in fact,
AMP Optimizer. And I'm really hoping to see more.
But that's it. That's our time.
And that's our vision for 2020.

English: 
The Google page experience announcement allows AMP to focus
on what it does best.
Be a UI component library that helps developers be more
productive by helping them deploy web development best
practices at scale.
And if you want to read more about AMP's plans for 2020,
please read our blog post at go.amp.dev/service.
And with that, thank you for joining us.
If you want to learn more about AMP in general, you can
visit amp.dev today.
Thanks, everyone! And we will also be hanging out in the
chat to help answer your questions for a bit.
Hey there! I'm Ben Morss.
I'm a Developer Advocate working on the Web and on AMP.
And I'm Crystal Lambert, technically a writer, working for
the web on the AMP project.
We're here to talk about something we think is pretty cool.
A new way to run JavaScript in Web Workers with
AMP.

English: 
The Google Page
Experience announcement
allows AMP to focus
on what it does best,
be a UI component library
that helps developers
be more productive
by helping them
deploy web development
best practices at scale.
SEBASTIAN BENZ: And if you want
to read more about AMP's plans
for 2020, please read our blog
post at go.amp.dev/service.
NAINA RAISINGHANI: And with
that, thank you for joining us.
If you want to learn more
about AMP in general,
you can visit amp.dev today.
SEBASTIAN BENZ: Thanks everyone.
And we will also be
hanging out in the chat
to answer your
questions for a bit.
[MUSIC PLAYING]
BEN MORSS: Hey
there, I'm Ben Morss.
I'm a developer advocate
working on the web and on AMP.
CRYSTAL LAMBERT: And I'm
Crystal Lambert, technically
a writer working for the
web on the AMP project.
BEN MORSE: We're here
to talk about something
we think is pretty cool,
a new way to run JavaScript
in Web Workers with AMP.
CRYSTAL LAMBERT: Awesome.

English: 
Awesome. Let's get started.
But Ben, what is this slide?
JavaScript "foe"?
I love JavaScript.
It lets me do whatever I want.
Sure. JavaScript is amazing.
It's made the modern web possible.
But we both know that many websites are too slow, and
that's partially caused by lots of JavaScript.
It's one of the reasons why people like this are staring at
their phones, waiting for our sites to load.
Yeah, that's no good.
You think the more JavaScript, the better?
I could write more code to make things quicker.
Well, it's like too much ice cream or time spent at home.
You don't want to overdo it.
Well, what about these Web Workers?
I hear you can use them to get JavaScript off the main
thread, but I'm not sure how to get started.
Yeah, it can be pretty intimidating because the -
Oh! And another thing!
AMP doesn't let me write my own JavaScript, period.
Can we make a video about that, too?
Well, conveniently, Crystal, this video can be about both
those things because AMP now provides an easy way to

English: 
Let's get started.
But Ben, what is this slide?
JavaScript "foe"?
I love JavaScript.
It lets me do whatever I want.
BEN MORSE: Sure,
JavaScript is amazing.
It's made the modern
web possible.
But we both know that many
websites are too slow.
And that's partially caused
by lots of JavaScript.
That's one of the reasons
why people like this
are staring at their phones
waiting for sites to load.
CRYSTAL LAMBERT:
Yeah, that's no good.
You'd think the more
JavaScript, the better.
I could write more code
to make things quicker.
BEN MORSE: Well, it's like
too much ice cream or time
spent at home.
You don't want to overdo it.
CRYSTAL LAMBERT: Well, what
about these web workers?
I hear you can use them to get
JavaScript off the main thread.
But I'm not sure
how to get started.
BEN MORSE: Yeah, it can
be pretty intimidating,
because the thing is--
Oh!
CRYSTAL LAMBERT:
And another thing.
AMP doesn't let me write
my own JavaScript, period.
Can you make a video
about that too?
BEN MORSE: Well,
conveniently Crystal,
this video can be about
both those things,
because AMP now provides
an easy way to use workers.

English: 
use Workers. So we're going to show JavaScript developers
how AMP makes it easy to try Web Workers.
And for people that are already using AMP, we'll show you
how you can write your own JavaScript without breaking
AMP's performance guarantees.
For everyone, it's a nice way to run JavaScript in a way
that's unlikely to harm your Web Vitals scores.
Oh yeah! I'm hearing lots about these "Web Vitals".
That's, uhh, oh!
Our page's First Input Delay, Largest Contentful Paint,
and Cumulative Layout Shift.
Right?
Those are the three. So let's get going!
Another slide.
What is this, a guy knitting?
Yeah, it's a transition slide.
Well, it does remind me, why is the
web single-threaded?
I mean, every modern OS has multiple threads.
Why hasn't the web caught up?
Honestly, it's just how browsers and JavaScript have always
been. I mean, of course, modern browsers can multitask.
They can do more than one thing at a time, but each browser
tab has a single thread for the UI.

English: 
So we're going to show
JavaScript developers
how AMP makes it easy
to try Web Workers.
And for people who are
already using AMP,
we'll show you how you can
write your own JavaScript
without breaking AMP's
performance guarantees.
For everyone, it's
a nice way to run
JavaScript in a way that's
unlikely to harm your Web
Vitals scores.
CRYSTAL LAMBERT: Oh yeah.
I'm hearing lots
about these Web Vitals.
That's uh, oh.
Our page's First Input Delay,
Largest Contentful Paint,
and Cumulative
Layout Shift, right?
BEN MORSE: Those are the three.
So let's get going.
CRYSTAL LAMBERT: Another slide.
What is this?
A guy knitting?
BEN MORSE: Yeah, it's
a transition slide.
CRYSTAL LAMBERT: Well,
it does remind me,
why is the web single threaded?
I mean, every modern OS
has multiple threads.
Why hasn't the web caught up?
BEN MORSE: Honestly,
it's just how browsers
and JavaScript have always been.
I mean, of course, modern
browsers can multitask.
They can do more than
one thing at a time.
But each browser tab has a
single thread for the UI.
Only one process
can make changes

English: 
Only one process can make changes to the screen at a time.
That means JavaScript can block the browser from doing
things and vice versa.
But wait. JavaScript is asynchronous, right?
So whenever an event gets fired, doesn't the event
handler's code start running right away?
Well, sure. But all of the code on a web page still runs in
a single thread.
This diagram illustrates JavaScript's event loop.
So the browser fires an event. If you have an event
handler, that code runs until it's done.
As other events fire, they get added to a queue.
I see. So if my code is handling one event
and another event fires, the browser just can't
spin up another thread.
Instead, it has to wait for that event in the queue?
Right. It has to wait until the current code is done.
Let's say the user taps a button while your code is running
a long task.
Well, JavaScript can't handle any other event until your
task completes.
So, the next bit of code will be delayed.
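You can see this single-threaded behavior in a few lines of JavaScript (runnable as-is in Node or a browser console): a zero-delay timer still has to wait for the current task to finish.

```javascript
// A timer due "now" cannot fire until the current synchronous task ends:
// all of a page's JavaScript shares one thread with the event loop.
const start = Date.now();
let fired = false;

setTimeout(() => { fired = true; }, 0); // queued, not run immediately

// A long synchronous task blocks the thread for ~150 ms.
while (Date.now() - start < 150) {}

// The timer is overdue, but its callback is still waiting in the queue.
console.log(fired); // false; it only runs after this code completes
```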
Worse still, the browser may be unable to change the UI

English: 
to the screen at a time.
That means JavaScript can
block the browser from doing
things and vice versa.
CRYSTAL LAMBERT:
But wait, JavaScript
is asynchronous, right?
So whenever an event gets fired,
doesn't the event handler's code
start running right away?
BEN MORSE: Well sure, but all
of the code on a web page still
runs in a single thread.
This diagram illustrates
JavaScript's event loop.
So the browser fires an event.
If you have an event handler,
that code runs until it's done.
As other events fire,
they get added to a queue.
CRYSTAL LAMBERT: Mm, I see.
So if my code is handling one
event, and another event fires,
the browser just can't
spin up another thread?
Instead, it has to wait for
that event in the queue?
BEN MORSE: Right.
It has to wait until the
current code is done.
Let's say the user taps a
button while your code is
running a long task.
Well, JavaScript can't
handle any other event
until your task completes.
So the next bit of
code will be delayed.

English: 
Worse still, the browser may
be unable to change the UI,
because it's waiting
for your code.
CRYSTAL LAMBERT: I guess if
it weren't that way everything
would just be fighting
for control over the DOM,
and you'd have race
conditions and general chaos.
BEN MORSE: Oh yeah,
and unfortunately,
to make JavaScript
thread safe, you'd
have to completely rewrite it.
CRYSTAL LAMBERT: All right.
This is making some sense.
Not only can
excessive JavaScript
make your page slow
to load, it can also
make the page slow to respond
to users' interactions.
I'm guessing this is
where Web Workers come in?
BEN MORSE: Yes.
JavaScript in a Web Worker
runs in a different thread.
And this is not a new idea.
Web Workers have been
around for about 10 years.
CRYSTAL LAMBERT: You're kidding?
10 years?
That's longer than I've
been working on the web.
Why am I just
learning about them?
BEN MORSE: I think
because their limits
have made them harder to use.
Workers can't cause
race conditions
with other workers
or the main thread
because they lack access to
the DOM, or the global scope.
Instead, a worker communicates
with the main thread

English: 
because it's waiting for your code.
I guess if it weren't that way, everything would just be
fighting for control over the DOM.
You'd have race conditions and general chaos.
Oh yeah. And unfortunately, to make JavaScript thread-safe,
you'd have to completely rewrite it.
All right. This is making some sense.
Not only can excessive JavaScript make your page slow to
load, it can also make the page slow to respond
to users' interactions.
I'm guessing this is where Web Workers come in.
Yes. JavaScript in a Web Worker runs in a different thread.
And this is not a new idea.
Web Workers have been around for about ten years.
You're kidding! Ten years? That's longer than I've been
working on the web.
Why am I just learning about them?
I think because their limits have made them harder to use.
Workers can't cause race conditions with other Workers or
the main thread because they lack access to the DOM
or the global scope.
Instead, a worker communicates with the main thread by

English: 
passing messages back and forth where each message contains
an object.
There are libraries that make this simpler.
Notably, Comlink by Surma, and Workerize
by Jason Miller. But workers can't access the DOM.
So, workers are great for doing long tasks
off the main thread.
But what if you want access to the DOM?
That's a big obstacle.
And that's where amp-script comes to the rescue!
I knew at some point we were gonna bring AMP into this.
We did. So in 2018, the AMP
project released an open source library called WorkerDOM.
WorkerDOM makes a copy of the DOM for the Worker's use.
WorkerDOM also recreates a subset of the standard DOM
API. This lets the Worker manipulate the DOM
and make changes on the page using standard techniques.
WorkerDOM keeps the copy of the DOM and the real DOM in
sync. So, when something changes in the real
DOM, WorkerDOM sends a message to the Worker to
make that change in the copy.
And if your Worker changes its copy, WorkerDOM sends a

English: 
by passing messages back and
forth, where each message
contains an object.
There are libraries
that make this simpler,
notably Comlink by Surma,
and Workerize by Jason Miller.
But workers can't
access the DOM.
CRYSTAL LAMBERT: So
workers are great for doing
long tasks off the main thread.
But what if you want
access to the DOM?
That's the big obstacle.
BEN MORSE: And that's where
AMP script comes to the rescue.
CRYSTAL LAMBERT: I
knew at some point,
we were going to
bring AMP into this.
BEN MORSE: We did.
So in 2018, the AMP Project
released an open source library
called Worker DOM.
Worker DOM makes a copy of
the DOM for the worker's use.
Worker DOM also recreates a
subset of the standard DOM API.
This lets the worker
manipulate the DOM
and make changes on the page
using standard techniques.
Worker DOM keeps the copy of the
DOM and the real DOM in sync.
So when something
changes in the real DOM,
Worker DOM sends a
message to the worker
to make that change in the copy.
And if your worker
changes its copy,

English: 
message over to the real DOM, and the same change gets made
there.
So, I heard you say "AMP".
Is all of this only true for AMP, or can I use WorkerDOM
with a different stack?
You can import WorkerDOM into your own project, but
WorkerDOM is super useful for AMP since it provides a
way to run JavaScript in a sandbox, where it can't run
rampant and break AMP's performance guarantees.
AMP encapsulates WorkerDOM in a component called
amp-script.
This is a little abstract.
Can you show me some code?
Code, I understand.
OK, fine.
Let's make a basic "Hello, World" example with
.
In the <body>, we insert an <amp-script> component.
The DOM it contains gets passed to the Worker.
So here, to the Worker, that entire DOM is that
<h1> tag.
Next we put our code in a <script> tag.
Whoa, that's weird.
You set the "type" to plain text instead of
"text/javascript".
Yeah we did. That's so the browser won't see it as

English: 
Worker DOM sends a message
over to the real DOM.
And the same change
gets made there.
CRYSTAL LAMBERT: So
I heard you say AMP.
Is all of this
only true for AMP?
Or can I use Worker DOM
with a different stack?
BEN MORSE: You can import Worker
DOM into your own project.
But Worker DOM is
super useful for AMP,
since it provides a way to run
JavaScript in a sandbox, where
it can't run rampant and break
AMP's performance guarantees.
AMP encapsulates Worker DOM in
a component called AMP script.
CRYSTAL LAMBERT: This
is a little abstract.
Can you show me some code?
Code I understand.
BEN MORSE: OK, fine.
Let's make a basic Hello
World example with AMP script.
In the body, we insert
an AMP script component.
The DOM it contains gets
passed to the worker.
So here to the worker, that
entire DOM is that H1 tag.
Next, we put our
code in a script tag.
CRYSTAL LAMBERT:
Whoa, that's weird.
You set the type to plain text
instead of text JavaScript.
BEN MORSE: Yeah.
We did.

English: 
JavaScript and just execute it immediately.
Instead, <amp-script> finds the code and puts it into a
Worker. So the code in this script here grabs the first
<h1> tag in the DOM and appends a comma and the word
"world" right on page load.
And does that work?
Look, magic! That was pretty quick.
Let's watch it again.
I'm overwhelmed.
Well, OK. It's not Gmail, but that "world" was really and
truly added by a Web Worker.
Can you prove it?
If we open DevTools, and go to the Sources tab and
click over here, we can see our script.
Right under the code added by <amp-script>.
OK. That's kind of cool.
Here's how that looks in a full web page.
I have left some things out for simplicity's sake.
But you can see that, as with all AMP pages, we're loading
AMP's runtime script.
We're also including the JavaScript that makes
<amp-script> work.
So do you always have to include your JavaScript
inline like that?
It's not really a best practice.
Yeah. That's a good point.

English: 
But that's so the browser
won't see it as JavaScript
and just execute it immediately.
Instead, AMP script finds the
code and puts it into a worker.
So the code in this script
here grabs the first H1 tag
in the DOM, and appends a
comma and the word world right
on page load.
And does that work?
Look, magic.
That was pretty quick.
Let's watch it again.
CRYSTAL LAMBERT:
I'm overwhelmed.
BEN MORSE: Well,
OK it's not Gmail.
But that world was really and
truly added by a Web Worker.
CRYSTAL LAMBERT:
Can you prove it?
BEN MORSE: If we open DevTools
and go to the Sources tab
and click over here,
we can see our scripts
right under the code
added by AMP script.
CRYSTAL LAMBERT: OK.
That's kind of cool.
BEN MORSE: Here's how it
looks in a full web page.
I have left some things
out for simplicity's sake.
But you can see that
as with all AMP pages,
we're loading AMP's
runtime scripts.
We're also including
the JavaScript
that makes AMP scripts work.
CRYSTAL LAMBERT:
So do you always
have to include your
JavaScript inline like that?
It's not really a best practice.

English: 
BEN MORSE: Yeah,
that's a good point.
We can also store the
JavaScript in its own file
by using AMP script's
source attribute like this.
CRYSTAL LAMBERT: So
that example works,
but it's not really that useful.
Could we say add that world
when the user presses a button?
BEN MORSE: OK, fine.
Let's add to our HTML a
button that says "Hello who?"
We'll write JavaScript
that grabs that button
and adds a handler
for the click event.
When you click the button,
it works its magic.
Let's try it out.
So there is hello.
There is our button.
And look, hello world.
OK, let's go a little crazy.
CRYSTAL LAMBERT: Super neato.
What else can we do?
Does AMP script
let us do a fetch?
BEN MORSE: Does it ever?
Here's that Hello world
example modified to retrieve
the world from an end point.
Workers natively support fetch,
XML HTTP requests, and even
web sockets.
CRYSTAL LAMBERT: OK, this
is getting pretty cool.
But this is AMP, right?

English: 
We can also store the JavaScript in its own file by using
<amp-script>'s "src" attribute, like this.
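For example (the script URL is illustrative; externally hosted amp-script code has its own requirements, covered on amp.dev):

```html
<amp-script src="https://example.com/hello.js" width="200" height="100">
  <h1>Hello</h1>
</amp-script>
```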
So that example works, but it's not really
that useful. Could we, say, add that "world"
when the user presses a button?
Okay, fine.
Let's add to our HTML a button that says "Hello who?".
We'll write JavaScript that grabs that button and adds a
handler for the "click" event.
When you click the button, it works its magic.
Let's try it out. So, there's Hello.
There's our button, and look, "Hello, world!".
Okay, let's go a little crazy.
Super neato!
What else can we do?
Does <amp-script> let us do a fetch?
Does it ever!
Here's that "hello, world" example modified to retrieve the
word "world" from an endpoint.
Workers natively support Fetch, XMLHTTPRequest,
and even WebSockets.
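A sketch of that fetch-based version (the endpoint and response shape are hypothetical):

```javascript
// Inside the worker: retrieve the word from an endpoint, then
// append it to the heading once the response arrives.
const h1 = document.querySelector('h1');
fetch('https://example.com/api/world')    // assumed endpoint
  .then((response) => response.json())
  .then((data) => {
    h1.textContent += `, ${data.word}!`;  // assumes e.g. { "word": "world" }
  });
```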
OK, this is getting pretty cool.
But this is AMP, right?

English: 
How does AMP just let me write any JavaScript I want?
Well that's a good point.
AMP tries hard to guarantee low Cumulative Layout Shift
to keep page elements from moving around.
If your code makes mutations to the page that would really
disturb the page layout, AMP reserves the right to disallow
those changes or even shut the Worker down.
If your <amp-script> container can't change size, it can't
disturb the page much, and AMP gives you more freedom.
That's why I specified the height and width here in HTML
and why I didn't choose AMP's "container" layout.
There's a lot to this, so check the documentation on
amp.dev for details.
Hold on. Can I just use <amp-script> to inject
more scripts into the DOM?
Nope. You're working with a virtual DOM.
Not going to work.
Fair enough. But I see something about not allowing
more than 150 kilobytes of JavaScript.
Is that on a page level?
That's right. That 150K is per page.
But I could still fit jQuery into that.
And oh!
I can just copy in my favorite image slider and charting

English: 
How does AMP just let me
write any JavaScript I want?
BEN MORSE: Well,
that's a good point.
AMP tries hard to guarantee
low Cumulative Layout
Shift to keep page elements
from moving around.
If your code makes mutations
to the page that would really
disturb the page layout,
AMP reserves the right
to disallow those changes
or even shut the worker down.
If your AMP script
container can't change size,
it can't disturb the page much,
and AMP gives you more freedom.
That's why I specified
the height and width here
in the HTML, and why I didn't
choose AMP's container layout.
There's a lot to this.
So check the documentation
on amp.dev for details.
CRYSTAL LAMBERT:
Hold on, can I just
use AMP script to inject
more scripts into the DOM?
BEN MORSE: Nope.
You're working with a virtual DOM.
Not going to work.
CRYSTAL LAMBERT: Fair enough.
But I see something
about not allowing
more than 150 kilobytes
of JavaScript.
Is that on a page level?
BEN MORSE: That's right.
That 150 K is per page.
CRYSTAL LAMBERT: But I could
still fit jQuery into that.
And, oh, I can just copy
in my favorite image slider
and charting libraries.

English: 
libraries.
Well, remember that WorkerDOM has recreated the DOM APIs it
supports in its own JavaScript.
If WorkerDOM supported the whole DOM API, it would be
cumbersome and huge.
It would slow down pages enormously.
So pretty few third-party libraries are going to work right
out of the box.
OK.
Then, what's the best way to use <amp-script>?
Well, one way is to use vanilla JavaScript while keeping an
eye on this table of supported APIs.
There is quite a bit there.
Wait. React.
Can I use React?
Yes. That's the other way.
React uses a very specific subset of the DOM API,
so the WorkerDOM team made sure that subset is well
supported.
OK. But I've used React before.
My React bundle might break that 150 kilobyte
limit.
Yeah. That's why it's probably better to use Preact
instead. Preact is highly compatible with React,
but it's only 3K minified and gzipped.
For projects with more code, Preact is probably the
way to go. Here I've remade the button example

English: 
BEN MORSE: Well,
remember that Worker
DOM has recreated
the DOM APIs it
supports in its own JavaScript.
If Worker DOM supported
the whole DOM API,
it would be cumbersome and huge.
It would slow down
pages enormously.
So pretty few third
party libraries
are going to work
right out of the box.
CRYSTAL LAMBERT: OK, then what's
the best way to use AMP script?
BEN MORSE: Well, one way is
to use vanilla JavaScript
while keeping an eye on this
table of supported APIs.
There is quite a bit there.
CRYSTAL LAMBERT: Wait, React.
Can I use React?
BEN MORSE: Yes,
that's the other way.
React uses a very specific
subset of the DOM API.
So the Worker DOM team
made sure that subset
is well supported.
CRYSTAL LAMBERT: OK.
But I've used React before.
My React bundle might
break that 150 kilobyte limit.
BEN MORSE: Yeah, that's
why it's probably better
to use Preact instead.
Preact is highly
compatible with React,
but it's only 3K
minified and gzipped.
For projects with
more code, Preact
is probably the way to go.

English: 
using Preact.
I find it easier to write and debug the JSX in a simpler
environment, and then build it into my AMP page.
So let's build this.
Let's start up our server... and
there's our page with our button. It works!
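A minimal sketch of that button example in Preact (component and state names are illustrative; this assumes a build step configured to compile JSX for Preact):

```javascript
import { h, render } from 'preact';
import { useState } from 'preact/hooks';

// Same idea as the vanilla example: clicking the button appends ", world!".
function Hello() {
  const [who, setWho] = useState('');
  return (
    <div>
      <h1>Hello{who}</h1>
      <button onClick={() => setWho(', world!')}>Hello who?</button>
    </div>
  );
}

render(<Hello />, document.body);
```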
All right. That was a lot.
If only there was an <amp-script> tutorial out there.
Wait a minute, didn't you and I already make one of those?
Yeah... you want to take the next slide?
Of course, that tutorial is a great introduction to
<amp-script>.
Head on over to go.amp.dev/learn-script
to get started.
And then keep on going.
Remember, the WorkerDOM is still quite new.
If you have feature requests or find things that are
missing, please get involved on GitHub.
Help improve it!
In conclusion, Web Workers can help you keep JavaScript
from slowing down your web pages.
<amp-script> is a nice way to try this technique out.

English: 
Here, I've made the button
example using Preact.
I find it easier to write and
debug the JSX in a simpler
environment, and then
build it into my AMP page.
So let's build this.
Let's start up our server.
And there's our page
with our button.
It works!
CRYSTAL LAMBERT: All right.
That was a lot.
If only there was an AMP
script tutorial out there.
Wait a minute, didn't you and
I already make one of those?
BEN MORSE: Yeah.
You want to take the next slide?
CRYSTAL LAMBERT: Of course.
That tutorial is a great
introduction to AMP script.
Head on over to
go.amp.dev/learn-script to get
started.
BEN MORSE: And
then keep on going.
Remember that Worker
DOM is still quite new.
If you have feature requests or
find things that are missing,
please get involved on GitHub.
Help improve it.
CRYSTAL LAMBERT: In
conclusion, Web Workers
can help you keep
JavaScript from slowing down
your web pages.
AMP script is a nice way
to try this technique out.

English: 
BEN MORSE: You can find
all the code from this talk
here on Glitch.
Thanks for listening, and let's
get to work on putting workers
to work--
CRYSTAL LAMBERT: --for you.
BEN MORSE: Ah!
[MUSIC PLAYING]
MARTIN SPLITT: Hi everyone.
Thanks for watching this session
on debugging JavaScript SEO
issues.
In the next 15 minutes
I will take you
on a short journey
in which we will
talk a bit about the worries
that a few SEOs still
have about JavaScript
and Google Search,
then look at the tools available
to SEOs and developers,
and then get our hands
dirty on a few case
studies from the real world.
Now, let's get started
with looking at the basics.
Can SEO and
JavaScript be friends?
There is a bunch of
history behind this

English: 
You can find all the code from this talk here on Glitch.
Thanks for listening! And let's get to work on putting
Workers to work...
...for you!
Hi, everyone! Thanks for watching this session on debugging
JavaScript SEO issues.
In the next 15 minutes, I will take
you on a short journey in which we will talk a bit about
the worries that a few SEOs still have about JavaScript
and Google Search.
Then look at the tools available to SEOs and developers,
and then get our hands dirty on a few case studies
from the real world.
Now, let's get started with looking at the basics.
Can SEO and JavaScript be friends?
There is a bunch of history behind this that contributed to

English: 
that contributed to various
opinions and answers
to this question.
Today, the answer
is generally yes.
Sure, as with every
technology, there
are things that can go wrong.
But there's nothing
inherently or categorically
wrong with JavaScript
sites and Google Search.
Let's look at a
few things people
tend to get wrong about
JavaScript and Search.
The number one
concern brought up
is that Googlebot does not
support modern JavaScript,
or has otherwise very
limited capabilities in terms
of JavaScript features.
At Google I/O 2019 we announced
the Evergreen Googlebot.
This means that Googlebot
uses a current, stable Chrome
to render websites and
execute JavaScript,
and that Googlebot follows the
release of new Chrome versions
quite closely.
Another worry is concerned
with the two waves of indexing,
and the delay between
crawling and rendering.

English: 
various opinions and answers to this question.
Today, the answer is generally yes.
Sure. As with every technology, there are things that
can go wrong.
But there's nothing inherently or categorically
wrong with JavaScript sites and Google search.
Let's look at a few things people tend to get wrong about
JavaScript and Search.
The number one concern brought up is that Googlebot
does not support modern JavaScript or has otherwise very
limited capabilities in terms of JavaScript features.
At Google I/O 2019, we announced the evergreen
Googlebot.
This means that Googlebot uses a current, stable
Chrome to render websites and execute JavaScript,
and that Googlebot follows the release of new Chrome
versions quite closely.
Another worry is concerned with the "two waves of
indexing" and the delay between crawling and
rendering.

English: 
Googlebot renders all pages.
And the two waves
were a simplification
of the process that
isn't accurate anymore.
The time pages
spend in the queue
between crawling and
rendering is very, very short.
Five seconds at the
median, a few minutes
for the 90th percentile.
Rendering itself takes as
long as it takes a website
to load in the browser.
Last but not least, be wary
of blanket statements that
paint JavaScript as
the general SEO issue.
While some search
engines might still
have limited capabilities
for processing JavaScript,
they ultimately want to
understand modern websites,
and that includes JavaScript.
If JavaScript is used
responsibly, tested properly,
and implemented
correctly, then there
are no issues for Google
search in particular,
and solutions exist
for SEO in general.
For example, you may consider
server-side rendering

English: 
Googlebot renders all pages and the "two waves"
were a simplification of the process that isn't accurate
anymore. The time pages spend in the queue between
crawling and rendering is very, very short.
Five seconds at the median, a few minutes
for the 90th percentile.
Rendering itself takes as long as it takes
a website to load in the browser.
Last but not least, be wary of
blanket statements that paint JavaScript as the general
SEO issue.
While some search engines might still have limited
capabilities for processing JavaScript,
they ultimately want to understand modern websites, and
that includes JavaScript.
If JavaScript is used responsibly, tested properly,
and implemented correctly, then there are no issues
for Google Search in particular, and solutions exist for
SEO in general.
For example, you may consider server-side rendering or

English: 
or use dynamic rendering as a
workaround for other crawlers.
When saying test your site
properly, the follow up
question is usually well, how
do I test my site properly?
And luckily, we have a
whole toolkit for you
to test your site
for Google Search.
Let's take a look
at what's available.
The first tool in your tool
belt is Google Search Console.
It's a super powerful tool for
your Google Search performance.
Besides a ton of reports, it
contains the URL inspection
tool that lets you
check if a URL is
in Google Search, if
there are any issues,
and how Googlebot sees the page.
The second tool that
is really helpful
is the rich results test.
It takes any URL or lets you
copy and paste code to check.
Its main purpose is to show
if structured data is correctly
implemented.
But it has much more to
offer than just that.
Last but not least, the
mobile friendly test

English: 
use dynamic rendering as a workaround for other crawlers.
When saying "test your site properly", the
follow up question is usually, "well, how do I test
my site properly?".
And luckily, we have a whole toolkit for you
to test your site for Google Search.
Let's take a look at what's available.
The first tool in your tool belt is Google Search Console.
It's a super powerful tool for your Google Search
performance. Besides a ton of reports, it
contains the URL inspection tool that lets you check
if the URL is in Google Search, if there are any
issues, and how Googlebot sees the page.
The second tool that is really helpful is the Rich Results
test.
It takes any URL or lets you copy and paste code
to check.
Its main purpose is to show if structured data is correctly
implemented, but it has much more to offer than just that.
Last but not least, the mobile-friendly test is similar to
the Rich Results test.

English: 
is similar to the
rich results test.
On top of the rendered HTML,
the status of all embedded
resources and network requests,
it also shows an above-the-fold
screenshot of the page,
as well as possible mobile user
experience issues.
Now let's take these
tools for a spin.
I have built three
web sites based
on real cases that I debugged
in the webmaster forums.
The first case is a single
page application that does not
show up in Google at all.
As I am not the
owner of the domain,
I don't have access to Google
Search console for the site,
but I can still take a look.
I will start with a
mobile friendly test
to get a first look at
the page in question.
As we can see, the page loads,
but shows an error message.
When I load the
page in the browser,
it displays the data correctly.
Hm.
We can take a look at
the resources Googlebot
tried to load for this page.

English: 
On top of the rendered HTML, the status of all embedded
resources and network requests, it also shows
an above-the-fold screenshot of the page, as well
as possible mobile user-experience issues.
Now, let's take these tools for a spin.
I have built three websites based on real cases that
I debugged in the webmaster forums.
The first case is a single-page application that
does not show up in Google at all.
As I am not the owner of the domain, I don't have access
to Google Search Console for this site, but I can still
take a look.
I will start with a mobile-friendly test to get a first
look at the page in question.
As we can see, the page loads,
but shows an error message.
When I load the page on the browser, it displays the data
correctly.
We can take a look at the resources Googlebot tried to load
for this page.

English: 
Here we see that one wasn't loaded.
The api.example.org/products
URL wasn't loaded because it's blocked by robots.txt.
When Googlebot renders, it respects the robots.txt for
each network request it needs to make, the HTML,
CSS, JavaScript, images, or API calls.
In this case, someone prevented Googlebot from making the
API call by disallowing it in robots.txt.
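The offending rule in the robots.txt on api.example.org might look something like this (illustrative):

```
User-agent: *
Disallow: /products
```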
In this case, the web app handles a failed API request
as a "not found" error and shows the corresponding message
to the user.
We caught this as a soft 404.
And as it is an error page, we didn't index it.
Take note that there are safer ways to show a 404 page
in a single-page app, such as redirecting to
a URL with a 404 status or setting the page
to noindex.
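The noindex approach might be sketched like this (endpoint and helper names are hypothetical):

```javascript
// If the API says the item doesn't exist, mark the page noindex so a
// client-rendered error state isn't treated as indexable content.
async function loadProduct(id) {
  const response = await fetch(`/api/products/${id}`); // assumed endpoint
  if (response.status === 404) {
    const meta = document.createElement('meta');
    meta.name = 'robots';
    meta.content = 'noindex';
    document.head.appendChild(meta);
    showNotFoundMessage(); // hypothetical UI helper
    return;
  }
  renderProduct(await response.json()); // hypothetical UI helper
}
```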
Right. We solved that one.
That's pretty good!
All right, onto the next one.

English: 
Here, we see that
one wasn't loaded.
The api.example.org/products
URL wasn't loaded
because it's blocked
by robots.txt.
When Googlebot renders,
it respects the robots.txt
for each network request
it needs to make,
the HTML, CSS, JavaScript,
images, or API calls.
In this case, someone
prevented Googlebot
from making the API call by
disallowing it in robots.txt.
In this case, the web app
handles a failed API request
as a "not found" error and
shows the corresponding message
to the user.
We caught this as a soft 404.
And as it is an error
page, we didn't index it.
Take note that
there are safer ways
to show a 404 page
in a single page app,
such as redirecting to
a URL with a 404 status,
or setting the
page to noindex.
Right, we solved that one.
That's pretty good.
All right.
Onto the next one.

English: 
This one is described as a progressive web app, or PWA,
that didn't show up in search except for their home page.
Let's go find out why.
Looking at the home page, it looks all right.
The other views in this progressive web app also load just
fine.
Let's test one of these pages.
We will use the mobile-friendly test again to get a first
look at what's going on.
Oh!
The test says it can't access the page?
But it worked in the browser?
So let's check with our DevTools.
In the Network tab, I see that I get a 200 status...
from the service worker, though.
What happens when I open the page in an incognito
window?
Whoops.
So the server is not actually properly set up to
display the page.
Instead, the service worker does all the work to handle the
navigation.

English: 
This one is described
as a progressive web
app, or PWA that didn't show up
in search except for their Home
page.
Let's go find out why.
Looking at the Home
page, it looks all right.
The other views in this
progressive web app
also load just fine.
Hm, let's test one
of these pages.
We will use the mobile
friendly test again
to get a first look
at what's going on.
Oh.
The test says it
can't access the page?
But it worked in the browser.
So let's check
with our DevTools.
In the Network tab, I see
that I get a 200 status,
from the service worker though.
What happens when I open the
page in an incognito window?
Whoops.
So the server is not
actually properly set up
to display the page.
Instead, the service
worker does all the work
to handle the navigation.

English: 
That isn't good.
Googlebot has to behave
like a first time visitor,
so it loads a page without
the service worker cookies
and so on.
This needs to be
fixed on the server.
Great.
Two websites fixed.
But I have one more to go.
This one is a news
website that is worried
because not all content can
be found via Google Search.
To mix things up
a little bit, I'll
use the rich results
test for this one.
The website doesn't seem
to have any obvious issues.
Let's look at the rendered HTML.
Hm, even that looks fine to me.
So let's take a look at
the website in the browser.
So it loads 10 news stories
and links to each news story,
and then it loads more
stories as I scroll down.
Do we find that in
the rendered HTML too?
Interesting.
This story isn't in
the rendered HTML.
It looks like the initial
10 stories are there,

English: 
That isn't good. Googlebot has to behave like a first-time
visitor, so it loads a page without the service worker,
cookies, and so on.
This needs to be fixed on the server.
Great! Two websites fixed, but I have one more to go.
This one is a news website that is worried because not
all content can be found via Google Search.
To mix things up a little bit, I'll use the Rich Results
test for this one.
The website doesn't seem to have any obvious issues.
Let's look at the rendered HTML.
Hmm. Even that looks fine to me.
So let's take a look at the website in the browser.
So it loads 10 news stories and links to
each news story, and then it loads more stories as
I scroll down.
Do we find that in the rendered HTML too?
Interesting, this story isn't in the rendered HTML.
It looks like the initial 10 stories are there, but none
of the content that is being loaded on scroll.

English: 
but none of the content that
is being loaded on scroll.
Wait, does it work when
I resize the window?
Oops, it only works
when the user scrolls.
Well, Googlebot doesn't scroll.
That's why these
stories aren't loaded.
That's not exactly a problem.
This can be solved by
using an Intersection
Observer, for instance.
Generally, I recommend
checking out the documentation
at developers.google.com/search
for much more information
on this topic and other topics.
I hope this was
interesting and helped you
with testing your websites
for Google Search.
Keep building cool stuff
on the web and take care.
[MUSIC PLAYING]
Hi everyone.
I'm excited to show you
in the next 15 minutes
how you can use structured data
to make your website stand out
more in Google
Search, and how that

English: 
Wait, does it work when I resize the window?
Whoops. It only works when the user scrolls.
Well, Googlebot doesn't scroll.
That's why these stories aren't loaded.
That's not exactly a problem.
This can be solved by using an IntersectionObserver, for
instance.
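A sketch of the IntersectionObserver approach (element ID and helper name are hypothetical):

```javascript
// Load more stories when a sentinel element at the bottom of the list
// becomes visible, instead of listening for scroll events, which
// Googlebot never fires.
const sentinel = document.querySelector('#load-more-sentinel'); // assumed element
const observer = new IntersectionObserver((entries) => {
  if (entries[0].isIntersecting) {
    loadMoreStories(); // hypothetical function that appends the next batch
  }
});
observer.observe(sentinel);
```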
Generally, I recommend checking out the documentation at
developers.google.com/search for much
more information on this topic and other topics.
I hope this was interesting and helped you with testing
your websites for Google Search.
Keep building cool stuff on the web and take care!
Hi, everyone! I'm excited to show you in the next
15 minutes how you can use structured data to
make your website stand out more in Google Search

English: 
and how that can be done with JavaScript
when the static implementation is infeasible.
We will start by looking at what structured data is
and why it is a good idea for your website.
Then we will look at ways to implement it using
JavaScript.
And last but not least, we'll take a look at how to test
and debug your implementation.
All right. Now, what is structured data
and why is it useful?
Structured data is a standardized set of additional markup
that you can put on your pages to tell machines,
like Googlebot, more about the content on your page.
On the right side here, you can see the information for a
specific product being highlighted in both the
image search as well as the search results,
including additional information like ratings and price.
We call such results "Rich Results".
To implement structured data, you can use JSON-LD,

English: 
can be done with JavaScript
when a static implementation
isn't feasible.
We will start by looking
at what structured data is
and why it is a good
idea for your website.
Then we will look at ways to
implement it using JavaScript.
And last but not least,
we'll take a look
at how to test and debug
your implementation.
All right.
Now, what is structured
data and why is it useful?
Structured data is
a standardized set
of additional markup that
you can put on your pages
to tell machines
like Googlebot more
about the content on your page.
On the right side here,
you can see the information
for a specific product being
highlighted in both the image
search as well as
the search results,
including additional information
like ratings and price.
We call such results
rich results.
To implement
structured data, you

English: 
can use JSON-LD,
microdata, or RDFa.
But we recommend using JSON-LD.
Here is an example of what
a JSON-LD block on your page
might look like.
Besides products, there
are many verticals
that can benefit
from structured data
and become eligible
for rich results.
Here are some examples.
But you should check the
link for the full gallery
of supported verticals.
Note that implementing
structured data
makes a page eligible
for rich results,
but does not mean
that we will always
show them for every
page that implements it.
So, now we talked about
what structured data is
and how it benefits
your website.
Let's walk through a few
possible implementations.
We've seen that
the easiest way is
to include a script tag with
JSON-LD data in the page.
This can, of course,
be done in the back end
or straight in the
html of the page.

English: 
microdata, or RDFa.
But we recommend using JSON-LD.
Here is an example of what a JSON-LD block on your page
might look like.
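For instance, a product page might carry a block like this (all values are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Sneaker",
  "image": "https://example.com/sneaker.jpg",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "USD"
  }
}
</script>
```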
Besides products, there are many verticals that
can benefit from structured data and become eligible for
rich results.
Here are some examples, but you should check the link
for the full gallery of supported verticals.
Note that implementing structured data makes
a page eligible for rich results but does not
mean that we will always show them for every page that
implements it.
So now, we talked about what
structured data is and how it benefits your website.
Let's walk through a few possible implementations.
We've seen that the easiest way is to include a script tag
with the JSON-LD data in the page.
This can, of course, be done in the backend
or straight in the HTML of a page.

English: 
But what are the options if you are using client-side
rendered JavaScript?
First of all, it is fine to implement it dynamically
with client-side JavaScript.
We recommend using server-side rendering to make your
website as robust as possible.
But there is no issue with implementing it with JavaScript,
per se.
In this session, we will look at three possible
implementation approaches.
Of course, you can use JavaScript without libraries
or frameworks to inject structured data into your pages.
Here is an example of a vanilla JavaScript implementation
for a client-side rendered single-page application.
It fetches the JSON-LD data from an API
and injects it into the head of the page.
As Googlebot renders this page, it will execute
the JavaScript and the structured data will be rendered.
Just make sure that the API is available to
Googlebot and not blocked by robots.txt.
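A sketch of that injection approach (the endpoint is assumed):

```javascript
// Fetch JSON-LD from an API and inject it into the <head>, so it is
// present in the rendered HTML that Googlebot sees.
fetch('https://example.com/api/structured-data')  // assumed endpoint
  .then((response) => response.json())
  .then((jsonLd) => {
    const script = document.createElement('script');
    script.type = 'application/ld+json';
    script.textContent = JSON.stringify(jsonLd);
    document.head.appendChild(script);
  });
```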

English: 
But what are the options if
you are using client-side
rendered JavaScript?
First of all, it is fine
to implement it dynamically
with client-side JavaScript.
We recommend using
server-side rendering
to make your website
as robust as possible.
But there is no issue with
implementing it with JavaScript
per se.
In this session, we will look
at three possible implementation
approaches.
Of course, you
can use JavaScript
without libraries or frameworks
to inject structured data
into your pages.
Here is an example of
a vanilla JavaScript
implementation for a
client-side rendered single page
application.
It fetches the JSON-LD
data from an API
and injects it into
the head of the page.
As Googlebot renders this page,
it will execute the JavaScript
and the structured data
will be rendered.
Just make sure that the API
is available to Googlebot
and not blocked by robots.txt.

English: 
When you are using frameworks
such as React, Angular,
or Vue.js, you will
very likely have
helpers or built-in
functionality
available to insert structured
data into your pages.
Here is an example of
a React component using
the schema helper utility
to create typed JSON-LD
for a person's profile page.
Should you not have access
to the code of your pages,
but have Google Tag
Manager on these pages,
you may use a custom
tag and custom variables
to create structured data
from the information that
is on the page.
To do that, create a custom
HTML tag in your container
and insert the relevant
JSON-LD, as well
as the variables for
the values of each field of
the JSON-LD block.
Then create the necessary
custom JavaScript variables
to extract information
from the page
so it can be inserted into the
custom HTML tag automatically.

English: 
We advise against copying and pasting information from the
page directly into Google Tag Manager, as that is likely
to cause a mismatch between the page content and the
structured data generated by Google Tag Manager over time.
Great! So we've seen three ways of generating structured
data with JavaScript.
Let's find out if our implementation works as expected.
There are two main tools for testing the implementations.
The first one is the Rich Results Test.
You can paste the URL into the tool and see what structured
data is recognized, as well as if there are any
issues with the structured data on the page.
When using JavaScript to generate structured data, we
recommend testing a URL instead of pasting code directly
into the tool.
The other great tool for testing this is the Google Search
Console.
In the URL inspection tool, you can see the structured
data that is detected and if it is valid.

English: 
But you can also see which pages of your site
were eligible for rich results and which ones have errors
or warnings to look into.
If you want to learn more about Google Search and
structured data, check out our documentation at
developers.google.com/search or
use this short link to read more on how to use
JavaScript to generate structured data for your pages.
Thanks a lot for joining and have a great day.
Bye!

[MUSIC PLAYING]
