
[MUSIC PLAYING]

DION ALMAER: Welcome back to web.dev LIVE.
As a Brit, I'm tickled to be kicking off Day 2 from
an EMEA-friendly time zone.
Yesterday, we spoke about governments doing great work,
getting the word out on the web.
And one of the most inspirational digital services is from
the U.K., where I saw a government convene online for
the first time. Now we've all been learning how best to
work and learn from home.
And at Google, we're trying to get a deeper understanding
of the needs of web developers, partnering with the
community. As one of our close partners, I wanted to invite
our friends at Mozilla to chat more about that work
and also get information on what's new in their world.
Welcome, Kadir and Victoria.

KADIR TOPAL: Hi, Dion.

VICTORIA WANG: Great to be here.

DION ALMAER: Hey! So Kadir, web developers know MDN
really well, and many may have participated in the last
DNA report.
But can you get us up to speed a little bit and tell us
about its history and what we're trying to do?
KADIR TOPAL: Yeah, of course. So this really started in late 2017,
shortly after CSS Grid was shipped.
And CSS Grid was a massive success,
but it was also years in the making. And layout had been an
issue for web developers at least since the '90s, when
we abused tables to implement our UI designs.
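To make that contrast concrete, here is a minimal sketch (my own illustration, not from the talk; the class names are hypothetical) of the kind of page scaffold that once required nested tables, done with CSS Grid:

```css
/* Then: a <table> abused for layout, with rows and cells as scaffolding.
   Now: the same header/nav/content/aside/footer page in a few lines.
   (.page is a hypothetical class name, used only for illustration.) */
.page {
  display: grid;
  grid-template-columns: 200px 1fr 200px; /* nav, content, aside */
  grid-template-rows: auto 1fr auto;      /* header, body, footer */
  gap: 1rem;
  min-height: 100vh;
}
.page > header,
.page > footer {
  grid-column: 1 / -1; /* span all three columns */
}
```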
So we asked ourselves, how can we have more of these wins?
And we talked to people who worked on the web platform
at Mozilla about how they prioritize things.
And one thing really stood out because almost everyone said
the same thing.
They said, 'We need to hear more from developers.'

And it makes so much sense because none of us can be
successful without that part.
It's hard to prioritize the right thing without knowing
about developer pain points, it's hard to find the
right solution, and it's hard to get people to use something
if it's not solving that problem or not solving it in the right way.
So for all of those reasons, we proposed a
Developer Needs Assessment. And the DNA, in short,
is meant to be a single and simple tool for harsh
prioritization, representing very
diverse populations and a huge feature space.
And it's published on MDN and that's important because it's
not owned by a single browser vendor.
We initially proposed this under the umbrella of the MDN
Product Advisory Board, where we have representation from
browser vendors like Google, Microsoft, and Samsung,
but also the W3C and industry stakeholders.
And as a community, we need to have at least the
common understanding of the facts when it comes to needs,
even if you draw different conclusions from them.
And for this situation,
in 2019, more than 28,000 developers and designers
took the 20 minutes necessary to fully complete the survey.
And that's from one hundred and seventy three countries
total. That's about 10,000 hours
of time contributed by developers and
designers to help us understand what their pain points
and needs are.
And we believe that makes the MDN Web DNA
the biggest web developer and designer focused survey
ever conducted.
DION ALMAER: Yeah, well, when you put it that way, I just really want to
say thank you to the community and anyone that
took the time to go through those 20 minutes
and get the feedback in. It's incredibly useful
to us. Thank you so much.
So when you were going through the results, Kadir, the
2019 report, what really stood out to you?

KADIR TOPAL: You know, one thing, really, and that's web
compatibility and interoperability.
Four of the top five issues were focused on
exactly that topic.
And one of the biggest strengths of the web is that there
is no single entity controlling the platform, but
that doesn't come for free.
Web developers and designers are frustrated by
not being able to use features, by having
to find workarounds, fiddling with browser differences,
and also by the difficulty of verifying that something that
works in one browser will not break in another browser.
And related to that, there was a bit of a surprise that
the top five issues were extremely stable between
very different markets.
So whether it's China, India, Japan,
the U.S. or France, the top issues for web developers
revolve around web compatibility and interoperability.
DION ALMAER: Got it. So when you take this feedback in,
how does it actually change the roadmap at Mozilla?

Like, what are you now looking to focus on based on this
feedback?
KADIR TOPAL: Yeah. So web compatibility was already a focus
at Mozilla even before this, but we've now doubled down
on it. So recently we made browser compatibility
data machine-readable on MDN, and that's now starting
to pay off. So if you use VS Code, a very popular code editor,
the tooltips have compat data information
when you write CSS. And we also recently
started a collaboration with caniuse.com
to share the data that we have.
So we are all looking at the same browser compat
information, and I'm sure Victoria can say more
about it in a moment, but the Firefox DevTools
now come with compat data information built in.
DION ALMAER: Got it. Yeah, the feedback has been really helpful
for us. And the web platform team is
working even deeper on stability, compat issues
that you talk about, helping with testing,
layout, really understanding what the developer needs are,
and bringing it into our prioritization, too.
So it's actually almost time for the next version of the
DNA report. So what do we have in store for developers
this time around?
KADIR TOPAL: Yeah. So one thing we're super excited about
this year is to see how things have changed year over year.
We want to see whether developer satisfaction has gone
up or down, and how the top pain points have changed.
So this is really the first opportunity to see those
trends.
DION ALMAER: Got it. I can't wait to get that out to developers.
And I hope that everyone who's watching will do
us a favor and take some time to get that
feedback in, so we can really know what to
prioritize in the future for our roadmaps.
Now, Victoria, I actually used to work on the Developer
Tools at Mozilla, and not only was it the
birthplace of great tooling in the browser, from Joe
Hewitt's Firebug onwards, but it really
continues to push the bar.

So I'd love for you to catch us up on the latest.
What's new in Firefox DevTools?
VICTORIA WANG: Hi, Dion! As Kadir explained, we know that differing browser
support for CSS features is a top issue for web devs.
So our team built the compatibility panel to make it easier
to stay on top of this.
It lists all the CSS on your website that's unsupported in
certain browsers, as well as deprecated styles.
You can try it now in Firefox Dev Edition.
DION ALMAER: I really love the UX touches there, especially the turtle
and the like. That's great.
And in general, you know, a lot of great features
that have landed there.
I'm curious if there's anything on the upcoming roadmap
that you're excited about sharing.
VICTORIA WANG: We've also been working on the Firefox Profiler.
It's our performance tool which features shareable links
for collaboration.
We've been integrating the recording UI into Firefox, so it's
easy to get started. This tool is also currently in Dev
Edition. As far as our debugger, I want to highlight
two unique features in 77.

We added a type of breakpoint that's new to browser tools:
the watchpoint. It lets you pause when an object property
is accessed or changed.
Also, we've made source mapped variables work!
When you pause in an original file, we now reverse engineer
the scope chain so that variables look correct in the
Scopes pane and work in the console.
This was six months of incredibly challenging work done by
our teammate Logan, who had deep knowledge from being tech
lead on Babel the year before.
We joke that he's the only person in the world who could
have written this.

So recently, I really embraced the open design process
for a Network panel redesign.
We sent out a survey and posted early mockups to Twitter
and got amazing input.
I originally used bold to indicate large files, and someone
suggested it would be clearer to have mouse and elephant icons
for small and big. That's how we ended up with the turtle,
for slow responses.
People also told us that when it comes to the Domain
column, they mainly want to know if it's third party or
not.
So in this condensed view with the sidebar, we hid the
domain column and added an icon that indicates third party
requests.
Originally, I made these brightly colored file type icons,
and some people loved them and others said it was too much:
'They look like candy.' So I iterated, toned down
the colors, and got to the result you see here.
Most of this has landed in the latest Nightly.
We hope everyone will try it out and give us more feedback.
DION ALMAER: Awesome. Great.
So, Victoria, Kadir, thank you so much for taking
the time to join us today.
We really appreciate the great work that Mozilla continues
to do for the web.
Now, you spend a lot of time both in your developer tools,
and also on the core task of building your app UI.
So to chat more about modern web UI, let's welcome
Una.
UNA KRAVETS: Hi, Dion.
DION ALMAER: Hi, Una. Now, we were just talking about the Developer
Needs survey with Mozilla, and there was plenty of feedback
from developers on layout.
So I was just curious, you know, are there any recent
additions to the platform that you think target these
needs?
UNA KRAVETS: Oh, yes, we've definitely been listening, and CSS has been
evolving so rapidly in the past few years, and really
in the past few months.
So tomorrow I'll be going over a ton of cool aspects of
modern layout with CSS grid and flexbox, including how
to harness the power of CSS functions like clamp(),
fractional units, auto placement, the minmax() function,
justification, place-items, the repeat() function, and
a lot more to create robust layouts, breaking down
how powerful a single line of CSS can be.
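As a taste of that, here is a hedged sketch (my own, not from the session; the selectors are hypothetical) of how a single grid declaration combining repeat(), auto-fit, and minmax() can do what once took media queries, plus clamp() for fluid sizing:

```css
/* A responsive card grid in one declaration: as many columns as fit,
   each at least 200px wide, sharing the leftover space equally. */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
  gap: 1rem;
}

/* clamp(min, preferred, max): fluid type that never leaves the range. */
h1 {
  font-size: clamp(1.5rem, 4vw, 3rem);
}
```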
There are also some CSS properties coming down the pipeline
that will help with a lot of user needs that haven't yet
been met, and aspect-ratio is one of them.
This just landed in Chrome Canary, and it enables you to
set defined width-to-height ratios for media items
like images and video.
Previously, the way to do this was a hack using padding and
calculating a percentage, but now you can set your ratios
in a much more readable way.
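A sketch of the before and after (my own illustration; .video-wrapper is a hypothetical class):

```css
/* Old hack: reserve a 16:9 box with percentage padding,
   which resolves against the element's width (9 / 16 = 56.25%). */
.video-wrapper {
  position: relative;
  padding-top: 56.25%;
}
.video-wrapper > video {
  position: absolute;
  inset: 0;
  width: 100%;
  height: 100%;
}

/* New way, per the talk just landed in Chrome Canary: */
video {
  aspect-ratio: 16 / 9;
  width: 100%;
}
```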
I'm looking forward to this landing in browsers and making
a lot of developers' lives easier because I know this is
something that I run into a lot.
We're also getting the gap property in flexbox.
This one is exciting because of how many times we've just
been styling a series of items and wanting there to be
space between those items, but not around those items.
Gap enables the parent element to control spacing, not the
children, making it easier to style these items uniformly
within that parent.
Currently, you can use gap to create tracks with CSS grid.
You'll be able to use it in flexbox layouts too, meaning
you can leverage all the benefits of gap with a greater
choice of layout mechanism.
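A short sketch (mine, with hypothetical selectors) of gap in both layout modes:

```css
/* gap already works between grid tracks: */
.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: 16px; /* space between tracks, not around them */
}

/* The same property coming to flexbox: the parent spaces the items,
   so no margin hacks on children or negative outer margins. */
.toolbar {
  display: flex;
  flex-wrap: wrap;
  gap: 8px 16px; /* row-gap column-gap */
}
```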
The Web Animations API is also getting a lot more robust
in Chromium 84.
Now we have promises, replaceable animations, composite
modes, partial keyframes, and a way to access CSS-declared
animations from JavaScript.
Check out the blog post on web.dev for more information
about these updates and try them out in Chrome Canary
yourself. The @property rule is also available
behind a flag in Canary, and it's something that I am
particularly excited about because this allows for semantic
variables in CSS.
With @property, you can declare CSS custom properties that
have semantic typed values and fallbacks.
This is part of the CSS Houdini effort, specifically
the Properties and Values API, and previously was possible
in JavaScript with CSS.registerProperty() as a part of
Houdini. But the @property declaration brings this into
our CSS files, meaning a nice co-location of super
powered styles with the rest of your CSS.
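For illustration, a hedged sketch of the @property form; the --theme-hue property and .button selector are my own examples, not from the talk:

```css
/* Registering a typed custom property directly in CSS.
   Because --theme-hue has a typed value, the browser can
   interpolate it and fall back to initial-value on bad input. */
@property --theme-hue {
  syntax: "<number>";
  inherits: false;
  initial-value: 220;
}

.button {
  background: hsl(var(--theme-hue) 80% 50%);
  transition: --theme-hue 0.3s ease; /* typed, so it can animate */
}
.button:hover {
  --theme-hue: 280;
}
```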
The other Houdini APIs to keep an eye out for are the Typed
Object Model, the Paint Worklet, the Animation Worklet,
and the Layout Worklet.
DION ALMAER: Great. So JavaScript seemed to have its time with
the addition of async/await, ES modules, and the like.
It was great to see that evolution.
It's really feeling like this is a big time for CSS.
I love being able to get features like gap that
you mentioned to just make things a lot
easier for us, and I'm super excited at how deep
you can get on the Houdini side, too.
But if I use these and I'm building these super rich
designs and the like, with this great power
comes great responsibility.

So how do you think about the role of accessibility here?
UNA KRAVETS: I love that you bring up accessibility, because
accessibility should always come first.
I think you're absolutely right that that needs to be top
of mind. Your users need to be able to access your content
and navigate your product.
It is not an enhancement.
I think of accessibility as a core feature.
And Chrome 83 actually just launched with some new
accessibility testing features, which are pretty neat
because they allow for visual accessibility testing.
So now through DevTools, you can examine if your UI works
for users with various vision deficiencies like blurred
vision and four different types of colorblindness.
DION ALMAER: Yeah, it's great. We're actually going to have Paul Lewis
come on and walk through that a little bit.
So thanks so much. I also have really been enjoying your
new CSS podcast with Adam Argyle.
Not only are you both really fun to listen to, it's
actually been really interesting to watch
you go through step by step and teach us
the fundamentals. There was a lot that I didn't really know
about.
UNA KRAVETS: Yeah. Honestly, we're learning so much as
we are going through the fundamentals, and we're having a lot
of fun making these episodes.
So if you haven't seen it yet, check out the CSS podcast.
DION ALMAER: Absolutely. So thanks so much for joining us, Una, and
we'll see you later on the stream.

UNA KRAVETS: Bye.
DION ALMAER: Now, there are a few more DevTools features I'm really keen
to show you. And no one's better to show off a bit of
tooling than the ever-supercharged Paul Lewis.
PAUL LEWIS: Hey, Dion, how you doing?

DION ALMAER: Not too bad. How you doing, mate?

PAUL LEWIS: Yeah, pretty good, thanks.
All right. So one of the things that we've noticed is
that DevTools puts out these console warnings,
as you can see on screen. And if you're anything like me,
after a while, you start to ignore them.
And the reason is that there can just be quite a lot
of them. So we've been thinking about that, and what we've
decided to do is to bring in the Issues tab.
Now, if we detect issues on your page, you'll see this bar
across the top with a button in the top right-hand corner
there that says "Go to issues".
If you click on that, it'll take you through to the Issues
tab. Now it might offer you the opportunity to reload the
page to get more information.
If you click on one of these items, it'll expand and you
can see more information there, as well as potentially some
links to content for you to read up on what you could do
to fix the issue.
So that's the Issues tab.
The other thing we've been looking at is Web Vitals.
So if you go to web.dev/metrics, you'll see a
whole list of metrics here that affect the UX and things
that we would like to optimize as web developers.
And we've been looking at ways of exposing this information
to you inside of the DevTools UI.
So things like First Contentful Paint or
Largest Contentful Paint, for example.
So if you go to the Performance tab in DevTools and you
take a recording in the Performance tab, you'll see
something that looks like this.
Now there is a Timings tab there - or
Timings row, sorry, I should say - across which you'll see
these blocks. And these relate to some of those metrics.
So you see FCP and LCP - First Contentful Paint and
Largest Contentful Paint - and so on. So you can start
to get information there on some of your metrics.

on some of your metrics.
The other thing
we've started doing
is to add candy striping
to your long-running tasks.
And you can see that here.
I have one task
on my main thread
that is 70 milliseconds long.
And what we're
looking for is we're
looking for tasks to remain
under 50 milliseconds.
This means that the main
thread stays responsive
and hopefully we can respond
to user interactions quickly.
So as you look around your
performance recording,
if you see this candy striping
effect and the red triangle
in the corner, you
know that you've
got a task that's running
longer than 50 milliseconds.
What we've also added
as well is we've
added a total blocking
time footer at the bottom.
What this tells you is, if
you like, the amount of candy
striping that you would see
across the whole recording.
So as you're looking
around, if you
see that that number's going up
you might want to take a look
and see if you have a
lot of long-running tasks
on your main thread.
Bringing that down
should hopefully
help your user experience.
Another thing that we've added
is this experience track.
And what's contained within this
is layout shift information.
So for example, when you've got
buttons and so on your page,

English: 
The other thing we started doing is to add candy striping to your long-running tasks, and you can see that here: I have one task on my main thread that is 70 milliseconds long. And what we're looking for is for tasks to remain under 50 milliseconds.
This means that the main thread stays responsive and
hopefully we can respond to user interactions quickly.
So as you look around your Performance recording, if you
see this candy striping effect and the red triangle in the
corner, you know that you've got a task that's running
longer than 50 milliseconds.
We've also added a total blocking time footer at the bottom.
What this tells you is the amount of candy striping
that you would see across the whole recording.
So as you're looking around, if you see that that number is
going up, you might want to take a look and see if you have
a lot of long running tasks on your main thread.
Bringing that down should hopefully help your user
experience.
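The arithmetic behind that footer is simple: every task contributes only the time it runs beyond the 50 ms budget. A minimal sketch of that calculation (simplified - the real metric only counts tasks in a particular window of the page load):

```javascript
// Total blocking time: the sum of each task's time over the 50 ms
// budget. A 70 ms task contributes 20 ms; tasks under 50 ms
// contribute nothing.
const LONG_TASK_BUDGET_MS = 50;

function totalBlockingTime(taskDurations) {
  return taskDurations.reduce(
    (sum, d) => sum + Math.max(0, d - LONG_TASK_BUDGET_MS),
    0
  );
}

console.log(totalBlockingTime([70, 30, 120])); // 20 + 0 + 70 = 90
```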
Another thing that we've added is this experience track.
And what's contained within this is Layout Shift
information. So, for example, when you've got buttons and so on, on your page, that are perhaps moving around, this can cause UX discomfort.
So what we want to do is we want to minimize the amount of
moving elements on the page.
And so the Layout Shift here is going to tell you what
elements are moving on the page and where
and so on, and the size that they were when they did it.
So if I look at this, I have a warning here
which tells me that Cumulative Layout Shifts can result in
poor user experiences, and that's a link to more
information, as well as information on where it's moved
from or to. And if I roll over that, I get an overlay
on my screen, which shows me exactly where on my page
the shift took place.
You can also get live information about layout
shifts by going to the Rendering tab and choosing Layout
Shift Regions here in the options.
Now, I should say that for people prone to photosensitive epilepsy, this might be a less suitable option because it
can cause flashing of overlays on the screen, but it is
there as an option if it's suitable for you.
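Under the hood, the cumulative score is built from individual layout-shift entries: each shift carries a score, shifts that happen right after user input are excluded, and the rest are summed. A simplified sketch of that accumulation (ignoring the session-windowing the full metric later adopted):

```javascript
// Simplified cumulative layout shift: sum the layout-shift scores,
// skipping shifts that occurred shortly after user input (those
// are expected and excluded).
function cumulativeLayoutShift(entries) {
  return entries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

const entries = [
  { value: 0.25, hadRecentInput: false },
  { value: 0.5, hadRecentInput: true }, // user-initiated, excluded
  { value: 0.5, hadRecentInput: false },
];
console.log(cumulativeLayoutShift(entries)); // 0.75
```

In a real page, these entries would come from a `PerformanceObserver` observing the `layout-shift` entry type.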
The next thing I want to talk about is WebAssembly
Debugging. It's an experiment, so if you go into your

DevTools settings, go to the Experiments tab and click
on WebAssembly Debugging.
You can switch it on there.
What this allows you to do is it allows you to do things
like setting breakpoints in your WebAssembly code.
So here I've compiled a C program.
It's just a 'Hello, world!' program.
But what I've done is I've added a breakpoint on the line
that says 'Hello, world!'.
So when this code executes and it hits that line inside of
the WebAssembly, it pauses execution just like it would
inside the JavaScript.
And you can see here in the call stack that I can actually
take a look at what's going on in that particular frame
and I can go between my JavaScript and the C and so on and
so forth. So that's something that's coming down the pipeline; it's currently in Canary, so take a look at that.
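To see WebAssembly and JavaScript interoperating at all, you don't even need a toolchain: the handful of bytes below hand-encode the classic module exporting an `add` function. This is purely illustrative - a real debugging workflow would compile C with debug information (for example, `emcc -g`) so DevTools can map back to the source:

```javascript
// A minimal, hand-encoded WebAssembly module exporting
// add(a, b) -> a + b. Illustrative only; in practice you'd compile
// C with debug info so DevTools can show the original source.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body
]);

const mod = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(mod);
console.log(instance.exports.add(2, 3)); // 5
```

Calling `instance.exports.add` from JavaScript is exactly the kind of cross-language boundary the debugger lets you step across.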
Now, the last thing I want to show you is Color Vision
Deficiency Emulation inside of DevTools.
And there's no better way to do that than to actually give
you a demo.
OK, so here I am in Chrome Canary and
I have a video here running of me and Surma doing
Supercharged, yesteryear. But you see, I have the Rendering
tab open in DevTools and I can emulate various
vision deficiencies, such as blurred vision or I can do
protanopia.

I can do deuteranopia, protanopia - oh sorry, tritanopia - and achromatopsia as well.
You see the live effect that it has on the page.
So these are physiologically accurate emulations of various
vision deficiencies. Now, a vision deficiency isn't an
on/off thing like you see here, but rather it's a spectrum.
So a person could have a milder form of vision deficiency
or a more acute form.
What we chose to implement inside of the DevTools UI is
the most acute form.
The theory being that as you're optimizing your app for
accessibility in terms of color and contrast, if you
make it work for the most acute form, then you'll include
everything up to and including that as well.
So that's color vision deficiency emulation inside of DevTools.
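For achromatopsia in particular, the emulation is conceptually a luminance conversion: every pixel collapses to its perceived brightness. A rough sketch using the Rec. 709 luma weights (a simplification; the DevTools implementation is more physiologically involved than this):

```javascript
// Rough achromatopsia (total color blindness) simulation:
// collapse an [r, g, b] pixel to its Rec. 709 relative luminance.
// Note how little pure red contributes compared to green.
function simulateAchromatopsia([r, g, b]) {
  const y = Math.round(0.2126 * r + 0.7152 * g + 0.0722 * b);
  return [y, y, y];
}

console.log(simulateAchromatopsia([255, 0, 0])); // [54, 54, 54]
console.log(simulateAchromatopsia([255, 255, 255])); // [255, 255, 255]
```

This is also why red-on-black UIs can lose nearly all their contrast for some users: the red channel carries relatively little luminance.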
Thanks, Paul. I'm really looking forward to seeing more
later today.
Now, one thing I've noticed as we work from home is how
seriously people are taking their home setups,

whether it be playing with mics and cameras or virtual
backgrounds.
And recently, I saw a demo that would make you invisible on
your video feed using TensorFlow.js.
So I really wanted to learn more.
So please welcome Jason Mayes from the TensorFlow.js team.
Hi there. I'm Jason, and thank you for the introduction
there. It's a pleasure to be invited to the show.
And yes, I have created an invisibility cloak.
So maybe we can learn more about that.
Yeah. Jason, invisibility cloaks are pretty cool.
And so maybe you can show us how web developers can create
superpowers like that with TensorFlow.
Sure. Definitely.
So if I switch to my slides for just a second, you can see
what the invisibility cloak was that Dion was referring to.
And in this demonstration, on the right hand side, you can
see as I get on the bed, the bed is deforming in real
time and I'm being removed from the bottom frame at the
same time. This is running all in the web browser.
Now, this is pretty cool because privacy is preserved as
none of these images are being sent to the server-side.

And that's super powerful, especially in today's climate
where privacy is top of mind.
Now, this was created in just under a day, in fact.
So it is quite easy to get started with machine learning in
the web browser, and we'll see some more demos in just a
second.
So on that note, I also created a Chrome extension that
allows me to use the same stuff we saw before.
I was actually using BodyPix to create that, which gives me
this image segmentation of my body in real time.
I can now join a Google Meet meeting,
as you can see shown on the slide right now.
And this could be combined with my previous demo.
So I can remove that second person who comes into frame
halfway through the GIF and then
it would appear as if it never actually happened.
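Conceptually, the invisibility effect is simple compositing: keep a clean background frame around, and wherever the segmentation mask marks a person, paint the stored background pixel instead of the live one. A toy sketch over flat pixel arrays (the per-pixel 0/1 mask is the kind of output a segmentation model like BodyPix produces):

```javascript
// Toy invisibility-cloak compositing: where mask[i] === 1 (person),
// substitute the stored background pixel; elsewhere keep the live
// frame's pixel.
function composite(liveFrame, backgroundFrame, personMask) {
  return liveFrame.map((pixel, i) =>
    personMask[i] === 1 ? backgroundFrame[i] : pixel
  );
}

const background = ['B0', 'B1', 'B2', 'B3'];
const live = ['L0', 'P1', 'P2', 'L3']; // P1, P2 are person pixels
const mask = [0, 1, 1, 0];
console.log(composite(live, background, mask)); // ['L0', 'B1', 'B2', 'L3']
```

The real demo does this per frame on canvas pixel data, which is why it can run in real time in the browser.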
Cool. So can you give us a few more details on how this all
works?
So essentially, all this is using body segmentation.
And this is running in TensorFlow.js in the web browser.
And this can distinguish 24 unique body areas
across multiple bodies in real time.
You can see on the right hand side that this works pretty

well when all those settings are bumped up to high, and we can even get pose estimation showing on the bodies too, which estimates where the skeleton is.
These can be used in delightful ways, such as clothing size
estimation, which you can see here.
This is another prototype I created.
And I don't know about you, Dion, but I am terrible at
knowing what size clothing to buy in my once a year
clothing purchase.
Absolutely.
Yeah, totally.
And here you can see, I just enter my height, and in less than 15 seconds I can get an estimate of my inner leg, my chest, and waist measurements, which a clothing site can then use to estimate what size I am: a small, medium, or large.
Now this can even give you superpowers as you can see on
this next slide. And one of our community members from the
USA has combined this with WebGL shaders
to turn himself into Iron Man of sorts, and he can shoot lasers from his eyes and mouth using our face mesh model, which is pretty cool. And it runs buttery-smooth at 60 frames per second in the web browser.
And you can even go further, of course.
There are many web technologies out there that you might

want to combine with machine learning, such as WebXR,
WebGL, and TensorFlow.js.
And if you do that, you can get an example like this,
from another one of our community members in Paris, France,
who can essentially scan a magazine, and if there's a
person in it they can bring that person into the living
room full size.
You can walk up to them and inspect them in more detail.
Pretty cool technology. But of course, after seeing this,
I thought to go one step further.
And if I add WebRTC, I can then teleport myself
anywhere in the world in real time, and
this is using a complete rewrite, using
WebRTC, A-Frame, Three.js, and TensorFlow.js together to
create this demo.
And it really does make a big difference when I'm
seeing someone in my room which I can walk up to
and move around. It's a massive difference compared to
a rectangle that's solid on the screen.
So this could be the future of video conferencing.
Who knows? But it's great to play with technologies and
push the boundaries of the web.
That's really cool. From invisibility cloaks to
teleportation.

That's pretty cool stuff.
Yeah. Exactly. This changes everything, essentially.
So, if we kind of zoom out a second, how should web developers think, generically, about the role of ML and TensorFlow.js, and how it could fit into their web applications?
Yeah, that's a great question. And obviously right now,
in fact, machine learning in JavaScript is still a pretty
new thing. We're at the very early stages.
But that's super exciting, too, because there's so much
potential to be unraveled at this time as well.
So on that note, I would ask web developers to consider how
machine learning might fit into their existing pipelines.
Maybe you're developing a content management system.
In that case, you could potentially use something
like automatic image cropping to detect where a human face is in the image, so then you can make sure
that it's cropped nicely when you're resizing

with your CSS. Or maybe you want to summarize
a blog post article. So you have one paragraph of text that
shows in the search results.
That is now possible in machine learning too, and that can
be done automatically.
So I think I would encourage people to experiment and go
outside of the regular box of thinking.
And of course, on that note, on this slide, you can see all
the different areas JavaScript can run on the browser,
server side, mobile native, desktop native
and even Internet of Things. And TensorFlow.js supports all
of these environments, too.
So maybe you want to combine it with hardware: if you can recognize an object, maybe you can trigger something to happen in the physical world, or something on the server side, like talking to a third-party service.
And on that note, TensorFlow.js can essentially run models, retrain them via transfer learning, or even allow you to write your own models from scratch if you so desire. Now, on that note, we have
a ton of pretrained models you can use to get started, such
as the body segmentation you saw just a little bit ago, but
also things like pose estimation, speech commands,

face mesh, hand pose, and some cool natural language
processing. And just to dive into that a little bit more,
you can see how these models work here.
So here's the object recognition in action.
This class allows you to recognize 90
pretrained objects like these dogs you can see here
and you get the bounding boxes that come back at the same
time, which is pretty neat.
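Models like this typically return an array of detections, each with a class label, a confidence score, and a bounding box, and a common first post-processing step is simply dropping low-confidence results. A sketch - the `{class, score, bbox}` shape here is an assumption, loosely modeled on the output of object-detection models of this kind:

```javascript
// Keep only detections whose confidence clears a threshold.
// The {class, score, bbox} record shape is illustrative, loosely
// based on typical object-detection model output.
function confidentDetections(predictions, minScore = 0.5) {
  return predictions.filter((p) => p.score >= minScore);
}

const predictions = [
  { class: 'dog', score: 0.97, bbox: [10, 20, 100, 80] },
  { class: 'dog', score: 0.91, bbox: [150, 30, 90, 85] },
  { class: 'cat', score: 0.22, bbox: [5, 5, 40, 40] },
];
console.log(confidentDetections(predictions).map((p) => p.class)); // ['dog', 'dog']
```

You'd then draw the surviving bounding boxes over the image or video frame.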
Or what about this: face mesh? Just 3 megabytes in size, and it can understand 468
unique landmarks on the face.
And this could be cool for making face masks or some kind
of AR experience, such as the one you see on the right.
ModiFace, which is part of the L'Oréal group, has actually used this for AR makeup try-on.
And this lady on the right hand side is not actually
wearing any makeup at all.
In fact, she's selecting the color of makeup she wants to
try on and she can do that all in real time in the web
browser, in a much more hygienic way, which is pretty cool.
And then finally, I just want to talk about some of the client-side superpowers you get if we think about running machine learning in the web browser.
The first one is privacy, as we hinted at before.
Essentially because we're running in the web browser,

none of that data is ever being sent to a server for classification. So that allows you to work with sensitive data in a way that is great for privacy.
Linked to that, of course, is lower latency: as there's no server involved, there's no 100 milliseconds or so of round-trip time from the mobile device to the server.
You cut that out completely by running on the edge.
And of course, lower cost.
You might spend tens of thousands of dollars hiring
beefy GPUs and CPUs to run the machine learning models on
your server-side environments.
By running on the client-side, all that goes away because
you're using the hardware of the client to run instead.
Got it. So how can people get started?
You've definitely piqued my interest; now I want to start playing.
Sounds good. So if there's one slide you want to kind of
screenshot from today's talk, it'll be this one here.
On this slide, you can find out our website, our models,
GitHub code. We are open source so feel free to contribute.
There's a Google Group for asking more technical questions.
And of course, some great boilerplates to get started on
CodePen and Glitch.

And for those of you who want to go really deep, we recommend the 'Deep Learning with JavaScript' book. It's very comprehensive, and even if you have no
background in machine learning, as long as you know some
JavaScript, it will walk you through everything step by
step. So I highly recommend checking that out.
And on that note, I would also suggest checking out Teachable Machine, right after this show maybe. In just a couple of minutes, you can use this website to train it to recognize any object in your room: in just 30 seconds, you take a few images of that thing, hit Train, and you'll get a classifier that can then classify that object.
If you like it, you can then export this model to a JSON
format and then use it on any website you'd like to be more
creative and make your own creations using that.
And then finally, I'd ask you to come join the community.
We've got this #MadeWithTFJS hashtag that you can use
that allows you to share what you've made so we can find
it and we can invite you to our future show-and-tells.
And of course, allow others to get inspired by your great
work, too. So finally, I just want to leave you with one
more inspirational example.

This guy from Tokyo, Japan, is actually a dancer, but he's used TensorFlow.js to make this cool hip-hop video.
So machine learning really is now for everyone, and I'm super excited to see how creatives will start using this, and not just academics: musicians, artists, and much, much more.
So if you do make something, do use the #MadeWithTFJS
hashtag, so we can find it and share it for you.
And I look forward to seeing what you make.
And with that, feel free to stay in touch using the
following links, if you so desire, on Twitter and LinkedIn.
Great. Thanks so much for joining us, Jason.
And thanks to everyone who joined me on the Day 2 kickoff.
So now let's get to today's sessions, where we focus on updates across our tools and the web platform to make developers more productive, as well as the latest in the world of PWAs.
Please enjoy the show. And remember, the team is here to
chat with you on web.dev/live
and via YouTube.
I'll see you there today. And we'll be back tomorrow morning for the Day 3 kickoff.

[MUSIC PLAYING]
PAUL LEWIS: Hey, everybody. Paul Lewis here.
Just going to show you some features that have been landing
inside of DevTools recently, and I have, as always, Surma
with me. And we're just going to talk those through,
navigate around. I've got a few demos and things to show
you. How you doing, Surma?
SURMA: I'm good. I thought we could start with - I know
you've been working on the Performance tab in some ways.
And that's something that people are interested in.
Why don't we start with that?
PAUL LEWIS: Let's start with that, then. OK. So if you've
not seen it - in fact, let's just go there right now -
web.dev/metrics.
It helps if I can type - as always, my typing is woeful, but
let's have a look.
SURMA: Nothing has changed.
PAUL LEWIS: Nothing has changed - web.dev/metrics. OK, so
here you can see we've got important metrics to measure:
First Contentful Paint, Largest Contentful Paint, First
Input Delay, Time to Interactive, Total Blocking Time, and
Cumulative Layout Shift.
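For reference, three of these metrics - Largest Contentful
Paint, First Input Delay, and Cumulative Layout Shift - have
documented "good" and "poor" thresholds on web.dev. A minimal
helper (not from the talk) could bucket a measured value; the
numbers below are the published guidance at the time of
writing and may change:

```javascript
// Bucket a Core Web Vitals measurement against the thresholds
// documented on web.dev (LCP/FID in milliseconds, CLS unitless).
// Illustrative only - the published numbers may change over time.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  FID: { good: 100, poor: 300 },
  CLS: { good: 0.1, poor: 0.25 },
};

function rateMetric(name, value) {
  const t = THRESHOLDS[name];
  if (!t) throw new Error(`No thresholds defined for ${name}`);
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

console.log(rateMetric("LCP", 1800)); // "good"
console.log(rateMetric("CLS", 0.3)); // "poor"
```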
Now, some of these metrics - it's worth saying from the off
- these are all designed to help you improve the user
experience in your sites and your apps.
But not all of them are suitable for what we'd call a lab
setting; some of them are what we'd call field metrics.
So, for example, First Input Delay is not really something
you typically test in a lab setting - it's something you
measure more in the field, with RUM and live data.
So not all of these make sense in a lab, but the ones that
do, we've been trying to get into the Performance panel
inside of DevTools.
SURMA: Because once you open DevTools, you are in a lab
setting. You can never ask a real user, 'Can you please
open DevTools and quickly do the measurement?' The second
you need DevTools to measure
something, you're in a lab.
PAUL LEWIS: Yeah. And so our assumption is, predominantly,
that if you're on localhost on your machine, that's what we
refer to as the lab setting.
So let me just run you through very quickly the kinds of
things that we've got. So if I go to the web.dev home page,
I've got the Performance tab open here in Chrome.
This is currently in Canary, so some of these features are
very hot off the press. They're subject to change, so just
bear that in mind. If these things look a little bit
different over the coming weeks and months, it's because
we're working on them. We just wanted to share them with
you, because we're excited about them and we want to show
them to you. So if I take a recording - I'm going to go to
Record, and I'm going to hit, in my case, Command+Shift+R,
which will do a reload without any caching.
Just wait for a little while for things to settle.
SURMA: Or a service worker. Like, it's literally as if you
hit the network.
PAUL LEWIS: Yeah, exactly. So I'm going to stop that
profile. I left it running for quite some time there.
And it's going to process that.
OK. So let me bring that down. This is actually a really
good trace recording - a performance recording. If you've
got something like this in the lab, you'd be super happy.
And the reason is, if I zoom in a little bit here, you can
see that most of these top-level tasks are all really,
really short. And that's really good, because it means...
SURMA: That's super short.
PAUL LEWIS: Yeah, it means that the main thread is
remaining responsive. So let me just zoom back out.
But let's talk about some of these metrics.
So, for example, DOM Content Loaded - that's been around
since forever, really, and that continues to be shown here.
But you'll see this Timings row contains some new ones,
like First Paint, First Contentful Paint, First Meaningful
Paint, and Largest Contentful Paint. And you can actually
see on my screen here, when I roll over Largest Contentful
Paint, the element that is the Largest Contentful Paint
highlights. We put an overlay on that one so you can see.
So immediately, some of these metrics that you're going to
see on web.dev/metrics are showing up
inside of the DevTools timeline.
So that's the high-level metrics.
What we can also do, though, is show long tasks.
Now, as I said, all these tasks are pretty quick, but I'm
running on decent hardware here. This one here is 18
milliseconds, and I think we could probably do another
recording and actually show it slowed down.
So what I'm going to do is go to the settings here and
switch on CPU throttling, with a 6x slowdown.
The reason we're doing that is just so that we can see what
it would look like on slower hardware.
SURMA: You have a sixth of a MacBook now.
PAUL LEWIS: Yeah, exactly. So I'm going to record, and I'm
going to refresh as I did before. As I say, I'm going to
give it time to actually settle down - some of our metrics
do need to wait until there's only a couple of requests
running on the network and so on.
So I'm going to stop that recording.
And what we start to see are these red triangles, so let me
zoom in on what's going on here.
So there's a couple of things to notice.
One is that this task, slowed down six times in this
particular case, is 168.54 milliseconds. What we're trying
to do is keep all our tasks under 50 milliseconds.
The reason is that doing so keeps the main thread
responsive to user interactions, because most of the
JavaScript that people are running - touch handlers, mouse
click handlers, and all those kinds of things - runs on the
main thread. And so what we want to make sure is that we're
not blocking the main thread with work.
So keeping tasks under 50 milliseconds is the goal.
So the blocking time...
SURMA: And, let's be honest here, 50 milliseconds is
somewhat lenient. If we went by the good old goal of 60
frames per second, it would be even less. But during the
first load we can be a bit more lenient, because nobody
expects to interact with a website during the first second
of loading. So we can have a bit of a higher threshold
there. But above 50 milliseconds, it can become very
noticeable if the main thread is blocked.
PAUL LEWIS: Yeah, absolutely - great point. In the context
of page load, the 50 millisecond budget applies; if you're
animating - just to repeat that - you'd be going for 60
frames a second, and your tasks would need to be a lot, lot
shorter.
So if you're wondering how much of your recording is
actually blocking - how much candy striping overall you
have going on in here - we actually have this new metric at
the bottom, Total Blocking Time, which you can think of as
the amount of candy striping in this recording as a whole.
In this particular case, it's 230.05 milliseconds, and we
have a link to explain what's going on.
And you can see that it will take you over to the Total
Blocking Time metric information.
SURMA: I find that quite interesting, because the Total
Blocking Time is actually not necessarily the amount of
time that the main thread has been blocked: you have a 50
millisecond budget per task that you're allowed to block,
but everything that is over counts into that Total Blocking
Time.
So that's the amount of time that you went over the 50
millisecond budget.
PAUL LEWIS: Exactly. I should say that we tend to stop
counting Total Blocking Time at one metric that is not
shown here, which is Time to Interactive.
When the back end in Blink notifies us that we've hit the
interactive time - if it can - we stop tracking Total
Blocking Time for that value at the bottom.
But the candy striping here would tally pretty much
entirely with that Total Blocking Time.
At the bottom, you've got this number - 230.05, in this
case. If that number starts to creep up and up, that's
something you're going to want to take a look at, because
it means you're spending quite a lot of time with
long-running tasks on the main thread, and the chances are
it's negatively impacting the user experience.
In this case, web.dev is actually really good; it's just
that I had to introduce some slowdown. Which is a good
feature to know about: you can introduce slowdown both in
terms of the CPU and the network as well.
You can do that if you so desire.
So that's the Total Blocking Time.
That's long-running tasks as well.
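That per-task budget can be sketched as a small function - a
rough illustration of the rule just described, not the actual
DevTools implementation: each task contributes only its time
beyond 50 milliseconds, and tasks after Time to Interactive
are ignored:

```javascript
// Rough sketch of Total Blocking Time: each task contributes only
// the portion of its duration beyond the 50 ms budget, and tasks
// that start after Time to Interactive are not counted.
// Illustrative only - the real calculation lives inside Blink.
const BLOCKING_BUDGET_MS = 50;

function totalBlockingTime(tasks, timeToInteractive = Infinity) {
  return tasks
    .filter((task) => task.start < timeToInteractive)
    .reduce(
      (sum, task) => sum + Math.max(0, task.duration - BLOCKING_BUDGET_MS),
      0
    );
}

// The 168.54 ms task from the recording would contribute
// ~118.54 ms of blocking time; a 40 ms task contributes nothing.
console.log(
  totalBlockingTime([
    { start: 0, duration: 168.54 },
    { start: 200, duration: 40 },
  ]).toFixed(2)
); // "118.54"
```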
PAUL LEWIS: We mark those at 50+ milliseconds with the
candy striping and the little red triangle in the corner.
Now, the other thing we've also introduced is a thing
called layout shifting.
So if I just go to the Google home page, do a recording
here, refresh like so, and then stop - OK. What you're
going to see, if we find any layout shifts, is a new track:
Experience is the new track, it's called Experience, and
inside here we've got a layout shift.
And if we click on this - let me just bring this up so you
can see a bit more - you get information on the layout
shift. So the idea of a layout shift is - I think most
people have experienced this - you're browsing the web,
you're about to tap on a button, and then it moves to
somewhere else on the screen.
And it's really frustrating.
And so we're starting to give you the ways and the means of
tracking those layout shifts in the course of interacting
with your page and so on.
And we've put them into this Experience track.
So there's some stuff about scoring which, if you read the
documentation on Cumulative Layout Shift, will explain the
scoring a little bit more to you.
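That documentation boils the scoring down to two factors: how
much of the viewport the unstable element affected (the impact
fraction) and how far it moved relative to the viewport's
largest dimension (the distance fraction). A simplified,
single-element sketch - assuming the impact fraction has
already been computed - looks like this:

```javascript
// Simplified layout shift score per the documented formula:
//   score = impact fraction * distance fraction
// where the distance fraction is the move distance divided by the
// viewport's largest dimension. Real CLS also unions the areas of
// all unstable elements across both frames; this sketch covers a
// single element for illustration.
function layoutShiftScore(impactFraction, movePx, viewportW, viewportH) {
  const distanceFraction = movePx / Math.max(viewportW, viewportH);
  return impactFraction * distanceFraction;
}

// An element covering half the viewport that moves 80px in an
// 800px-tall viewport scores 0.5 * 0.1 = 0.05.
console.log(layoutShiftScore(0.5, 80, 600, 800)); // 0.05
```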
And you can also see whether or not it was in response to
some recent user input.
We also include some location information - where you went
from. And you can see this here: it's come from here, and
this element has shifted to here. You see that overlay.
So I'm pretty confident I know what it is.
It's probably this privacy reminder here that is coming in.
Now, one of the things that we probably need to add is
which element it is, specifically.
And there is that information in the trace; it's just a
case of plumbing it through.
So that's actually something that I'm hopefully going to
look at. So maybe by the time you see this video, go and
have a look in Chrome Canary - possibly we've got the
element there.
SURMA: If not, maybe doing the recording with screenshots
helps. You can go through the filmstrip and actually see
something shift, and that might be a good way to work
around it until DevTools can pinpoint it for you.
PAUL LEWIS: Absolutely. That's a great way of doing it.
Often, I think, when it's your own code, you have a fair
sense of what's likely to be moving around. And the thing
to bear in mind is that layout shifts do cause problematic
UX, so it's a good thing to try and get those down - and
ideally removed, if at all possible.
And the way to do that, normally, is to reserve space for
your content ahead of time in your styles, so that you're
not just leaving content to move other content out of the
way. Sometimes it's not possible, but where you can, if you
can mark certain areas as being the correct size ahead of
time, that should help an awful lot.
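One common way to do that reserving - sketched here with a
hypothetical class name, not markup from the demo - is to give
the element explicit dimensions in CSS so the browser can lay
out its box before the content arrives:

```css
/* Reserve space for an image before it loads, so later content
   isn't pushed around when the asset arrives.
   (.promo-banner is a hypothetical class for illustration.) */
.promo-banner {
  width: 100%;
  aspect-ratio: 16 / 9; /* the box is sized at layout time */
}
```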
So that's the layout shift information - that's added in.
So those are the main bits of Performance. In summary, we
have the Timings, which are here; we've got the layout
shifts; and we also had long-running tasks, which we show
on the main thread with the candy striping and the red
triangle.
SURMA: So in DevTools, we have also been working on
WebAssembly and the debugging experience DevTools gives
you. You write your code in whatever language, then you
compile it to this VM bytecode, and you don't really want
to step through the bytecode - you want to step through the
original code that you wrote.
And, for the longest time, WebAssembly in DevTools used
source maps to define that mapping from the compiled
WebAssembly binary code to the original source code.
But source maps were built for JavaScript and minifiers and
transpilers, so while that worked quite well, it's also
lacking some capabilities that are quite important.
So I have a little example here - it's actually a sample
from Emscripten that I just simplified a little bit.
It just draws a little gradient; it doesn't really matter
what it does. But this is C code, and I have compiled it to
WebAssembly with our new debugging experience.
Because in native land - in binary land - there has been a
debugging format called DWARF for a very long time that has
all these capabilities.
But until recently, DevTools didn't understand it.
So now we have compiled this to WebAssembly, and this is
what the output looks like: it just draws this nice little
gradient that you see on screen here.
But what's cool is that now, if you go to the Sources panel
in DevTools over here, we have our usual JavaScript files
and we have our WebAssembly bytecode - which, you know, is
not the kind of code you want to step through.
It's basically assembly, and while that can sometimes be
useful, it doesn't read that well, let's be honest.
What you'll now find, with DWARF support in DevTools, is
that we actually have a mapping to the original C file.
Again, this also worked with source maps, but this has some
more capabilities.
So I can now set a breakpoint here in the C code -
actually, let's set two breakpoints, why not? And so if I
reload, the program will be halted once we reach that
breakpoint, and then we can continue going through it as
we're used
to. And you can see the UI updating - so if I now step over
this SDL_UnlockSurface, the gradient shows up, because
we're actually stopping the program and updating the UI as
we go along. And I think that's pretty cool.
PAUL LEWIS: So the question I have for you there, Surma,
is: are you able to inspect, say, the values of i or j or
alpha in the middle of your loop?
SURMA: Right. So, for example, if I hover over this,
nothing shows up. And that's exactly the capability that
was also missing from source maps, because source maps
can't really handle the renaming of variables very well.
DWARF, however, can.
So while we are using DWARF for a more efficient debugging
experience, those capabilities are potentially possible -
we just haven't built that part yet. We are using DWARF to
give you the old experience of stepping through and
breakpointing and all these things, but you can't quite
inspect variables yet. We are working on that, though.
And only now is it even possible to
build. So that's really the exciting part here.
The other part is that we are no longer using a WebAssembly
interpreter during debugging, but actually our baseline
compiler. Usually, in the olden days, when you started
debugging your WebAssembly with source maps, you would
experience much, much slower WebAssembly execution - so if
you had a long loop and you wanted to step through it, you
would have to wait longer until it was done.
Now we use the baseline compiler, which generates much more
performant code under the hood, so you can debug with
higher performance. And if that isn't a great tagline, I
don't know what is, to be honest.
PAUL LEWIS: So you've kind of pointed at this, I think, a
little bit: this is experimental, still a work in progress.
If somebody wanted to actually try this out for themselves
in Canary, what would they do? Where would they go?
SURMA: So it is currently in Canary. It is still being
worked on - in the last couple of days, it kind of
oscillated between working and not working. Sometimes I
could set breakpoints, sometimes I couldn't.
But we are on track for getting this ready for stable.
I don't know if we have set a milestone yet; if so, I will
update the description of this talk accordingly.
If you want to play around with this, please do.
If you just google for Emscripten and DWARF, you'll find
that the most recent release now supports these DWARF
debugging symbols by default.
So if you just update your Emscripten to the most recent
release, this should work out of the box.
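For reference, and not shown in the demo: with a recent
Emscripten release, passing the standard `-g` flag is what
embeds those DWARF symbols in the generated WebAssembly. A
build invocation might look like this (file names here are
hypothetical):

```shell
# Compile C to WebAssembly with DWARF debug info embedded in
# the .wasm. -g keeps full debug information; recent Emscripten
# releases emit DWARF when it is passed.
emcc -g gradient.c -o gradient.html
```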
Just to be clear, there is no
source map file in the flow that I'm
serving, so source maps cannot have provided
this debugging experience.
So this is definitely using the DWARF symbols that are in
the WebAssembly binary.
Cool.
Well, with that, I think we have covered what is important
and new in DevTools.
No, I've got more stuff to show you.
I've got the Issues.
You have more stuff to show?
Yeah.
Well, the next thing I want to show you is the Issues
tab. Now you, Surma - you are similar to me, I think - when

English: 
But we are on track for
getting this ready for stable.
I don't know if we have
set a milestone yet.
If so, I will update the
description of this talk
accordingly.
But if you want to play
around with it, please do.
If you just google
for Emscripten and DWARF,
the most recent
release now has support
by default for these
DWARF debugging symbols.
So if you just
update your Emscripten
to the most recent
release, this should just
work out of the box.
Just to be clear, there
is no source map file
in the flow
that I'm serving.
So source maps cannot have
provided this debugging
experience.
So this is definitely
using the DWARF symbols that
are in the WebAssembly binary.
PAUL LEWIS: Cool.
SURMA: Well, with
that I think we
have covered what is important
and new in DevTools, didn't we?
PAUL LEWIS: No.
I've got more stuff to show you.
I've got the issues--
SURMA: You have
more stuff to show?
PAUL LEWIS: Yeah.
Switchbacks [INAUDIBLE]
SURMA: Ah, well.
PAUL LEWIS: All right, Surma.
The next thing I want to
show you is the Issues tab.
Now you're similar
to me, I think.

English: 
you see lots of warnings and
messages in the console over here, you kind of go
'oh sure' and you start to kind of mentally filter those
out. And you - maybe if you're like me - you kind of clear
the console and you think, 'OK, it's just too busy,
just too noisy.
I'll kind of deal with those at some point'.
Well, if you are like that, I completely
understand where you're coming from. So we've added this
new thing, which is the Issues tab.
It says that 'issues have been detected'.
So if you see something like this, we've detected some
issues during the execution of your page and you can go
to the Issues tab, which opens a new tab here.
Now, sometimes if the page is
already loaded and you bring up the console, it might say,
well, there's possibly more information.
Would you like to reload? So sometimes you get an option to
reload the page and you might see more issues coming
through that way.
So you can see here that on this particular page,
some SameSite cookie
issues are starting to show.
So you can see I've got the issues listed here and I can

English: 
When you see lots of
warnings and messages
in the console over here,
you kind of go, uh, sure.
And you start to kind of
mentally filter those out.
And maybe if you're like me,
you kind of clear the console,
and you think, OK--
it's just too busy,
just too noisy.
I'll kind of deal with
those at some point.
Well, if you are
like that, completely
understand where
you're coming from.
So we've added this new thing,
which is the Issues tab.
It says the "Issues
have been detected."
So if you see
something like this,
we have detected some
issues during the execution
of your page.
And you can go to
the Issues tab,
which opens a new tab here.
Now, sometimes if the
page is already loaded,
and you bring up the
console, it might say, well,
there's possibly
more information.
Would you like to reload?
So sometimes you'll get an
option to reload the page,
and you might see more issues
coming through that way.
So you can see here that
this particular page,
some same-site cookies issues
are starting to show up.
So you can see I've got
the issues listed here.

English: 
spin this down and it gives me an explanation of what's
going on and how I can resolve
this issue directly. It also tells me which cookies
in this case are affected, and also links me off to
some information on web.dev about how
this all works.
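For the SameSite case, the fix is usually to state the attribute explicitly when the cookie is written. Here is a minimal sketch; the helper name and defaults are illustrative, not from the talk:

```javascript
// Hypothetical helper that builds a cookie string with an explicit
// SameSite attribute, so the browser no longer has to guess the intent.
function cookieString(name, value, { sameSite = "Lax", secure = false } = {}) {
  let cookie = `${name}=${value}; SameSite=${sameSite}`;
  if (secure) cookie += "; Secure"; // Secure is required when SameSite=None
  return cookie;
}

// A cookie meant for cross-site use must say so and be Secure:
const crossSite = cookieString("session", "abc123", {
  sameSite: "None",
  secure: true,
});
// → "session=abc123; SameSite=None; Secure"
```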
It's also true of something like mixed content as well.
So if I go to something with mixed content, which is HTTPS
and HTTP mixed together, similarly,
I can go here, and it says that there are eight
cases where I have mixed content
where all resources should've been loaded via HTTPS,
but they haven't been. And again, it lists
the requests and the resources
in question.
So hopefully that's going to make the console less
noisy and makes it much more
clear to you how you can track down any issues that
DevTools has noticed in your site and your app.
Okay, Surma, the last thing I want to talk about is color

English: 
And I can spin this
down, and it gives me
an explanation of
what's going on
and how I can resolve
this issue directly.
It also tells me which cookies
in this case are affected.
And it also links me off to
some information on web.dev
about how this all works.
It's also true of something
like mixed content as well.
So if I go to something
with mixed content,
which is HTTPS and
HTTP mixed together--
similarly I can go
here, and it says
that there are eight cases
where I have mixed content
where all resources should
have been loaded via HTTPS,
but they haven't been.
And it lists-- again,
it lists the requests
and the resources in question.
So hopefully that's going to
make the console less noisy
and makes it much
more clear to you
how you can track down
any issues that DevTools
has noticed in your
site and your app.
OK.
So the last thing I
want to talk about

English: 
vision deficiency emulation.
So let me fire up this brilliant video of HTTP 203 for you
in just a second. Yeah, talking
about imposter syndrome, something I personally suffer
from. So if you've not seen that, go ahead and watch it,
it's a really good video.
So what I've got on my screen, I've got the
rendering tools inside of DevTools.
Now, if you've not seen this, you can go to more tools and
then rendering down here. If you click on that, this tab
will pop up and there's a load of tools just for rendering
inside of here.
And since we're passing through here, the layout shifts
that I talked about earlier in the performance area, layout
shift regions, this is a live update version
of layout shifts. So if you're just wanting to see very
quickly what layout shifts you have during the,
you know, the lifecycle of your page, you can switch this
on, and if you see blue flashes, then that's
a layout shift that would have been caught if you had taken
a performance recording.
Now, I'm not going to show this today because as it says
here, it may not be suitable for people prone to

English: 
is color vision
deficiency emulation.
So let me fire up this brilliant
video of HTTP 203 in just
a second.
SURMA: Yes, very good.
PAUL LEWIS: Yeah, talking about
imposter syndrome, something
that I personally suffer from.
So if you've not seen that,
go ahead and watch it.
It's a really good video.
So what I've got
on my screen, I've
got the rendering tools
inside of DevTools.
If you've not seen this,
you can go to More Tools
and then Rendering down here.
If you click on that,
this tab will pop up.
And there's a load of tools just
for rendering inside of here.
And since we're
passing through here,
the layout shifts that
I talked about earlier
in the performance area,
layout shift regions--
this is a live update
version of layout shift.
So if you're just wanting to see
very quickly what layout shifts
you have during the
lifecycle of your page,
you can switch this on.
And if you see blue
flashes, then that's
a layout shift that would
have been caught if you'd
taken a performance recording.
Now, I'm not going
to show this today,
because, as it says
here, it may not

English: 
photosensitive epilepsy, and I want to be mindful of that.
But if you think this is the right tool for you, definitely
check that out when you're looking in the rendering tools.
So if I scroll to the bottom here of this list,
you'll see that you can emulate vision deficiencies.
And if I start playing this video back, you
can see that if we work our way through this list, we have
blurred vision.
You see it applies a blur to everything.
And I can still interact with the page and still click on
buttons and so on, but you see, the content is completely
blurred.
We also have
protanopia, and there is deuteranopia.
Tritanopia, and achromatopsia - assuming
I'm pronouncing those correctly, if I'm not,
then forgive me.
I find it really cool that all these
effects, so to speak, are applied to the page without
disabling the interactivity or the animations or even the
video. So you can really check if all your
animation experience holds up when someone

English: 
be suitable for people prone
to photosensitive epilepsy.
And I want to be
mindful of that.
But if you think this is
the right tool for you,
definitely check
that out when you're
looking in the rendering tools.
So if I scroll to the
bottom here of this list,
you'll see that you can
emulate vision deficiencies.
And if I start playing
this video back,
you can see that if we work
our way through this list,
we have blurred
vision, which we see
applies a blur to
everything-- and I can still
interact with the page.
I can still click on
buttons and so on.
But you see the content
is completely blurred.
We also have protanopia.
And there is deuteranopia,
tritanopia, and achromatopsia,
assuming I'm pronouncing
those correctly.
If I'm not, then forgive me.
SURMA: I find it really
cool that all these effects,
so to speak, are
applied to the page
without disabling
the interactivity
or the animations,
or even the video.
So you can really check
if all your animation,

English: 
has one of these vision deficiencies.
So I think it's just a really cool thing.
Yes. Absolutely.
So you might have noticed, Surma.
These are - these seem fairly extreme
when you look at this. It looks to me -
as somebody who doesn't have any of these vision
deficiencies - these look like a very extreme form
of change, visual change.
So, I mean, I said this in the introduction with Dion
earlier, but to say it again,
these deficiencies that we're emulating
are physiologically accurate, but
the most acute form of that particular
vision deficiency.
So, it's not on/off like we have
emulated here, where it's sort of no vision deficiency
and then protanopia.
If you have one of these vision differences, it's much more
likely to be a spectrum and you may have

English: 
the entire experience, holds
up when someone has one
of these vision deficiencies.
So I think that's just
a pretty cool thing.
PAUL LEWIS: Yeah, absolutely.
So you might have noticed--
so these are-- they seem fairly
extreme when you look at this.
It looks to me, as
somebody who doesn't
have any of these
vision deficiencies,
these look like a very extreme
form of change, visual change.
So-- I mean, I said this in the
introduction with Dion earlier.
But to say it again, these
deficiencies that we're
emulating are
physiologically accurate
but the most acute form of that
particular vision deficiency.
So it's not on/off like
we have emulated here,
where it's sort of no vision
deficiency and protanopia.
If you have one of these
vision differences,
it's much more likely
to be a spectrum,

English: 
a certain amount of one of these vision deficiencies.
So what this is, it's like the most extreme
version of all of these vision deficiencies.
And so that way you can make sure if your page holds up
in the most extreme case, you know, it will also hold up
for any less severe case.
You can make sure your website remains usable for anyone,
really.
That is exactly the idea behind it, exactly that.
If you're optimizing for accessibility, you want to be as
confident as you can be that you're capturing
the colors and the contrast and all of that stuff
in the most helpful way
possible. And so by going for the most acute version of
these vision deficiencies, exactly as you said, you can
optimize for those.
And then anything up to, and including those, will also be
covered as well.
So that's it, we've talked about the Issues tab, we've
talked about long tasks, we talked about layout shifts,
color vision deficiency, and WebAssembly debugging with
DWARF.
I know, right! And that's quite a lot of new things
in DevTools, and you should definitely try those out.

English: 
and you may have a certain
amount of one of these vision
deficiencies.
So what we're trying to do--
SURMA: So what you
say is like, this
is the most extreme
version of all
of these vision deficiencies.
And so that way,
you can make sure
if your page holds up in
the most extreme case,
you know it will also hold
up for any less severe case.
You can make sure your website
remains usable for anyone,
really.
PAUL LEWIS: That is
exactly the idea behind it.
Exactly that.
If you are optimizing
for accessibility,
you want to be as
confident as you
can be that you're capturing
the colors, and the contrast,
and all of that stuff in the
most helpful way possible.
And so by going for the most
acute version of these vision
deficiencies, exactly as you
said, you can optimize for those.
And then anything up
to and including those
will also be covered as well.
So that's it.
So we've talked
about the Issues tab.
We talked about long tasks.
We talked about layout shifts,
color vision deficiency,
and WebAssembly
debugging with DWARF.
SURMA: I know, right?
And that is quite a lot of
new things in DevTools.
And you should
definitely try those out.

English: 
PAUL LEWIS: Absolutely, yeah.
So if you want to
try any of these out,
probably the easiest thing to
do is to fire up Chrome Canary,
give them a go.
If you run into any issues,
you can go over here
to the Help menu, and
you can report a DevTools
issue, which will create a bug.
And you can fill that in,
and that'll make its way--
SURMA: A.k.a., your inbox.
PAUL LEWIS: I really hope not.
So with that,
thanks for watching.
And I guess we'll
see you around.
SURMA: See you around.
Bye.
[MUSIC PLAYING]
SHU-YU GUO: Hello.
My name is Shu, and I work on
the JavaScript specification
as well as the V8 project.
LESZEK SWIRSKI:
My name is Leszek,
and I work as a performance
engineer on the V8 VM.
So Shu, what's new in
JavaScript these days?
SHU-YU GUO: Yeah, a
whole bunch of stuff
has happened since last year.
And you might recognize
some of the features
we're going to talk about today
from some of our colleagues'

English: 
Absolutely. So if you want to try any of these out,
probably the easiest thing to do is to fire
up Chrome Canary. Give them a go.
If you run into any issues, you can go over here to the
Help menu and you can report a DevTools issue, which will
create a bug and you can fill that in and that'll make its
way.
A.k.a. your inbox.
I really hope not.
So with that. Thanks for watching.
And I guess we'll see you around.
See you around. Bye!
Hello, my name is Shu, and I work on the JavaScript
specification, as well as the V8 project.
My name is Leszek and I work as a performance engineer on
the V8 VM.
So Shu, what's new in JavaScript these days?
Yeah. A whole bunch of stuff happened since last year and
you might recognize some of the features we're gonna talk
about today from some of our colleagues'

English: 
2019 Google I/O talk, because language features take a
while to be standardized and to be shipped in the browsers.
The ones we're gonna be talking about today have shipped.
So let's start with the fun stuff.
Like I said, we've either shipped or are about
to ship quite a few syntax features that should make web
devs' lives easier.
For this talk, we'll be focusing on
two features that'll make dealing with optional values
easier. So Leszek, have you ever written code that dealt
with configuration?
Oh, yeah, definitely. Like I'm always using a hashmap for
those things.
Yeah. So I'm running this new chat app right, something
of a strength for Google engineers.
I made some network parameters configurable, which I keep
in this map of configurations called 'config'
that you see on the screen.
But the network configuration is optional because it isn't
always set by the user and it has sub-configurations
like the server and the port, and maybe those aren't set by
the user either. Handling that kind of optionality is kind
of a pain. Currently, folks do this with logical AND
like you see on the screen.
That's pretty verbose.
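The logical-AND pattern on screen looks roughly like this; the config shape is assumed from the description above:

```javascript
// A Map of optional configuration, as described in the talk.
const config = new Map([
  ["net", { server: { addr: "localhost", port: 8080 } }],
]);

// Every level has to be re-checked before going one level deeper.
const port =
  config.get("net") &&
  config.get("net").server &&
  config.get("net").server.port;

console.log(port); // 8080 here; a falsy intermediate value otherwise
```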

English: 
2019 Google I/O talk, because
language features take
a while to be standardized and
to be shipped in the browsers.
The ones we're going to
be talking about today
have shipped.
So let's start
with the fun stuff.
Like I said, we're
about to-- we've either
shipped or about to ship quite a
few syntax features that should
make web devs' lives easier.
For this talk, we'll be
focusing on two features that
will make dealing with
optional values easier.
So Leszek, have you
ever written code
that dealt with configuration?
LESZEK SWIRSKI: Oh
yeah, definitely,
like always using a
hashmap for those things.
SHU-YU GUO: Yeah.
So I'm keeping--
I'm running this new
chat app, something
of a strength for
Google engineers.
I made some network
parameters configurable,
which I keep in this map of
configurations called config.
You can see on the screen.
But the network
configuration is optional,
because it isn't
always set by the user.
And it has subconfigurations
like the server and the port.
And maybe those aren't
set by the user either.
Handling that kind of
optionality is kind of a pain.
And currently, folks
do this with logical AND,
like you see on the screen.
LESZEK SWIRSKI: Oh,
that's pretty verbose.
SHU-YU GUO: Yeah.

English: 
Yeah. For those chains of property accesses, where at
any point some property in the middle could turn out to be
undefined, we added this feature called optional chaining.
Easier to show you on the screen than to talk you through
it. So the optional chaining feature is
the question mark and the dot instead of a plain dot.
Oh, I see. So if netConfig is undefined then
netConfig.server is undefined and
so forth.
Yeah. Almost. It's a little bit more relaxed than that.
If it's undefined or null.
And specifically we call the set of things that are
undefined or null, 'nullish'.
So in this case, if netConfig is nullish, the whole
optional chain is undefined.
If netConfig isn't nullish, but netConfig.server
is nullish then again, the whole thing is undefined.
You get the idea. If nothing is nullish, then
eventually you get the whole property -
the most nested property access.
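That short-circuiting behaviour can be sketched like so; the config values here are assumed for illustration:

```javascript
const config = new Map([["net", { server: { port: 8080 } }]]);
const emptyConfig = new Map();

// `?.` stops and yields undefined as soon as a link is nullish.
console.log(config.get("net")?.server?.port);      // 8080
console.log(emptyConfig.get("net")?.server?.port); // undefined

// null counts as nullish too, not just undefined.
console.log(null?.anything); // undefined
```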
Cool. That's a lot easier to read.
I think so too. Now another common feature while dealing
with configurations is default values.

English: 
And for those chains of property
accesses, where at any point
some property in the middle
could turn out to be undefined,
we added this feature
called optional chaining.
Easier to show you on the screen
than to talk you through it.
So the optional
chaining feature is
the question mark and the
dot, instead of a plain dot.
LESZEK SWIRSKI: Oh, I see.
So if netConfig is undefined,
then netConfig.server is
undefined, and .addr is
undefined, and so forth.
SHU-YU GUO: Yeah, almost.
It's a little bit more
relaxed than that.
If it's undefined or null--
and specifically,
we call the set
of things that are
undefined or null nullish.
So in this case, if
netConfig is nullish,
the whole optional
chain is undefined.
If netConfig isn't nullish, but
netConfig.server is nullish,
then, again, the whole
thing is undefined.
You get the idea.
If nothing is nullish,
then eventually you
get the whole property, the
most nested property access.
LESZEK SWIRSKI: Cool.
That's a lot easier to read.
SHU-YU GUO: Yeah,
I think so, too.
Now, another common
feature while
dealing with configurations
is default values.

English: 
And sometimes folks use
logical OR for this,
like you see on the screen.
LESZEK SWIRSKI: Oh, I've
definitely written that before.
SHU-YU GUO: Yeah.
And it usually works fine.
But sometimes it doesn't.
And it's really surprising
when it doesn't.
Suppose I add a configuration
for enabling compression
to the server.
Do you spot the bug?
LESZEK SWIRSKI: Oh, yeah, right.
How would you actually
explicitly disable
compression, right?
If it's false, then false or
true is still going to be true.
SHU-YU GUO: Yeah, exactly.
If enable compression is false,
false or true, like you said,
is true.
So what we really
want to test here
is not if something
is truthy, which
is what logical OR tests for,
but actually if something
is absent or present.
And we already are
familiar with that concept.
That's nullish.
So we introduced this
new syntax feature
called nullish coalescing,
which is the two question marks.
And that does exactly what you
want here for default values.
It tests for nullishness
on the left-hand side.
If the left-hand
side is nullish,
then it evaluates to
the right-hand side.
If the left-hand
side is not nullish,
then the whole thing evaluates
to the left-hand side.

English: 
And sometimes folks use logical OR for this, like you see
on the screen.
Yeah, I've definitely written that before.
Yeah, it usually works fine, but sometimes it doesn't.
And it's really surprising when it doesn't.
Suppose I add a configuration for enabling compression
to the server. Do you spot the bug?
Oh yeah. Right.
How do you actually explicitly disable compression?
Right. If it's false, then false or true is still going to
be true.
Yeah, exactly. If enableCompression is false, false
or true, like you said, is true.
So what we really want to test here is not
if something is truthy, which is what logical OR tests for.
But actually if something is absent or present and
we already are familiar with that concept, that's nullish.
So we introduced this new syntax feature called nullish
coalescing, which is the two question marks.
And that does exactly what you want here for default
values. It tests for nullishness on the left hand side.
If the left hand side is nullish then it evaluates to the
right hand side. If the left hand side is not nullish,
then the whole thing evaluates to the left hand side.

English: 
So in this case, enableCompression is false.
enableCompression ?? true will still get you false, because
false is not null or undefined.
But if enableCompression wasn't present, if it's undefined,
then you get the default value of true.
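The difference between the two operators can be shown directly, using the enableCompression setting from above:

```javascript
const serverConfig = { enableCompression: false };

// Logical OR treats the explicit false as if it were absent - the bug above.
console.log(serverConfig.enableCompression || true); // true (wrong)

// Nullish coalescing only falls back on null or undefined.
console.log(serverConfig.enableCompression ?? true); // false (respected)

// When the setting really is absent, both give the default.
console.log(({}).enableCompression ?? true); // true
```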
That's pretty cool. Where can I use it?
So you can use both optional chaining and nullish
coalescing in Chrome Stable today.
Now, enough talking from me - that was just
a taste of the new features. You can find more on our
website later, we'll show you the link.
But, you know, we add all these new syntax features, and
I'm worried that supporting them all will slow down the
parser. V8 is known to be fast, and I
don't want to do anything to make it slower.
You know what, that's a fair concern to have.
When we shipped ES6 back in 2015, we actually saw a
big parsing performance drop.
This is measured on Octane on the 'Codeload' benchmark.
And we have this big drop during this implementation phase.
But actually, nowadays, parsing speed
doesn't matter as much as you might think.
Really? Not anymore?
I thought parsing was pretty expensive.

English: 
So in this case, enable
compression is false.
Question mark question
mark true will still
get you false, because false
is not null or undefined.
But if enable compression wasn't
present, if it's undefined,
then you get the
default value of true.
LESZEK SWIRSKI:
That's pretty cool.
Where can I use it?
SHU-YU GUO: So you can
use both optional chaining
and nullish coalescing
in Chrome stable today.
Now, enough talking from me.
That was just a taste
of the new features.
You can find more on
our website later.
We'll show you the link.
But you know, we add all
these new syntax features,
and I'm worried that
supporting them all
will slow down the parser.
V8 is known to be fast, and
I don't want to do anything
to make it slower.
LESZEK SWIRSKI: You know what,
that's a fair concern to have.
When we shipped
ES6 back in 2015,
we actually saw a big
parsing performance drop.
This is measured on Octane,
on the Codeload benchmark.
And we have this big drop during
this implementation phase.
But actually,
nowadays parsing speed
doesn't matter as much
as you might think.
SHU-YU GUO: Oh, really?
Not anymore?
I thought parsing
was pretty expensive.

English: 
LESZEK SWIRSKI: Well,
it's still not cheap.
But in the past year,
we've worked a lot
to move a lot of parsing
off of the main thread
and be able to parse scripts
while they're still loading.
So imagine that Chrome
sees a script like this.
The HTML parser gets up
to it, it sees the script,
and it has to pause
the HTML parsing,
has to download the script,
parse it, execute it.
And only then can it
continue parsing HTML.
SHU-YU GUO: I know it
isn't strictly true,
because of optimizations
like preloading.
LESZEK SWIRSKI: No,
you're absolutely right.
This isn't actually
the whole truth.
And the download of the
script happens a lot earlier
if there's a link preloaded,
or if the preparser finds
a script earlier.
And if the download moves off
of the main thread and earlier
in time, the parse and execute
move earlier, too.
But the thing is the
parsing itself can happen
on a separate thread as well.
It's only really the execution
that has to happen on the main thread.
In particular, if a
script is marked as async,
you can keep
processing the HTML up
until the parsing of the async
script is actually finished
and then it needs to execute.
And we had support for
this basically forever,

English: 
Well, it's still not cheap.
For the past year, we've worked a lot to move as much
parsing off of the main thread and be able to parse scripts
while they're still loading.
So imagine that Chrome sees a script like this.
The HTML parser gets up to it, it sees the script
and it has to pause the HTML parsing, has to download the
scripts, parse it, execute it and only then can it continue
parsing HTML.
Well, I know this isn't strictly true though, because of
optimizations like preloading.
No, you're absolutely right.
This isn't actually the whole truth and the download of the
script happens a lot earlier if there's a link preload or
if the preparser finds a script earlier.
And if the download moves off of the main thread and
earlier in time, then the parse and execute can move
earlier too. But the thing is, the parsing itself can
happen on a separate thread as well.
It's only really the execution that has to happen on the
main thread. In particular, if a script is marked as async,
you can keep processing the HTML up until the parsing
of the async script is actually finished and needs to
execute. And we've had support for this basically
forever, but it's been very limited.

English: 
but it's been very limited.
We've only been able
to concurrently parse
one script at a time.
And we've only been able to
do this with async scripts.
SHU-YU GUO: Yeah.
How come it's been so limited?
LESZEK SWIRSKI: Honestly,
just historical,
technical reasons, which
don't really hold anymore.
So one of the first
things that we did
was move everything from
this dedicated thread
into our global
thread pool, which
meant that they could happen
at the same time, in parallel.
Another thing that
we changed was
to have synchronous scripts
also use this off-thread
parsing functionality.
SHU-YU GUO: I'm
kind of confused.
You said synchronous scripts.
But what's the point of
parsing synchronous scripts
in another thread?
Isn't the whole point for
nonasync scripts that they
block the main thread?
LESZEK SWIRSKI: Well, that
is the point for the execute.
But for the parsing,
even though we're
parsing on a different thread,
if the main thread is free,
that means it can
do other things.
It means that the
user can scroll.
It means that the user can type.
It means that we can
execute other JavaScript
like on-click handlers.
So it's actually
very useful to be--
to have this empty space
here on the main thread.

English: 
We've only been able to concurrently parse one
script at a time and we've only been able to do this for
async scripts.
Yeah. How come it's been so limited?
Honestly, just historical technical reasons which don't
really hold anymore.
So one of the first things that we did was move everything
from this dedicated thread into our global thread pool,
which meant that they could happen at the same time in
parallel. Another thing that we changed was to
have synchronous scripts also use this off thread parsing
functionality.
I'm kind of confused. You said synchronous scripts.
But what's the point of parsing synchronous scripts in
another thread? Isn't the whole point for non-async scripts
that they block the main thread?
Well, that is the point for the execution. But for the
parsing, even though we're parsing on a different thread,
if the main thread is free, it means it can do other
things. It means that the user can scroll.
It means that the user can type.
It means that we can execute other JavaScript like onClick
handlers. So it's actually very useful
to have this empty space here on the main thread.

English: 
SHU-YU GUO: Ah, OK.
I see.
So this is the difference
between improving interactivity
versus just improving
the loading time.
LESZEK SWIRSKI: Right.
But we can improve
loading as well.
Because the parsing is
happening on a separate thread,
we can actually move earlier.
We can start parsing when
the download starts,
but then as data comes
in from the network,
we can feed it into the parser.
And then the actual
parse time doesn't matter
as much as you might think.
All we need is for the parser
to be faster than the network.
SHU-YU GUO: Really?
But the networks are
already pretty fast.
LESZEK SWIRSKI: Not always.
But usually they are.
Fair enough.
And yeah, caches
are even faster.
So we can't completely
ignore parser performance.
So we have invested
a lot into improving
the parser performance as
well-- single-threaded parser
performance.
Starting in 2018, we put
this big effort in,
put some of our best
engineers on it.
And we had actually
very good results
in improving parser
performance just
through programming
optimization.
SHU-YU GUO: Yeah.
Up and to the right, that's the
kind of graph I like to see.
Really fascinating stuff.
I think I learned quite a
bit in just the past five
minutes about making parsing
and compiling faster,

English: 
OK, I see. So this is the difference between improving
interactivity versus just improving the loading time.
Right. But we can improve loading as well.
Because the parsing is happening on a separate thread, we
can actually move it earlier. We can start parsing when the
download starts, and then as data comes in from the
network, we can feed it into the parser.
And then the actual parse time doesn't matter as much as
you might think. All we need is for the parser to be faster
than the network.
Really, but the networks are already pretty fast.
Not always, but usually they are. Fair enough.
And caches are even faster.
So we can't completely ignore parser performance.
So we have invested a lot into improving the parser
performance as well - the single-threaded parser
performance. Starting in 2018, we put in this big effort,
put some of our best engineers on it, and we had actually
very good results in
improving parser performance just through programming
optimization.
Yeah. Up and to the right.
That's the kind of graph I like to see.
Really fascinating stuff. I think I learned quite a bit
in just the past five minutes about making parsing and
compiling faster and web app performance in

general. You got me thinking about this other big chunk of
web app performance, which is memory.
I was doing this thing the other day with my chat app. And, you know, I got it basically working.
And I was trying to measure the performance of the packets
that I was getting from the server.
I wrote this little MovingAvg class to compute the moving average of the latency of all
of the packets that I was getting from the WebSocket.
You see there that I add a message listener; basically, all it does is accumulate events into the
events array.
And I use that later in this compute function,
which I don't show, to actually compute the moving average.
And the way I use it is, I have this component: when
I start measuring and I want to see the live
statistics of the moving average of the latency,
I make a new instance of it, and then when I stop, I null it
out, because I don't want to keep all the events I
accumulated in memory. I know that V8 garbage
collects memory that is no longer reachable, and

as long as the moving average is reachable through
the this.movingAvg property on the MovingAvgComponent,
the garbage collector cannot collect it.
Which is why I nulled it out.
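The slides themselves aren't reproduced in this transcript, but the setup being described would look roughly like this. It's a sketch reconstructed from the dialogue (the class names, the events array, and the numeric payload are assumptions, not the actual slide code):

```javascript
// Leaky version: the socket holds the message listener strongly,
// so nulling out this.movingAvg later is not enough to free it.
class MovingAvg {
  constructor(socket) {
    this.events = [];
    this.socket = socket;
    this.listener = (ev) => { this.events.push(ev); };
    socket.addEventListener('message', this.listener);
  }

  compute(n) {
    // Average over the last n events (the real payload handling is
    // not shown in the talk, so plain numbers stand in for packets).
    const latest = this.events.slice(-n);
    return latest.reduce((sum, ev) => sum + ev.data, 0) / latest.length;
  }
}

class MovingAvgComponent {
  constructor(socket) { this.socket = socket; }
  start() { this.movingAvg = new MovingAvg(this.socket); }
  stop() {
    // Intended to let the GC reclaim the accumulated events, but the
    // listener registered on the socket still reaches the instance.
    this.movingAvg = null;
  }
}
```

The leak discussed next comes from that addEventListener call: the socket keeps the listener, and the listener keeps the instance.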
LESZEK SWIRSKI: Yeah, that makes a lot of sense to me.
SHU-YU GUO: Yeah. And I thought this would work fine.
But then what happened was, you know, it's a chat app.
I kept it open for a while and I opened the Memory pane.
Every once in a while I see that a GC happens and it collects some memory.
The memory goes down a little bit.
But it's pretty clear that the trend is up and to the right,
and this is one of those graphs where up
and to the right is actually bad.
And what this was basically showing me is that it's a
memory leak, right?
That every time GC happened, it wasn't
able to collect all the memory so I just kept accumulating
more and more memory. And eventually, if I kept this chat
app open for another day or so,
my computer would have run out of memory.
LESZEK SWIRSKI: Memory leaks? But V8 only collects objects that
you can no longer reach. You nulled out your moving average
field, so the garbage collector should be able to reclaim
this memory, shouldn't it?

SHU-YU GUO: Yeah, so it's a common mistake, but it's still pretty
subtle. I'm sure a more seasoned web developer
would have spotted it right away.
So what's going on is that the WebSocket is holding
on to all the event listeners strongly, which means that
until they are explicitly removed, everything
that is reachable via the event listener is also considered
reachable and thus not collectible by the garbage
collector. So you see that use of this.events.push
inside the event listener? As long as that use is
inside there, the whole MovingAvg instance is
reachable from within the event listener and thus not
garbage collectible. So, even when I nulled it out
in the MovingAvgComponent, it was still considered alive
by the garbage collector.
To deal with this, folks often use what's
called the disposable pattern, where I have a
method called dispose() that manually removes the event
listener.
And that's kind of annoying. And to use that, the way I
would do it is before I null it out in the MovingAvg

component, I would have to remember to manually call
.dispose().
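As code, the disposable pattern being described might look like this. It's a sketch built on the hypothetical MovingAvg class from the dialogue, not the actual slide:

```javascript
class MovingAvg {
  constructor(socket) {
    this.events = [];
    this.socket = socket;
    this.listener = (ev) => { this.events.push(ev); };
    socket.addEventListener('message', this.listener);
  }

  // Disposable pattern: manually unregister the listener so nothing
  // on the socket can reach this instance anymore.
  dispose() {
    this.socket.removeEventListener('message', this.listener);
  }
}

class MovingAvgComponent {
  constructor(socket) { this.socket = socket; }
  start() { this.movingAvg = new MovingAvg(this.socket); }
  stop() {
    this.movingAvg.dispose(); // easy to forget, as noted below
    this.movingAvg = null;
  }
}
```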
LESZEK SWIRSKI: What is this? C++?
You have to manually manage your memory?
I thought the whole point of garbage collection is so that
you don't have to deal with that sort of stuff.
SHU-YU GUO: Exactly. And it's so easy to forget, too.
And this is all because the event listeners can't be
garbage collected until you manually remove them.
So what if there was a way to actually tell the engine,
'Don't let me keep you from garbage collecting this
thing, even though it's reachable.' Then you don't have to
remember to manually call dispose() or even need the
Disposable pattern.
And it turns out there's a new standard feature in
JavaScript that lets you do exactly this, WeakRefs.
Right, and before we go into that, I have to give a quick
disclaimer: so WeakRefs are an advanced feature that's hard
to use correctly, because garbage collection is
unpredictable and very different from browser to browser,
or even different from run to run of the same browser.
Because of that unpredictability, we didn't add WeakRefs to
the web for many years, and you'll hopefully never run into
a memory leak or a bug that legitimately needs it. But on
the rare occasion that you actually legitimately need to

use a WeakRef, finally you can use it and fix your problem
at the root. All right.
Back to the main programming.
So how am I using WeakRefs here to solve the previous
problem? I still have this event listener.
But now, instead of directly registering that
event listener function with the socket, I
wrap it in a WeakRef.
It is what's called the target of the WeakRef,
and inside the actual event listener, I
deref the WeakRef and call the function.
And this kind of indirection basically means that the
function that is actually holding the MovingAvgComponent
alive via this.events.push()
is no longer kept from being garbage collected because
it is a weakly held reference, inside a WeakRef.
LESZEK SWIRSKI: OK, and what does weakRef.deref() return? I
see you're using optional chaining function call syntax
here.
SHU-YU GUO: Yeah, good eye on that. That was not an example that we
showed earlier, but like optional chaining for property access,
you can also optionally chain a function call.
So if it's undefined then you

don't end up making the call and the whole thing is
undefined. But that also suggests that deref()
here, when the thing is actually collected, will return
undefined. To recap here, what it basically means is
that you have to manually call deref() because
we're no longer preventing the garbage collector from
collecting the event listeners since it's wrapped in a
WeakRef. So
every time you want to access it, you have to manually
deref(). And if the garbage collector has collected
it, then deref() will return undefined.
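Assembled from the pieces described so far, the WeakRef version might look like this (a sketch with assumed names, not the actual slide code; WeakRef is available in current Chrome and Node):

```javascript
class MovingAvg {
  constructor(socket) {
    this.events = [];
    // The inner listener strongly references this instance via
    // this.events.push, so we keep it alive only through the instance...
    this.listener = (ev) => { this.events.push(ev); };
    // ...and hand the socket just a WeakRef to it, behind a wrapper.
    const weakRef = new WeakRef(this.listener);
    socket.addEventListener('message', (ev) => {
      // deref() returns the target, or undefined once the GC has
      // collected it; optional chaining then skips the call entirely.
      weakRef.deref()?.(ev);
    });
  }
}
```

Note the wrapper arrow function itself is still registered strongly on the socket; that leftover is exactly what the conversation turns to next.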
LESZEK SWIRSKI: OK. So in this case, the listener is reachable through
this.listener, and once the component nulls it out and the
particular MovingAvg instance isn't reachable,
then the whole thing can be collected?
SHU-YU GUO: Right, exactly. Because we're no longer accidentally
keeping MovingAvg alive via the event
listener, we can go back to what I naively
thought would work in the first place.
When I no longer need all the data in the MovingAvg

instance, I would just null it out and let the GC do its
thing.
LESZEK SWIRSKI: OK. No, wait, hold on.
But now you've got this strongly held listener: the
actual event listener that calls through the WeakRef.
SHU-YU GUO: Yeah, that's a good point.
I thought you wouldn't spot that, but that's exactly
right. Even with this WeakRef indirection,
I still have this event listener, remember?
The socket still holds it strongly;
it holds onto all the event listeners until I
unregister them. So I still have this extra event listener.
So what do I do there?
There is a companion feature to WeakRefs called
FinalizationRegistry, that lets me do the thing that's
needed, which is I want the garbage collector to tell
me when it has collected something so that I can
perform some action at the point that an object has been
collected or in GC parlance, finalized.
That feature is called FinalizationRegistry.
On this slide, what you see is that I make a
FinalizationRegistry and when I add

the new event listener, I also register it with the
FinalizationRegistry, meaning when the inner listener
(the thing that actually does the this.events.push()) is
collected, and remember, it's collectible now because it's
held in a WeakRef, it's going to
run this function that I passed to the FinalizationRegistry
asynchronously to remove the event listener.
Cleaning up all the excess memory.
Now, again, this is an advanced feature and hopefully
you'll never need it.
LESZEK SWIRSKI: So it doesn't actually pass the object
itself into the finalizer.
SHU-YU GUO: That's a good observation.
You see, the things that actually get passed to the
finalizer are some other values.
The object that you want to observe the finalization of,
that's already been collected so you don't get that back.
In this case, the thing we need to perform the finalization
action to unregister are the socket and the wrapper
listener. And that's what we passed to the
FinalizationRegistry.
LESZEK SWIRSKI: All right. That makes sense.
SHU-YU GUO: Yeah, and like I said, this is an advanced feature.
And this example here is pretty dense.
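Reconstructed from the dialogue, that dense example looks roughly like this (names and structure are assumptions, not the actual slide; FinalizationRegistry is available in current Chrome and Node):

```javascript
// The registry's callback runs some time after the GC has collected a
// registered target. The target itself is gone by then, so only the
// held values passed at registration time are handed back.
const registry = new FinalizationRegistry(({ socket, wrapper }) => {
  // Clean up the leftover wrapper listener once the inner
  // listener has been finalized.
  socket.removeEventListener('message', wrapper);
});

class MovingAvg {
  constructor(socket) {
    this.events = [];
    this.listener = (ev) => { this.events.push(ev); };
    const weakRef = new WeakRef(this.listener);
    const wrapper = (ev) => { weakRef.deref()?.(ev); };
    socket.addEventListener('message', wrapper);
    // When the inner listener is collected, unregister the wrapper.
    registry.register(this.listener, { socket, wrapper });
  }
}
```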

I recommend that you follow the link on the screen there to
read our full explainer for the feature on the v8.dev website.
So with all of that work, I opened up the Memory panel again.
I kept my chat app open for a while, starting and stopping
the latency measurement,
and now I see that every time a GC does
happen, it's able to reclaim basically all the memory.
And then over a longer period of time, I'm no longer
accumulating memory. And yeah, it looks like I fixed the
leak.
LESZEK SWIRSKI: Looks pretty tricky.
I've collected garbage before, and I
don't do it particularly deterministically myself.
SHU-YU GUO: Yeah. Garbage collection is not predictable.
It's not deterministic; don't depend on it always running.
And that's why we have kept saying that WeakRefs
and FinalizationRegistry are an advanced feature.
And that's a good point, too.
Given the unpredictability of the garbage collector, are
there other things that the engine does to make
apps slimmer?
LESZEK SWIRSKI: Actually, yeah, V8 has been doing a lot of work to reduce its
memory consumption. There have actually been two major

projects that landed last year, which have focused
on this, pointer compression and V8 Lite.
And I can actually talk about both of them very quickly.
So pointer compression, first of all, you've
probably heard that machines are 32-bit or 64-bit.
On 32-bit machines, we have 32-bit pointers, on 64-bit
machines, we have 64-bit pointers.
And the whole point of this, no pun intended, is
that 32-bit pointers can reference up to 4 gigabytes
of memory, while 64-bit pointers can reference up to 18 exabytes
of memory, which is quite a lot more.
And Chrome wants to be able to run in 64-bit
so that it can access more than 4 GB of memory.
SHU-YU GUO: Yeah, Chrome definitely needs more than 4 gigs.
LESZEK SWIRSKI: Yeah, right, we've all seen
the same memes, and, you know, fair enough.
If you've got a hundred tabs open with a thousand images,
and they're playing games and playing music, it's
going to use up memory.
But not necessarily each individual tab, not

necessarily each individual V8 instance.
And the key observation of pointer compression is that
actually we can probably restrict each V8
instance to be less than 4 GB.
And if we can restrict it to less than 4 GB, that means
we can pre-allocate a 4 GB area for it and force
all objects to be allocated inside of that area.
And now, instead of referencing those objects by a 64-bit
pointer, we can reference them by an offset, like this.
Under pointer compression, you can take your 64-bit pointer
and then you can split it in half.
You can split it into a base and an offset.
The base is the start of that 4 GB allocation area, and the
offset is the offset within it.
And then you only have to store the offset on objects,
which means that your pointers go half the size, they go
back to 32-bit size.
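As a toy, numbers-only illustration of that base-plus-offset split (V8's real implementation is C++ and involves tagging and other tricks covered in the blog post; the constants here are made up):

```javascript
// Pretend the 4 GB allocation "cage" for one V8 instance starts here.
const HEAP_BASE = 0x7f0000000000n;

// Compress: keep only the 32-bit offset into the cage.
function compress(pointer) {
  return Number(pointer - HEAP_BASE);
}

// Decompress: add the base back to recover the full 64-bit pointer.
function decompress(offset) {
  return HEAP_BASE + BigInt(offset);
}
```

Every pointer stored on the heap shrinks from eight bytes to four; only the single base value stays full-width.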
SHU-YU GUO: I'm guessing it wasn't just as easy as that.
LESZEK SWIRSKI: It definitely wasn't.
It was a whole journey, and there's a whole blog post
describing that journey, which was very exciting.
But as a little spoiler, I can tell you that on typical
websites, we've reduced memory by about 40%.

SHU-YU GUO: Those are some very impressive numbers, 40%.
But what if a web app or Node.js program really wants to
use more than the 4 GB?
Are you constricting apps to only have 4 GB of memory?
LESZEK SWIRSKI: Well, kind of, but also not really.
First of all, with pointer compression, those objects are a
lot smaller, so you can fit a lot more of them into the
4 GB allocation area. And also, this 4 GB only applies
to a single V8 instance's JavaScript object heap.
So, for example, TypedArrays, they have their own
external memory backing, so they're not included.
Wasm instances have their own 4 GB allocation area, so
those are separate. Even other V8 instances inside of
web workers and on other tabs
have their own 4 GB allocation area.
So you're only restricting one V8 instance, not all
of them. The other big project last year was V8
Lite, and this was a really interesting one because we
thought to ourselves, what would happen if we just gave up
on performance and tried to just improve memory?
How far could we actually get for memory-constrained devices

where V8 just couldn't run at all without the memory that
it needed.
SHU-YU GUO: Yeah, that's an interesting thought experiment.
I guess if you run slowly, that's better than not being
able to run at all because you're out of memory.
LESZEK SWIRSKI: Right, absolutely. The approach that we took was to just
look at typical websites and at what kinds
of things are actually taking up memory there.
40-ish percent was user data.
There's not really much we can do about that.
Projects like pointer compression are going to reduce that
by a lot. But we can't really have any targeted
optimizations that reduce the amount of data that users
create. And then there's this big bucket of Other, because there's
always a big bucket called Other, and we couldn't reduce
that with targeted optimizations either.
But we did look at some of the top users of memory
and we decided to try and target those.
SHU-YU GUO: Right. So right off the bat, if you're not worried about
performance at all, you don't need to optimize code.
That makes sense to me.
LESZEK SWIRSKI: Absolutely.
And if you don't need to optimize code, you don't need to
really collect type feedback either, because that's just
storing the data that we need for optimization, and it's
only used for performance.

Even the bytecode that we generate, you don't have to store
that. You can just compile it on the fly whenever you need
it and get rid of it afterwards.
SHU-YU GUO: Sounds a little different to me, though.
Bytecode is unoptimized code.
And if you're even getting rid of that, that sounds like
you're giving up more than just a little bit of
performance.
LESZEK SWIRSKI: Yeah. The first prototypes of V8 Lite were pretty
slow, but then we realized that we could get a lot
of these gains without actually sacrificing performance at
all, just by being a little bit lazier.
SHU-YU GUO: Yeah, I'm something of an expert at being lazy myself.
LESZEK SWIRSKI: Yeah, I'm pretty good at being lazy myself, too.
But as an expert in laziness, you know
that being lazy doesn't just mean not doing anything at
all; it means not doing anything until you're really
required to. So we took the same approach with V8.
Let's talk about that type feedback I mentioned
previously.
You're not going to get much benefit from type
feedback if you only run a function once or twice;
it's only really going to start benefiting you after you run it tens or
hundreds of times.
So we can delay creating this type feedback

English: 
until we've already had a couple of runs of the function,
which cuts out some of those feedback vectors, but not all of
them. Same thing with source positions:
we only need those for printing line numbers when you print
exception stack traces, or when DevTools prints stack traces.
So if we can delay calculating those until
later, then we save a lot of space as well.
Even bytecode: we have this capability of getting rid of
bytecode that we don't need. So we can just get rid of old
bytecode, keep around bytecode that we're still using, and
save a little bit of memory there.
And there were a bunch of tiny projects targeting these top
memory users, which I described in this blog post in a lot
more detail. But again, spoiler alert: they reduced
memory by 10% to 30% on typical websites.
SHU-YU GUO: Nice.
LESZEK SWIRSKI: So there's actually been a lot more going on in V8 in the
last year; we only really had time to talk about a couple
of projects. I recommend you visit our blog, where we post
about new versions of V8 and talk about exciting new
things that we just like to talk about.
It's a great read, and we look forward to seeing you there.
SHU-YU GUO: Thank you very much to all the viewers who joined us for
this whirlwind tour of what's

new in the JavaScript language and the new developments in the engine itself that make JavaScript run faster and use less memory.
We definitely didn't have time to go into all the new
features that were added to JavaScript, so please give our
blog a read.
Thank you very much.
LESZEK SWIRSKI: Thanks, everyone.
MATHIAS BYNENS: Hi, everyone. My name is Mathias, and I'm here to tell you what's new in Puppeteer.
But before we can do that, we should probably talk about
what Puppeteer is in the first place.
Puppeteer is a browser automation library for Node: it
lets you control a browser using a simple and modern
JavaScript API. After installing it using 'npm install
puppeteer', you can require('puppeteer') in
your Node script and start automating.
The first step to browser automation is to launch

English: 
an actual browser. And with Puppeteer, that's just
one line of code.
Next, we open a new page.
This is equivalent to opening a new tab in your browser.
Now let's navigate to a URL.
This line of code ensures that the page has finished
loading before continuing with the rest of the script.
Then we take a screenshot and save it to a file,
before finally closing the browser.
And that's it!
That's the entire script.
We did all of that with just a few lines of code.
And Puppeteer can do much more.
You can generate PDFs, evaluate JavaScript in pages,
enter text in input fields, click on elements...
almost anything you would manually do when using a browser
can be automated using Puppeteer.
The Puppeteer project is fully open source and has received
contributions from individual contributors all around the
world, as well as from companies like Mozilla,
Sauce Labs and Microsoft. At Google, the Puppeteer
team consists of Chrome engineers who also work on
DevTools. And this might sound a little strange at first,
but it actually makes sense because Puppeteer is built
on top of the same underlying protocol that DevTools
also uses to communicate with the Chromium backend.
Because of this, Puppeteer also gives you access to advanced browser functionality that is usually only available through DevTools.
For example, you might know that DevTools lets you emulate print media so that you can easily debug print styles.
Well, Puppeteer lets you do the same thing in an automated script. Here we call page.emulateMediaType() to force print styles, and then we save the result as a PDF.
OK, now that you know what Puppeteer is, what it can
do, and who is working on it, let's take a look at some
recent feature additions.
Similar to emulating print styles, we recently added DevTools support for emulating light and dark mode,
as well as other so-called 'CSS media features'.
We then shipped a new Puppeteer API that lets you perform
the same emulation programmatically.
This Puppeteer script takes two screenshots of your web app: one in light mode and one in dark mode.
It works independently of your operating system settings.
One of my favorite features on web.dev/live is the schedule, which adapts to your local time zone. I live in Germany, so when I view the schedule,
I see something like this.
Today's events started at 2 p.m. for me, but someone in Tokyo, for example, would see a different time. For them, the event started at 9 p.m.
I love that the website just tells me what I need to know
in my local time.
Nobody likes doing time zone math!
To make it easier to test this kind of time zone-aware
functionality, we added DevTools support for emulating
arbitrary time zones.
Yesterday's events started on June 30th at 6 p.m. for me, but for someone in Tokyo, it was already 1 a.m. on July 1st.
In addition to the new DevTools functionality, we also
added a new API to Puppeteer to let you change time zones
programmatically.
This script emulates various time zones and then executes
some time zone-dependent JavaScript in the page context.
We're logging the same date, but
in two different time zones, and that produces different
output.
Here's another example.
This Puppeteer script forces the Tokyo time zone, then
loads the web.dev/live page, and finally takes a
screenshot of just the schedule, similar to the
side-by-side screenshots we saw earlier.
DevTools recently gained support for simulating the effect
of various vision deficiencies, including blurred
vision and color vision deficiencies.
This can help you identify accessibility issues related to
color, such as bad contrast.
And guess what? We added a corresponding Puppeteer API
that lets you apply these simulations programmatically.
This script takes a screenshot of the web app after simulating blurred vision, achromatopsia (full color blindness), and deuteranopia (red-green color blindness).
One feature we're still experimenting with is the ability
to register and use custom selector query handlers.
Many Puppeteer APIs deal with selector strings which by
default use querySelector() or querySelectorAll()
to find elements in the page.
We've heard from users that they want to be able to provide
their own selector query handlers with custom logic.
And this new feature now makes that possible.
You can imagine providing a custom hasText handler,
which looks for DOM nodes containing a string of text.
Or maybe you want to select elements across Shadow DOM
boundaries which querySelector() doesn't let you do.
There's one more feature I want to talk about, and it's a
little different from all these API additions we've been
covering until now.
Let's go back to our very first example: launching a
browser, navigating to a URL and taking
a screenshot. Puppeteer
was originally built for Chrome, so when you call puppeteer.launch(), it launches a Chromium browser by default.
You can now also specify this explicitly by using
the 'product' option.
OK. So we added the new 'product' option.
By itself, that's probably not very interesting.
But here comes the exciting part [drumroll]: instead of 'chrome', you can now specify 'firefox'
and then use the same Puppeteer API to test a real
Firefox browser.
By changing just this one line, we are now automating
Firefox instead of Chrome.
Firefox support for Puppeteer is the result of an ongoing
collaboration with Mozilla.
Part of this effort involves patching Puppeteer itself, but
a big chunk of the work happens in the Firefox codebase.
The Puppeteer Firefox implementation is still experimental,
and so not all the Puppeteer APIs are yet compatible
with Firefox.
But Mozilla has been making great progress here.
In fact, as of mid-May, exactly 319
out of the 638 tests in Puppeteer's test suite
are passing on Firefox.
That's exactly 50%.
We're hoping to ship Puppeteer with more complete Firefox
support soon.
Longer term, we would love to support Safari as well,
and we're actively working on making that happen in
collaboration with other browser vendors.
We believe the right way to get to a fully cross-browser
Puppeteer is by standardizing a protocol that all
browsers can implement instead of building on top of the
proprietary Chrome DevTools protocol.
In addition to all those new features, a lot of work
has been going on behind the scenes of Puppeteer.
We recently finished migrating the codebase to TypeScript,
we simplified our test runner, we considerably improved
the robustness of our continuous integration setup, and
our documentation keeps getting better and better.
This work is often less user-visible, but it's crucially
important because it enables us to iterate more quickly and
more confidently.
I hope you enjoyed this overview of what's new in
Puppeteer. Thanks for listening and see you next time.
ANDRE BANDARRA: Hi, my name is André Bandarra, and in this video, I'm going to show you how to use a Progressive Web App
inside an Android application without writing a single
line of native code.
Progressive Web Apps, or PWAs, combine
the reach of the web with the capabilities that were once
only available to native apps.
If you are new to PWAs, read more about them on
web.dev/progressive-web-apps.
It is natural that developers
building great PWAs want to reuse those experiences inside
their Android applications.
In the past, possible ways for a developer to use their
progressive web app inside an Android application
included using the Android WebView or embedding a browser
engine. The WebView doesn't provide support for many
of the new capabilities on the web, like Push Notifications
or Web Bluetooth.
So the output can be a subpar experience compared
to the PWA it's built on.
Creating and maintaining an app with an embedded browser
requires a considerable amount of engineering effort
and produces an app that's larger than a native app
equivalent. At last year's Google I/O, we announced Trusted Web Activities, which allow developers to use their Progressive Web App inside an Android
app, in a full-screen tab that is powered by, and has the same features and capabilities as, the browser providing it.
This keeps both development cost and application size small. Even though Trusted Web Activities
provide a better alternative for using
a PWA inside an Android app, developers still need some
knowledge about native application tooling and development.
So, to create an easier path for developers who want
to create their Android app using their PWA inside it,
we have created Bubblewrap, a Node.js project
that contains both a library and a command-line interface
developers can use to generate and build their Android
application. In the next few minutes, I'd like
to guide you on how to configure Bubblewrap and use
it to generate an application from an existing Progressive
Web App. I'm going to use Rowan Merewood's
Persistence app as a starting point, but you can use
any existing Progressive Web App.
Check the video description for the link to the
Persistence app. We'll need to modify the application
later, so I'll open the app,
scroll down and click on the 'code' link.
Then I'm going to remix the project so we
can modify it.
We can get the link to the remixed app by clicking on
'Share,' then 'Live App'
and then copying the link.
We are going to need that information soon.
In order to use Bubblewrap, we
need Node.js 10 or above installed on the
development computer and, optionally, an Android device set up in developer mode so we can test
the application. Check the link on the video description
for more information on how to setup an Android device
for developer mode.
Bubblewrap builds on top of native SDK tooling.
So we'll start by downloading the Android command-line
tools and the Java Development Kit, or JDK,
version 8.
To download the Android command-line tools, we can
use the shortcut on the Bubblewrap CLI documentation,
which is linked on the video description.
Inside the page, click on the link for your operating
system, accept the license and click on 'Download.'
The Bubblewrap CLI documentation also links to the
correct version of the Java Development Kit.
Inside the page, choose your operating
system, then architecture, then download
the compressed TAR file for the JDK.
In our terminal, we now create a directory where we
can place both dependencies.
Then we unzip the command-line tools, and then the Java Development Kit.
Make sure to take note of the directories where those files were decompressed, as we're going to need them later.
I'd like to rename the JDK folder to just 'jdk',
as it's easier to remember. With the dependencies now
ready, we can install Bubblewrap using
npm install.
With Bubblewrap and its dependencies now installed, we
can start the creation of the Android app itself.
Let's start by creating a folder for it,
and now we can initialize the Android project by
calling 'bubblewrap init' and passing the
URL to the Web Manifest to it.
When Bubblewrap runs for the first time, it will ask for
the location of the JDK and the Android command-line tools
we downloaded previously,
while also automatically installing other dependencies.
Then, the CLI will ask you to confirm values read
from the Web Manifest and fill in any missing
required values needed to create the Android app.
We can, for instance, change the start URL
so that we can use Google Analytics to measure how often
our users are opening the PWA from the Android app.
Android applications need to be signed with a
self-generated key in order to be uploaded to the Play
Store. If Bubblewrap is unable to find an existing
key, it will prompt the developer to create one.
So, let's go ahead and create it, and
make sure to take note of the password you choose.
Finally, we can now invoke 'bubblewrap build' to
build the project.
The command will output three important things: the quality criteria for the PWA, an assetlinks.json file used to validate the domain opened inside the Trusted Web Activity, and a signed Android application that can be uploaded to the Play Store.
Bubblewrap will check the quality criteria
against the URL used to launch the Trusted Web Activity.
We strongly recommend that your PWA passes the quality criteria. The quality criteria are measured using Lighthouse against the start URL and consist of a minimum performance score of 80 and passing the PWA check.
In order to be shown in full screen, developers need
to implement Digital Asset Links.
Bubblewrap takes care of the configuration of the Android
application, but there is one extra step that
needs to be done in the web app: the content of the
assetlinks.json file needs to be made available on
.well-known/assetlinks.json, on the root of the domain.
On my remixed project, I'll create a
.well-known/assetlinks.json file, then I'll paste the
content of the file generated by Bubblewrap into
it.
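For reference, an assetlinks.json statement for a Trusted Web Activity has roughly this shape; the package name and certificate fingerprint below are placeholders, not real values:

```json
[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.example.twa",
    "sha256_cert_fingerprints": [
      "00:01:02:03:04:05:06:07:08:09:0A:0B:0C:0D:0E:0F:10:11:12:13:14:15:16:17:18:19:1A:1B:1C:1D:1E:1F"
    ]
  }
}]
```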
The application is now fully set up.
If you have an Android device in developer mode, you can
now connect it to the computer and run 'bubblewrap
install' to launch the app.
Congratulations, you have built an Android app!
When you upload an application to the Play Store for the first time, it will ask if you want to use App Signing. If you opt in to App Signing, the Play Store will manage the signing key for you, making sure it's not lost.
This is important as losing the key means it's not possible
to update the application on the store anymore.
But it also means that the final key used to sign the application will be different from the one generated by Bubblewrap.
To update the assetlinks.json file, we'll need
information on the key used by the Play Store.
This information can be found by clicking on the 'App Links' item on the menu on the left, copying the details for the fingerprint, and using them to update the assetlinks.json file on the web app.
It is possible to use both fingerprints in the application.
Check out the video description for a link on how to add
both keys to the application.
Bubblewrap removes friction for web developers who want to
open their PWA in an Android app.
I'm a fan of command-line tools.
If you are more of a graphical user interface person, check out PWABuilder, which uses Bubblewrap as a library to power its Android application generation.
And that's all for Bubblewrap today.
Make sure to check the GitHub repo and drop us some
feedback. And if you're watching this live,
jump into our Live Chat and tell us what you think!
Thanks for watching.
DEMIAN RENZULLI: Hi, I'm Demian, a web ecosystem consultant at Google. In this talk, you will learn how to define an install strategy across all your mobile experiences.
Letting your users install your app is one of the best ways
to keep them engaged.
Today, you can achieve that in different ways.
Let's start with native app installs.
If you have a native application, you might think that this
is the best platform to promote to all your users.
And for some of them, this might be true.
But for some users, native apps can have some
disadvantages, too.

The most common one is storage constraints.
Making space for a new app may mean removing valuable
content.
Freeing up storage is also the number one reason
users remove apps from their devices.
There's also the issue of available bandwidth, especially
for users on slow connections and expensive data plans.
Finally, moving to a store creates additional
friction and delays a user action that could be performed
directly on the web.
A great alternative to this is allowing users to install
your Progressive Web App from the browser through an "Add
to home screen" prompt.
You can also upload your PWA to the Play Store using
Trusted Web Activity.
In this example, QuintoAndar, a real estate company
from Brazil, was able to reuse the same code
base in the web and the Play Store
while offering a great experience to users.
Let's take a look at another example.
OYO Rooms is one of the largest hospitality companies in India.
They have a very large user base, coming from a variety of devices and networks.
They have built different versions of their mobile
experience to satisfy the needs of all
their users.
First, they created the native application for the Play
Store. For the most sophisticated users, this
could be the best choice.
OYO Lite is a Progressive Web App uploaded to the Play
Store via Trusted Web Activity.
It provides the same functionality of the native app, while
occupying only 7% of the space.
Finally, for users that visit the site directly by typing
the URL or clicking on a link, OYO
offers the chance of installing the PWA directly from the
home screen.
Having all these ways to achieve app installs
is great, but how can you combine all these
offerings to increase installation rates while avoiding
making your apps compete with each other?
Let's discuss some strategies to combine different install
offerings.
The first strategy is to show the different options on the same screen. This is a simple approach that might
just work for many users.
The challenge is to be able to communicate the value
proposition, to distinguish clearly one from the other.
But instead of delegating the choice completely to users,
we can make their life easier.
The idea of the following strategies is to make some
inferences, for example, by tracking
users' behavior and device characteristics.
We call these heuristic-based approaches.
The first one is "Web install as fallback".
In this strategy, you can start showing the native app
install prompt.
If the user doesn't install the app and keeps visiting
your website, chances are that the web is their platform
of choice. After a while, you can start promoting
your PWA to these users.
This strategy can be implemented very easily, for
example, by using cookies to track user behavior.
The group of users that dismiss the app banner and keep coming several times to the site might be good
candidates for a web install offering.
But before showing the web install call-to-action, there
are two more things to take into account.
The first one: make sure that the user hasn't already
installed your native app or your PWA
by other means. The getInstalledRelatedApps() API
can help you check that.
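A minimal sketch of that check (the function name is mine; the API only reports apps declared in the web app manifest's `related_applications` field, and isn't available in every browser):

```javascript
// Sketch: decide whether to promote the web install at all.
// getInstalledRelatedApps() only reports apps that are declared
// in the web app manifest's "related_applications" field, and the
// method itself is not available in every browser.
async function shouldPromoteWebInstall() {
  if (typeof navigator === 'undefined' ||
      !('getInstalledRelatedApps' in navigator)) {
    // No way to check; default to showing the web install offer.
    return true;
  }
  const relatedApps = await navigator.getInstalledRelatedApps();
  // Only promote the web install if neither the native app
  // nor the PWA is already installed.
  return relatedApps.length === 0;
}
```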
The second is actually a UX best practice.
To maximize the opt-in rate for your web install prompts,
you might want to use the double-permission pattern.
In this example, OYO shows a web install icon
after capturing the BeforeInstallPromptEvent.
When the user clicks on it, they trigger the standard "add
to home screen" prompt.
If you want to learn about UX patterns for web permissions
like this one, check PJ's talk "Safe
Permissions for the Capable Web" at
Chrome Dev Summit 2019.
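The double-permission pattern described above could be sketched like this (browser-only code; the `install-button` element id is my assumption):

```javascript
// Sketch of the double-permission pattern. The "install-button"
// element id is an assumption; browser-only code.
let deferredPrompt = null;

function setUpWebInstall() {
  window.addEventListener('beforeinstallprompt', (event) => {
    event.preventDefault();   // Suppress the automatic mini-infobar.
    deferredPrompt = event;   // Keep the event around for later.
    // Reveal our own, in-page install button instead.
    document.getElementById('install-button').hidden = false;
  });

  document.getElementById('install-button')
      .addEventListener('click', async () => {
        if (!deferredPrompt) return;
        deferredPrompt.prompt();  // Now show the real A2HS prompt.
        const { outcome } = await deferredPrompt.userChoice;
        console.log(`Add to Home screen: ${outcome}`); // 'accepted' | 'dismissed'
        deferredPrompt = null;    // The event can only be used once.
      });
}
```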
Let's move now to the second strategy.

Intuitively, users on slow networks or low-end devices
might be more inclined to download lite apps.
Therefore, if it's possible to identify a
user's device, one could prioritize the lite
app over the heavier native app install.
You can implement this by writing a function checking for
device characteristics to decide which prompt to show.
If it's a low-end device, the lite app, and if it's a
high-end device, you can offer the core native app.
Inside the function, device signals can be obtained in two
ways. The first one is by using JavaScript
APIs like device memory, hardware concurrency,
or the Network Information API.
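A sketch of reading those signals (all three APIs are optional, so every field may come back undefined on unsupporting browsers):

```javascript
// Sketch: collect coarse device signals. Every one of these APIs is
// optional, so each field may come back undefined.
function getDeviceSignals() {
  const nav = typeof navigator !== 'undefined' ? navigator : {};
  return {
    memoryGB: nav.deviceMemory,          // e.g. 0.5, 1, 2, 4, 8
    cpuCores: nav.hardwareConcurrency,   // number of logical cores
    // Network Information API: '4g', '3g', '2g', 'slow-2g', or undefined.
    effectiveType: nav.connection ? nav.connection.effectiveType : undefined,
  };
}
```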
The second one is by using client hints, which
can be inferred from the header of the HTTP request.
To use them you need to send an Accept-CH header in
your response indicating the type of things you want to
receive. For example, device memory.
After that, you will start receiving these hints in
the header of the HTTP request.
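On the server side, the hint round trip might be sketched like this (plain framework-agnostic functions; the category thresholds at the end are made up for illustration):

```javascript
// Server-side sketch of the client-hint round trip. The first
// response opts in via Accept-CH; subsequent requests then carry a
// Device-Memory header whose string value we parse into a number.
function optInToDeviceMemoryHint(responseHeaders) {
  responseHeaders['Accept-CH'] = 'Device-Memory';
  return responseHeaders;
}

function readDeviceMemoryHint(requestHeaders) {
  // Header names are case-insensitive; Node-style servers lowercase them.
  const raw = requestHeaders['device-memory'];
  return raw === undefined ? undefined : Number(raw);
}

// A toy mapping from the hint to a device category, as one possible
// heuristic (the thresholds are invented for illustration):
function deviceCategory(memoryGB) {
  if (memoryGB === undefined) return 'unknown';
  if (memoryGB <= 1) return 'low-end';
  if (memoryGB <= 4) return 'mid-range';
  return 'high-end';
}
```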

Finally, you can use this information to map
to a device category and use that later to decide
which prompt to show.
If you want to learn techniques on how to map device
signals to device categories, check out "Adaptive
Loading", a talk that was given at Chrome Dev Summit 2019.
Wrapping up, today you can offer different channels to
users to install your mobile experiences.
For example, you could offer a native app, a PWA
available in the Play Store, and a web install from the
user's screen.
Then, you can define a heuristic to show the most suitable
install offering to a particular user.
You can create a very simple one based on the user behavior
on your site. For example, by tracking how often they come
to it. Or you can go for a more sophisticated approach,
by mapping device signals to device categories and
showing different install offerings depending on whether the device is
low-, mid-, or high-end.
We encourage you to experiment with these techniques, for
example by running A/B tests, and to reach out to us on Twitter to share your experiences.
I hope you continue enjoying web.dev LIVE.
Thanks for watching.
Back in March 2003, Nick Finck and Steven Champeon
stunned the web design world with the concept of
progressive enhancement, a strategy for web design
that emphasizes core webpage content first, and then
progressively adds more nuance and technically rigorous
layers of presentation and features on top of the web
content. While in 2003, progressive
enhancement was about using, at the time, modern CSS
features, unobtrusive JavaScript, or even Scalable
Vector Graphics, progressive enhancement in 2020
is about using modern browser capabilities.
My name is Thomas Steiner.
I'm a Developer Advocate based out of the Google Hamburg office.
Today, I want to talk about "Progressively Enhancing Like It's 2003: Building for Modern Browsers".
Since we all can't be here together in person, due to the
coronavirus, I've converted my talk into an online
trip that I want to take you on with me.
For this trip, you need a solid understanding of
JavaScript.
Talking of JavaScript, the browser support for
the latest core JavaScript features is great.
Promises, modules, classes, template
literals, arrow functions, you name them.
All supported.
Async functions work across the board in all modern
browsers. And even super recent language additions,
like optional chaining and nullish coalescing reach
support really quickly.
When it comes to core JavaScript features the grass
couldn't be much greener than it is today.
For the trip that we are going on, you likewise should have
a good understanding of Progressive Web Apps.
For this talk, I work with a simple PWA called Fugu
Greetings. The name of this app is a hat tip to Project Fugu, where we work on giving the web all the
powers of native applications.
You can read more about the project at web.dev/fugu-status.
Fugu Greetings is a drawing app that allows you to create
virtual greeting cards.
Just imagine you actually had traveled to Google I/O
and wanted to send a greeting card to your loved ones.
Let me recall some of the PWA concepts.
Fugu Greetings is reliable and fully offline enabled.
So even if you don't have network, you can still use it.
It can be installed to the home screen of the device and it
integrates seamlessly into the operating system as a
standalone application.
With this out of the way, let's dive into the actual topic
of this talk: progressive enhancement.
Starting each greeting card from scratch can be really
cumbersome. So why not have a feature that allows users
to input an image and start from there.
With a traditional approach, you'd have used an <input type=file> element to make this happen.
First, you'd create the element, set its type and the
to-be-accepted MIME types, and then programmatically click it and listen for changes.
And it works perfectly fine.
The image is imported straight onto the canvas.
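The traditional import flow just described, as a browser-only sketch (the `onImage` callback, e.g. drawing onto the app's canvas, is left to the caller):

```javascript
// Legacy import sketch: a programmatically clicked file input.
// Browser-only code; onImage receives the decoded image.
function importImageLegacy(onImage) {
  const input = document.createElement('input');
  input.type = 'file';
  input.accept = 'image/*';  // The to-be-accepted MIME types.
  input.addEventListener('change', () => {
    const file = input.files[0];
    if (!file) return;       // The user canceled the picker.
    const img = new Image();
    img.onload = () => {
      URL.revokeObjectURL(img.src); // Free the temporary blob URL.
      onImage(img);                 // e.g. ctx.drawImage(img, 0, 0)
    };
    img.src = URL.createObjectURL(file);
  });
  input.click();             // Programmatic click opens the picker.
}
```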
When there is an import feature, there probably should also
be an export feature, so users can save their greeting cards locally.
Similar to before, the traditional way of saving files is to create an anchor element with a download attribute and set a blob URL as its href.
You would then programmatically click it to trigger the download and, to prevent memory leaks, make sure not to forget to revoke the blob URL.
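And the traditional export flow might look like this sketch (browser-only; the filename is a placeholder):

```javascript
// Legacy export sketch: an anchor with a download attribute whose
// href is a temporary blob URL.
function exportImageLegacy(blob, filename = 'greeting-card.png') {
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename;      // Forces a download instead of navigation.
  a.click();                  // Goes straight to the downloads folder.
  URL.revokeObjectURL(url);   // Revoke, or the blob leaks memory.
}
```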
But wait a minute. Mentally, you haven't "downloaded"
a greeting card. You have "saved" it.
Rather than showing you a save dialog that lets you choose
where to put the file, the browser instead has directly
downloaded the greeting card without interaction.
And it's put it straight into your downloads folder.
This isn't good.
What if there were a better way?
What if you could just open a local file, edit it,
and then save the modifications, either to a new file
or back to the original file that you had initially opened?

Turns out there is a better way.
The Native File System API allows you to open and create
files and directories, make modifications, and save them
back. Let's see how I can feature-detect if the API exists.
The Native File System API exposes a new method, chooseFileSystemEntries().
I can use this to conditionally load import_image.mjs and export_image.mjs
if the API exists, and, if it isn't available, fall back to the legacy
approaches from the earlier slides.
But before I dive into the Native File System API, let me
just quickly highlight the progressive enhancement pattern.
On browsers that don't support the Native File System API,
I load the legacy scripts.
You can see the Network tabs of Firefox and Safari here.
However, on Chrome, only the new scripts are loaded.
This is made elegantly possible thanks to dynamic imports
that all modern browsers support.
As I said earlier, the grass is pretty green these days.
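That loading strategy could be sketched as follows (the `_legacy` module names are my assumption; the talk only names import_image.mjs and export_image.mjs):

```javascript
// Sketch of the conditional loading: feature-detect once, then pull
// in only the matching modules via dynamic import(). Unsupporting
// browsers never even download the new code.
async function loadImageModules() {
  const hasNativeFS =
      typeof window !== 'undefined' && 'chooseFileSystemEntries' in window;
  const importPath =
      hasNativeFS ? './import_image.mjs' : './import_image_legacy.mjs';
  const exportPath =
      hasNativeFS ? './export_image.mjs' : './export_image_legacy.mjs';
  const [{ importImage }, { exportImage }] =
      await Promise.all([import(importPath), import(exportPath)]);
  return { importImage, exportImage };
}
```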
Let's look at the actual Native File System API based
implementation. For importing an image, I call window.chooseFileSystemEntries() and pass it an accepts
option parameter where I say I want image files.
Both file extensions as well as MIME types are supported.
This results in a file handle.
From the file handle, I can obtain the actual file by
calling its getFile() method.
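A sketch of the import call, using the experimental API shape from the time of this talk (chooseFileSystemEntries has since shipped as window.showOpenFilePicker with slightly different options):

```javascript
// Import sketch, browser-only, experimental API shape.
async function importImageNative() {
  const handle = await window.chooseFileSystemEntries({
    accepts: [{
      description: 'Image files',
      extensions: ['jpg', 'jpeg', 'png', 'webp'],
      mimeTypes: ['image/*'],
    }],
  });
  // The handle wraps the entry; getFile() yields a regular File.
  return handle.getFile();
}
```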
Exporting an image is almost the same.
But this time I need to pass a type parameter of
"save-file" to the chooseFileSystemEntries() method.
So I get a file save dialog.
Before, this wasn't necessary since open-file is the
default.
I set the accept parameter similar as before, but this time
limited to just PNG images.
Again, I get back a file handle, but rather than getting
the file, this time, I'm creating a writable stream
by calling createWritable().
Next, I write the blob, which is my greeting card image
to the file. Finally, I close the writable
stream. Everything can always fail.
The disk could be out of space.
There could be a write or read error.
Or maybe simply the user cancels the file dialog.
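The export side, in the same experimental API shape (save-file type, limited to PNG), with everything wrapped so failures don't go unhandled:

```javascript
// Export sketch: request a save-file dialog, stream the blob into
// the chosen file, and close. Wrapped in try...catch because any
// step can fail (disk full, I/O error, or the user canceling).
async function exportImageNative(blob) {
  try {
    const handle = await window.chooseFileSystemEntries({
      type: 'save-file',   // 'open-file' is the default.
      accepts: [{
        description: 'PNG image',
        extensions: ['png'],
        mimeTypes: ['image/png'],
      }],
    });
    const writable = await handle.createWritable();
    await writable.write(blob);
    await writable.close();
  } catch (err) {
    console.error('Saving failed:', err);
  }
}
```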

This is why I always wrap the calls in a try...catch
statement. I can now open the file as before.
The imported file is drawn right onto the canvas.
I can make my edits and finally save them with a real save
dialog, where I can choose the name and storage location of the file.
Now the file is ready to be preserved for eternity.
Apart from storing files for eternity, maybe I actually
want to share my greeting card.
This is something that the Web Share and Web Share Target
APIs allow me to do.
Mobile, and more recently also desktop operating systems,
have gained native sharing mechanisms.
For example, here's Safari's share sheet on macOS
triggered from an article on my site blog.tomayac.com.
When I click the share button, I can share a link to the
article with a friend, for example via the native Messages app.
The code to make this happen is pretty
straightforward. I call navigator.share()
and pass it an optional title, text, and URL.
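A sketch of that Level 1 call (all three fields are optional; the values here are placeholders):

```javascript
// Web Share Level 1 sketch; browser-only, placeholder values.
async function shareCardLink() {
  try {
    await navigator.share({
      title: 'Fugu Greetings',
      text: 'A greeting card for you!',
      url: 'https://example.com/card',
    });
  } catch (err) {
    // Most commonly: the user dismissed the share sheet.
    console.error('Sharing failed:', err);
  }
}
```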
But what if I want to attach an image?

Level 1 of the Web Share API that you can see on the
screen doesn't support this yet.
The good news is that Web Share Level 2
has added file sharing capabilities.
Let me show you how to make this work with the Fugu
Greetings application.
First, I need to prepare a data object with a files array
consisting of one blob, and then a title and a text.
Next, as a best practice, I make use of the new
navigator.canShare() method that does what its name
suggests. It tells me if the data object I'm trying to
share can technically be shared by the browser.
If navigator.canShare() tells me the data can be shared,
I'm in the final step ready to call navigator.share() as
before. Again, everything can fail, in
the simplest way, when a user cancels the sharing
operation. So it's all wrapped in try...catch blocks.
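Putting those steps together as a sketch (browser-only; the filename, title, and text are placeholders):

```javascript
// Web Share Level 2 sketch: wrap the greeting card blob in a File,
// ask canShare() whether file sharing is supported, then share.
async function shareCardImage(blob) {
  const data = {
    files: [new File([blob], 'greeting-card.png', { type: blob.type })],
    title: 'Greeting card',
    text: 'Made with Fugu Greetings',
  };
  if (!navigator.canShare || !navigator.canShare(data)) {
    return; // Fall back, e.g. to sharing just a link.
  }
  try {
    await navigator.share(data);
  } catch (err) {
    console.error('Sharing failed:', err); // e.g. the user canceled
  }
}
```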
As before, I use a progressive enhancement loading
strategy. If both share() and canShare() exist on
the navigator object, only then I go forward and load
share.mjs via dynamic import.
On browsers like Mobile Safari that only fulfill one of
the two conditions, I don't load the full functionality.
If I tap the share button on a supporting browser, the native
share sheet opens.
I can, for example, choose Gmail, and the email composer
widget pops up with the image attached.
Up next, I want to talk about contacts.
And when I say contacts, I mean contacts as in the device's
address book. When you write a greeting card, it may not
always be easy to correctly write someone's name.
For example, I have a friend who prefers their name to be
spelled in Cyrillic letters.
I'm using a German QWERTZ keyboard and I have no idea how
to type their name.
This is a problem that the Contact Picker API solves.
Since I have my friend stored in my phone's contacts app,
via the Contact Picker API, I can tap into my contacts
from the web. First, I need to specify the list of
properties I want to access.
In this case, I only want the names, but for other use
cases I might be interested in telephone numbers, emails,
avatar icons, or physical addresses.
Next, I configure an options object and set multiple
to true, so I can select more than one contact.

Finally, I can call navigator.contacts.select(),
which results in the desired properties once the user
selects one or multiple of their contacts.
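The contact flow described above, as a browser-only sketch:

```javascript
// Contact Picker sketch: request only names, allow picking more
// than one contact, and hand back the selected entries.
async function pickContactNames() {
  const properties = ['name']; // could also list 'email', 'tel', 'icon', 'address'
  const options = { multiple: true };
  try {
    // Resolves to an array of objects like { name: [...] } once the user picks.
    return await navigator.contacts.select(properties, options);
  } catch (err) {
    console.error('Contact selection failed:', err); // e.g. canceled
    return [];
  }
}
```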
In Fugu Greetings, when I tap the contacts button and
select my two best pals, Sergey Mikhailovich
Brin and Lawrence Edward "Larry" Page, you can see how the
contacts picker is limited to only show their names, but
not their email addresses or other information like their
phone numbers.
Their names are then drawn onto my greeting card.
And by now, you've probably learned the pattern.
I only load the file when the API is actually supported.
Up next is copying and pasting.
One of our favorite operations as software developers is
copy and paste.
As a greeting card author, at times I might want to
do the same. Either paste an image into a greeting card I'm
working on, or the other way around, copying my greeting
card so I can continue editing it from somewhere else.
The Async Clipboard API, apart from text, also supports
images. Let me walk you through how I added copy
and paste to the Fugu Greetings app.
In order to copy something onto the system's clipboard, I
need to write to it.
The navigator.clipboard.write() method takes an array of
clipboard items as a parameter.
Each clipboard item essentially is an object with a blob as
a value and the blob's type as the key.
To paste, I need to loop over the clipboard items that I
obtain by calling navigator.clipboard.read().
The reason for this is that multiple clipboard items might
be on the clipboard in different representations.
Each clipboard item has a types field that tells me in
which MIME type the resource is available.
I simply take the first one and call the clipboard item's
getType() method, passing the MIME type I obtained before.
And almost needless to say by now, I only do this on
supporting browsers.
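Here is a sketch of both directions. It is illustrative rather than the app's real code: the function names are mine, and `nav` is injectable so the guards can be exercised anywhere; ClipboardItem is a browser global and is only reached when the detection succeeds.

```javascript
// Detect the Async Clipboard API; injected `nav` keeps the sketch testable.
const supportsAsyncClipboard = (nav) =>
  !!nav && !!nav.clipboard && 'write' in nav.clipboard && 'read' in nav.clipboard;

// Copy: one clipboard item, an object keyed by the blob's MIME type.
async function copyBlob(blob, nav = globalThis.navigator) {
  if (!supportsAsyncClipboard(nav)) return false;
  await nav.clipboard.write([new ClipboardItem({ [blob.type]: blob })]);
  return true;
}

// Paste: loop over the clipboard items and take the first advertised
// MIME type of the first item.
async function pasteFirstBlob(nav = globalThis.navigator) {
  if (!supportsAsyncClipboard(nav)) return null;
  for (const item of await nav.clipboard.read()) {
    return item.getType(item.types[0]);
  }
  return null;
}
```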
So how does this work? Here, I have an image open in the
macOS Preview app and copy it to the clipboard.
When I click paste, the Fugu Greetings app then asks me
whether I want to allow the app to see text and images on
the clipboard.
Finally, after accepting the permission, the image is then
pasted into the application.
The other way round works too.
Let me copy a greeting card to the clipboard.
When I then open Preview and click "File" and then
"New from Clipboard", the greeting card gets pasted into
a new untitled image.
Another useful API is the Badging API.
As an installable PWA, Fugu Greetings, of course, does
have an app icon that users can place on the app dock
or the home screen.
Something fun to do with it in the context of Fugu
Greetings is to use it as a pen stroke counter.
With the Badging API, it is a straightforward task to do
this. I've added an event listener that on pointer down
increments the pen strokes counter and sets the icon.
Whenever the canvas gets cleared, the counter resets, and
the badge is removed.
In this example, I have drawn the numbers from one to seven
using one pen stroke for each number.
The badge counter on the icon is now at seven.
This feature is a progressive enhancement, so the loading
logic is as usual.
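The counter itself can be plain state, with the badge calls guarded. A minimal sketch, assuming a `createStrokeCounter` factory of my own naming; only setAppBadge and clearAppBadge come from the Badging API.

```javascript
// Detect the Badging API on the given navigator-like object.
const supportsBadging = (nav) => !!nav && 'setAppBadge' in nav;

function createStrokeCounter(nav = globalThis.navigator) {
  let strokes = 0;
  return {
    // Called from a pointerdown listener: one stroke per pen-down.
    stroke() {
      strokes += 1;
      if (supportsBadging(nav)) nav.setAppBadge(strokes);
      return strokes;
    },
    // Called when the canvas is cleared: reset and remove the badge.
    clear() {
      strokes = 0;
      if (supportsBadging(nav)) nav.clearAppBadge();
      return strokes;
    },
  };
}
```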
Want to start each day fresh with something new?
A neat feature of the Fugu Greetings app is that it can
inspire you each morning with a new background image to
start your greeting card.
The app uses the Periodic Background Sync API to achieve
this. The first step is to register a periodicSync
event in the service worker registration.
It listens for a sync tag called 'image-of-the-day' and
has a minimum interval of 1 day, so the user can get a new
background image every 24 hours.
The second step is to listen for the periodicsync event in
the service worker. If the event tag is the one that was
registered a slide ago, the image of the day is retrieved
via the getImageOfTheDay() function and the result
propagated to all clients so they can update their canvases
and caches.
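The two steps just described can be sketched like this. The tag name and one-day interval follow the talk; the function names and the injected getImageOfTheDay callback are illustrative stand-ins for the app's own code.

```javascript
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

// Step 1 (page): register the sync tag with a one-day minimum interval,
// guarded so non-supporting browsers simply skip it.
async function registerImageOfTheDay(registration) {
  if (!registration || !('periodicSync' in registration)) return false;
  await registration.periodicSync.register('image-of-the-day', {
    minInterval: ONE_DAY_MS,
  });
  return true;
}

// Step 2 (service worker): react only to our tag and fetch the image.
function handlePeriodicSync(event, getImageOfTheDay) {
  if (event.tag !== 'image-of-the-day') return null;
  return getImageOfTheDay();
}
```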
Again, this is truly a progressive enhancement.
So the code is only loaded when the API is supported by the
browser. This applies to both the client code and the
service worker code.
On non-supporting browsers, neither of them is loaded.
Note how in the service worker, instead of a dynamic import,
I use the classic importScripts() function to the same
effect. Sometimes even with a lot of inspiration,
you need a nudge to finish a started greeting card.
This is a feature that is enabled by the Notification
Triggers API.
As a user, I can enter a time when I want to be nudged to
finish my greeting card. And when that time has come, I
will get a notification that my greeting card is waiting.
After prompting for the target time, the application
schedules the notification with a showTrigger.
This can be a TimestampTrigger with a previously selected
target date.
The reminder notification will be triggered locally; no
network or server side is necessary.
Like everything else I've shown so far, this is a progressive
enhancement, so the code is only conditionally loaded.
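Scheduling might be sketched as follows. TimestampTrigger and showTrigger come from the Notification Triggers API described above; the notification body text and the injectable makeTrigger factory are my own illustrative choices.

```javascript
// Build the notification options; the trigger factory is injected so
// this part can run outside a browser.
function buildReminderOptions(targetDate, makeTrigger) {
  return {
    body: 'Your greeting card is waiting!',
    showTrigger: makeTrigger(targetDate.getTime()),
  };
}

// Schedule the local reminder, guarded behind trigger support.
async function scheduleReminder(registration, targetDate) {
  if (!registration || typeof globalThis.TimestampTrigger === 'undefined') {
    return false; // progressive enhancement: no trigger support
  }
  await registration.showNotification(
    'Reminder',
    buildReminderOptions(targetDate, (t) => new globalThis.TimestampTrigger(t))
  );
  return true;
}
```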
I also want to talk about the Wake Lock API.
Sometimes you need to just stare long enough on the screen
until the inspiration kisses you.
The worst that can happen is for the screen to turn off.
The Wake Lock API can prevent this from happening.
In Fugu Greetings, there's an insomnia checkbox that,
when checked, keeps your screen awake.
In a first step, I obtain a wake lock with the
navigator.wakeLock.request() method.
I pass it the string "screen" to obtain a screen wake lock.
I then add an event listener to be informed when the wake
lock is released. This can happen, for example, when the
tab visibility changes.
If this happens, I can, when the tab becomes visible again,
reobtain the wake lock.
Yes, this is a progressive enhancement, so I only need to
load it when the browser supports the API.
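A sketch of that flow, with my own helper names; only wakeLock.request and the sentinel's release event are the actual API surface.

```javascript
// Detect the Screen Wake Lock API.
const supportsWakeLock = (nav) => !!nav && 'wakeLock' in nav;

// Request a screen wake lock and watch for its release (for example
// when the tab becomes hidden).
async function requestScreenWakeLock(nav = globalThis.navigator) {
  if (!supportsWakeLock(nav)) return null;
  const lock = await nav.wakeLock.request('screen');
  lock.addEventListener('release', () => {
    // The lock is gone; a visibilitychange handler can reobtain it.
  });
  return lock;
}

// Decide whether a visibilitychange handler should re-request the lock.
const shouldReacquire = (visibilityState, wantsLock) =>
  wantsLock && visibilityState === 'visible';
```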
At times, even if you stare at the screen for hours, it's
just useless.
The Idle Detection API allows the app to detect user idle
time. If the user is detected to be idle for too long,
the app resets to the initial state and clears the canvas.
This API is currently gated behind the notification
permission since a lot of production use cases of idle
detection are notifications-related.
For example, to only send a notification to a device the
user is currently actively using.
After making sure that the notifications permission is
granted, I then instantiate the idle detector.
I register an event listener that listens for idle changes,
which includes the user and the screen state.
The user can be active or idle, and the screen can be
unlocked or locked.
If the user is detected to be idle, the canvas clears.
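A sketch of the idle handling, assuming my own shouldClearCanvas and watchIdle names; IdleDetector, its change event, and the userState/screenState fields are the API pieces described above.

```javascript
// The IdleDetector constructor only exists in supporting browsers.
const supportsIdleDetection = () =>
  typeof globalThis.IdleDetector !== 'undefined';

// Clear when the user goes idle or the screen locks.
const shouldClearCanvas = (userState, screenState) =>
  userState === 'idle' || screenState === 'locked';

// Start watching, with a 60-second threshold like in the talk.
async function watchIdle(onClear, threshold = 60_000) {
  if (!supportsIdleDetection()) return null; // progressive enhancement
  const detector = new globalThis.IdleDetector();
  detector.addEventListener('change', () => {
    if (shouldClearCanvas(detector.userState, detector.screenState)) onClear();
  });
  await detector.start({ threshold });
  return detector;
}
```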
I give the idle detector a threshold of 60 seconds.
And as always, I only load this code when the browser
supports it.
Phew, what a ride!
So many APIs in just one sample app.
And reminder, we never make the user pay the download cost
for a feature that their browser doesn't support.
By using progressive enhancement, I make sure only the
relevant code gets loaded.
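The loading pattern used throughout the talk can be condensed into a sketch like this; the feature list, file names, and injectable importer are all illustrative.

```javascript
// Each feature pairs a support check with the module that implements it.
const features = [
  { supported: () => 'contacts' in (globalThis.navigator ?? {}), module: './contacts.mjs' },
  { supported: () => 'wakeLock' in (globalThis.navigator ?? {}), module: './wake-lock.mjs' },
];

// Only the modules whose checks pass would ever be fetched, so the user
// never pays the download cost for an unsupported feature.
function modulesToLoad(list) {
  return list.filter((f) => f.supported()).map((f) => f.module);
}

// Dynamically import just the supported modules; the importer is
// injectable so the selection logic can be tested without real files.
async function loadSupported(list, importer = (m) => import(m)) {
  return Promise.all(modulesToLoad(list).map(importer));
}
```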
And since with HTTP/2, requests are cheap, this pattern
should work well for a lot of applications.
Although at times you might still want to consider a
bundler for really large apps.
This has been a short overview of many of the APIs we're
working on in the context of Project Fugu.
Definitely check out our landing page where you can find
links to detailed articles for each API that I've talked
about. If you're interested in Fugu Greetings, go
find and fork it on GitHub.
And with that, thank you very much for watching my talk.
You can find me as @tomayac on GitHub, Twitter,
and the web in general.
I'm looking forward to answering your questions and I hope
you enjoy the rest of web.dev LIVE.
Hi, I'm Demian, a Web Ecosystem Consultant
at Google.
In this talk, we'll explore how different companies are
building fast, resilient experiences on the web.
We'll use the Workbox libraries to show how to implement
four different patterns in your site.
But all of these features can also be implemented
by manually writing the service worker code.
Our first pattern is called "Resilient search experiences"
and can be applied to any site that offers some type
of search functionality.
When a user searches for a topic in Google Search in
Chrome on Android devices and loses connection,
instead of the standard network error page, they are
presented with a custom offline page asking
if they want to opt-in for notifications.
If the user accepts the permission, once the connection is
back, they will receive a web push notification
informing them that the search result is ready.
Clicking on the notification will take the user to the
results screen.
This is a great way of keeping the user engaged
while letting them complete the task they were looking for.
At the heart of this implementation is the Background Sync
API, which lets you defer actions until
the user has stable connectivity.
In Workbox, this can be implemented very easily.
First, you can define a network-only caching
strategy for the search endpoint.
So these requests always go to the network.
Then you can pass a background sync plugin
to take care of the offline scenarios.
Let's see what the plugin looks like.
The Workbox Background Sync Plugin receives the name of a
queue to store failed requests to be retried
later.
The plugin also receives an onSync() callback, which
will be called once the connection is recovered.
Inside the callback, you can retrieve any failed request,
process them, and inform the user of the result.
For example, with a notification.
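To make the mechanics concrete without depending on Workbox itself, here is a minimal stand-in for what the BackgroundSyncPlugin automates: failed network-only requests land in a queue and are replayed once connectivity returns. The names are mine; the real plugin additionally persists the queue in IndexedDB.

```javascript
// A queue of failed requests plus an onSync callback fired after replay.
function createRetryQueue(onSync) {
  const queue = [];
  return {
    push(request) { queue.push(request); },
    size() { return queue.length; },
    // Called once the connection is recovered.
    async replay(fetchFn) {
      const results = [];
      while (queue.length) results.push(await fetchFn(queue.shift()));
      await onSync(results);
      return results;
    },
  };
}

// Network-only strategy: never answer from cache; queue on failure.
async function networkOnly(request, fetchFn, retryQueue) {
  try {
    return await fetchFn(request);
  } catch {
    retryQueue.push(request); // retried later, then the user is notified
    return null;
  }
}
```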
Before moving to the next pattern, let's take a look at an
important detail from this implementation.
You might have noticed that the notification permission is
requested when the user loses connection.
At that point, the user understands the value of the
service and knows that the notification will deliver
timely and relevant updates.
This is an example of a good implementation of the
web push permission. Our next pattern
is "Adaptive loading with service workers" and will allow
you to provide a fast experience, regardless of the network
and the device.
Terra is one of the biggest media sites in Brazil.
They have a large user base coming from slow and fast
connections.
To provide a more reliable experience to all their users,
they are combining service workers and the Network
Information API to deliver lower quality
images to users on 2G or 3G connections.
Terra took this strategy to the next level.
When users are navigating on slow connections, they deliver
the AMP version of the articles, which are more
lightweight and tend to perform better under these
conditions.
To implement this functionality in Workbox, you can first
apply your cache-first strategy to images.
Then you can pass an expiration plugin to limit the number
of entries in the cache.
You can extend this strategy by creating a custom
plugin that we will call adaptive loading
plugin.
Inside the plugin, you can listen for the
requestWillFetch() callback.
That will be called before the request is made
so you can apply a transformation to it.
Inside the callback, you can check the connection type.
If it's a slow connection, you can create a new
URL for a lower image quality.
Finally, you can create a new request based
on that URL and fetch the most appropriate image
file according to these conditions.
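The decision at the heart of that hook can be sketched as plain logic; the quality-segment URL shape (`q_80` → `q_30`) is a Cloudinary-style example of my own choosing, and effectiveType comes from the Network Information API.

```javascript
// Connection types the Network Information API reports as slow.
const SLOW_TYPES = new Set(['slow-2g', '2g', '3g']);

const isSlowConnection = (connection) =>
  !!connection && SLOW_TYPES.has(connection.effectiveType);

// What a requestWillFetch-style hook would do: rewrite the image URL
// before the request is made, lowering quality on slow connections.
function adaptImageUrl(url, connection) {
  if (!isSlowConnection(connection)) return url;
  return url.replace(/q_\d+/, 'q_30');
}
```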
If you are using Cloudinary, there is a Workbox Cloudinary
plugin, making this feature even easier to implement.
Check it out.
As you might have noticed, the first two patterns have some
things in common.
We have combined the functionality of runtime caching
strategies with plugins.
This shows one of the benefits of using Workbox,
allowing you to extend the standard features in a very
easy way.
Let's move now to the second part of the talk.
Our third pattern is called "Instant navigation
experiences", and it's useful for any
type of site.
Performing a task in a website might involve several steps.
Each of them meaning a navigation request.
Navigation requests, like requests for HTML pages,
are normally satisfied via the network.
This means using a cache-control header of no-cache
or a max-age of zero to ensure that the response
is reasonably fresh.
But having to go against the network means that each
navigation might be slow or, at the least,
not reliably fast.
To speed up these navigations, you can apply a technique
called "prefetching".
In this example, Mercado Libre, the largest e-commerce site
in Latin America, dynamically checks link
prefetch tags in listing pages to accelerate
parts of the flow.
But prefetching is not only useful for e-commerce
sites.
Italian sports portal, Virgilio Sport,
uses service workers to prefetch the most popular
posts that appear on their home page before the user even
clicks on them.
As a result, load time for navigation to articles
has improved by 78%,
and the number of article impressions has increased by
45%.
Prefetching is commonly implemented by using a resource
hint in your pages: <link rel="prefetch">.
The tag tells the browser to fetch a resource at the lowest
priority and keep it in the HTTP cache
for five minutes.
On the service worker side, you can intercept requests
for HTML pages so that you can extend the lifetime
of the prefetched resource beyond the five-minute window.
For HTML pages, a StaleWhileRevalidate strategy
is a good option to respond quickly from the cache
while simultaneously keeping it up to date.
Before moving to the final pattern, there's a slight
variation of this technique.
Instead of using resource hints in the page, some
developers prefer to delegate prefetching completely
to the service worker.
For that, you need to implement a page-to-service-worker
communication technique.
The Workbox Window package allows you to do that.
So if you are interested in following that route, you can
check that out.
We have reached the end of our talk.
Our final pattern is App-shell UX with
service workers. And it's useful if you want to make
multi-page apps feel like single page applications.
DEV has become one of the favorite platforms for software
developers.
The architecture of their site is a multi-page app.
Their team was interested in the benefits of the app-shell
model but didn't want to incur a major architectural
change. So let's see what they did.
First, they created partials for the header and
the footer of the home page.
These assets are added to the cache at the service
worker "install" event, what's commonly referred
to as "precaching".
The content of the page is the only part that's
actually being fetched from the network when navigating.
But the key ingredient of this solution is
the usage of streaming.
Thanks to that, bytes can start being painted
on the screen before the full response is ready.
In Workbox, you can start by creating a regular expression
to match requests for pages.
Then you can pass an array of stream responses to compose.
For the header and the footer, you can use a cacheFirst
strategy.
For the content, you can use a networkFirst.
All the streaming sources will be composed by Workbox
and sent to the client.
Thanks to streams, the header can start being painted
as soon as it's picked up from the cache without having to
wait for the full response.
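The composition itself can be sketched with async generators standing in for real ReadableStreams (which is what workbox-streams actually composes): header and footer resolve immediately from cache, the content arrives from the network, and chunks are emitted strictly in order.

```javascript
// Concatenate chunk sources in order; each source: () => Promise<string>.
async function* composeShell(sources) {
  for (const source of sources) {
    yield await source();
  }
}

// Render a page from the three shell pieces described above.
async function renderPage(headerFromCache, contentFromNetwork, footerFromCache) {
  let html = '';
  for await (const chunk of composeShell([
    headerFromCache,    // cache-first: available immediately
    contentFromNetwork, // network-first: the only slow part
    footerFromCache,
  ])) {
    html += chunk;
  }
  return html;
}
```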
We have seen four advanced patterns for speed and
resilience.
As a complement of this talk, we'll be uploading guides
and codelabs, so you can see them in more detail.
Please check web.dev/progressive-web-apps
and web.dev/reliable.
Thanks for watching.
Hi, I'm Andre. Today, we're going to answer some frequent
questions about the new features and capabilities for
installed PWAs that were reserved previously
only to native apps.
And to tell us more about what is new and help us
with some hard questions, we have a guest.
Hi, PJ! Tell us about your role at Google.
Thanks so much, Andre.
I'm PJ. I'm a product manager on the Chrome Web Platform
team. I work on Progressive Web Apps, usually called
PWAs. Basically, Progressive Web Apps
are modern applications built using web technologies
that are making users happier.
PWAs have a lot of capabilities, of which
one is that they can be installed into a user's computer
just the same as any other application.
Oh, cool. So that means we have exactly the right person in
the room.
For people in the audience who are not yet familiar with
installable PWAs, can you tell us a bit more
about what they are and where
they are available?
Being installable is really a standard feature for PWAs
because it's giving web developers the ability to make
applications that can be started from the Start menu
on Windows, from the Application folder on Mac,
the home screen on Android and iOS.
And these can really look and feel like any other

English: 
Tell us about your
role at Google.
PJ MCLACHLAN: Thanks
so much, Andre.
I'm PJ.
I'm a product manager on the
Chrome Web Platform team.
I work on Progressive Web
Apps, usually called PWAs.
And basically,
progressive web apps
are modern applications built
using web technologies that
are making users happier.
PWAs have a lot of
capabilities, of which one
is that they can be installed
into a user's computer just
the same as any
other application.
ANDRE BANDARRA: Oh, cool.
So that means we have exactly
the right person in the room.
For people in the
audience who are not yet
familiar with
installable PWAs, can you
tell us a bit more
about what they are
and where they are available?
PJ MCLACHLAN: Being installable
is really a standard feature
for PWAs, because it's giving
web developers the ability
to make applications that can
be started from the Start menu
on Windows, from the
Application folder on Mac,
the Home screen on
Android and iOS.
And these can
really look and feel
like any other
application on the device.

English: 
So for applications that users are using repeatedly, being installed means that the app is a little bit more top of mind for the user, because that launching surface is immediately accessible: they don't have to navigate anywhere in the browser to get back to the application. It also means that the application shows up in the activity switcher as a separate app. And that makes install quite attractive to developers. But I want to be clear that a PWA doesn't have to be installed to be a PWA. Being installable is just one property of a PWA.

You asked about distribution, so let's talk for a moment about where PWAs are available. First, PWAs can be installed directly from the web browser on both desktop and Android. On desktop, PWAs can also be listed in the Windows Store. On Android, you can find PWA-powered Android apps in the Play Store. These use a technology called a Trusted Web Activity.

You also see PWAs in the Samsung store. And you might have heard that PWAs are showing up in the Chrome OS Play Store. That's an early-access feature I'm really excited about, so let's save a little time at the end of the session to talk more about that.

ANDRE BANDARRA: Oh, I'm really looking forward to learning more about the Chrome OS Play Store. But before that, can you tell us about the recent features that you think are most exciting for developers?

PJ MCLACHLAN: I really wish that we had time to go into everything that's shipping, because there's a lot happening right now, but I'm going to have to just pick a few favorites for today. The features that I'm most excited about are some of the things that web developers could previously only do using a hybrid technology, like an Electron app or a Cordova app. And to begin here, let me just mention that the ability to install PWAs on desktop is still really new. So for those of you in the audience who have been paying close attention and might have seen the announcement at I/O last year, this might seem like old news. But I think for a lot of the web development community, this is still a very new feature.

And people are still getting used to this superpower: that install can happen everywhere. You can now write one app and have it be installable on desktop, on tablet, and on smartphones. And users can discover that app on your website, in search results, in Play, in the Windows Store, in the Samsung store. This is giving web developers really unprecedented reach for distribution of their applications. So I'm just super excited that it's now possible to have install across all of these different screens and through all of these different channels.

The other features that I'm most excited about are all the capabilities that were previously only possible with Cordova or Electron: for example, registering a file type handler for an app; offering an immersive mode, for creating better web games; adding context menus for shortcuts; and more.

ANDRE BANDARRA: So the file type handler would allow a user to start a web-based image editor by double-clicking on an image in their OS file explorer? That's exciting.

PJ MCLACHLAN: Exactly that. With file type handling, you can register a file extension or MIME type. So let's say that you've written a new type of image editor, and you can edit JPEG and PNG files. You could register a file type extension for those file types, and then those file types will automatically open in your editor when the user double-clicks on them. A word of warning: file type handling isn't quite here yet. We expect it to go into origin trial in Chrome 85 in August, and to be available generally sometime in the fall or winter.
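As of this talk, file type handling was still pre-origin-trial, so the exact shape may change; as a rough sketch, the registration PJ describes is expected to live in the web app manifest (the action path and file types here are made-up placeholders):

```json
{
  "file_handlers": [
    {
      "action": "/open-image",
      "accept": {
        "image/jpeg": [".jpg", ".jpeg"],
        "image/png": [".png"]
      }
    }
  ]
}
```

When the user double-clicks a matching file, the installed app is launched at the `action` URL and, in the current Chrome proposal, receives the file handles via `window.launchQueue`.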
ANDRE BANDARRA: Looking forward to this one going stable. Tell me more about immersive mode. What does that mean?

PJ MCLACHLAN: Immersive mode is a term that's been borrowed from native. It's a fullscreen mode, and basically it removes all of the operating system decorations: no status bar, no navigation bar.
And this is great for games or other media: basically, when you want to be able to address every single pixel on the screen.
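On the web side, immersive mode maps to the fullscreen display mode, which an installed PWA can request in its manifest (a minimal sketch):

```json
{
  "display": "fullscreen"
}
```

Pages can also enter fullscreen at runtime with `element.requestFullscreen()` from the Fullscreen API.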
ANDRE BANDARRA: So I could start a video player fullscreen from the icon on the home screen? Nice. And what about app shortcuts?

PJ MCLACHLAN: Sure. App shortcuts are a way to provide quick access to important functions in the app directly from the app icon. For example, on Android you might be familiar with the long press on an app icon on the home screen. If you were to long-press on a home screen icon for, say, a mail application, you might see compose functionality directly in the menu that appears when you long-press on the mail client. App shortcuts also work on desktop operating systems, and that support will be arriving in Chrome 85.
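App shortcuts are declared in the web app manifest's `shortcuts` member. A minimal sketch for the mail example PJ gives (the name, URL, and icon path are placeholders):

```json
{
  "name": "Example Mail",
  "shortcuts": [
    {
      "name": "Compose new email",
      "url": "/compose",
      "icons": [{ "src": "/icons/compose-96.png", "sizes": "96x96" }]
    }
  ]
}
```

Each shortcut's `url` is resolved within the app's scope, which is what makes this behave like deep linking from the icon.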
ANDRE BANDARRA: Interesting. So that's like deep linking to parts of my PWA
directly from the icon that's on the home screen?

PJ MCLACHLAN: Exactly.

ANDRE BANDARRA: Switching gears a little bit: we launched Trusted Web Activity at last year's I/O, and since then we have had many feature requests and questions from developers. I wanted to go through some of those with you today. First, developers have pointed out that the way permissions work in the browser and in native is different, and that makes users a little bit confused. As an example, native apps get the notification permission by default, while web apps need users to accept the permission first. How are we planning to solve those inconsistencies?

PJ MCLACHLAN: Let me start by sharing my perspective on the philosophy here. I don't think that users should need to think about what technology was used to build the application that they're using. Users just have a job to get done.
And we want to help developers help users as easily as possible. So wherever it makes sense, I think an installed web application should use the operating system's typical UI for things like managing permissions and launching and switching between applications, just to match the user's expectations for how things should behave on the device that they're using.

So we've introduced this concept of notification delegation into installed PWAs. That means that when a PWA is installed, it will delegate the notification permission to the native settings area on Android. Another difference between an app installed from Play and a web app, for example, is that an app installed from Play automatically receives the notification permission. We want that experience to be the same for Android applications that were built using a PWA, and that's why we've delegated the web notification permission to the notification settings panel in Android.

And these apps can be configured to auto-enroll users in notifications, so that they look, feel, and behave exactly like any other Android app, and the user doesn't even need to know that the application was built using web technology. In the future, we're going to be adding location to the settings panel, too. Of course, there won't be any auto-enrollment for location, because users are not auto-enrolled in the location permission in Android native apps either. We're just going to match the native behavior for apps that are installed from, say, the Play Store, or from any other distribution channel where the user may or may not be aware that the application is a web application. We're going to continue to delegate more permissions and match OS preferences over the next few releases.

ANDRE BANDARRA: Got it. So this will make the experience more seamless for users, regardless of the technology the developer used to build the app.

Another frequent request: developers sometimes feel that when they make an Android app using a PWA and Trusted Web Activity, they should have a communication channel between the native application and the web app. This would allow them to use native platform capabilities where an equivalent on the web doesn't exist already. Is this something that is being considered?

PJ MCLACHLAN: I'm really glad you asked this question, because this is exactly the kind of product question that I really love, and I'd like to hear from our audience on this one. Today you can pass parameters into a Trusted Web Activity when you launch it, and you can use intents to leave the Trusted Web Activity and pass some information into another activity inside of your app. We're considering adding support for a message bus; for example, we could extend postMessage() to enable a bus with this functionality. However, I don't have use cases from developers on exactly what they need here.

Most capabilities are already in the web platform, or are planned as part of the Fugu effort to add capabilities to the web platform. So if the web platform has missing capabilities, I'd really rather add those capabilities to the web than create a message bus to native. Because if a capability is part of the web platform, then it's going to work everywhere, and developers only need to have one code base, which is simpler. And that code base will work in multiple browsers, whether the app is installed or not, et cetera.

So I'd like to turn this into a question for our audience: what do you need to do in native code that you can't do today on the web platform? Are there things we could do to improve the web platform so you wouldn't need to do that in native? Or perhaps it's something that could only be done in native, and you really need that message bus. I'd really like to hear from you about this.

ANDRE BANDARRA: Cool. I'm also looking forward to hearing from folks on Twitter,
or if they're watching this live, on the live chat. Many developers have a native application, and prompting users to install a PWA in the browser when they already have the native app installed can lead to some confusion. Is there a way to prevent the prompts from showing when the native app is already installed?

PJ MCLACHLAN: That's a great question. It's also probably one of the top concerns that I hear from teams that are implementing a progressive web app. It's called channel conflict, and it arises when you're not sure which experience is going to be best for the user. I think the most important thing for developers to know is that you have full control over the promotion of PWA installs, so you don't need to worry about the browser promoting your PWA if you don't want it to. There are ways you can prevent the browser from promoting the installation of your PWA if, for example, you have a native app. So let's talk about how that works.

First, in the web app manifest there are a couple of fields you should know about. One is the related_applications field, and this is where you can list native apps for Android and iOS. Then there's another boolean field called prefer_related_applications, and if this is set to true, the browser is not going to promote the install of your PWA.
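In manifest terms, the two fields look roughly like this (the Play package ID is a placeholder):

```json
{
  "related_applications": [
    {
      "platform": "play",
      "id": "com.example.app"
    }
  ],
  "prefer_related_applications": true
}
```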
Secondly, there's an event that fires when a PWA passes the installability check in the browser. When this happens, developers can call its preventDefault() method, and that's going to block any promotion of the PWA install in the browser UI.
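The event PJ is describing is Chrome's `beforeinstallprompt`. A minimal sketch of suppressing the browser's install promotion; the handler is a plain function so the browser wiring stays a one-liner:

```javascript
// Stash the beforeinstallprompt event so install can be offered later
// from the app's own UI instead of the browser's promotion.
let deferredPrompt = null;

function handleBeforeInstallPrompt(event) {
  event.preventDefault(); // block the browser's own install promotion UI
  deferredPrompt = event; // keep the event to trigger install later
}

// Browser wiring (a no-op outside a browser environment):
if (typeof window !== "undefined") {
  window.addEventListener("beforeinstallprompt", handleBeforeInstallPrompt);
}
```

A custom install button can later call `deferredPrompt.prompt()` to show the install dialog on the developer's own terms.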
Finally, there's a new JavaScript API that landed earlier this year called getInstalledRelatedApps(). This is going to let developers inspect whether the user has native apps installed on the device that are associated with the origin the user is currently on. And just to be clear, this isn't going to let you see every app that the user happens to have installed on the device: the app has to be associated with the origin that the user is on.
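A sketch of that check, with the decision logic factored out so it can run anywhere; `shouldPromoteWebInstall` is a hypothetical helper, and in the browser the lookup would be `navigator.getInstalledRelatedApps()`:

```javascript
// Decide whether to promote the PWA install, given a function that resolves
// to the list of installed related apps. In the browser this would be
// () => navigator.getInstalledRelatedApps().
async function shouldPromoteWebInstall(getRelatedApps) {
  const relatedApps = await getRelatedApps();
  // Skip the PWA install promotion if a related native app is present.
  return relatedApps.length === 0;
}

// Browser usage (only apps listed in related_applications are returned):
// const promote = await shouldPromoteWebInstall(
//   () => navigator.getInstalledRelatedApps());
```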

This API does allow a lot of programmatic flexibility for developers, so you have full control over which experience you want to promote to the user. You could use, for example, user behavior, or you could provide the user with preferences in your HTML. It's really up to you how you want to use this control, but it does give you, as a developer, a lot of control over what you promote to the user and when it happens.

ANDRE BANDARRA: So this means I can use this API to check if my native application is installed, and make decisions like showing the add-to-home-screen prompt or not. What if my native app is using Trusted Web Activity? Will this API still return it?

PJ MCLACHLAN: Yes, absolutely. This does mean that you can use the API to check if your native app is installed. A Trusted Web Activity is really just an Android app, so it will show up just like any other Android app, and this API will return it.

It will also return an app that has been installed from the web browser, so a PWA installed from the web browser will also get returned by getInstalledRelatedApps().

ANDRE BANDARRA: Cool. On the developer experience side, we've been working on tools like Bubblewrap to help developers build their projects using Trusted Web Activity. But many developers still wonder whether it wouldn't be easier if they could just copy and paste a URL into the Play Store.

PJ MCLACHLAN: That's a great question. The reason why we've been focusing our efforts on making this easier for developers with Bubblewrap is that we can create a much more powerful, much more flexible system for building Android apps with a command-line utility like Bubblewrap than we could in a web interface. App stores are really a different ecosystem, and they have different requirements and policies. And we believe that developers should have the powerful tools they need to rethink the experience of their applications for this environment.
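For reference, the Bubblewrap flow looks roughly like this (a sketch: it assumes Node.js and the Android build tooling are available, and the manifest URL is a placeholder):

```shell
# Install the Bubblewrap CLI.
npm install -g @bubblewrap/cli

# Generate an Android (Trusted Web Activity) project from a live web app manifest.
bubblewrap init --manifest https://example.com/manifest.json

# Build the signed package for upload to the Play Store.
bubblewrap build
```

The `init` step is where the design decisions PJ mentions (splash screen, colors, icons) get configured interactively.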

And we also want to avoid giving developers the perception that they can just drop a website into the store without thinking about that experience. There are all kinds of design decisions that should go into building an Android app, whether or not it's a PWA powering that app: for example, what's the splash screen going to look like? When are you going to hide it? When are you going to show content? Do you need notifications? There wouldn't be this kind of flexibility for configuring these options with a web interface. That being said, we want it to be as easy as possible, so we've worked to streamline it to the maximum extent that we can with Bubblewrap and make it a powerful but flexible and easy-to-use developer tool.

ANDRE BANDARRA: What about Play Store policies? Do Android apps that use PWAs inside a Trusted Web Activity still need to comply with those policies?

PJ MCLACHLAN: I think you said the magic word there, which is "Android app." It doesn't matter how your Android app is built, whether it's built using PWAs and web technology, or whether it's built using Java or Kotlin. Play policies apply to all Android apps distributed
in the Play Store, and therefore they also apply to Android apps built using progressive web apps.

ANDRE BANDARRA: Gotcha. So the same store policies apply. What about applications designed for children? Can developers use web technology in those apps?

PJ MCLACHLAN: If the target audience for your application is children, you need to comply with the Play Family Policy requirements. These requirements are intended to help keep minors safe from inappropriate content. Unfortunately, that's really difficult for the review teams to evaluate with web apps, where the content can change. And not only can the first-party content change, but it's really easy to include third-party content inside a web application, which can be unintentionally inappropriate. Imagine that you are using an advertising network or something else where you're loading content in from a third-party site: you might not be able to verify for yourself whether or not that content is always going to be appropriate.

So for the time being, it's not possible to build Android apps using Trusted Web Activity that comply with Play Family Policy. We are working on ways to make this possible in the future.

ANDRE BANDARRA: Got it. So it seems PWAs are crossing over ecosystems, and developers need to adjust some of their expectations for that. Developers using Trusted Web Activities are also expected to fulfill quality criteria. Is that why those criteria exist?

PJ MCLACHLAN: Yeah. Nobody wants an app store that's cluttered with low-quality apps, and we also want developers to succeed with their apps in the Play Store. Keep in mind, something that's really different about distributing through the Play Store, compared to just building a PWA on your own site, is that these apps have user ratings and reviews. And we want to make sure that developers are set up for success to get good ratings and reviews for their app. So at a minimum, users expect apps that they install from the Play Store to look and feel app-like, to be fast, and to work offline.

English: 
So for the time being, it's not possible to build
Android apps using Trusted Web Activity that comply with
Play Family Policy.
We are working on ways to make this possible in the future.
ANDRE BANDARRA: Got it. So it seems PWAs are crossing over ecosystems, and
developers need to adjust some of their expectations
from that.
Developers using Trusted Web Activities are also expected
to fulfill quality criteria.
Is that why those criteria exist?
PJ MCLACHLAN: Yeah. Nobody wants an app store that's going to be cluttered with
low-quality apps.
And we also want developers to succeed with their apps in
the Play Store. Keep in mind something that's really
different about distributing through the Play Store than
it is to just sort of build a PWA on your own site, is
that these apps have user ratings and reviews.
And we want to make sure that developers are set up for
success to get good ratings and reviews
for their app. So at a minimum, users expect apps
that they install from the Play Store to look and feel
app-like, to be fast, to work offline.

And that's why we have quality criteria for PWAs
in the app store. And this is exactly what Progressive Web
App criteria has been intended to do from
the very beginning. So first, I want to share
that there are certain types of events that can happen to a
web application that are effectively a crash.
And these are things like a 404 happening.
These are things like failing an offline check when the
user goes offline and returning the
Chrome dino. This is effectively a crash.
Failing the digital asset links verification, which is
something you need to do with a Trusted Web Activity to
verify that you are the owner of the content
and the owner of the application in the app store.
So if any of these things happen, starting in Chrome
86 in October, we are going to be mapping those crashes
into Android vitals crash events, and you
will see Android apps begin to crash if users
are running into 404s in your application, etc.
So that's something that developers should be aware of.
And we're making another announcement with more
details about this.
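Of the crash-equivalent events above, the offline one is the easiest to design away: a service worker can serve a cached fallback page instead of letting the network error (the dino) appear. A minimal sketch, assuming a hypothetical /offline.html page; the cache name is likewise a placeholder:

```javascript
// sw.js — serve a cached fallback when a navigation fails offline, so
// the failure doesn't surface the Chrome dino (which counts as a
// crash-equivalent event for a Trusted Web Activity).
const OFFLINE_CACHE = "offline-v1";
const OFFLINE_URL = "/offline.html";

// Guarded so the snippet is inert outside a service worker context.
if (typeof self !== "undefined" && "caches" in self) {
  self.addEventListener("install", (event) => {
    // Precache the fallback page at install time.
    event.waitUntil(
      caches.open(OFFLINE_CACHE).then((cache) => cache.add(OFFLINE_URL))
    );
  });

  self.addEventListener("fetch", (event) => {
    // Only navigations can show the dino; other requests pass through.
    if (event.request.mode === "navigate") {
      event.respondWith(
        fetch(event.request).catch(() => caches.match(OFFLINE_URL))
      );
    }
  });
}
```

Registered as the site's service worker, this keeps offline navigations from ending in a network error page.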

The second thing is on the performance side.
Right now, I don't have a date to announce with respect
to enforcement of performance.
But developers should be aware that the criteria is
a full badge PWA and a Lighthouse performance score
of 80 or better. And you can use webpagetest.org/easy
or PageSpeed Insights against your start URL as
the fastest way to check whether or not you're meeting this
criteria. Because having the app launch quickly
is an essential part of having a great application
experience for your users.
We will be providing a lot of notice as to when enforcement
will begin. It will be in 2021.
It won't be in 2020 for the performance and full badge PWA
criteria. So expect to hear more about this
later in the year.
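As a rough illustration of that "80 or better" bar, here is a sketch of checking a PageSpeed Insights-style response. The field path follows the PSI v5 API, which reports category scores on a 0-to-1 scale, but treat the exact response shape as an assumption:

```javascript
// Check a Lighthouse performance score against a minimum bar.
function meetsPerformanceBar(psiResponse, minScore = 80) {
  // PSI v5 reports category scores on a 0-1 scale.
  const score = psiResponse.lighthouseResult.categories.performance.score;
  return score * 100 >= minScore;
}

// Example with a stubbed response in place of a real API call:
const stubbedResponse = {
  lighthouseResult: { categories: { performance: { score: 0.83 } } },
};
```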
ANDRE BANDARRA: So, last one: Twitter has recently launched
their PWA on the Chrome OS Play Store, which was
quite exciting.
Can you tell us more about how they did
it and when this is becoming generally available?

PJ MCLACHLAN: I'm really excited about this.
Right now, this is an early access program and it requires
manual intervention from our team members.
So it's not something that we can extend to everyone in the
community, but we're working on getting it to general
availability. And when that rolls out, it'll be possible
for everyone to just do this themselves.
I'll give you a hint. It uses Trusted Web Activity.
So if you're building a great Trusted Web Activity, it'll
be really easy to make
your Progressive Web App available in the Chrome OS Play
Store in the future.
And I hope we can share more about this in the second half
of the year.
ANDRE BANDARRA: So, wow. Trusted Web Activities are coming to Chrome OS.
Nice.
Well, I think we covered a lot.
I wish we had more time to discuss.
PJ MCLACHLAN: I do, too. And so for those of you who are
watching, please do reach out to us on Twitter if you want
to give feedback.
And if you're watching this live, please join us on the
live chat. We'll see you there.
And thanks so much for watching.
ANDRE BANDARRA: Yes, thanks for watching!

[MUSIC PLAYING]
PJ MCLACHLAN: Hi, everyone!
Thanks for joining. I'm PJ.
I'm a Product Manager on the Chrome Web Platform team
responsible for Progressive Web Apps, notifications,
and permissions. Today's talk is about quieter notification
permission prompts and how recent changes to how Chrome
handles notification permission requests
can make browsing the web a little better for everyone.
There's also important information in here for developers
who use notifications to help you improve your user
experience, improve your notification accept rates,
and to tell you about upcoming changes that will detect and
flag abusive use of notification prompts
or content.
If you're not familiar with them, web notifications are a
channel for communicating timely and contextually relevant
information to the user.

Mostly these work just like push notifications in mobile
apps, except they can also work on desktop,
on Windows, Mac, as well as smartphones.
On Android, for example, web notifications appear
in the notification drawer, and on desktop, they typically
appear in the top right corner of the screen.
In some cases, notifications aren't just helpful,
they're almost essential to the app's functionality.
For example, if you had an incoming call from a
communication app like Google Duo or Chat,
that's not something you want to know about later.
You need to know about it right away.
Of course, not everyone uses apps that
require notifications, and not all websites
are putting the needs of their users first.
That means that we are seeing a lot of websites out there
that are misusing notifications in ways that are annoying
or could be abusive.
Before we get into that though, I want to talk about how
users get enrolled in notifications.
To receive web notifications, a user needs to accept
a notification permission request.
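That enrollment goes through the Notification API. A minimal sketch of gating the prompt so it only ever fires at an in-context moment; userRequestedUpdates here is a hypothetical in-app signal, not part of any browser API:

```javascript
// Pure gate: only prompt when permission is still undecided ("default")
// and the user just did something that implies they want updates.
function shouldPrompt(permissionState, userRequestedUpdates) {
  return permissionState === "default" && userRequestedUpdates === true;
}

// Browser wiring: call this from the in-context moment, never on load.
async function maybeEnableNotifications(userRequestedUpdates) {
  if (typeof Notification === "undefined") return "unsupported";
  if (!shouldPrompt(Notification.permission, userRequestedUpdates)) {
    return Notification.permission; // already granted, denied, or no signal
  }
  return await Notification.requestPermission();
}
```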

When websites prompt users out of context for a
notification, such as when a user first arrives on the
website, it can be a pretty annoying distraction,
both from the browsing experience itself, as well as from
the website's content.
Worse, some abusive websites look for ways
to trick users into accepting notifications
that are then used to promote malware or undesired content.
I want to cover why we have notifications on the
web platform in the first place in a little bit more depth.
The web platform is there to enable web developers to
create powerful applications, and web notifications
are part of that toolkit.
Without notification support, there would be entire types
of apps that would be simply impossible to build
using web technology.
So, for example, messaging apps, calendars,
e-commerce or food delivery notifiers, taxi or
ride sharing apps all depend on notifications
to provide a timely tap on the shoulder to the user.

And while you can imagine that some of these apps might be
usable without notifications, you can see that most of the
time you're probably going to want one.
We've also all equally experienced some of the misuse
of notifications though.
That includes things like unwanted marketing,
promotions, or content that just isn't very important or
relevant to us at a given moment.
To address this problem, starting in Chrome 80, which was
released in January of 2020, we started making changes
to how these notification requests work to help make
browsing the web safer and less interruptive.
We're going to get into that new UI in the next slide.
In Chrome 80, we added quiet notification UI.
Quiet notification UI is less interruptive, but
it still lets the users know that the request has been
made.
There's a little bit of animation to catch the eye, but on
desktop, the dialog is in the omnibox,
so it doesn't actually cover any part of the web content.
On mobile, the notification prompt used to be a modal in
normal UI, but in quiet UI it's an easily dismissed
infobar at the bottom of the screen.

Quiet notification UI aims to reduce the visual
priority and interruptiveness of notification requests.
On desktop, which you see in this example on the left,
you'll notice the bell icon initially animates with text
indicating that notifications are blocked on the site.
On mobile, the quiet UI is now an infobar,
and in both of these cases, in-product help explains
to the user why notifications were blocked on
the site.
Quiet notifications UI was created specifically to address
the concerns I mentioned earlier in this talk.
We received frequent complaints from users in Chrome
feedback about disruptive notification permission prompts
or unwanted notifications.
That being said, there are services with tens or hundreds
of millions of users, like messaging apps and calendars,
that are depending on timely web notifications every single
day.
Let's talk for a moment about how users get enrolled in
quiet notification UI.
There are several ways that this can happen.

First, users could just enroll themselves manually
by changing their preferences in Chrome settings.
Second, sites that have very low accept rates
for notification permission requests will be automatically
enrolled in quiet notifications UI.
And this is currently sites that have the lowest few
percentile of notification accept rates.
So the absolute rate needed for quiet notification UI
does change over time because we are using percentiles.
We'll also periodically increase the accept rate percentile
that's needed to preserve normal notification UI.
We always keep a control group of users that are in normal
notification permission UI, so that if a site's accept
rates improve, we can remove it from
quiet UI enforcement.
Third, there are some users who almost never accept any
notification permission requests.
These users simply don't want notifications.
And for these users, we adaptively enable quiet
notification mode on their behalf in Chrome settings
if they repeatedly block notification requests.
As sites improve their behavior and use of notification
permission requests, we expect that there will be fewer and
fewer users who are adaptively placed in quiet
notification UI mode.
Finally, and this is starting in Chrome 84,
which is coming up soon, we're going to begin enforcing
against abusive notification prompts that try to mislead
users, are phishing for private information, or promoting
malware. In this case, in addition to quiet notifications
UI, the user is going to be advised in the notification
prompt that the site may be trying to trick them.
So what should you do to make sure your web site is not
enrolled in quiet notification UI?
Well, first and foremost, if you're prompting users to
enroll in notifications as soon as they arrive on your
website, please stop doing that.
This is the easiest way to improve your notification accept
rate. Very few users will accept a notification from a
site they're visiting for the first time.
And if you think about it, why would they?

We're all experiencing information overload.
Wait until you know your user better and you know you can
add value for your user before you prompt them.
You can and should prompt your user to accept notifications
when there's a clear user benefit and in the context of the
user's journey in your application. Websites
that ask for notification permission in the context where
the benefit is clear to the user have 80% accept rates or
higher. That should be your goal.
Even if you do the best possible job with your notification
prompt UX, it's possible that some of your users
may be in quiet notification UI mode.
So the first thing you want to check here is to make sure
that the accept rates on your site are what you really
expect them to be.
Notification accept rate data is in the Chrome User
Experience Report, which is a public database containing
important information about real-world Chrome metrics
for popular destinations on the web.
There's a minimum number of users and decisions that are
required for data to be available in the Chrome User
Experience Report.

And that's to help with preserving visitor privacy.
So if your site doesn't have data in the Chrome User
Experience Report, you may need to get that information
from somewhere else.
For example, most notification service providers
will have this instrumented so that you can check your
accept rates.
Or if you've rolled your own notifications implementation,
you may need to add your own instrumentation.
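Rolling your own instrumentation can be as simple as recording each permission decision and computing the share of grants. A small sketch; the decision labels are illustrative, not a standard API:

```javascript
// Compute the accept rate from a log of permission prompt decisions.
// decisions: array of "granted" | "denied" | "dismissed"
function acceptRate(decisions) {
  const total = decisions.length;
  if (total === 0) return null; // no data yet, nothing to conclude
  const granted = decisions.filter((d) => d === "granted").length;
  return granted / total;
}
```

For example, acceptRate(["granted", "denied", "granted", "dismissed"]) returns 0.5, well short of the 80% goal mentioned above.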
It's also a good idea to look at the notification accept
rates of sites that are like yours in the Chrome User
Experience Report, so you can get a sense for what the
benchmark accept rates are and aim to be better than
those.
The article linked here in the slide will give you more
details about how to use the Chrome User Experience dataset
to learn more about users' notification permission prompt
decisions on your website.
So the second thing you need to think about is
how you can make your notification requests more in
context. I know I mentioned this before, but it bears
repeating. You want to make absolutely sure that you're
asking for notification permission at a moment in the
user's journey that makes sense to them.
In this example, we're showing a notification request the
first time the user receives a response to their first
chat. This is a perfect moment.
Even with quiet notification UI, it should be
obvious to the user why they would want to accept
notifications.
The activity in motion in the web app, combined with the
motion of the browser prompt should be sufficient cues for
the user to enroll in notifications if they want them.
If your user doesn't accept notifications with
this in-context pattern, there is a pretty good chance that
they just don't want notifications.
And you should respect that decision.
Before I finish, I want to share a little
bit about what's coming next for notifications.
First, we're planning to increase the accept rate
percentile that's needed to have normal notification
prompts. Since this is a percentile as well,
something to keep in mind is that if other sites are
improving their notification UX and you're not,
your site may be slipping into a lower percentile
and quiet notifications may be activated for your site.
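The slippage happens because the threshold is relative. This toy calculation is not Chrome's actual formula, just an illustration of how an unchanged accept rate can land at a lower percentile as other sites improve:

```javascript
// Percentile here is the fraction of peer sites with a strictly lower
// accept rate; all the numbers below are made up for illustration.
function percentileOf(rate, peerRates) {
  const below = peerRates.filter((r) => r < rate).length;
  return below / peerRates.length;
}

// The same 50% accept rate, before and after peer sites improve:
const before = percentileOf(0.5, [0.2, 0.4, 0.6, 0.8]); // mid-pack
const after = percentileOf(0.5, [0.6, 0.7, 0.8, 0.9]); // now the lowest
```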

If your site has a notification accept rate of over 50%,
you're in safe territory.
But we recommend aiming for 80% or better.
Second, Chrome places a high priority on user privacy
and security, and we intend to take more steps to protect
users from abusive notifications in the future.
That includes more protections for abusive notification
content, as well as retroactive action
to help users who may have already enrolled in
notifications from abusive sites prior to the release
of Chrome 84.
Most important, as we improve the signal-to-noise ratio
of the web notification ecosystem, we hope users will
come to view notifications as being more helpful.
If this happens, it means we're doing a good job protecting
users from unwanted notification prompts and unwanted
notification content.
Ultimately, this will help developers who use notifications
for key functionality in their apps, as users are more
likely to accept notifications when they have less reason

to be worried about spammy or abusive notifications.
Thanks again for joining today, everyone.
Have a great day!
[MUSIC PLAYING]
PETE LEPAGE: True or false? IndexedDB is limited to
25 megs.
False. Gone are the days of tiny storage quotas.
True or false? localStorage should be avoided.
True! It's synchronous and may cause performance
issues by blocking the main thread.
All right, here's another one. True or false?
Cookies are a great way to store data.
False, they've got their uses, but should never be used
for storage.
How about this one? AppCache is a great way
to make your app work offline.
Yeah. Trick question. Absolutely false.
AppCache is awful, and it's going away soon, thankfully.

So how should we be storing data and caching our critical
app resources on the client?
How much can we store?
How does the browser deal with eviction?
And be sure to stick around to the end, and I'll tell
you how you can start Chrome with only a tiny
storage limit so you can test what happens when
you exceed your storage quota.
I'm Pete LePage. Let's dive into storage on the web.
Modern storage makes it possible to store more than just
small chunks of data on the user's device.
Even in perfect wireless environments, caching and
other storage techniques can substantially improve
performance, reliability, and most importantly,
the user experience.
With the Cache Storage API, you can cache your static
app resources like HTML, JavaScript, CSS,
ensuring that they're always instantly available.
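As an illustration, precaching with the Cache Storage API might be sketched like this. The cache name and resource list are hypothetical placeholders, and in a real app this would typically run in a service worker's install handler; the storage object is injectable here only so the shape can be exercised outside a browser:

```javascript
// Sketch: precache static app resources with the Cache Storage API.
// 'static-v1' and the resource list are hypothetical placeholders.
async function precache(cacheStorage = globalThis.caches) {
  if (!cacheStorage) return false; // Cache Storage API unavailable
  const cache = await cacheStorage.open('static-v1');
  // addAll fetches and stores each resource, failing if any request fails.
  await cache.addAll(['/index.html', '/app.js', '/styles.css']);
  return true;
}
```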
And with IndexedDB, you can store all kinds
of data: article content, user documents,
settings, and more.
IndexedDB and the Cache Storage API are supported
in every modern browser.
They're both asynchronous and will not block the main
thread. They're accessible from the window object,
web workers, and service workers, making it easy
to use them anywhere in your code.
There are several other storage mechanisms that are
available in the browser, but they've got limited
use and may cause significant performance issues.
If you're concerned about storing large amounts of data on
the client, don't be.
Unless you're trying to store several GBs,
modern browsers typically won't even bat an eye.
And even then, it really comes down to the amount of disk
space available on the device.
Of course, implementations vary by browsers.
Firefox allows an origin to store up to 2 GB.
Safari allows an origin to use up to 1 GB.
And when that limit is reached, Safari is currently the
only browser that'll prompt the user to increase that
limit. And Chrome?
Well, look, it's a little complex, but stick with me here.
Chrome and most other Chromium-based browsers limit
storage to 80% of the total disk space,
and each origin can only use 75% of that.
For example, if you had a 10 GB hard disk,
Chrome would limit its storage to 8 GB, then
each origin would be limited to 6 GB.
Essentially, each origin would be allowed to use up
to 60% of the total disk space.
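As a rough sketch, that arithmetic looks like this. The 80% and 75% figures are the ones just described; real numbers vary by browser and version:

```javascript
// Chrome-style quota arithmetic as described above: the browser pool is
// 80% of the disk, and each origin may use 75% of that pool (60% overall).
function chromeQuota(totalDiskBytes) {
  const browserPool = totalDiskBytes * 0.8;
  const perOrigin = browserPool * 0.75;
  return { browserPool, perOrigin };
}

// A 10 GB disk gives an 8 GB browser pool and a 6 GB per-origin limit.
const q = chromeQuota(10 * 1024 ** 3);
```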
It sounds complex, but there's an easy way to see
what's available.
In many browsers, you can use the Storage Manager
API to determine the amount of storage that's available
to the origin and how much storage that you're already
using. It reports the total number of bytes used
and makes it possible to calculate the approximate bytes
remaining. Unfortunately, the Storage
Manager API isn't implemented in all browsers yet,
so you must use feature detection before using it.
But even when it is available, you still
need to catch over-quota errors.
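A feature-detected call might be sketched like this. Returning null when the API is missing is just one reasonable fallback, and the navigator argument is injectable only so the logic can be exercised outside a browser:

```javascript
// Sketch: feature-detect the Storage Manager API before calling it.
async function getQuotaInfo(nav = globalThis.navigator) {
  if (!nav || !nav.storage || typeof nav.storage.estimate !== 'function') {
    return null; // API unavailable; the caller must cope without quota info
  }
  const { usage, quota } = await nav.storage.estimate();
  return { usage, quota, remaining: quota - usage };
}
```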
In some cases, and I'm looking at you, Chrome, it's
possible for the available quota to exceed
the actual amount of storage available.
Most Chromium-based browsers factor in the
amount of free space when reporting the available
quota. Chrome does not, though, and it will always
report 60% of the actual disk size.
This helps to reduce the ability to determine the
size of stored cross-origin resources.
So what should you do when you go over quota?
Most importantly, you should always catch
and handle write errors,
whether it's a QuotaExceededError or something else.
Then, depending on your app design, decide
how to handle it.
For example, delete content that hasn't been accessed
in a long time, or remove data based on its size,
or provide a way for users to choose what they
want to delete.
Both IndexedDB and the Cache API throw a DOMException
named QuotaExceededError when you've exceeded
the available quota.
For IndexedDB, the transaction's onabort
handler will be called, passing an event.
That event will include a DOMException in the error
property, and if you check the name of the error, it'll
return QuotaExceededError.
For the Cache API, writes will reject with a
QuotaExceededError DOMException.
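Both failure paths can be funneled through a single check. This is only a sketch; the over-quota recovery here is a placeholder for whatever strategy your app chooses:

```javascript
// Sketch: recognize quota failures from either storage API. Both IndexedDB
// aborts and Cache API write rejections surface an error whose name is
// 'QuotaExceededError'.
function isQuotaError(err) {
  return !!err && err.name === 'QuotaExceededError';
}

// cachePut stands in for any function performing the actual write
// (e.g. cache.put) that may reject.
async function safeCacheWrite(cachePut) {
  try {
    await cachePut();
    return 'ok';
  } catch (err) {
    if (isQuotaError(err)) return 'over-quota'; // e.g. free space and retry
    throw err; // something else went wrong; don't swallow it
  }
}
```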
Data stored in the browser can be cleared in a couple of
ways. It's most commonly initiated
by the user choosing to clear data in the browser's
site settings panel.
But it can also happen when faced with storage
pressure like low disk space.
In this case, browsers typically automatically
delete data from the least recently used
origins and continue to delete that
until the storage pressure has been relieved.
If the app hasn't synced data with the server,
eviction will cause data loss, and it means the app
won't have the resources it needs to run,
both of which can lead to a negative user experience.
Thankfully, research by the Chrome team shows that this
doesn't happen very often, and it's far more common for
users to manually clear storage.
Thus, if a user visits your site often, the chances are
small that data will be deleted.
Let's take a look at a specific example of how automatic
eviction might happen in Chrome.
Origin A is the least recently visited site,
Origin B is the next least recently visited site,
and so on.
Origin E and Origin K are getting close to their
quota limits, but they haven't reached them yet.
And the overall usage is less than the
total quota, so nothing is going to be evicted.
Origin B has a star next to it because it was granted
persistent storage, meaning that it can only be deleted
by the user.
Check out my article on web.dev for more info about
persistent storage, when you should be using it, and
how to request it.
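For context, requesting persistence is a single feature-detected call. This is a sketch with an injectable navigator so the fallback path can be exercised outside a browser; some browsers may prompt the user before granting it:

```javascript
// Sketch: ask the browser to mark this origin's storage as persistent.
// Resolves true if granted, false if denied, null if the API is missing.
async function requestPersistence(nav = globalThis.navigator) {
  if (!nav || !nav.storage || typeof nav.storage.persist !== 'function') {
    return null;
  }
  return nav.storage.persist();
}
```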
Now let's say the user visits Origin N
again, which happens to be a music playing site.
The user saves a few more songs for offline listening.
Now, each origin is still within its quota limit,
but Chrome has exceeded the overall limit.
To get back under the overall limit, Chrome will start
evicting stored data from the least recently used origin
first and continue until it's back under the total
limit. Firefox and other Chromium-based
browsers work in essentially the same way.
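The eviction walk just described can be simulated with plain data. This is a toy model, not browser code; the origin names, sizes, and quota are made up:

```javascript
// Toy simulation of LRU eviction: origins are ordered least recently used
// first; whole origins are evicted (skipping persistent ones) until total
// usage fits under the overall quota.
function evict(origins, totalQuota) {
  let used = origins.reduce((sum, o) => sum + o.bytes, 0);
  const evicted = [];
  for (const o of origins) { // least recently used first
    if (used <= totalQuota) break;
    if (o.persistent) continue; // only the user may clear these
    used -= o.bytes;
    evicted.push(o.name);
  }
  return { used, evicted };
}
```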
Safari is a little different.
When it's out of storage, it will prevent anything
new from being written.
But they recently implemented a new seven-day
cap on all writable storage, including IndexedDB,
service worker registrations, and the Cache API.
This means that after using Safari for seven days
and not interacting with the site, it will evict all
content for that site.
This eviction policy does not apply to Progressive Web Apps
that have been added to the home screen (essentially,
installed PWAs).
Check out the WebKit blog linked in the description for
complete details.
Modern computers typically have large hard drives, which
makes it hard to test over-quota failures.
So here's a little pro tip.
Create a small RAM disk.
Here, I've created a 500 MB RAM disk
on my Mac. Then start Chrome using
the --user-data-dir flag.
This tells Chrome to store the user profile and user data
on the RAM disk.
Chrome now thinks my disk is only 500 MB, thus
it's going to limit my storage quota to only 300 MB,
which I can quickly fill.
This makes it much easier to verify that my code behaves
properly when it hits those quota exceeded errors.
Chrome DevTools also have helpful features for
understanding what's going on with the data that you've
stored. In the Application panel, the Clear Storage
panel will show you how much storage you're using for the
origin and makes it easy to clear some or all
of that data that you've got stored.
The Storage panel lets you see what's in local and session
storage, as well as what's in IndexedDB, including the
actual databases and even the individual entries.
And the Cache Storage panel shows you what's stored in Cache
Storage.
Gone are the days of limited storage and prompting the user
to store more and more data.
Using the Cache Storage API and IndexedDB,
you can effectively store all the resources that your app
needs to run.
Be sure to check out my article "Storage for the Web,"
where I've got additional details and info on some other
not so good storage mechanisms.
Then check out my article on Persistent Storage to
learn how you can protect your data from being blown away
even when the device is facing storage pressure.
See you soon!
THOMAS NATTESTAD: Hi, everyone! My name is Thomas.
And today I want to talk to you about the explorations that
we've been doing with the Zoom team over the past few
months and some of the specific advanced APIs that
we've been exploring together.
As you've probably seen, Zoom has become a staple in many
homes. And so it's critical that we're able to provide a
good experience directly through the browser.
Zoom does have a web version today, but compared to its
native client, it's missing some features and sometimes
misses the mark in performance.
We wanted to change this. And so even before COVID hit, we
started working with the Zoom team to understand exactly
what changes they would need and what new things in Chrome
they would want to create a truly great experience.
Now, Zoom is, of course, a video conferencing application.
You can't talk about video conferencing on the web without
talking about WebRTC.
WebRTC is a really great full stack solution
that provides a complete package for achieving video
conferencing on the web.
WebRTC was built and standardized about 10 years ago
and now ships in all major browsers.
This makes it the best choice if you want a complete
solution with broad support across browsers.
WebRTC's strength of being that complete solution
can, however, also be a challenge for someone like Zoom,
who have their own custom protocols and their own
architecture. Zoom would rather have a set
of simpler, low-level APIs that they can
then build their own architecture and system on top of
themselves. And the three specific ones that we've been
exploring and that I want to talk to you a little bit about
today are WebAssembly SIMD, WebTransport,
and WebCodecs.
I'll mention from the start that all of these are fairly
cutting edge and most of them are in active development.
So while they're all in a place where you can start to play
with them, these aren't shipping APIs just yet.
But hopefully this presentation will cover some of the
early parts of it. And by the time you're watching this, it
might be in the future, and you'll be able to actually use
these directly because they'll all have shipped.
So first of all, I want to talk about WebAssembly SIMD and
how it can provide really highly performant code.
Most of you have probably heard about WebAssembly already.
But as a recap, WebAssembly is a new low-level binary
format for the web. And it's compiled from other languages
and offers maximized performance.
This means that you can take something like C++ or Rust and
then compile it into WebAssembly before shipping it to the
client.
WebAssembly has been out for a while and has been shipping
in all major browsers for a while.
But we're continuing to expand it with functionality such
as SIMD, which stands for Single Instruction Multiple
Data.
To explain SIMD, let's look at this incredibly simple loop
that just adds two arrays together.
Without SIMD, the CPU would go through this loop and add
the different elements together one by one, requiring
four full steps.
But with SIMD, the CPU is able to vectorize these elements
and then take just a single CPU operation to add them.
The best part is that because compilers are so smart,
they can automatically detect these optimizations
and do them for you.
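The loop in question is essentially this, sketched in JavaScript for readability; in practice it's the C++ or Rust source that the compiler would auto-vectorize before compiling to WebAssembly:

```javascript
// Scalar version: one add per element, so four elements take four steps.
// With SIMD, a compiler can turn this into a single vector add.
function addArrays(a, b) {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = a[i] + b[i];
  }
  return out;
}
```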
In Emscripten, you just need to pass the
-msimd128 argument to emcc.
And for Rust, you can pass "-C target-feature=+simd128".
This will cause the compilers to automatically find and use
SIMD where possible.
Sometimes you also want to have more explicit control.
And this is where you will want to use SIMD intrinsics,
which let you use the SIMD instructions directly.
This is more detail than I can cover here.
But if you're interested, I highly encourage you to go and
check out these links.
SIMD can be used for a huge variety of things,
including highly performant ML models such as this hand
tracking, a real-life invisibility cloak,
and real-time automated background removal.
And this last use case is just one of the things that Zoom
is excited about using SIMD for.
They have an awesome feature where you're able to
automatically remove the background so that people in
conferences can't see all the random stuff that you have in
your background and then replace it with fun videos or
animations.
If you're interested in diving into WebAssembly and SIMD,
here are some of the links that should help you get
started. WebAssembly SIMD is doing an origin trial
in Chrome 84, which will start rolling out to
users on July 14th.
If you aren't familiar with origin trials, it's basically a
mechanism for you to test out features with production
users, while we may still be making some changes to
the API. You can read more about those origin trials
at this link as well.
So the next API I want to get into is WebTransport,
which is a next generation networking API for client to
server communication. Let's look at the definition of
WebTransport. WebTransport provides bi-directional
transport through both unreliable datagram
and reliable stream-based mechanisms.
That's a mouthful, but let's see if we can't break it down
and understand it a bit better.
First, bi-directional means that it enables easy
two way communication.
With something like HTTP, the connection has to be
initiated by the client and you have to send all of the
requests at once and then wait for a response.
With WebTransport, you don't have these limitations, and so
you can enable a much more interactive session.
Looking at the two different mechanisms, unreliable
datagrams are one of the mechanisms for sending data
through WebTransport.
These datagrams are similar to UDP datagrams
in that they are packets of information that get sent, but
with no guarantees about delivery or ordering.
Reliable streams, in contrast, are similar to
TCP streams and provide reliable and ordered
data communication.
So now that we have an understanding of the definition of
WebTransport, let's understand what you might actually use
it for. Firstly, WebTransport will be
the only mechanism to do unreliable data communication
without leveraging WebRTC.
And this is exactly why Zoom is interested in looking into
WebTransport, because it'll allow them to
simplify their deployment and put it more in line with
the other platforms that they support.
It's important to note, though, that WebTransport won't
be just a pure UDP sockets API since it does
have some requirements around encryption and congestion
control. It does offer an alternative to WebSockets.
And to understand exactly how it compares to WebSockets
and WebRTC, let's look at this chart.
So to understand the differences, let's dig into each of
these pieces. First, WebTransport and WebRTC
offer both reliable and unreliable delivery, while WebSockets
only offer reliable delivery.
WebTransport is an in-development
API, while both WebSockets and WebRTC
are widely available.
While WebRTC provides a fairly high-level complete solution
to the problem of video conferencing, WebTransport and
WebSockets are both much lower-level APIs that don't
solve everything for you but give you more of that basic
access.
WebTransport also enables multiple cancelable
streams, whereas WebSockets
can only do a single stream.
And WebRTC can also
do multiple streams, but they aren't cancelable.
So here is a quick example setup for how you
can actually use WebTransport.
In this part of the code, we really just set up our new
QuicTransport, which is a specific subtype of WebTransport,
and create that object, passing in the URL that we want to
connect to. Then we just set up a simple logging
function and await the transport being ready for us to use.
Then we can simply grab the writer from the sendDatagrams()
function of our transport object, which we can then use
to send data at any point.
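Put together, the setup just described might look something like this. This is a sketch based on the origin-trial-era QuicTransport API shown in the talk; the URL scheme is a placeholder, and the API has since continued to evolve into the standard WebTransport interface, so exact names may differ in current browsers.

```javascript
// Sketch of the setup described above (origin-trial QuicTransport API;
// names and URL are illustrative).
async function connectAndSendDatagrams(url) {
  // Create the transport, passing in the URL we want to connect to,
  // e.g. 'quic-transport://example.com:4433/echo'.
  const transport = new QuicTransport(url);

  // Simple logging for the connection's lifetime.
  transport.closed
    .then(() => console.log('Connection closed normally.'))
    .catch((err) => console.error('Connection closed abruptly:', err));

  // Await the transport being ready for us to use.
  await transport.ready;

  // Grab the writer from sendDatagrams(), which we can then use
  // to send data at any point.
  const writer = transport.sendDatagrams().getWriter();
  await writer.write(new Uint8Array([65, 66, 67]));
}
```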
Remember that this data that we send does not have any
guarantees of delivery or the order that it will be
delivered in. Next, let's look at how you can actually
read data from the server.

Here you see a simple example where we get the reader from
the getReader() function.
And then in a classic while-true loop, we just read
things from that reader, detect when we're
done, and console log out the actual values that we're able
to read. WebTransport is still very much in development,
but we do have a blog post already published about how you
can use it. And you can find that and more information at
these various links.
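The read loop just described could be sketched like this, again against the origin-trial QuicTransport API; receiveDatagrams() is the assumed source of the reader here, and may differ in current browsers.

```javascript
// Sketch of the datagram read loop described above
// (origin-trial QuicTransport API; names are illustrative).
async function readDatagrams(transport) {
  // Get the reader for incoming datagrams.
  const reader = transport.receiveDatagrams().getReader();

  // Classic while-true loop: read until the stream reports done.
  while (true) {
    const { value, done } = await reader.read();
    if (done) {
      console.log('No more datagrams to read.');
      break;
    }
    // Log out the actual values that we were able to read.
    console.log('Received:', value);
  }
}
```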
So now it's time for us to jump into our last and exciting
API, the WebCodecs API, which aims to offer direct
codec access on the web.
But first, let's back up and remind ourselves what exactly
a codec is.
A codec is a device or computer program which encodes and
decodes a digital stream or signal.
While many of us have not worked directly with codecs,
we've all seen common examples like MP3, VP9,
H.264, and many others.
Codecs are actually used in tons of places throughout
Chrome, such as the audio and video tags, WebAudio,
WebRTC, and the MediaRecorder API.
However, in all these places where it's used, you can't
really configure and get pure access to just the
codec part.
For example, WebAudio allows for decoding a media file
but needs to work on the complete file and doesn't support
a streaming-based approach.
MediaRecorder has some controls, but they are very high
level and you can't really configure it to support
extremely low latency use cases.
As mentioned previously, WebRTC does give you a lot of this
control, but it
needs you to bring the whole package of WebRTC along.
And without doing that, it's hard to get access to just the
encoding and decoding parts that you want.
As a result of this lack of configuration, some apps have
started compiling these codecs to JavaScript and
WebAssembly. Some of you may remember that this is how the
awesome application Squoosh lets you resize and re-encode
images.
This approach is really cool and workable today, but it
has some drawbacks: it
increases your bundle size, lowers the performance, causes
slower startup time, and reduces the power efficiency.
Really, what you want is to avoid shipping these codecs
altogether and just get the direct access that you need
through the codecs that are already shipping as part of the
browser.
And that's exactly the goal of WebCodecs. In their own
words, it aims to "provide web apps with efficient access
to built-in (both software and hardware) media encoders
and decoders for encoding and decoding media."
WebCodecs' main advantage is that it lets you get
the direct access that you need to again build your own
systems on top of the basic codec access.
This completely unlocks some use cases like video editing,
since you really need that frame by frame access and faster
than real time encoding and decoding to do this properly,
something that's currently completely impossible on the web
platform, except for maybe shipping codecs with
WebAssembly.
Additionally, many existing things that are possible today
on the web, but only if you use WebRTC,
things like cloud gaming, live streaming, and video
conferencing will get more flexibility about how
they can interact with these codecs.
Zoom, for example, is looking into using this API in
conjunction with the WebTransport API.
They're hoping that they'll be able to take encoded
video frames and send them up to the server using
WebTransport at the same time that they'll be fetching down
encoded frames and then decoding them to show to the
client, providing a really smooth, integrated experience.
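As a purely illustrative sketch of that idea (every name here is hypothetical; this is not Zoom's code, and it glues together the origin-trial QuicTransport API with a WebCodecs decoder), the two halves of such a pipeline might look like:

```javascript
// Hypothetical sketch of the pipeline described above: encoded frames
// go up over the transport while incoming encoded frames are decoded
// for display. All names and shapes are illustrative.
async function runConferencePipeline(transport, decoder, localEncodedFrames) {
  // Uplink: send each locally encoded frame to the server.
  const writer = transport.sendDatagrams().getWriter();
  for (const frame of localEncodedFrames) {
    await writer.write(frame);
  }

  // Downlink: fetch down encoded frames and decode them for the client.
  const reader = transport.receiveDatagrams().getReader();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    decoder.decode(value);
  }
}
```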
Next, let's look at some of the simple examples for how you
can use the decoder part of this.
Here in this Canvas setup part, we're really just grabbing
a canvas's context and then
making this very simple function to paint
a video frame to that canvas by
converting it to an ImageBitmap.
Now, when you want to set up the decoder part, you just
call the new VideoDecoder() constructor and set up the
output function that we defined previously, as well as just
console logging out any errors.
Then you configure it with the codec that you want to use.

And then you have this incredibly simple function that you
just pass in your encoded chunk and call the decode()
function on your VideoDecoder.
And then it does the rest of the work for you.
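Pulled together, the decoder flow described above might look like the following sketch. WebCodecs was brand new at the time of this talk, so treat the codec string and the frame-to-bitmap conversion here as illustrative rather than definitive.

```javascript
// Sketch of the WebCodecs decoder flow described above
// (codec string and conversion details are illustrative).
function setUpDecoder(canvas) {
  // Canvas setup: grab the canvas's context and make a very simple
  // function to paint a video frame to it via an ImageBitmap.
  const ctx = canvas.getContext('2d');
  async function paintFrame(frame) {
    const bitmap = await createImageBitmap(frame);
    ctx.drawImage(bitmap, 0, 0);
    // Release the decoded frame once we're done with it.
    frame.close();
  }

  // Create the decoder with our output function, as well as just
  // console logging out any errors.
  const decoder = new VideoDecoder({
    output: paintFrame,
    error: (err) => console.error(err),
  });

  // Configure it with the codec we want to use (placeholder string).
  decoder.configure({ codec: 'vp8' });

  // Incredibly simple: pass each encoded chunk to decode(), and the
  // decoder does the rest of the work.
  return (encodedChunk) => decoder.decode(encodedChunk);
}
```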
WebCodecs is still extremely new, but for those of you
who are curious, you can go and check out the explainer to
see what the team is currently working on.
We will also be doing a web.dev post about WebCodecs.
So if you're seeing this in the future, be sure to go and
check that out. And that brings us to our overview of these
three new exciting APIs that we've been exploring with
Zoom. You've hopefully gotten a better understanding
of some of these new and advanced APIs and hopefully
an understanding of how they will be bringing all of us
closer together in the future.
Thank you so much for your time, and I hope you enjoy the
rest of the sessions.
Goodbye.

[MUSIC PLAYING]
