SPEAKER: Testing testing 
testing, this is a test for
Chrome Dev Summit.
Nicole Sullivan, Product Manager.
[Music] 
SPEAKER: So they asked me if I could do an advert for the Chrome Developers YouTube channel.
SPEAKER: We thought that is too 
easy. 
SPEAKER: We will distract him 
using the ingredients we have. 
SPEAKER: You can go to Chrome Developers and look at content on PWAs, DevTools, the talks that have been going on... [Laughter]. There is new content every week. Go to YouTube.com/ChromeDevelopers and subscribe, because we are good people.
Yep.
SPEAKER: A good cake. 
SPEAKER: Pretty good.
  [Chrome Dev Summit theme 
music]. 
SPEAKER: I tell you what, I don't remember that. It was such a strange moment in my life. Welcome to day two of Chrome Dev Summit!
SPEAKER: Welcome, welcome. 
SPEAKER: You can do better. 
SPEAKER: There is so much energy
in the room.  I wonder if they 
had enough sleep, plenty of 
people in the room and on the 
live stream.  People in the 
room, give us a cheer.  C'mon!
[ Applause ]. 
SPEAKER: People watching via the
live stream, give us a cheer.  
SPEAKER: That doesn't work, you 
can't do that. 
SPEAKER: I like the thought of 
somebody in my office going, 
woo! 
SPEAKER: What is today's theme? 
SPEAKER: Yesterday was about 
today, and today is about 
tomorrow.  
SPEAKER: What that means is 
about what you can do today 
about the features with the web,
and tomorrow is more future 
looking.  And so we're going to 
talk about things from emerging 
and developing proposals to some
new ideas, right?
You're going to talk about new 
ideas. 
SPEAKER: That is what I said, it
is going to be like a round in a
Big Web Quiz.  As you are 
watching this stuff, you get to 
decide whether it is future of 
the web, or ware.
And we want your feedback on 
this as well, because some of 
this is far-out ideas and we 
want to know if we are going in 
the right direction or not.  
That's a great thing about being
here. 
SPEAKER: Absolutely. To get us started, we need the day two keynote, and for that, we are inviting Nicole Sullivan and Malte Ubl, as we call them, a stubborn force.
[ Applause ]. 
>>NICOLE SULLIVAN: Welcome to 
the day two keynote.  I'm 
Nicole.  
>>MALTE UBL: And I work on the JavaScript team at Google. And this is awkward, because this is the second keynote and it was all talked about yesterday. So we will do something different. Normally, in a Chrome Dev Summit keynote, we talk about the exciting new APIs and platform features available to you. We will do that, but we want to talk about something different. We want to talk about web frameworks.
>>NICOLE SULLIVAN: Developers 
who build for the web often 
choose to use a framework.  In 
the past, changes to the web 
platform did not take frameworks
into account, and changes to 
frameworks didn't take the web 
platform into account.  We think
both the web platform and 
frameworks benefit from a close 
collaboration and so we're out 
to make that happen.
Obviously, if you are building something simple, it makes sense to choose simple technologies.
As soon as an app is 
sufficiently complex, developers
choose to use a framework.  A 
few months ago, I put out a 
very, very methodologically 
correct Twitter poll and asked 
people why they choose to use 
frameworks and got a whole bunch
of answers.  This one resonated 
with a lot of people.  You can't
not use a framework, your only 
real choice is to either use one
that is open source, documented,
tested, supported, maintained, 
mucheer, proveen, and has a 
community, like Stack Overflow, 
or be one that you cobble 
together that you maintain
yourselfyourself. 
>>MALTE UBL: I have done that. And we want to recognize that frameworks are part of the web, and if you are using a framework, you are using the platform. So we will extrapolate to the further web stack based on that insight. At the bottom, you have the web primitives: the DOM, the fetch API, service workers, and stuff like that.
>>NICOLE SULLIVAN: And above that, we have built-in modules. They are exciting, they are a new thing. It is exciting that we can build in a layered way for the web. There are high-level APIs that solve virtual scrolling and carousels. We love to hate the carousel, but they are on almost every site in the universe. We can build a high-level API, a virtual-scroller, and that drives out the low-level APIs that we need to make those experiences accessible, searchable, and fast. Those low-level APIs can be used by the entire ecosystem of virtual scrollers, and that allows everyone to level up.
SPEAKER: And then we have the frameworks, like Polymer and Next.js, for example. These are not standardized, they are not web standards, but most applications use one of them, and we think of them as part of the platform.
SPEAKER: The next part is web components. You probably think that doesn't make any sense, because web components are a primitive, a standard. But I'm not talking about THE web components, I'm talking about yours: maybe a date picker, a tag, a carousel, the ones that layer on top of your stack. The reason it is important to put them there is that they are an important measure of interoperability between different parts of your system.
>>MALTE UBL: A quick poll here: who has built a React application? That is most hands. Keep it up. If you are anywhere in your company, do you also have Angular, jQuery, Polymer, anything like that? Both hands.
>>NICOLE SULLIVAN: Yeah, I love 
your honesty.  
>>MALTE UBL: Wouldn't it be nice to use the date picker in both of those, to have a design system that spans the entire application suite? That's why we think web components are the right technology to build that.
>>NICOLE SULLIVAN: There's a problem with web development today: we are struggling to make stuff that is fast and responsive and has buttery smooth animations. I think we are struggling; in a bunch of ways, it is really hard. Not everybody struggles, and there are examples of sites that achieve this. But at scale, we observe that our performance goals are not being met.
>>MALTE UBL: And what we are observing in web dev today is that we often have to make a choice between developer experience, how we feel as developers, and user experience, how users feel. That shouldn't really be how things are; in the vast majority of cases, great developer experience can lead to great user experience. That's not how it works today, but if we can bring those things together, the web will be better for everyone.
>>NICOLE SULLIVAN: This is why 
we are so excited about 
frameworks. Frameworks sometimes
make web apps slower, that's a 
reality.  But they are also our 
best hope to make them faster. 
>>MALTE UBL: That's a bold statement.
>>NICOLE SULLIVAN: It is a bold 
statement. 
>>MALTE UBL: To prove it is happening, we thought it would be nice to celebrate the great improvements frameworks have made this year. We will start with React. They have done foundational work, for example making code splitting something that is supported first class in the framework, which is nice. They are also breaking up the rendering of huge DOM trees into chunks, so if you have an update, it does not lock up the browser, and everything is done in small pieces.
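The chunked-rendering idea can be sketched with a generator that yields control back between batches. This is only an illustration of the technique, not React's actual scheduler, and the helper names here are made up.

```javascript
// Minimal sketch of chunked work: process items in small batches,
// yielding between batches so a scheduler can handle input and paint.
function* workInChunks(items, chunkSize, processItem) {
  for (let i = 0; i < items.length; i += chunkSize) {
    // Process one small batch synchronously...
    items.slice(i, i + chunkSize).forEach(processItem);
    // ...then yield, so other work can be interleaved.
    yield i;
  }
}

// A driver that resumes the generator until the work is done.
// In a real page you would resume once per idle period or frame.
function drain(gen) {
  let chunks = 0;
  for (const _ of gen) chunks++;
  return chunks;
}
```

With a chunk size of 3, rendering 10 items takes 4 resumptions, and the browser gets a chance to respond to input between each one.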
>>NICOLE SULLIVAN: Breaking work into small pieces is a theme you will hear us talk about. Angular made improvements, too: the CLI enabled performance budgets, so you can see how many users you have alienated by installing one more library.
>>MALTE UBL: And Vue did the same thing. That is the same idea of bringing best practices to all users, making preloading and prefetching something the framework does by default.
SPEAKER: Polymer did something similar. They are transitioning to LitElement, and things got faster, because Firefox now ships web component support.
SPEAKER: We will talk about 
Svelte. 
SPEAKER: Well, they are so fast,
what do we even say about this? 
SPEAKER: Well, it is a great example. They built a Hacker News app in an idiomatic way, and everything combined is under 20KB, which is amazing.
SPEAKER: And AMP did some great stuff, too. Apparently this is the only feature Malte shipped, because he is a manager: he shipped a Feature Policy against synchronous XHR for all ads.
SPEAKER: If you are going to put
your own feature on the slide --
SPEAKER: Nice. 
SPEAKER: And he reduced, or they reduced, the JS size on the wire by 20 percent by enabling a better compression algorithm. We love this: how great is it that you turn on a different compression and get a 20 percent reduction in size on the wire?
SPEAKER: Moving on to Ember, which removed jQuery from the default bundle (I have a jQuery T-shirt), making the bundle 20 percent smaller, and they made it backward compatible so people can slowly migrate.
SPEAKER: Yeah, they made it so 
anyone can turn on and off the 
old code using the same 
functionality. 
SPEAKER: And they implemented incremental progressive rendering with batched rehydration, which comes back to the chunking-of-work theme that we are seeing all the time.
Great. So, to summarize: we really want to get to a state where not only the super experts can make great web experiences; everyone should be able to do it. And frameworks are an integral part of making that happen. Today we feel that we, and by that we mean browsers, frameworks, and tools, are focused on giving a great developer experience, with a focus on user performance. By building best practices in, all users get the benefits, and we can achieve great outcomes for users as we scale up the web.
SPEAKER: To make this happen, we are announcing three things today. The first one is that we are including frameworks in the Chrome intent-to-implement process. How many folks know about our intent-to-implement and intent-to-ship stuff? A few, that is awesome. We have two important checkpoints (we have more than that, but two very important ones) when we go to ship a feature: the intent to implement, and the intent to ship. At both points, when we are about to build and ship something, we want to get a lot of feedback. So we go through this intent process to intentionally draw in that feedback. Previously, we listed web developers as folks we wanted to get feedback from, but now we are explicitly adding frameworks to that intent-to-implement process.
SPEAKER: And secondly, we are putting real dollars behind this. We are starting with a budget of $200,000 to kick-start developing performance features in frameworks. We have a list of features we want all frameworks to give to users by default, and folks working on the frameworks can ask for funding to do the actual work. We are working out the details; if you are interested, check out this bit.ly link for more information.
SPEAKER: The third thing that we wanted to do is increase collaboration between frameworks and the Chrome team. It is funny to announce this today, because it is something that we started in the summer, and we've been working with a bunch of frameworks for the past several months. But we are excited to talk through what we have started already.
SPEAKER: Right, that brings us to the next section. It is very much under construction; you heard that in the intro today. What you are hearing today, these are not things that you can ship tomorrow. You can try some of them out a little bit; some we are just thinking about. So there is still very much time to give us your feedback and use cases, to see how they work.
SPEAKER: The first thing we will talk about is display locking. Sometimes we don't want DOM updates to trigger rendering work inadvertently, and this gives us a way to lock it. I think this is one of the most requested features. Before I joined Google, the Polymer team and the Paint team had a bunch of conversations about a new primitive, Display Locking. The idea is that you can basically lock down a section of your DOM, and we will not trigger rendering on those bits of the DOM until you unlock it and say it is ready to go. It is subtle, and not something you may interact with directly, but it is something that frameworks need to eliminate unnecessary browser work. If you have comments, we would love to hear them.
SPEAKER: Is that related to that
VG color? I'm using that every 
day. 
SPEAKER: Absolutely. 
SPEAKER: Nice. 
SPEAKER: [ Laughter ]. 
SPEAKER: Well, this is awkward. 
Is it working? 
SPEAKER: I don't know. 
SPEAKER: It is kind of strange, 
right, to see a blank white 
screen in the middle of a 
presentation? Why do we think 
that's okay when web pages are 
loading? Sort of, ah, what's 
going on? 
SPEAKER: Yeah, every single time you load a new web page, the browser says, hmm, what's a good idea to do here? I'm going to paint white, and then it is going to take a while. Why is that the case? It shouldn't be the case.
SPEAKER: We should tell you that
we totally scared the tech check
people when we had a blank white
screen during the tech check. 
SPEAKER: We need page transitions on the web. You are probably familiar with this from the material design specifications, where one page transitions into the other, and there are ideas of how to do this. This is what you would see in an application, going from one state to the other, and we also want this for normal navigations. This is a great example; it is something that the image search team at Google is working on.
SPEAKER: It is kind of subtle, 
so look closely. 
SPEAKER: We will walk you through it. So this is their image search result page, and when you click on one of the images, you go into the lightbox to see the larger version of the image. Their goal was for more people to click through to the underlying web page, and so they wanted to make that really easy by putting that web page at the bottom of the page, right? So that, as a user, all you have to do is, you know, put your finger on it and draw it up. Right? We are hoping that this greatly increases the number of navigations that actually happen, but this is something that you cannot build on the web today.
SPEAKER: And we would love for you to be thinking: if you didn't have to have that blank white screen, what would you build, what would you design? Because it is actually a pretty exciting space.
SPEAKER: Right. Today you really have to choose between a fancy transition and a real navigation where the browser loads a new page. And there is no solution at all for the cross-origin case, where you navigate to a new domain. That is why we are excited about portals: they give you fancy transitions between websites that are real navigations. Single-page apps are the way to do it today, but this example shows a navigation from one web page to another, and you can morph between them however you want, which is something that we should have in 2018.
SPEAKER: It does. People choose a single-page app, with all its complexity, because they want fancy transitions.
SPEAKER: Hard to get with a 
single page app, cool. 
SPEAKER: So I got a sporty new 
car, and within a month I got a 
ticket.  I was driving too fast.
SPEAKER: In Germany, hopefully. 
SPEAKER: No, on the San Mateo 
bridge.  I will not tell you how
fast I was going, but I found 
out that my car has this really 
cool feature.  I can turn 
something on so that it beeps 
anytime I go over 80 miles per 
hour.  That seems really good, 
because there should be limits 
and this car does not feel like 
it is going fast when it is 
actually going pretty fast.
There is also something you can turn on for a car's motor, called a governor, which will not allow the car to go over a certain speed. Feature Policies are sort of like that: you have a couple of options, enforce mode and report-only mode. You can turn a Feature Policy on for something like synchronous XHR and say, nope, I don't want to allow that, and it will simply not happen on your site at all anymore. Or you can turn on report-only mode, and you get that beep: hey, something is wrong, you should check it out.
We're pretty excited about Feature Policies because they run in CI and in development, and you can run a different set of policies on the third-party and ad content on your page. You have a lot of flexibility in how you use them, and we are excited to see how you use them. For example, AMP has the sync-XHR policy turned on for all ads, which is exciting.
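For reference, a policy like the sync-XHR one is delivered as an HTTP response header. The sketch below builds header values in the shape drafted around 2018; the exact header names, especially the report-only variant, were still in flux at the time, so treat them as assumptions.

```javascript
// Sketch: serialize a Feature Policy header value from a map of
// feature -> allowlist, e.g. { "sync-xhr": ["'none'"] }.
// Header names and syntax follow the 2018 draft and may have changed.
function featurePolicyHeader(policies) {
  return Object.entries(policies)
    .map(([feature, allowlist]) => `${feature} ${allowlist.join(' ')}`)
    .join('; ');
}

// Enforce mode: the browser blocks synchronous XHR outright.
const enforce = {
  'Feature-Policy': featurePolicyHeader({ 'sync-xhr': ["'none'"] }),
};

// Report-only mode: violations are reported (the "beep") but not
// blocked. The report-only header name here is a hypothetical.
const reportOnly = {
  'Feature-Policy-Report-Only': featurePolicyHeader({ 'sync-xhr': ["'none'"] }),
};
```

The same serializer covers multiple policies at once, e.g. adding `'unsized-media'` alongside `'sync-xhr'` produces a single semicolon-separated header value.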
SPEAKER: You want to turn this on for your website; there is no reason --
SPEAKER: There is no reason, no matter what conversation Malte might bring up, not to turn on the sync-XHR policy. And there are other exciting developments as well; you have to turn on a flag to check them out. I would love for you to look at the unoptimized-images, oversized-images, and unsized-media policies and see how they improve your performance, with enforce or report-only mode. We would love your feedback. And you will hear Jason talk more about this a little later today.
SPEAKER: Right. 
SPEAKER: All right, next we want to talk about instant loading: how we get from super fast to zero milliseconds. We should show you what we mean by that, because instant is an overloaded word; there are Instant Apps, for example, which don't do instant loading. So we will talk about this. This is a film strip you might be familiar with, showing how a web page loads. This one loads in 8 seconds; you can make it faster.
SPEAKER: And we spent years 
trying to eke out every second 
we can out of the film strip. 
SPEAKER: We want all of this to go away, and only the last frame to render, right away. We have a solution for that, which is actually rather obvious: you have to load the web page before the user clicks. Now, you might wonder why I put a bathroom stall on the screen. It has a little gap, like this [laughter], and preloading, likewise, introduces a privacy problem.
SPEAKER: [ Laughter ]. 
SPEAKER: Does anyone know why 
this is the case?
[ Laughter ].
So, when you load some other web page before the user said they wanted to go there, that web page might be able to read your cookies and stuff like that, and that is not something that the user expects. We need a solution for that, and the solution has been under development for a while: it is called Web Packaging. What it provides as a feature is privacy-preserving instant loading for the web. I will explain roughly how that works; there is a talk later today, from the people working on it, that goes into detail.
So the way Web Packaging works is: you are a document author, you have a TLS certificate, and you sign the content created by you. Then anyone can deliver it on your behalf, but the browser says, this was originally signed with that TLS key, so I can say it came from example.com, the original party. I want to drill down on this "anyone can deliver it". It could be a different CDN, or you could deliver it over BitTorrent; it doesn't matter, you can do HTTP over anything, which is really cool.
SPEAKER: So we collaborated with the AMP team on this, because they are one of the frameworks that we are working with. We are pretty excited about instant loading and how fast we are going to be able to bring content to users. But Malte, what about the URLs?
SPEAKER: They are not really good URLs; they start with Google.com/amp. With Web Packaging, that goes away. And this is the important part: it is a web standard. With Web Packaging, we can bring instant loading to all of these frameworks. It is not an exhaustive list, and it doesn't matter, because this technology is completely technology-agnostic. It doesn't care what you use, which is really cool, and we cannot wait for this to land in browsers.
Web Packaging also addresses one more problem that I think is really subtle. Eric Meyer tweeted about this and wrote a really good blog post a while ago, about travelling to Africa and noticing that HTTPS, which is among the greatest things that happened to the web in a while, actually introduced a few problems.
So, imagine you have a cell tower that is not actually connected well to the internet, but everyone connected to it has LTE. It is great if you have an edge cache at that cell tower. With HTTP you can build that; with HTTPS, you cannot, because you have to go to the origin, since only the origin can serve with the TLS certificate. The great thing with Web Packaging is that you can bring back that feature: the old benefit of edge caching for HTTP, together with the security benefits of TLS.
And also, I wanted to really quickly say that Web Packaging is not related to webpack. However [laughter], there is a bundling spec coming up, and so Web Packaging is actually going to bring bundling as a first-class feature to the web platform, which I think is also way overdue. So bundlers like webpack can eventually use Web Packaging as the output format.
You can actually try this today. There's a sub-spec called signed exchanges; you can go to this URL and try it out.
SPEAKER: Let us know what you 
think.  
SPEAKER: Yeah. 
SPEAKER: And later today, you will hear more from Kinuko.
And the next bit we would like to talk about is scheduling. Anytime developers build something sufficiently complex, they need to manage a task queue, prioritize work, and make sure that everything is done on time.
SPEAKER: So we don't run out of time on this talk.
SPEAKER: That, too. A typical app has competing deadlines: keeping user input responsive and rendering smoothly at 60fps, while doing the work of fetching, preparing, and rendering the UI. Talking to the React team a few months ago, we realized a framework-based scheduler has serious downsides. It cannot schedule third-party code; it has a lot of difficulty interleaving tasks with browser functions like rendering and garbage collection; and it has to fight other systems for resources.
And with help from React, Google Maps, AMP, Polymer, and the web standards community, Shubhie and Jason are designing a scheduler that will run in the browser. It has high and low-level APIs, like Grand Central Dispatch, and it is able to interleave code from different sources. They are working on questions like how you interleave garbage collection, or code within a promise. What is exciting about this is that breaking things down into small tasks is more useful when you have a way to schedule them; and scheduling is more valuable when things have been broken down into small tasks. This is where frameworks and the browser can work together to provide a much better user experience than we are capable of providing today. This is what I'm most excited to PM right now, and I'm excited for you to hear from Shubhie and Jason later.
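The core behavior of such a scheduler, running higher-priority tasks first no matter when they were posted, can be sketched in a few lines. The priority names and API shape here are hypothetical, not the actual proposal, which was still being designed.

```javascript
// Minimal sketch of a priority task queue: 'user-blocking' work runs
// before 'background' work regardless of posting order. The priority
// names and API shape are hypothetical, not the real proposal.
const PRIORITIES = ['user-blocking', 'default', 'background'];

class Scheduler {
  constructor() {
    this.queues = new Map(PRIORITIES.map(p => [p, []]));
  }
  postTask(fn, priority = 'default') {
    this.queues.get(priority).push(fn);
  }
  // Drain all queues, highest priority first, returning task results.
  // A real scheduler would also yield periodically to rendering.
  run() {
    const results = [];
    for (const p of PRIORITIES) {
      for (const fn of this.queues.get(p)) results.push(fn());
      this.queues.set(p, []);
    }
    return results;
  }
}
```

Posting a background task before a user-blocking one still runs the user-blocking task first, which is exactly the guarantee a framework-level scheduler struggles to give for code it doesn't own.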
SPEAKER: It is a great example of how frameworks help. Taking your code and breaking it into small chunks is really difficult. But at the framework layer, it is something that people can spend a long time doing once, and then everyone has it.
SPEAKER: You saw in the 
framework awards section earlier
that three frameworks are 
breaking things down into tiny 
rendering chunks, so this is 
exciting. 
SPEAKER: All right, next we are talking about animation worklet and jank-free parallax, in front of a janky animated GIF. We will talk about the web animation APIs; there are a few. The Web Animations API has landed in Safari Technology Preview, and we should soon have it in most browsers.
And CSS animations are awesome and basically everywhere; you should use them, and the animation worklet does not provide additional value except in certain important circumstances. These APIs are time-based, which makes sense: animations go from A to B over time, that's what they do. However, there are some animations that are not time-based. For example, this one, which animates Pac-Man based on scrolling left and right. This is difficult to do on the web today in a jank-free fashion. The reason you don't get jank with the animation worklet is that the worklet runs close to the software that does the scrolling, so it can keep running even when the main thread is busy, which is really cool. My team has been working on moving our scroll-bound animations over to it, and it was just a drop-in replacement that made everything so much smoother. Surma is talking about this later today in the talk about Houdini.
SPEAKER: We have some really cool demos.
SPEAKER: Yeah. 
SPEAKER: And almost every web app needs something like an infinite list. It just happens, right? On other systems, on Android and iOS, there are things like UITableView; on the web, you are left to figure it out yourself. Many frameworks have really good implementations of a virtual list, an infinite-scroller situation, but the web platform primitives make certain behaviors impossible. Have you ever tried to use find-in-page on an infinite list? I have often gone back to search for a tweet: you cannot search for things that are not in the DOM right then. This obviously has accessibility implications as well.
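The windowing math at the heart of a virtual list is simple. The sketch below assumes fixed-height rows, and it also shows why find-in-page breaks: anything outside the computed range simply is not in the DOM.

```javascript
// Given scroll position and viewport size, compute which rows of a
// fixed-height virtual list need to be in the DOM. Everything outside
// [first, last] stays unrendered, which is exactly why find-in-page
// cannot see it without new platform primitives.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight));
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1
  );
  return { first, last };
}
```

With 10,000 rows of 50px in a 600px viewport, only about a dozen rows exist in the DOM at any scroll position; the other ~9,988 are invisible to the browser's own search and accessibility machinery.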
With new low-level APIs that we're putting into place, like searchable invisible DOM, all of a sudden these problems become solvable and can be addressed. You will hear more about the collaborations with React, Angular, and Twitter in Gray Norton's talk later today.
SPEAKER: That brings us to the end of the talk, so let's summarize. We talked about a vision for a framework-inclusive web platform.
SPEAKER: Instant loading and 
page transitions are coming to 
the web, and as a designer, 
think about what you are going 
to build with that. 
SPEAKER: We're talking about a set of low-level APIs to build reliably fast web apps, and that is especially true if frameworks take advantage of them by default.
SPEAKER: Thank you very much. 
SPEAKER: Thank you.
[Music].
  (Applause). 
SPEAKER: Day two, more Big Web Quiz, if I can get the words out. We will get the Big Web Quiz on the screen and play another round. Actually, before I do that, can I play the intro video?
SPEAKER: Mine, that you ruined? Yeah, yeah? Here we go.
[Dark, foreboding music played 
over and over again]. 
SPEAKER: You broke it, you were 
not supposed to break it. 
SPEAKER: I will stop there. 
SPEAKER: That was not 
intentional, it broke. 
SPEAKER: Just before we came out
here.  Just another deploy. 
SPEAKER: I have changed the 
presentation view, I think that 
is exactly what it has done.  
You broke that.  
SPEAKER: Perfect.  Marvelous. 
SPEAKER: One of the things he 
added was polling every second. 
SPEAKER: This is live debugging.
SPEAKER: That is exciting, isn't
it.  Well, at least we know the 
polling is working. 
SPEAKER: This is going to go 
really, really badly. 
SPEAKER: Because it is day two, 
to remind you, the prize that is
up for grabs, use it to wow your
colleagues and scare your pets. 
SPEAKER: Ohhh. 
SPEAKER: But that means the 
questions are
a little trickier.
SPEAKER: I don't want to press this button; I don't know what is going to happen.
SPEAKER: So go to bigwebquiz.com
and see how it -- 
SPEAKER: Your question, here it 
comes.  
Please, c'mon.
SPEAKER: Should we roll back the deploy we did earlier?
SPEAKER: This feels like two 
years ago when we had a total 
log in failure. 
SPEAKER: At least we didn't take
anyone down with us. 
SPEAKER: Introduce the next 
speaker. 
SPEAKER: Jason 
Chase, ladies and gentlemen.
Feature Policy & the Well-Lit 
Path for Web Development.
SPEAKER: Good morning, everyone. I'm one of the Jasons you will hear from today. I lead a team on Chrome that helps developers better understand and control the web platform. We are working on Feature Policy and the Reporting API, which we think can really make your lives easier as web developers. Maybe they will not save you from speeding tickets, like Nicole said, but we will see what we can do. We know there are many benefits to being on the web, but as a development platform, it is far from perfect.
On the Chrome team, we regularly provide web consults to help people address issues with their sites.
These consults bring together developer relations, engineering, and other folks to help people dive deep into questions or issues with their site. The goal is to provide actionable recommendations to improve things. As a result, we have identified common mistakes that happen over and over again. And, spoiler alert, performance is a common theme here.
And the two big performance 
problems that we see are too 
much script, and too many image 
bytes being sent over the 
network.
So you know there's a right way to do things, but it is an adventure to find it sometimes. And once you find it, there are bumps, or you might slip off the path. As we have seen in the talks, there's a long list of things to keep track of and get right. This is hard enough if you are an expert developer; imagine you are on a dev team with junior developers and you need to keep everyone on the same page.
So we need help.
What if we could make web 
development easier with guard 
rails? So if you imagine it on a
path, if you hit a rail, you 
know you are doing something 
wrong.
And the rails stop you from 
leaving the path.
Now, this isn't necessarily a 
new idea.  But, looking at 
repeated mistakes in our web 
consults, we are thinking about 
different ways to apply this.  
Concretely, how could we help 
you put guard rails in place for
development?
So AMP gives us an example of guard rails in web development. AMP guarantees that you will not go off the path, by making it impossible to do things wrong. AMP started by providing building blocks, tightly constrained to give performance improvements and good UX, and this is one way to deliver improvements on the web platform. But not everyone can or will want to use AMP, so, learning from AMP, we thought of a more configurable approach. And that is Feature Policy.
So we're really excited about it; we think it is a great tool to help guide you towards the well-lit path of web development. With Feature Policy, you set a series of policies to enforce throughout the site. It restricts what the page can access, and it modifies the browser's behavior for certain features. So let's talk through some of the common problems we've seen.
So images often carry extra quality or metadata that takes up space, but is it required? As we have seen in other talks, there are many tools to optimize images, but you have to remember to use them, or integrate them into your build pipeline.
Now, ideally, your page should never serve images that are larger than the version rendered on the screen. Anything larger results in wasted bytes and slows page load time. A common example is sending a desktop site's hero image to a mobile device. We designed policies to catch the common mistakes of unoptimized or oversized images. We researched the impact we think these policies can have on 10,000 of the most popular sites. Our analysis shows that 10 percent of sites could save over 500KB on average. That can reduce load time by up to 3 seconds, sorry, 10 seconds, on a 3G network.
So let's take a quick look at this policy applied to a site. On the left, the policy for unoptimized images is turned off, and you can see the images load as normal.
On the right, we have turned on the policy, and it has blocked one of the images.  We compute a compression ratio of bytes to pixels, and if that ratio is greater than 0.5, we block the image and show a placeholder.  The idea is to point out the images that need to be corrected.
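The check described here can be sketched in a few lines.  The 0.5 bytes-per-pixel threshold is the one stated above, but the function names and the idea of running this check yourself are illustrative; the real policy is enforced inside the browser.

```javascript
// Sketch of the oversized-/unoptimized-images check described above.
// Hypothetical helper: bytes spent per rendered pixel.
function bytesPerPixel(fileSizeBytes, widthPx, heightPx) {
  return fileSizeBytes / (widthPx * heightPx);
}

// If the image spends more than `threshold` bytes per pixel, treat it
// as unoptimized: block it and show a placeholder instead.
function shouldBlockImage(fileSizeBytes, widthPx, heightPx, threshold = 0.5) {
  return bytesPerPixel(fileSizeBytes, widthPx, heightPx) > threshold;
}

// A 100x100 image at 4KB is 0.4 bytes/pixel: allowed.
console.log(shouldBlockImage(4000, 100, 100)); // false
// The same dimensions at 8KB is 0.8 bytes/pixel: blocked.
console.log(shouldBlockImage(8000, 100, 100)); // true
```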
This site might look familiar.  You know, at Google, we are big believers in dogfooding, so we applied all of the policies we could to the site.  We made a few mistakes.  We applied a bunch of policies, and it was the oversized-images policy that caught that we were sending some images that are too big.  So, oops.
And it just shows that even the experts, like a developer relations team, can make mistakes, and, you know, we need better tooling to help catch these kinds of things.
So another common problem is 
images without explicit sizes.
And now, as you see on the left, this can cause the user experience to jump around, as the browser loads in new images and resizes the page.
On the right, we have applied a 
policy to catch this.
And so the browser can set these
images to a fixed size and it 
will keep the user
experience stable.
So how does this all work?
Policies, like I said before, are a contract between the developer and the browser.
We use them to inform the browser about the intent for our site.  It is a set of rules for how each page should behave.  And the browser helps keep us honest by validating that the page is adhering to its stated rules.
So if a page, or embedded third-party content, attempts to violate the rules, the browser will identify or block the behavior.  In some cases, it may override the behavior to provide a better user experience.
So let's take a look at how you configure policies.  Each policy refers to a single feature, and it defines a list of origins that are allowed to use the feature.
With each page response, you can add the Feature-Policy HTTP header.  In this example, it catches all oversized images; this is what saved our bacon on the CDS site.
And here, the 'none' keyword means that no origin is allowed to use the oversized-images feature, disabling it entirely for your site.
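The header on the slide looks roughly like this.  Note that `oversized-images` was an experimental policy name at the time of the talk and may differ in current browsers:

```http
Feature-Policy: oversized-images 'none'
```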
Now, looking at that example, you might wonder why we say oversized images are a feature.  We have policies for helping you enforce best practices to ensure good performance and user experience, and the way these often work is that we define a known bad practice as a feature, and then you use the policy to prevent it.
Now, we do this for web compatibility reasons.  We don't want to go ahead and break the internet by all of a sudden turning off oversized images for everyone.
So we allow that behavior by default, and then you opt in to turn it off.
Now, this header applies a policy that catches any images that are not optimized.
Here we are making an explicit exception for our photo CDN.  In this case, it might be because our users expect really high-detail, glossy pictures, and those don't compress well.
And in this case, because there is only one origin listed, all other origins will not be allowed.
So here is an example that puts it all together.
First, we have a policy that ensures every image has explicit dimensions.  We saw this earlier, preventing your user experience from jumping around.
Second, we have a policy to selectively enable the geolocation feature.  We are applying this to our own origin and a trusted maps provider.  The 'self' keyword means the origin of the top-level page.  Combining this with explicit origins, you have full control over who is allowed to use a feature.
And then, finally, we are allowing any origin to use autoplay.
Here you see an asterisk keyword.  That means essentially everyone.
By default, Chrome will only allow autoplay in same-origin iframes; with this policy, we can allow cross-origin iframes to play as well.
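Put together, the three policies described here might look like the following header.  The maps-provider origin is a placeholder, and `unsized-media` was still an experimental policy name at the time:

```http
Feature-Policy: unsized-media 'none';
                geolocation 'self' https://trusted-maps.example.com;
                autoplay *
```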
In the past examples, we saw features in the more traditional sense: well-known APIs that are exposed to the web, like WebGL, camera, fullscreen, autoplay.  We talked about policies enforcing best practices, and now we are talking about policies giving granular control over the features you use.  You have probably seen this example: you go to a site, and before you interact with it, you get a pop-up asking for your location, your microphone, or something like that.  With Feature Policy, you can lock this down to prevent the use of the feature at all, or dole it out only to specific origins you trust.
So we have talked through some examples of the HTTP header; you can also use an allow attribute to control Feature Policy on iframes.
Now, why might you want to do this?  Well, as we talked about, you can be really selective.  So maybe you have one frame on your site that shows a map; you can use the allow attribute to grant geolocation usage to that frame and to no other frames.
And now, in the example we see here, the allow attribute is changing the default for the autoplay feature.
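A minimal sketch of that attribute, with a placeholder embed URL:

```html
<!-- Grant autoplay (normally blocked for cross-origin frames)
     to just this one frame. -->
<iframe src="https://video.example.com/embed/123"
        allow="autoplay">
</iframe>
```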
What you will notice, if you go and grab an embed from YouTube, is that they are already using this in production today.
So you have seen a lot of 
flexibility in how you can 
configure Feature Policy.  So we
built in some rules to make sure
that they are being applied 
correctly.
So Malte in the keynote talked enthusiastically about not using synchronous XHR; we have a policy for that, and AMP is using it in production.  So first, policies are inherited.  Scripts will inherit the policy of the containing frame.  This means that your top-level scripts inherit the policies of the main document.  Policies start at the top and cascade down.  So a policy set on the top-level page applies to all sub-frames, regardless of how deeply they might be nested.  This applies to the iframe allow attribute as well.
So if you have an allow attribute and a header, the more restrictive of the two policies wins.
On the other hand, we have a one-way toggle.  Disabling a feature turns it off permanently.  So that means, again in AMP's case, they turned off the feature with the policy, and that means no frame, nested or otherwise, can turn it back on.
Now, for policies supporting best practices, it might not be feasible for the browser to block the bad practice before it occurs.  In some cases, it can only detect it, and then break the page or notify you.  The goal is to let you know that there's a problem to fix.  We saw this in the policies for unoptimized and oversized images.  In other cases, the browser can block the bad practice so the site continues to behave well.  The unsized-media policy works this way: it applies fixed sizes to images to prevent the user experience from jumping around.
We have DevTools and Lighthouse to give you insight into the development of your site.  Feature Policy allows you to set the policies up front and catch problems during development, before anyone writes a lot of code.  You can set policies as defaults for new sites and pages, and you can enforce standards across your dev team.
You can choose to turn them on 
for some users and not others, 
or some pages on your site and 
not others as you incrementally 
improve your site.  So, at 
minimum, we recommend enabling 
your policies on your staging 
server.  This can let you catch 
problems that you didn't
see during development.
You might have your images come 
from a content management system
and they are not available as 
you are developing the site, so 
you can catch problems with 
that.  In addition to staging, 
you can also apply policies
in production.
You can have your dev team, your managers, all the way up to the CEO, with policies enabled to find real issues in the wild.  For production users, you will not want to enable the policies if they break the experience.  For example, with the image policies, loading unoptimized images more slowly is probably a better experience than having them missing altogether.  But you still want to know about all of these violations, so you can correct them.
We have designed report-only mode to give you the best of both worlds.  You can configure policies similar to the examples we saw before, but mark them as report-only.  As you can see here, internal users have the policies enabled; we will break things and see the problems with images that need to get corrected.  For production users, we have report-only mode, so production users get the production-ready experience, and you still get reports of what has gone wrong.
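The report-only variant is the same policy under a different header name.  The exact header name was still being finalized around the time of this talk, but the proposal looked along these lines:

```http
Feature-Policy-Report-Only: oversized-images 'none'
```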
And speaking of reporting, Feature Policy violations are just the start.  There's a lot of information from the wild that is really valuable.
So, for example, browser 
interventions.
Sometimes the browser needs to intervene to improve the experience for the end user.  An example of such an intervention is blocking document.write on 2G connections.  There, we discovered that doing a document.write on a slow connection can really impair the user experience.  So to protect the user, we just blocked it.  But when that happens in the wild, you don't know about it; you want to find out about it so you can correct it.
And what else?  Looking at this, there are a lot of things going wrong in the wild on your site that you would really love to know about.  And the problem with some of these is that they are just console messages that you cannot collect.  There are other things that have a bespoke API, like window.onerror, where you have to hook things up to see the errors, and there are crashes and network errors that are not possible to catch from script.  The solution is the Reporting API, which collects all of this information from the wild.  A one-stop shop.  It exposes information that was not available before.  Crashes are coming soon; Feature Policy violations, deprecations, interventions, and network errors you can get today with the Reporting API.  There are two ways to use reporting.
We have ReportingObserver, which lets you collect reports client-side with a JavaScript API.
And you can filter by report type.  Here, we said that we care about deprecations and Feature Policy violations.  You can use a callback to capture them however you would like.  Because it is client-side, the available types are limited.  A key feature here is buffered reports: you can start observing later and still get the reports that happened earlier in page load.
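A minimal sketch of the observer described here.  The `summarize` helper is invented for this example, and the report types shown are ones I'm confident existed at the time; the exact set supported varies by browser version:

```javascript
// Hypothetical pure helper: pull the common fields out of each report.
function summarize(reports) {
  return reports.map(r => ({ type: r.type, url: r.url }));
}

// ReportingObserver only exists in supporting browsers, so feature-detect.
if (typeof ReportingObserver !== 'undefined') {
  const observer = new ReportingObserver((reports, obs) => {
    // Capture the reports however you like; here we just log them.
    console.log(summarize(reports));
  }, {
    types: ['deprecation', 'intervention'],
    buffered: true  // also deliver reports from earlier in page load
  });
  observer.observe();
}
```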
Every report has some common fields; we can see here the type of the report and the URL where the report happened, and for each report type there's a specific body.  For Feature Policy violations, it tells you the feature and where in the code the violation occurred.
The second way to use reporting is with the Report-To response header.  This allows you to configure out-of-band delivery of reports.  The browser will queue up reports and send them to the location you choose, separate from the execution of your page.  And here, you can get all of the report types that we saw earlier.
You will see a couple of fields here.  First there is group, which allows you to name this particular set of endpoints.  You can then refer to that group name in other parts of your configuration.  And max_age says how long the config is valid for; after that, we will throw away the config.  This allows you to change endpoints over time and switch to new ones.
And endpoints is an array of URLs; you can configure multiple, and if the browser cannot reach one, it will fall back and try another.
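The header described here carries a JSON object.  The group name and endpoint URLs below are placeholders:

```http
Report-To: { "group": "default",
             "max_age": 10886400,
             "endpoints": [
               { "url": "https://reports.example.com/ingest" },
               { "url": "https://backup.example.com/ingest" } ] }
```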
And talking about one place to configure reporting: CSP, Content Security Policy, now integrates with the Reporting API.  Previously, you would specify a report URI, giving it a URL where you want the reports to go.
Now you can use the report-to directive, and you point it at one of the endpoints you configured.  So when you go back to the example, you have a Report-To header that sets up the endpoint group and the max age, and you refer back to it here.
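Tying the two headers together might look like this; the URLs, group name, and CSP directives are placeholders:

```http
Report-To: { "group": "csp-endpoint", "max_age": 10886400,
             "endpoints": [{ "url": "https://reports.example.com/csp" }] }
Content-Security-Policy: default-src 'self'; report-to csp-endpoint
```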
And so where can you use this 
stuff?
So both the Feature Policy 
framework and the reporting API 
are shipped in Chrome.
Firefox is implementing Feature 
Policy and Safari has support 
for the iFrame allow attribute.
Now, what is really great is that you can benefit from using these policies now, even though there isn't broad support across browsers.  If you do some of your local testing in Chrome, with flags enabled, you can apply all of these policies and catch problems before they reach other environments.
So I've talked a lot about a few policies, walked through the examples, and explained why to use them, but that is just the start.  Here is a list of all the ones we have available today.
Most of these are around granular control, turning features on and off: geolocation, microphone, you can use those right now, as we saw with the YouTube example and with AMP.  And we have the best-practice policies behind a flag, and we are working on a bunch more.
And so, seeing that long list, you might say, man, how am I going to set up policies and try them out?  We have a handy DevTools extension to toggle the policies on and off and see the effect on your page, so you don't have to configure the header; you can just try it and see what will happen.  This is an example of using the extension to turn off the geolocation feature.  Now, this extension uses a JavaScript API, so the advantage there is that you can feature-detect which policies are even supported, and you can go one step further and query to see which policies are enabled and disabled.  You can code defensively: if you have content that is embedded, you can code defensively when you know a feature is not available to you.
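A sketch of that defensive pattern, using the document.featurePolicy API Chrome exposed around the time of this talk (it has since been renamed in newer specs).  The `geolocationAllowed` helper is made up for this example:

```javascript
// Hypothetical helper: ask the Feature Policy API whether geolocation
// is usable here, coding defensively in case the API doesn't exist.
function geolocationAllowed(doc) {
  if (!doc || !doc.featurePolicy) return true; // can't ask, assume allowed
  return doc.featurePolicy.allowsFeature('geolocation');
}

if (typeof document !== 'undefined' && document.featurePolicy) {
  // Which policy-controlled features does this browser even support?
  console.log(document.featurePolicy.features());
  // And is geolocation enabled for this page?
  console.log(geolocationAllowed(document));
}
```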
We are eager to hear which policies are useful to you; we are particularly interested in the image policies and sync XHR.  So we made it easy for you to copy and paste and get the header set up quickly.  If you have ideas or feedback, we would love to hear from you on the GitHub repo.  And finally, you can head over to featurepolicy.rocks to get ideas on Feature Policy and see live demos.  Thanks a lot.
(Applause).
SPEAKER: Jake, would you like to
try the quiz? 
SPEAKER: Yes, I would. 
SPEAKER: I wonder if the intro is good to see again. 
SPEAKER: This time I will press 
the button and it will play all 
the way through. 
SPEAKER: C'mon, please.
[Big web quiz
theme music].
[Trumpet, do do do do do do do 
do].
[Deep voice: 2]. 
SPEAKER: So it feels like it is 
working, get your phones out, 
the set of questions today are 
-- 
SPEAKER: I will press the 
button. 
SPEAKER: There you 
go.
TLDs: we are asking if the TLDs you are shown are valid top-level domains. 
SPEAKER: The end of the domain 
name. 
SPEAKER: Like .rocks, .com. 
SPEAKER: You can load pretty 
much whatever you want. 
SPEAKER: Well, as long as we don't have fake ones. 
SPEAKER: It is a finite list. 
SPEAKER: Three seconds per 
question to hopefully give you a
little bit more time.  Some 
people said it was too quick 
yesterday.
So we have some city names. 
SPEAKER: Some of those may be 
real. 
SPEAKER: New York, black, white,
Berlin, this is fluctuating 
wildly on these,
the confidence.
SPEAKER: It would be unfair if 
some cities and countries were 
and some were not. 
SPEAKER: That would be awkward. 
SPEAKER: Oh, .San Francisco.  
Low confidence, that means that 
people are voting 
50/50.
That's how it panned out.  Back 
at the start there, there we go.
Purple, brown, it is pretty 
evenly split, I would say.
Very confident Vegas is one, 
should we see what the answers 
are?
They were all fake except for 
Vegas.  Washington, fake, Vegas,
real.  Fair enough.
Next set, what have we got here.
SPEAKER: Can I remark that it is
a 50/50 split.  I would have 
been the same.  
SPEAKER: Exactly. 
SPEAKER: So New York is fake, 
Berlin, all good to go. 
SPEAKER: We have black, not 
white, and Berlin is one as 
well.  Fair enough. 
SPEAKER: Look at the next set.  
So TLDs are odd. 
SPEAKER: So 50, 50.  I want to 
say the San Francisco, is the 
home city represented? It is 
not.  
SPEAKER: It is not. 
SPEAKER: No love in the room for
San Francisco.
And the last set we have here, 
.Jake. 
SPEAKER: .Jake. 
SPEAKER: Fake? No, it is fake. 
SPEAKER: That's a fake one. 
SPEAKER: That's a relief. 
SPEAKER: Honestly, if Rome is 
not getting in there, Jake is 
not getting in there. 
SPEAKER: [ Laughter ]. 
SPEAKER: I'm calling it  
SPEAKER: It is nice that it 
worked.  It is time for a break,
we will be back in here at 
11:30.  See you then!
.
[Break].
SPEAKER: We should have, like, 
revenge of the CSS colors.
Part two.
On your marks, get set, go.
Ohh, lavender. 
SPEAKER: I like this 
three-letter word.
Pale violet red? Marvelous. 
SPEAKER: Hmm. 
SPEAKER: I feel like that should
be a real one. 
SPEAKER: Yeah, I can
see that.  
Yeah.  Medium hot pink, I love a
good hot pink.  I'm not sure if 
medium hot pink is a real one. 
SPEAKER: This implies that there's a level of pinks. 
SPEAKER: Transparent green, not sure how that works, honestly, but it could be there.  Or could it?
Ohhhh.  Closing, here we go.
Any second now... woo!  
SPEAKER: Yeah, not surprising.  
I think, looking again at this 
list, I think I was very 
confused as to which one of 
these, which one of these -- 
SPEAKER: Yeah. 
SPEAKER: They are all real in 
that block. 
SPEAKER: Would you believe it? 
Yeah, you probably would.  
SPEAKER: The audience is pretty 
confident about this.  
SPEAKER: I bet you really know 
your CSS colors, don't 
you. 
SPEAKER: Yes. 
SPEAKER: Interesting.  
SPEAKER: Wow, this crowd is 
getting wiser to our charm. 
SPEAKER: Space Gray, it is not.
I heard the sigh there -- ahhh. 
I'm so disappointed in myself.
[ Laughter ].
  Don't be.  These are so silly.
[ Laughter ]. 
SPEAKER: Yep, the point is to be
silly!  
SPEAKER: Exactly, so it is time 
for the next speaker who is 
called Gray, a valid CSS color 
and also an excellent human.  
Ladies and gentlemen, Gray 
Norton!  
>>GRAY NORTON: Good morning, 
everyone.  And welcome back from
the break.
It is true, my name is Gray, I 
don't think my parents knew that
my hair would prematurely match 
my name or I would be a valid 
CSS color.  I give them a lot of
credit.  So I'm the engineering 
manager for the Polymer project 
at Chrome, my team focuses on 
web libraries and components and
helps you use them.  But today 
we will talk about something a 
little different.
Or maybe we will just stand here and look at my picture.  So, shockingly, we are talking about performance.  As you may have noticed, we are a wee bit obsessed with that topic.  You have heard about speed tooling and best practices for making everything faster, from loading to rendering to responding to user input.
But for the next half hour, we're going to drill in on something a little more specific.  That's a proven pattern for improving performance that we would like to see more of on the web.  Ben and Dion mentioned in yesterday's keynote that today's web platform is really a high-performance machine; it is way faster in almost every way than it was even a few years ago.  But there are surefire ways to slow it down, and one of those ways is trying to render too much.  You can bring high-performance devices to a crawl by giving them too much DOM.  There are key parts of the rendering process, like styling and layout, that take longer the more nodes you have in your page.
So for a certain class of performance problems, the best thing you can do to speed things up is to lighten your load: minimize the number of DOM nodes you have in a page at any given time.  There are various ways to do this, but a simple and effective way is to adopt a pattern called virtual scrolling.  Some members of my team have teamed up with folks inside and outside of Google to see what it looks like to add first-class virtual scrolling support to the web platform.  We will see what we have done so far; it is early, but we are excited about where the project is headed.  We will start with an introduction to what virtual scrolling is and how it is used on the web, and the performance problems we are trying to address: how too much DOM can impact the responsiveness of your pages.  Next, we will talk about the approach we are taking, and we will show you what we've come up with so far.
And then, finally, we will look ahead to what is next and how you can get involved if you are interested.
So, without further ado, let's see what virtual scrolling is about.  First, we will look at ordinary scrolling.
The first box here is the viewport, and as you scroll, the document moves behind the viewport, moving content up or down, but all the content was present from the start.  The only thing that changes is what we see.
And now, this is the same page in a virtual-scroller.  As you see, only part of the content exists in the document; nodes are added and removed to keep the viewport full.  At render time, we only render about one screenful, and virtual scrolling improves performance in other ways that are not as obvious.  When does it make sense to use virtual scrolling?  It is helpful when you need to display a lot of data in a list, like this address book.  It is an obvious call for content feeds, obviously in cases like social media, or messaging.
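The windowing at the heart of this can be sketched in a few lines.  This is a simplified model assuming fixed-height items, not the actual virtual-scroller implementation; real scrollers also have to handle variable heights:

```javascript
// Given the scroll position, work out which items should exist in the
// DOM. `overscan` keeps a few extra items rendered above and below the
// viewport so fast scrolling has something to show.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan
  );
  return { first, last };
}

// 10,000 items of 50px each, a 600px viewport scrolled to 2,000px:
// only items 37 through 55 need to exist in the document.
console.log(visibleRange(2000, 600, 50, 10000)); // { first: 37, last: 55 }
```

As the user scrolls, the scroller re-runs this computation, adds the items that entered the range, and removes the ones that left it.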
But it shows up in less traditional places too; some publishers have started stitching together articles in a way that takes advantage of the low friction of scrolling.  This may not look like your garden-variety infinite list, but it is a case where virtual scrolling can help.  We think there's a strong argument to be made for looking at virtual scrolling in cases beyond lists and feeds.  This Wikipedia article, for example, might have tens of thousands of nodes.  And the same technique used for lists can help keep the weight of documents like this under control.
So virtual scrolling isn't a new idea; it is widely used on native platforms, and iOS and Android put their list views front and center in their SDKs.
And this is a big part of why native mobile apps tend to feel fast.  A basic native view is actually heavier than the equivalent chunk of DOM, but the mainstream use of virtual scrolling on platforms like these helps keep performance in line.
You might be thinking that virtual scrolling isn't new to the web, either.  You are right.  There are popular and effective virtual scrolling solutions for all of the top frameworks, including react-virtualized, react-window, the virtual scroll viewport in the Angular Material CDK, and the vue-virtual-scroller library.
But these solutions are currently getting by without much help from the web platform.  So, in the age-old tradition of paving the cowpaths, what would it look like for the web to offer first-class virtual scrolling support, and how can we make things better?  We came up with a few ideas.
First, virtualization generally means you can't link to locations within the same document, since the browser can't scroll to a node that is not currently in the DOM.  This may not be an issue for many use cases (you don't see links between address book entries), but in other cases it is a big deal, and it breaks a fundamental feature of the web.
And there's a similar problem with the browser's find-in-page feature, which only sees text that is in the DOM.  And since content is added on the fly, it is generally not visible to search engine crawlers.  This may not matter for every use case, but we would love to get the performance benefits of virtual scrolling without sacrificing indexability.  And we would love to increase the amount of virtualization that happens on the web by adding platform-level support.  Virtual scrolling on the web is basically a fringe feature: you have to discover that you need it, and you have to jump through hoops to get it.  We would like to put it front and center, like in the mobile SDKs, to make it easier to discover and use.
Okay, so we've talked about what
virtual scrolling is and why 
we're interested in adding it to
the web platform.  To really 
understand the nature of the 
performance problems we are 
trying to solve, it would help 
to look at some examples.
Since the problems are worst on low-end devices, we used a typical Android Go phone for this exercise.  I actually recorded a bunch of real-world interactions for this talk, but I wasn't super happy with the quality of the recordings, so I ended up creating dramatic re-enactments in DevTools with throttling turned on.  I would love to show you on the device itself if you want to find me after the talk.
So first we will look at how the size of the document impacts rendering and what that means for responding to user interactions and content updates.
I created a simple example to demonstrate.  It is a bit contrived, but it exercises the browser engine in the same way that real web pages do.  This mocked-up content feed has five items and performs pretty well.  We have advanced features, like a dark mode and a compact layout mode, and these are made using standard CSS techniques, changing classes.  And lastly, we have this slider here that lets us increase or decrease the size of the items by changing the base font size in the document, which is a technique that many of you have probably used; the layouts themselves are based on ems, so the whole thing scales.
And this performs pretty well, but it is only five items.
Let's bump the feed up to 50 items.  Don't focus on the scrolling performance, we will talk about that in a moment; focus on the other interactions.  I added a jank indicator: the screen flashes red when rendering is slow.  We are not too bad at 50 items; there are delays for the switches, and there is sluggishness as we start moving the slider.
But where we really start feeling the pain is when we get up to 500 items or more.  The lagginess of this example is impossible to miss.  You will notice that, when we go to dark mode or compact mode, there's a very noticeable lag, and the slider is virtually unusable, it is so slow to update.
And, if anything, it actually feels worse on the real phone than this.
This example is a bit contrived,
you may not do this exactly on 
your page, but the effects it 
illustrates are real.  Rendering
any content on your page, 
whether it is in response to 
user interactions or changing 
data is slower as the document 
gets bigger, and on devices like
this, it can kill your page.
So let's jump ahead and see how much virtual scrolling might help.  This is exactly the same page with our work-in-progress virtual-scroller swapped in for the scrolling region.  We have bumped up to 5,000 items, but only a fraction are in the document at any given time.  As you can see, we are back to the same performance we had in the original five-item list.  This is essentially how it feels on the phone as well.  And remember, this is a very low-powered Android Go device.
Next, let's see how virtualizing might help us at load time.  A quick caveat about this: how long it takes to get something on screen and make it interactive depends on a lot of factors, and virtual scrolling does not directly impact many of them.  But to the extent that styling and layout are slowing things down at load time, virtual scrolling can definitely help.
So, being the big web nerds that we are, we will use the single-page HTML spec doc as an example.  This is massive, somewhere on the order of a million words, and it is notoriously slow to load.  On my Android Go device, with a good but not great connection (because I have gone over my data allowance for the month), it took about seven seconds to get something on screen.
For comparison, we built a virtual scrolling version of this page out of duct tape and baling wire.  It loads the original doc into a hidden iframe and populates the virtual-scroller as nodes scroll into view.  You wouldn't build it this way, but it is an interesting test case and illustrates the impact.  As the visualization shows, we got something on the page faster, in three seconds instead of seven, and it gets better from there.  The original version on this phone was still suffering from jank well past the first minute, while the virtual scrolling version is usable right away.  There's a delay as it updates, but unlike in the original version, you are never sitting and waiting for seconds while the screen is locked up.
Okay, before we move on from performance, one quick word about scrolling performance; it would be weird to talk about all this and not talk about scrolling.  Because scrolling is driven by the GPU, even on our low-end Android Go device you get decent scrolling: it scrolls at a high frame rate and looks good at normal scrolling speeds.  But when you scroll quickly, something funny happens.  The screen blanks out, goes completely white, for long periods of time while the rendering pipeline catches up.  And this actually gets worse as the size of the document increases, just like the other performance problems we have been looking at.
So I did salvage one of my original on-device recordings, and we will look at it here.  The virtual-scroller reduces the blanking problem: it happens less, and when it does happen, the renderer catches up quickly.  The scrolling is mildly smoother overall.  As you would expect, doing JavaScript layout on a CPU-challenged device like this does impact the frame rate.  But as you can see from the video, it is a very good experience in the end, even on this low-powered device.
Okay.  So we know what virtual scrolling is, why we want to add it to the platform, and we have seen first-hand why we need it; we have seen the performance issues we are trying to overcome.  So I want to talk for a moment about the process that we're going through.  Because it is actually not that common in recent years for the web to add new high-level features.  So we had a bunch of things that we really wanted to do, and we are treating this as a guinea pig for how we might add higher-level features.
So we will start with some basic principles.  Anytime you are building something into the web platform itself, as opposed to in user space, it is important to get the very basic use cases right, because web APIs live forever, and it is much more important that, you know, we do the simple stuff right.
So, with that in mind, we wanted to make sure that we learned from prior art.  There are virtual scrollers on other platforms, and we want the feature to be easy to use: we want you to put in ordinary web content, as opposed to picking up a new API and thinking fundamentally differently about the way you build stuff.  Make it work sounds trite, but everything needs to work.  We talked about linking and find-in-page; it is critical that accessibility works; we want tabbing from item to item to work.
And, of course, it needs to be fast.  After all, this is a performance problem that we are trying to solve.
Okay, the next thing we want to 
do is we want to embrace 
layering.
And we will talk a little bit 
more about what we mean by that.
Nicole actually mentioned this 
in the keynote.  One of the 
important things we want to do 
out of this process is identify 
and implement any lower-level 
primitives that the browser may 
be missing in order to support 
this high-level use case.
We also want to enable this 
thing to be used out of the box,
like you use vanilla html in 
JavaScript.  We don't want you to have to pick up a framework; we want you to be able to use virtual scrolling on the platform simply.  But we
want to give libraries a solid 
base to build on, and low-level 
primitives are things they can 
use, but we want the 
virtual-scroller itself to be 
something they can take in whole
or in part and layer on top of 
for their own virtual scrolling 
solutions.
  And I mentioned this is all 
with the idea of blazing a 
trail.  We think that there's a 
lot of room to improve the 
developer experience and the 
user experience of the web 
platform by
adding some high-level features 
in coming years.  With that in 
mind, we want to use 
virtual-scroller as a lens or 
testing ground for us to examine
what it means to add new 
high-level features.  We will 
talk about some of these things 
as we look at the 
virtual-scroller.
So, with those principles in 
mind, we sat down to do our 
homework.  The first thing we did is we set up what we called, at the time, the infinite list study group.  Everyone would take a turn looking at a virtual scrolling solution, we would discuss what was working and what wasn't, and we would document it on GitHub.
And once we'd done that for a while, we came up with a set of requirements and it was time to start building.
And now, I don't know what this 
is.  I think it is molding or 
something, but it looks like 
something I would really like to
build.  So I put it on the 
slide.
And we have the same sort of love for virtual-scroller.  So once we started implementation, we did that on GitHub as well.  
And one of the things you will 
notice here is that 
virtual-scroller is being 
implemented in JavaScript.  And 
this obviously is sort of 
unusual for something that's 
under consideration for a web 
platform API, it is not 
typically how we build platform 
APIs.
But this gets to the layering 
concept.  We wanted to make sure
that we didn't reserve any 
special powers in adding these 
features, and one of the best 
ways to do that is to develop 
things in the same environment 
that framework and app 
developers develop in.
So the next important thing 
about the process is we want to 
be in constant dialogue.  We 
talked about the fact that we 
are doing these things out in 
the open in GitHub and we have 
been talking to people all 
along, we are talking to browser
vendors about the idea behind 
virtual-scroller and the 
principles we are looking at for
adding new high-level features. 
We are talking to framework and library authors; along the way, we've talked to members of the Angular team who are working on the Angular Material scroller, we've talked to the AMP team, we've talked to members of the Ionic team who are working on their own virtual-scroller, and to Brian Vaughan, with react-virtualized and react-window, as well.  And we have a lot of active discussions going on in GitHub. 
So it is important to us that 
this process be carried out in 
the open and with the input of 
the community, including
frameworks.
Okay.  With that process as a backdrop, we will look at what we've built.  So this is that example
that we're looking at before, 
and I show it here in DevTools 
just so you can see, as we 
scroll, there are just a few 
items in the DOM at any given 
time.
And, when we scroll, you will 
see them cycle
in and out.
And so, even though we have a list of 5,000 items here, at any given time we're only laying out, styling, and rendering a very small number of them.
And, as you can see, the 
virtual-scroller itself is a 
custom element.
And let's see how you would 
actually use that in
vanilla style.
So starting from a blank slate 
here, the first thing we're 
going to do is we're going to 
use the browser's native module 
loader to import this.  And now,
this is also a new concept that 
we're trying out as part of the 
way we're thinking about 
higher-level APIs.
Typically, browser APIs are sort
of baked into every page.  
You may use them, you may not, 
but there's a memory impact and 
the size of the browser is 
impacted.
We would like to explore a 
pay-as-you-go model.  We have, 
in the form of the module 
loader, a way to dynamically 
load libraries and code at 
runtime.  So we're using that 
here. And you can see that we 
have a proposed syntax for 
requesting something that would 
be part of a standard library 
provided by the browser.  I will
not go into the details here, 
but you can read about it.
So, once we've imported it, we just put the virtual-scroller tag into our html document.  And
this is what you get, without 
doing anything more.
You can see the virtual-scroller
has a default size, much like an
image or an iFrame -- not an 
image, rather, but an iFrame.
And so it is not doing anything 
yet, we will make
it do something.
So we use querySelector to find the virtual-scroller, and we're going to fetch some data.  Here is the first bit of virtual-scroller API: the item source.  You can assign it an ordinary array, as we've done here.  We call our async function, and we get this, which probably isn't exactly what we want either.  We wanted the virtual-scroller to do something by default, so what it does is take each item and try to render it into an element.  In this case, it is a string and it doesn't look very good.  So let's fix that.
So the next bit of API is this 
update element hook.  So here, 
we're basically just going to 
take the contact name, put it in
the text content.  And then we 
have something that looks much 
more useful.
So just a few lines of code, 
vanilla html in JavaScript and 
we have a virtual-scroller.
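The few lines of page script shown here might look roughly like this.  It is a sketch only: the std: module specifier, the itemSource and updateElement names, and the shape of the contact records follow the in-progress proposal and this demo, and could all change before anything ships.

```javascript
// Pure helper: turn a raw contact record into the text we want to display.
// (The {name: ...} shape of the records is an assumption for illustration.)
function contactLabel(contact) {
  return contact.name;
}

async function main() {
  // Pay-as-you-go: dynamically import the scroller with the proposed
  // standard-library specifier; nothing is loaded until the page asks.
  await import('std:elements/virtual-scroller');

  // <virtual-scroller> is already in the HTML; find it.
  const scroller = document.querySelector('virtual-scroller');

  // Fetch some data and hand the plain array to the scroller.
  const contacts = await (await fetch('contacts.json')).json();
  scroller.itemSource = contacts;

  // Without this hook the scroller just stringifies each item; with it we
  // control what goes into each recycled element.
  scroller.updateElement = (element, contact) => {
    element.textContent = contactLabel(contact);
  };
}

// Only run the DOM wiring in a browser.
if (typeof document !== 'undefined') main();
```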
So let's take a look at
one more example.  You notice 
the virtual-scroller initially 
was empty and, by default, we 
just create a div for each item.
You can actually override that, 
in addition to the update 
element, there's a create 
element hook and you can give it
any kind of DOM you want.  You 
can put a template inside the 
virtual-scroller and what is in 
there is what is instantiated 
for each item.  So in my update 
element function, I'm going to 
assign the image and, lastly, 
I'm going to specify I want a 
vertical grid layout.  So out of
the box, it supports vertical, 
horizontal, normal, and grid 
layouts.  Layouts are pluggable 
under the hood, it is TBD as it 
is exposed in the API when it 
ships.
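Put together, the templated grid example might be sketched like this.  Again, the template-based instantiation, the updateElement hook, and the layout attribute value are drawn from the draft proposal and the phrase "vertical grid layout" in the talk; none of it is final API, and the photo record shape is an assumption.

```javascript
// Pure helper: build the layout attribute value from the talk's vocabulary
// (vertical/horizontal, plus grid). The string format is an assumption.
function layoutFor(direction, grid) {
  return grid ? `${direction}-grid` : direction;
}

async function setUpPhotoGrid() {
  await import('std:elements/virtual-scroller'); // proposed std-lib import

  // Markup assumed to be:
  // <virtual-scroller>
  //   <template><img class="photo"></template>
  // </virtual-scroller>
  const scroller = document.querySelector('virtual-scroller');

  // Ask for the vertical grid layout shown in the demo.
  scroller.setAttribute('layout', layoutFor('vertical', true));

  scroller.itemSource = await (await fetch('photos.json')).json();

  // The template's <img> is what gets instantiated per item; fill it in.
  scroller.updateElement = (element, photo) => {
    element.src = photo.url; // photo shape is a guess for illustration
  };
}

if (typeof document !== 'undefined') setUpPhotoGrid();
```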
So we are taking a layered approach; we are not putting in fancy features yet, but it is important that these basic things work and that we have layering.  So here we have headers interspersed with list items.  These are examples you can find in our GitHub repo.  And this is important: we want to support various loading patterns.  This is an infinite scroll and, as you go, each item is loaded and you just keep scrolling.
We support the common pattern 
where you scroll to the bottom 
and you ask for more, so we are 
validating that these basic 
constructs are working for these
use cases.  As for fancier features: if you have built something like this and worked with a UX team, you know they will want fancier stuff.  So this is a proof of concept showing that things like swipe-to-dismiss can work with this virtual-scroller as well.
Okay.  So we mentioned that we 
thought that there would be some
missing primitives that we might
discover.  We think we found 
one, specifically it is called 
invisible DOM.  So you remember 
our earlier examples, in this 
one, you will see it is a lot 
like the virtual-scroller, 
except instead of actually 
getting rid of and adding nodes,
it is basically just making 
nodes invisible.  Invisible DOM 
is in the document much like a 
display none element, but it is 
different from display none in 
that it can be linked to and 
found by the browser's 
find-in-page.  So this is an 
example where we have 
non-traditional virtual scroll 
content, this is a document like
you would find in Wikipedia or a
long-form news article.  You can
see that, here it is, we get a 
certain way down the list and, 
all of a sudden, there is 
nothing highlighting over in the
window because some of these DOM
nodes are invisible.  They are 
just not there.  And, as we 
scroll, we will see that they 
are basically being flipped to 
visible.
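In userland terms, what the scroller is doing with invisible DOM can be sketched as follows.  The invisible attribute here mirrors the experimental, behind-a-flag feature described in the talk, and the fixed-height windowing math is a simplification for illustration.

```javascript
// Pure helper: which item indices fall inside the viewport window?
// Assumes fixed-height items for simplicity.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight));
  const last = Math.min(
    itemCount - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) - 1
  );
  return [first, last];
}

// Instead of adding and removing nodes, flip the (experimental) invisible
// attribute: off-screen nodes stay in the document, so find-in-page and
// linking can still see them.
function updateVisibility(items, scrollTop, viewportHeight, itemHeight) {
  const [first, last] = visibleRange(
    scrollTop, viewportHeight, itemHeight, items.length
  );
  items.forEach((item, i) => {
    if (i >= first && i <= last) item.removeAttribute('invisible');
    else item.setAttribute('invisible', '');
  });
}
```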
And so when the virtual-scroller
is working with invisible DOM, 
this is how
it works.
So you will see here, we will do
some re-sizing, change the 
orientation, you can see that 
the layout is preserved.  Again,
this is just ordinary styling 
and layout on these items and 
virtual-scroller works out of 
the box.  This is what we're talking about when we say we want to make it simple: we don't want you to have to think hard about using a virtual-scroller.  And this is what is cool: this is linking, which invisible DOM is making possible for us.  When I click on a link, the DOM nodes it is linking to are not rendered yet -- well, they are in the document, but they are not rendered.  You get an event on them and, by default, the browser flips them visible and scrolls to them.  But in the case of virtual-scroller, we are capturing the event and scrolling to it ourselves, and it is seamless.  It is as if the entire document had been rendered the whole time.
And I am just about out of time.
But I wanted to show you, one of
the exciting things about using 
invisible DOM with 
virtual-scroller is that it 
means you can actually put 
invisible content directly in a 
document, and we're talking with
our friends in search and the 
idea is that you would be able 
to effectively have your content
be entirely indexable, but still benefit from the performance wins of virtual scrolling by not having to render it all on first load.
So very quickly, we will talk 
about the path forward.
Again, very early days, but very
encouraging results.  What is 
next? More invisible DOM 
integration, we just started 
exploring how the two play 
together, because, thanks to some great work from Rekina on our DOM team, a working version of it became available very recently; it is behind a flag in Chrome Canary.
We think there may be additional
primitives as well.  We talked 
about some, I will not go into 
detail here, but invisible DOM 
is just the first.  Framework collaborations: from the start, we've had proof-of-concept integrations in the repo with lit-html and Preact and, though it ended up getting cut from this talk, I did a basic React integration as well.  But it is very early there and we have not worked closely with the frameworks on that specifically yet.
More advanced use cases, so we 
saw some of those examples, but 
there are others that we have 
not gotten to yet.  Performance optimizations: it is pretty fast right now, but intensive optimization has not been
done yet.  And down-the-stack explorations: a question we get is, why doesn't the browser do this stuff better in the first place, why do you need a virtual-scroller, can't it just do this? These are discussions we are having externally and internally. 
It is a complicated story: a virtual-scroller can lock you into a simpler model, so you get wins there.  But it is possible that we can also pursue other types of optimizations further down the stack that get you some of the same benefits without you moving to a virtual-scroller, and those are some of the things that we're discussing.
So, the standards process: again, we are talking to browser vendors all along, but it is very early days for this and similar high-level features.  If you are
interested, I invite you to 
engage with us on GitHub.  We 
have a very active set of the 
discussions going on in the 
issues here.
And here is the GitHub repo, 
where you will find us.
And, with that, I have been excited to show you virtual-scroller.  I hope you are as excited as I am to see it come into the web platform.
[ Applause ].
SPEAKER: Thank you, Gray.  It is
time for another one? 
SPEAKER: Yes, another question. And a few people have been asking about the Chromebook -- yes, everyone here gets a 75 percent discount off one of these, and the place to get that is the registration desk. 
SPEAKER: Until 3:00PM. 
SPEAKER: Yes. 
SPEAKER: We have been told that 
people are going to the forum 
space, but the forum space 
doesn't have the discount code. 
Go to the registration. 
SPEAKER: I think they are quite 
good.  
SPEAKER: Have you been paid to 
say this? 
SPEAKER: Yeah, pretty much. 
SPEAKER: [ Laughter ]. 
SPEAKER: Let's do another round.
All right.  Now this one is 
going to be about CSS properties
and attributes.  You are going 
to see a series of letters in a 
row, I think we call them words,
and you need to decide whether 
it is a CSS property or an html 
attribute.  You have three 
seconds to guess each one, so 
you need to be fast.
Right, let's look at what is on 
the screen.  Here we go.
Default, touch-action, unicode 
-- okay.  This takes me back.
Speak, codebase, accesskey.  You remember that one?
Yeah.
Auto capitalize, that sounds 
useless.  I wonder what that is 
for.
Gap, sounds like a style thing. 
Gap, fashion, who knows.
Zoom, I remember that one.  So we will look at which way you all voted when it appears on the screen.  Right.
Yes, there we go.
So what is the voting like? 
SPEAKER: It is even split.  
SPEAKER: So we are kind of saying that unicode-bidi is the html attribute.
People assume the CSS properties are the ones with hyphens in, but that is not always true; a lot of SVG attributes have hyphens in them.  Okay,
we will look at the next set. 
SPEAKER: I'm curious about the 
code base thing. 
SPEAKER: So you are saying that 
html attribute for the code 
base, for speak and access key. 
The answer, of course, is -- 
SPEAKER: Ahh. 
SPEAKER: So what is code base? I
don't know this. 
SPEAKER: It is from the applet tag. 
SPEAKER: Is it still a valid 
tag? 
SPEAKER: Let's say probably 
maybe.  
SPEAKER: Just like everything in
html. 
SPEAKER: And speak is how a 
screen reader would pronounce 
that, so it is essentially a 
display, although not a visual 
display.  An access key, that's 
the CSS as well.  We will look 
at the next set.
Autocapitalize, bgcolor, resize,
gap.  
SPEAKER: Bgcolor is an html attribute. 
SPEAKER: From the olden days. 
SPEAKER: Autocapitalize is for an input, I imagine, as you are typing.
Yes.
  We will look
at the last block.  So everyone 
is saying that zoom is CSS, it 
is.  I built my career on top of
this CSS attribute. 
SPEAKER: Yes. 
SPEAKER: This is, like, when I 
was fresh out of university, at 
the BBC, I built a career out of
knowing all of the IE6 bugs.  I 
would be called to different 
departments, we have a bug, 
let's get Jake in.  I was like, 
I have this, I just need the 
element to put in zoom 1 and 
walk away.  Promotion, please. 
SPEAKER: Did they promote you? 
Amazing. 
SPEAKER: Nope. 
SPEAKER: [ Laughter ], all 
right.  Is that it? 
SPEAKER: Yes. 
SPEAKER: So yesterday, me and 
Jake came on stage to talk about
the app we built, Squoosh, and 
we used a web worker, but we 
were not honest about the rough 
edges of working with workers.  
But Jason and Shubhie coming on 
stage next have some ideas.
  So please welcome to the stage
Jason and Shubhie.
A Quest to Guarantee 
Responsiveness: Scheduling On 
and Off the Main Thread.
SPEAKER: I'm Shubhie Panicker, a
software engineer working on
Chrome. 
SPEAKER: And I'm Jason Miller. 
SPEAKER: -- and approaches for moving script off of the main thread.  Jason and I are in this space, exploring gaps in APIs for achieving responsiveness guarantees.  We are excited about the opportunity here, working with existing primitives and APIs, as we show in our talk.  
SPEAKER: So to illustrate this, we will explore this problem space using a demo.  This is an application that searches through photos as you type.  As you can see from the JavaScript-controlled red spinner animation, it is doing a fair amount of blocking work on the main thread.  And as that is happening, the app cannot respond to input, so the typing is queued.  So
looking closer, what we see if we pull up the profiler is something like this: a sequence of long tasks that block the main thread and cause the input queuing.  We can see it in a simplified view here: when we receive input and do processing in response to that input, like searching photos and rendering list items, we are skipping frames already.  In addition to that, if we receive additional input while that task is running, it will get queued and is only able to execute once the task has completed.
So this is data captured from 
real users on real websites in 
the wild, and it shows a break 
down of where Chrome was 
spending its time while handling
input.  So there's a lot of 
interesting data here and we 
don't have time to get into it. 
The main thing to look at is the amount of time we are spending in this V8.execute task.  That is Chrome running JavaScript during touch handling, and it is the biggest contributor to latency, both on average and in the worst case. 
SPEAKER: In the search example app, there are a lot of types of work happening.  And all of this different work has what we are calling deadlines.  So, for example, the user is typing in that search box; their input has to be responsive.  There are ongoing animations on the page; they have to render consistently and smoothly.  And then there's the heavy lifting of fetching search results, post-processing, and rendering those search results in time so they are relevant to the user's query.  It is difficult for apps to balance these competing needs, to reason about all of these different deadlines, and to keep everything meeting its deadline. 
SPEAKER: Right, so we have a 
bunch of different types of work
and each of these types of work 
has a different deadline.
And what we need to be able to work through this is priorities.
SPEAKER: There's a couple 
high-level approaches to try and
achieve responsiveness 
guarantees.  The first approach is just to do less work, and there are various ways of doing this; for example, in an infinite feed, you only render what is visible -- we saw that strategy in the virtual-scroller talk just now.  But this is not always possible.  Modern apps often have a ton of work to do. 
So a second strategy here is 
chunking up work and 
prioritizing these chunks of 
work.
In practice, though, this is 
also very difficult.
It can be impractical to achieve
this manually on your own as an 
app developer, we think there's 
a real opportunity here for 
frameworks to step in and help 
their users.  Frameworks are in 
a great position to ensure 
chunking and prioritizing of 
work.
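The chunk-and-prioritize strategy can be sketched as a small helper that runs queued tasks but yields back to the event loop once a frame's budget is spent, so input and rendering can interleave.  The 12ms budget and the setTimeout-based yield are illustrative choices, not a standard API.

```javascript
// Yield control back to the event loop so queued input and rendering
// can run before we continue.
function yieldToEventLoop() {
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Run an array of small synchronous tasks, yielding whenever we have
// spent more than budgetMs in the current slice.
async function runChunked(tasks, budgetMs = 12) {
  const results = [];
  let sliceStart = Date.now();
  for (const task of tasks) {
    results.push(task());
    if (Date.now() - sliceStart > budgetMs) {
      await yieldToEventLoop(); // let higher-priority work interleave
      sliceStart = Date.now();
    }
  }
  return results;
}
```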
So stepping back a bit, what we 
need here is some way to provide
the chunks of work, or tasks, to
a system that can hold them in a
task queue and this system can 
make good decisions about when 
to take tasks out of the task 
queue and execute them at an 
appropriate time based on 
everything that's going on.
And this is the definition of
a scheduler.  
SPEAKER: Google maps is a great 
example of an application that 
uses a scheduler to keep 
interactions smooth.  It has to 
manage multiple interactions and
events and it does it 
concurrently.  It schedules work
and gives higher priority to 
input response tasks.  We can 
see it here.  Let's say I'm 
panning the map and, as I'm 
panning, additional tiles are coming into the viewport and need to be loaded.  If I stop panning and pull up the drawer at the bottom, that's the highest-priority task.  So the tiles being loaded need to be de-prioritized. 
SPEAKER: So a key aspect of a scheduler is the ability to execute work at the best time.  And the best time is based on everything that's going on: various factors like what type of task it is, what is important to the user right now, what the overall state of the application is, what the internal state of the browser is, etc.
So, to understand this notion of
best time, we have to step down 
a level and look at the 
browser's rendering pipeline.  
The browser is periodically pumping frames, every 16ms for a 60 frame-per-second display rate, and each frame has a set of things that happen in sequence.  For instance, we have requestAnimationFrame callbacks followed by style, layout, and paint; in Chrome, input handlers are handled before the requestAnimationFrame callbacks.  So there is limited time to do the urgent work that needs to happen in the current frame.  And then the app has to immediately start thinking about preparing for the next frame.
And the third type of work here is idle work, which happens in whatever time is left over in the current frame -- or there might be plenty of idle time if no frames are being rendered.  
So this is the terminology that we are using for these types of work.  User-blocking tasks are for the current frame.  This is typically to provide the user an immediate acknowledgement of what they are doing.  So, in our example app, this might be keeping the typing interactive in the search box, keeping the animations going on the page, and overall keeping the page responsive -- buttons should be toggleable.  Default work is the next category.  This is typically user-visible work preparing for the next frame, or a future frame.  In our example, this would be the work of post-processing, preparing the search results, and rendering them in time.
And, finally, the third category, idle work, is work that is not user visible.  This happens at the end of the frame or when no frames are being rendered: analytics, back-ups, syncs, or indexing.
So, on the right here, we have listed some existing primitives -- existing ways an app developer can submit work to the browser to target these levels.  So requestAnimationFrame and friends are good for user-blocking work.  Microtasks do not yield to the event loop, so they are only suitable for truly blocking work, and we see cases where developers are doing non-urgent work in microtasks without knowing it is blocking rendering.  For default work, we have setTimeout zero and postMessage; these are hacks and workarounds.  There's not a real primitive here, and we are working to fill this gap.  For idle, requestIdleCallback is a great API.
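That mapping of priorities to today's primitives can be wrapped in one helper.  The wrapper below is purely illustrative, not a proposed API, and it falls back to setTimeout where a primitive is unavailable (for example, outside a browser).

```javascript
// Dispatch a callback using the primitive that best matches its priority.
function schedule(priority, callback) {
  switch (priority) {
    case 'user-blocking':
      // Work for the current frame: requestAnimationFrame where available.
      if (typeof requestAnimationFrame === 'function') {
        return requestAnimationFrame(() => callback());
      }
      return setTimeout(callback, 0);
    case 'idle':
      // Work that can wait: requestIdleCallback where available.
      if (typeof requestIdleCallback === 'function') {
        return requestIdleCallback(() => callback());
      }
      return setTimeout(callback, 0);
    default:
      // "Default" priority has no real primitive today; setTimeout(0) is
      // the usual workaround the talk calls a hack.
      return setTimeout(callback, 0);
  }
}
```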
So JavaScript schedulers can be 
built today using these 
primitives.  And now, while it is possible to build a scheduling system in JavaScript, they suffer from gaps, primarily because they don't have enough control or signals to properly control scheduling.  So we will go through examples.  For example, we see JavaScript schedulers trying to estimate the frame deadline.  They do bookkeeping, trying to guess at it, but they do it poorly, because it is not possible to do it well without knowing the browser internals.  So we are considering exposing an API for that.  isInputPending is another useful signal for schedulers, and we are actively exploring an API.  And
then there is other coordination
work.  So, for example, handling
fetch response priorities is 
pretty relevant, if you are 
doing urgent work for the 
current frame, you don't want 
your low-priority fetch 
responses to come in and 
interrupt that.
In practice, though, there's a 
lot of other work that is 
happening in the browser, the 
browser might initiate various 
call-backs such as ready state 
change for XHR, or a post 
message from a worker, there is 
internal work, and it is not possible to coordinate priorities across all of this.
So this got us thinking how 
about moving the scheduler one 
level down and integrating it 
directly with the event loop 
where you already have most of 
these signals and a lot of great
information.  This would solve 
an additional problem, the 
coordination problem between 
multiple parties in the app.  If
you have third-party content, or
embedded libraries, or legacy 
code or other frameworks, they can coexist and use the same system to express priorities.
So this is a very early sketch 
of what an API might look like.
The key thing here is a set of 
global task queues targeting 
each priority level.  It is 
simple and straightforward 
compared to using a myriad of 
different APIs.
The second thing is we think it 
will be useful to have a notion 
of user-defined virtual task 
queues, and it gives the 
developers more control over, 
like, managing a group of tasks 
and doing bulk operations, like 
update priority, cancelling the 
tasks or flushing the task 
queues if the app is going away.
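A tiny userland model of that sketch might look like this: one queue per priority level, drained highest-priority-first.  The real proposal would integrate with the event loop itself; the class and method names here are placeholders echoing the sketch, not a shipped API.

```javascript
// Priority levels from the talk, highest first.
const PRIORITIES = ['user-blocking', 'default', 'idle'];

class Scheduler {
  constructor() {
    // One global task queue per priority level.
    this.queues = new Map(PRIORITIES.map(p => [p, []]));
    this.scheduled = false;
  }
  postTask(priority, task) {
    this.queues.get(priority).push(task);
    if (!this.scheduled) {
      this.scheduled = true;
      // setTimeout stands in for the real event-loop integration.
      setTimeout(() => this.drain(), 0);
    }
  }
  drain() {
    this.scheduled = false;
    // Always empty the highest-priority non-empty queue first.
    for (const p of PRIORITIES) {
      const q = this.queues.get(p);
      while (q.length) q.shift()();
    }
  }
}
```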
SPEAKER: So here we can see a 
simplified version of that map 
scheduler that we looked at 
using this task queue API.  So 
first we hook into the user-blocking and default task queues to give us a high- and a low-priority queue, and then we listen for
pointer move events.  Each time we get an event, we post a task with the coordinates for the pointer move.  The pan translates the map tiles, and then it might queue a low-priority task to, you know, detect any tiles that have moved into the viewport and potentially load those tiles.
The thing to note is, if we receive a new pointer move event before we have invoked the load-more-tiles task, the pointer move is given a higher priority than loading more tiles.  That is what we want; we give higher priority to input-driven tasks.  And let's say that the team behind maps wants to log response times to pan gestures; that's a good use case for idle-priority tasks. 
SPEAKER: Here, in green, the rendering happens in time.  You can see that the work is chunked up: there is high-priority work at the beginning of every frame, in purple and green, followed by default-priority work, in yellow, to prepare for the next frame.
We don't know what the end game is going to be.  This is a great time to give us feedback and to chart the course here.  For 
developers, we think that there is an opportunity here with improved scheduling, even just from properly using existing primitives.  For framework authors, we want you to consider a scheduling system and collaborate with us now to develop the right set of APIs in this space.
React's work on concurrent mode and time slicing has shown that frameworks can really play a role in helping apps improve their responsiveness.
And we are already working with 
React and actively looking to 
form partnerships with other 
frameworks and apps.  This is a
link to our GitHub repo and 
filing issues is a great way to 
get that dialogue going. 
SPEAKER: What about work that cannot be chunked? We have a lot of jobs that need to be executed where it is almost impossible to break the work up.  Here is an example that illustrates what I mean.  Let's say we have a text editor that does live JavaScript bundling as you type.  If I load in a decent amount of code, things are a little bit slow.  
So every time the bundling 
process kicks in in response to 
the input, it blocks the thread 
and this causes the cursor to 
freeze, it queues up the input 
until bundling is completed and 
it disrupts the user experience.
You can see it in the CPU profiler on the right.  It is difficult to break that work up into 50ms chunks for two reasons: I did not write the bundler code, and modifying it would be a lot of work for me.  Plus there is a whole bunch of libraries used to make these things happen, and downloading, parsing, and evaluating those dependencies on the main thread blocks it as well.  So using background threads allows us to offload that work so the main thread can keep handling input.
There's a few use cases that 
lend themselves well to this 
approach.  If you are building a computer-aided-design tool, a game, or a coding tool, these are great places to start with threads.  The same goes for AI, machine learning, and crypto.  If these are the things you are doing, you should start here.  In the browser, our primitive for threading is the worker.  If you have not used workers in a while, they are basically threads.  They have a simple messaging interface: you can send a message to the worker and you can receive a message back.  They have no DOM access whatsoever and a very limited global scope -- just fetch and module stuff -- and they shipped 10 years ago and are available everywhere.  So the API for workers looks like this: you instantiate the worker, passing in the name of the script, and then we can listen for messages out of the worker and send messages to the worker.
So this is an app, and we would like to, say, invoke a computeHash function.  We are going to pass the contents of a file, expressed here as an array buffer.  The second argument to postMessage is interesting: this tells the browser that, rather than structured-cloning the array buffer, it should transfer it.  And once computeHash has completed, the worker will postMessage back to our thread and we will be dropped into the message handler on line three. 
SPEAKER: So this postMessage of the data incurs a serialization on the main thread; the message is queued up and hops over to the worker thread, followed by deserialization.  End to end, this is called a thread hop.  This thread hop has a cost: the data is subject to what is called structured cloning, which is a copying behavior that recurses through the JavaScript object, so the size of the data is proportional to the cost of the thread hop.
So one downside of the postMessage API is that it does not have a notion of statefulness between request and response: if you make a lot of requests, you get a lot of responses back, and it is hard to correlate the responses to the requests. 
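A common fix for that correlation problem is to tag each request with an id and resolve a promise when the matching response arrives; this is the core of what the proxying libraries mentioned later automate.  The wrapper below is a sketch, not any particular library's API.

```javascript
// Wrap a Worker-like object (anything with postMessage/onmessage) in a
// promise-returning call(name, args) function, correlating responses to
// requests by id.
function makeRpc(workerLike) {
  let nextId = 0;
  const pending = new Map();
  workerLike.onmessage = ({ data }) => {
    const { id, result } = data;
    pending.get(id)(result); // resolve the matching request
    pending.delete(id);
  };
  return function call(name, args) {
    return new Promise(resolve => {
      const id = nextId++;
      pending.set(id, resolve);
      // The worker side is expected to echo the id back with its result.
      workerLike.postMessage({ id, name, args });
    });
  };
}
```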
SPEAKER: We've seen how to communicate with a worker using postMessage; you can also use MessageChannel.  You can pass a port to another context, frame, or worker, and you can message between the two; ports have the same interface as we saw.  The other option is BroadcastChannel, a message channel that is shared with all contexts associated with an origin -- all tabs, frames, workers, and ServiceWorkers.  You instantiate a channel with a name.
And soon, we are actually going 
to have a fourth way to 
communicate, and this is 
transferable streams.  It lends 
itself well to things like audio
and video where the format you 
would want to use to express 
these things is streaming.  The 
thing with all of these APIs is 
they are message-based.
And based on some of the common 
usage patterns that we've seen 
and what we heard from 
developers, we think there's a 
case here for a higher level 
API.
So we've seen solutions to this in userland, in libraries like Comlink and via.js.  These coordinate messaging across boundaries by abstracting postMessage, using something called proxying. 
SPEAKER: So proxying certainly improves over raw postMessage, but every call still has the cost of a thread hop, and that can come as a surprise to developers.  Platform gaps can cause leaks in these APIs; they don't have the notion of a background thread, or a concept of managing threads and re-sizing the pool; and embedded libraries are not able to share the same thread or thread pool.  And for complex APIs, it can be impractical to re-create the API surface cross-thread.
So this raises the question, is 
there an opportunity here for 
better integration with the 
browser? Is there an opportunity
to provide a more compelling 
API? 
SPEAKER: Right. So we think there's a use case here for something that looks like this. Here we are passing the name of a function in some other context, and some arguments, to a theoretical postTask method. This postTask method would return a promise that resolves to the return value of that function somewhere else.
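As a rough sketch of what that theoretical postTask could feel like: a registry of named functions stands in for "some other context", and the promise resolves to the function's return value. This is purely illustrative; the register and postTask names are ours, and a real implementation would run the function off-thread:

```javascript
// Registry of named tasks, standing in for code living in
// another context (e.g. a worker).
const registry = new Map();
function register(name, fn) {
  registry.set(name, fn);
}

// Hypothetical postTask(name, ...args): returns a promise that
// resolves to the named function's return value.
function postTask(name, ...args) {
  const fn = registry.get(name);
  if (!fn) return Promise.reject(new Error("unknown task: " + name));
  return Promise.resolve().then(() => fn(...args));
}

register("add", (a, b) => a + b);
postTask("add", 2, 3).then((sum) => console.log(sum)); // logs 5
```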
And this abstract code helps us 
move from a message passing 
model to a more task-oriented
model.  
SPEAKER: So in looking at the 
requirements for a better API, 
we considered other platforms,
like iOS and Android. iOS has Grand Central Dispatch, a popular API that is loved by developers. Android has something called AsyncTask, which is a very minimal and clean API. We talked to
framework developers and experts
in usage of these APIs that were
deeply familiar with the pitfalls. What we learned in terms of the basic requirements for our model is: good ergonomics, for tasking versus coordinating; a native thread pool, that is, sharing with embedded libraries and other parties in the app; and
system-controlled thread 
management where the system can 
be in control of making 
decisions on re-sizing the 
thread pool or decisions on 
where to run which tasks.  So we
set off on a path towards 
building a basic task-queue-based API, inspired by Grand Central Dispatch. And a naïve API might look like this: we have three tasks, A, B, and C.
And each one depends on the 
results of the previous task.  
And we can submit the task from 
the main thread over to the 
worker thread and we start 
getting responses back.  So 
here, for three tasks, we paid 
the cost of six thread hops.
So there's a few downsides here and gotchas.
So, for one, the thread hops can
be expensive on lower-end 
devices, depending on the data 
size, it can be up to 15 ms. And this adds up: if the hops are in the path of user interaction, it can add up to multiple frames' worth of latency. On Android, we are seeing this in the real world. So posting back to the main thread is not a good idea. Besides the latency
issue, it can cause congestion 
from queue build-up. And you might remember from the earlier main-thread scheduling talk: we
are doing all of this work to 
chunk up our work and execute 
our high-priority and our 
default priority work and all of
these postMessages coming at 
random times messes with main 
thread scheduling.  A second 
thing to note is that posting by default, even to the current thread, can be pretty bad, and we saw this in Grand Central Dispatch with the dispatch-to-current-queue API.
SPEAKER: So this brings us to a 
new proposal that we have that 
incorporates some of the 
learnings from other platforms. 
It lets developers avoid sending intermediate results back to the main thread: you can chain tasks together and pay the return cost only once. It also
minimizes thread hops using a 
built-in sticky thread pool.
What we want is the experience 
that you see up here on the 
right.  So let's dive into that.
If we re-visit the code editor 
that we showed earlier that 
bundles JavaScript as you type, 
if you do this with task 
worklet, we can leverage the 
features to improve performance 
very considerably, because it avoids transferring data between threads: the bundling and minifying tasks use the same AST generated by the parse task. Only the resulting minified code, a small string, is sent back to the main thread.
The implementation looks like this. This is the worklet module: you define a class with a process method. On the main thread, we coordinate the data flow using postTask. So we are going to parse the code and pass the resulting AST through the bundle and minify tasks. And the important thing
to note here is that none of 
these variables are holding 
values, these are pointers to 
data that exists in a thread 
pool.  Data transfer back to the
main thread only happens when we
await the result property of 
that last task.
So doing this in a typical worker implementation takes six hops, as
we saw.  We executed three tasks
and we need to pass down and up 
for all of them.  In task 
worklet, this is two thread hops
because we can transfer data 
between tasks.
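The chaining idea can be sketched like this: postTask returns a lightweight handle, dependent tasks accept handles as arguments so intermediate results stay "pool-side", and data only moves when you read `.result`. This is a toy model of the described behavior, not the actual task worklet API; the hop counter and all names are illustrative:

```javascript
let hops = 0;
const pool = []; // values that live "in the thread pool"

// postTask runs a function "pool-side" and returns a handle.
// Handle arguments are resolved in the pool, so chaining tasks
// never transfers intermediate data.
function postTask(fn, ...args) {
  const resolved = args.map((a) => (a && a.__handle ? pool[a.id] : a));
  const id = pool.push(fn(...resolved)) - 1;
  return {
    __handle: true,
    id,
    get result() {
      hops += 2; // one hop down, one hop back with the data
      return pool[id];
    },
  };
}

// A stand-in for the parse -> bundle -> minify pipeline.
const parsed = postTask((src) => src.split(" "), "const a = 1");
const bundled = postTask((ast) => ast.join("|"), parsed);
const minified = postTask((code) => code.replace(/\s/g, ""), bundled);

console.log(minified.result); // "const|a|=|1"
console.log(hops); // 2: only the final read crossed threads
```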
Task worklet is backed by a 
thread pool.  So let's say that 
we start off with a task that 
produces a large set of images. 
When we post a task with some of
the images as the argument, it 
will attempt to run in the 
thread where that data is 
already available.  So data is 
never transferred between 
threads in this case and that 
leads to fewer thread hops. To take advantage of pooling, we may resort to transferring data between threads to get parallelization. And finally,
let's say the result we are 
looking for here is a comparison
of the number of cats versus dog
photos, because that is what is 
important in the end.  In this 
case, the only thing we transfer back to the main thread is a single integer, and
that is extremely cheap.
So we have been thinking about what the future of web development off the main thread might look like. Today we have libraries like Comlink that use reflection to emulate objects in workers so code can be moved there seamlessly. We are moving to a task worklet model, where developers express multi-threaded work as named tasks, and the task graph optimizes execution and data flow. This is an early proposal, and we are looking for feedback and real-world use cases. There's an implementation in Chromium behind a feature flag, and we have a polyfill, source code, and demos available at this GitHub repo; there's a link at the end of the presentation as well.
SPEAKER: So there's been a lot 
of interest in this idea of 
multi-threaded JavaScript over 
the last couple of years, there 
are several independent 
explorations by various 
frameworks and apps.  We dug 
into this in the last few months
to understand how far can we get
with just using the worker API 
as a way to achieve 
threadedness.  And, to set some 
context here, a new worker 
doesn't just spin up a raw OS 
thread, it actually creates its 
own JavaScript environment on 
top of this.
And part of that is what is 
called a V8 isolate that has a 
non-trivial weight, in addition 
to the weight of the OS thread. 
A key implication here is that 
the worker, by creating its own 
JavaScript environment, is not 
able to share data or code with 
the main thread.  This is 
fundamentally different from 
background threads on other 
platforms and other
languages.
So this has implications in 
terms of using workers in a 
mainstream way and, by that, I 
mean when the worker is in the 
path of user interaction.  In 
particular, we looked at two app
development models using worker.
The first one is doing state 
management in a worker, and this
is where you can do sort of the 
heavy lifting, business logic-y 
stuff in a worker. And the 
second model goes even further 
and does the bulk of rendering 
in the worker. While the worker doesn't have access to the DOM, there are libraries like WorkerDOM, so you can do virtual DOM updates in the worker and then ferry them back to the main thread. So real apps have been
built using these models, 
however, there are some 
significant challenges that we 
want to sort of highlight here 
if you are planning to go down 
this route.
The first thing is that it is 
hard to have synchronous access 
to a worker, and real apps need synchronous access to app state. This means you now have to maintain and replicate the app state in both places and synchronize it continuously.
And this has a cost in terms of 
thread hops.
The second thing here is that 
the worker has to be 
bootstrapped with all the script
and modules that it needs.  As 
we said, it does not share code 
with the main thread and this 
has implications for start-up 
delay.
So we ran benchmarks to dig into the cost of a worker; these are numbers from a mid-range Android device. Start-up takes upwards of 10 ms; this is Chrome on Android. A thread hop varies from 1 to 15 ms, depending on the device and the size and type of the data. And look out for a blog post that will accompany this talk in the next week or two; we will have detailed links to the benchmarks and data there.
We set up more realistic benchmarks: we built apps representing the app development models that we mentioned, state management in a worker and rendering in a worker, and we did a ton of runs on real mobile devices, both with and without a worker. And we looked at a variety of metrics, everything from loading metrics to memory metrics to responsiveness metrics such as frame rate and input latency, which we approximated using cycle time. So the blog
post will have more details on 
this.  But I do want to 
highlight one bit of interesting
data.
So this is basically showing runs with the app representing rendering in a worker. The red are runs with the worker, the blue are runs without. What we are seeing here is that, with the worker, we are seeing a higher and more consistent frame rate. But on the flip side, we are seeing higher input latency.
So there's a fundamental tradeoff here between improved smoothness and input latency.
Workers are able to free up the main thread to focus on rendering, and less script on the main thread means fewer hiccups. But input latency suffers from thread hops. And the worker environment is a limited environment: it is not just the DOM; there are many APIs that are not available, like media, audio, etc.
SPEAKER: So the key thing to 
take away from this is workers 
might be able to make your 
rendering smoother, but it might
do it at the expense of input 
delay.  There are cases where 
this is worth it. amp-script renders using workers to sandbox potentially misbehaving JavaScript. Slow or problematic code running in the worker's emulated DOM cannot impact the AMP document. So in AMP, the benefits they get from sandboxing untrusted code outweigh the latency they get from transferring events. So we
wanted to summarize when to use 
workers, but there is no perfect
rubric
for this.
There's a couple of hints to 
use, if you have code that 
blocks for a long time, simple inputs and outputs, or a request/response model, you are in a good position to start off with workers. If you
have code that relies on the DOM
or is in the path of input 
response or code that needs 
minimal overhead, you might want
to start off with a different 
solution.  You can approach 
workers later.
So when adopting a threaded approach to state management, make sure that your state management and business logic outweigh the cost of creating a worker and sending and receiving messages. Make sure
that the worker is pulling its 
own weight.
So we're at the beginning of a fairly major shift in how applications are developed for
the web.  We are excited to 
explore new possibilities for 
effective scheduling and 
throttling, and we hope that all
of you are too.  
SPEAKER: So we want to leave you
with some of these key messages 
from our talk today.  It is hard
to achieve responsiveness 
guarantees because there is so 
much work happening in modern 
apps, and we think scheduling is
a way to tackle this, and we can
improve scheduling with existing
and new primitives and 
frameworks are in a good 
position to play a big role 
here. In terms of offloading work from the main thread, you can use workers as an extension of better main-thread scheduling; some types of work are better suited to workers than others, and new APIs like task worklet are going to make it compelling to utilize workers for scheduling.
So that's about it, we will have
a blog post coming with more 
details.  These are the, again, 
the links to the GitHub repos.  
Issues on the repos are very welcome and appreciated, and a great way to keep the feedback loop going. Do not hesitate to reach
out to us on email or Twitter.  
Thank you.
[ Applause.] 
SPEAKER: Okay, another quiz 
question. 
I don't know if you have heard 
of JavaScript before -- 
SPEAKER: But TC39, a technical committee of Ecma International, is the group that defines future JavaScript features. And the way it works is people submit proposals and we discuss them.
So you are going to see a name 
of a proposal and you need to 
say if it is a real or fake 
proposal. 
SPEAKER: Just something we made 
up.  
SPEAKER: Okay, you will get a few seconds per item. So here they come: Object.seal, logical hash pipe, that sounds delicious.
SPEAKER: I'm curious to see how it works.
SPEAKER: Sounds logical. 
SPEAKER: Exceptional seal. 
SPEAKER: Temporal permanence, 
optimum primes, these are just
all words.  
SPEAKER: Imagine going to a TC39 meeting, it is just all words, words words words.
SPEAKER: Soon we are saying 
everything in words. 
SPEAKER: I don't know anymore. 
SPEAKER: Smooth operator. 
SPEAKER: That sounds great.  
SPEAKER: The question has 
closed, okay. 
SPEAKER: People are confident about Object.seal.
SPEAKER: Power rangers, we are not sure about logical hash pipe. Let's see the answers.
Object.seal is real; blind ref
is something you complain about 
in soccer. 
SPEAKER: These are the 
proposals, I'm not promising it 
is making it to JavaScript.  It 
is a proposal.  
SPEAKER: The next group, a tiny 
-- logical assignment.  
Let's
have a look.  
SPEAKER: So a realm is a global,
if you create an iFrame, you 
create another realm.
Excellent stuff.
I like the way you were like, yeahhhhh, Jake doesn't know what he is talking about.
SPEAKER: I don't want to start a
Twitter fight. 
SPEAKER: Spreadable mix-ins,
Fake. 
SPEAKER: It is a good poll.  
People think something about
mix-ins. 
SPEAKER: So optimum primes is a 
robot. 
And optional chaining --
yes, I like that.
55 percent of people thought 
smooth operator was one.  No,
that's a song.
Is that it? 
SPEAKER: That's all of the 
groups. 
SPEAKER: So who is speaking 
next? 
SPEAKER: Well, up next to talk 
about application architecture 
stuff, a couple of people who 
have been on and off the stage 
for the last couple of days, it 
is Paul and Surma.  Big round of
applause!
Architecting Web Apps - Lights, 
Camera, Action!
SPEAKER: I was on the way to the
office this morning, and I 
realized it is very much like 
web dev.  Rush hour, like all 
the traffic, every time I get to
the office, 9:00AM.  So all of 
these people are in their cars, 
everyone is just rushing and 
nobody can move through anybody 
else.  I think that is like web 
because -- you have this main 
thread, you have one thread, on 
the main thread, you have 
styles, JavaScript, layout. 
SPEAKER: Okay. 
SPEAKER: Almost everything is 
running, and everybody is 
competing for this one resource,
the road, or in this case the 
main thread. 
SPEAKER: So, like, Mr. Framework has a car, Mr. Paint has a car, Mr. Business Logic has a car, and they all want to be on the road and it is full.
SPEAKER: And then everybody gets
the angry tweets and sees all of
this performance advice and 
nobody knows what to do because 
it is constrained into one 
place. 
SPEAKER: Yeah, look at the 
production value. 
SPEAKER: I know, and it is -- 
there we go.  
SPEAKER: Videos. 
SPEAKER: I don't want to see it 
any longer than necessary.  So 
look, rush hour.  As we 
described in that video, that is
kind of how we feel when we look
at the web at large.  We look at
it and we go, it -- all of this 
code should be here, but it just
feels like the traffic is the 
problem.  There's too much going
through the main thread.  
SPEAKER: And, traditionally, when the main thread is full, overworked and underpaid, you would say, cool, I will use threads. On any other platform, you can spin up a thread, run it, call a function, and everybody is happy. But JavaScript is inherently single-threaded and you cannot do that.
SPEAKER: Each thread is its own 
universe, like we heard, there's
a V8 isolate.  So you cannot 
just call this on another 
thread, but you have shared 
stuff that you can work on.
So that's a challenge, and then 
it gets more interesting because
say, for example, you are trying
to build, I don't know, a chess 
game for argument's 
sake.  It takes a few 
milliseconds to calculate a 
move. 
SPEAKER: And it gets exponentially more difficult, and you build it with DOM, and you know, a bright spark says we should do 3D. And you were like, I was
already behind on my project -- 
SPEAKER: And a brighter spark 
says, how about VR? 
SPEAKER: Yeah, I want to stand 
on the board. 
SPEAKER: I want to be on the 
game. 
SPEAKER: There is already rush 
hour. 
SPEAKER: Turns out, frame rate 
is very important when it comes 
to VR. 
SPEAKER: There can be voice, or 
many other things that it could 
be. 
SPEAKER: It is unlikely for you 
to do it on the web currently. 
SPEAKER: So this is the question
that we have been thinking 
through for the last little 
while.  Is there anything we can
do or suggest or think of to 
help? 
SPEAKER: We have two birds, we 
are looking for a stone. 
SPEAKER: Exactly.
  [ Laughter ]. 
SPEAKER: Okay. The actor model.
So this, we kind of stumbled over this. The actor model is, as it says right here, 45 years old, and it is used, or made popular, by languages like Elixir and Pony, which use it to this day, and successfully so, and we realized it is a really good fit for the web.
SPEAKER: Because what it does, 
it kind of -- it makes a feature
of that single threadedness of 
JavaScript, but we like to 
explain -- if you have come across it before, great. If you have not come across the actor model, we like to explain it in a specific way. Check this out.
SPEAKER: So when we did Supercharged, you saw us on the screen; behind the cameras, we had a crew.
And that means that we had one 
person working the camera, one 
person worrying if the audio was
good, a director, and each of 
these people were responsible 
for that device.  
SPEAKER: Instead of going over and pressing one another's buttons or messing with settings, these people have to communicate with one another to get the job done. Like actors in a system: you have to send messages to one another and communicate and collaborate to get the final thing working.
SPEAKER: So that's kind of, you know, more video, more production value. That is good, right? So that's where we see a mentality that fits the web really well: when you start thinking about where you can draw lines around individual pieces of responsibility in your app. Instead of thinking about classes and calling a method on another class, you think about what message to send to request that something happen.
SPEAKER: Like areas of 
ownership, so at a conceptual 
level, how would you think this 
through, what would it look 
like?
So imagine that you have an 
actor, and its job is to run 
your user interface.  That's the
area of ownership, that's what 
it does.  That is the job, and 
only that.  You might also have 
another actor whose job it is to
handle state for your 
application and, yet, another 
one that handles the storage.  
And imagine in your app, a 
typical interaction is 
favoriting the item.  The 
interface sends to the state and
says this was favorited and the 
actor handling sends it to the 
storage and says we need to 
remember this.  And we can at 
this point introduce a new actor
into this story, something that 
can broadcast. When the state 
changes, we want to send that 
both to the user interface and 
the storage to the -- to be 
reflected in both.  
SPEAKER: Yes. 
SPEAKER: As you can see here, that's a separation of concerns.
It really helps you to think about your app in a different way, like where does new code go; it is a good way to structure the app.
SPEAKER: Absolutely. We use separation of concerns when we talk about HTML, JavaScript, and CSS, or when we talk about components in a modern framework. It is another version of the same story.
SPEAKER: And another benefit 
that you get and we heard about 
this problem a couple times is 
we often see big chunks of 
JavaScript just run, frameworks 
updating the virtual DOM and 
then the DOM or something like 
that, and then with this 
pattern, we can introduce a 
natural breaking point where you
can give the browser a chance to
ship a frame. Every time you send a message, that is a point where you say, okay, the browser can ship a frame and intervene if we're out of frame budget.
SPEAKER: So another side effect, a positive one of this, is location independence. We will come back to this. But think about actors: they are not all the same, they have different requirements. Some actors do not need access to the main thread, for example, because of the kind of work they do, and we can run them in different locations, not on the main thread. As a result, and we will come back to that, maybe we just bought ourselves a little capacity at rush hour.
SPEAKER: As long as the messages
are delivered to the actor, the 
actor will do the same work that
we did before, and it will respond with the same messages to the rest of the app, and it keeps behaving the same way no matter where it runs. And because of that location independence, we can lower the likelihood of long work impacting the main thread and making the app janky.
SPEAKER: So that is what it is, 
but I think that maybe -- I like
seeing code.  I think that code 
helps.
And so, we are not launching a product, or a framework, or even a library. We just want to have a chat with you about architecture. We have been using this stuff for a while with our colleague, Tim, putting things together, and we will show you the code we have been using.
SPEAKER: We welcome you to try it out, and you can write your own. We don't care whether you use our version or a different version; we care about the concept of the architecture.
SPEAKER: So think about a stopwatch app: you start it and the counter begins, you have seconds ticking up, you can pause and play, that kind of thing. And then you might reset the time.
So, in our code, we have this actor base class, and that top function up there, hookup, is the first thing that you need to know. The job of hookup is to register the actor in the system so you can send it messages later on. Because, ultimately, we won't know where this actor is in the system, so we need, well, it is like a registry where we can say: I'm going to tell you that there's an actor, and it is found under this name.
SPEAKER: It is the equivalent of custom elements: there's customElements.define, which says this custom element is known by this name, and hookup does the same for actors.
SPEAKER: So we have two actors, the clock and the UI, and in the bootstrap we instantiate the UI and hook it up so it is available under a string name, and we do the same with the clock, like so.
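The hookup/lookup pair described here can be sketched as a simple registry. The names mirror the talk, but this implementation is ours: real versions route messages across threads, while here delivery is a direct same-thread call. The toy clock and UI actors are illustrative:

```javascript
// A minimal actor registry: hookup registers an actor under a
// name, lookup returns a handle that can only send() messages.
const actors = new Map();

function hookup(name, actor) {
  actors.set(name, actor);
}

function lookup(name) {
  // Callers get a handle with send(), never the instance itself.
  return {
    send(message) {
      const actor = actors.get(name);
      if (actor) actor.onMessage(message);
    },
  };
}

// Toy actors: a clock that counts ticks and notifies the UI,
// and a UI that records the last state message it received.
const ui = { last: null, onMessage(msg) { this.last = msg; } };
const clock = {
  ticks: 0,
  onMessage(msg) {
    if (msg.type === "tick") {
      this.ticks++;
      lookup("ui").send({ type: "state", ticks: this.ticks });
    }
  },
};

hookup("ui", ui);
hookup("clock", clock);
lookup("clock").send({ type: "tick" });
console.log(ui.last); // ui.last is now { type: "state", ticks: 1 }
```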
So now we can talk about how you
might implement something like 
the clock itself.
And, in our case, when you have 
something like this, it is a -- 
almost like a pure data actor.  
It doesn't have any need to go 
near the DOM, it just wants to 
tick and pause and all of those 
kinds of things. 
SPEAKER: What do you need? A set
interval and that's it. 
SPEAKER: A timeout? I have a thing against setInterval; long story, find me after and I will explain why. We will model
this as a state machine.  We 
start with a paused state, we 
can transition to a running 
state.  Every second, we will go
to the tick state, and that will
take us back to the running 
state. You can imagine being in this tick/running loop, like a clock, indeed. We can pause, and we can reset and go back to the paused state.
SPEAKER: That's a nice pattern that goes along well with the message passing, because all of the triggers in the state-machine world can work like this: you send a message to the state machine, it is ingested, and a transition happens.
SPEAKER: Absolutely. 
SPEAKER: So we found that there are a lot of implementations of state machines out there in the wild, and that is not very unexpected. We have been using XState, written by David from Microsoft; it allows you to declare your states as a JSON object, and then you pass this to the Machine constructor and you get a state machine.
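The shape of such a declaration can be sketched with a tiny stand-in: states and their transitions as a plain object, plus a transition function. To be clear, the real XState API differs from this; the clockChart and transition names are ours and only illustrate the idea:

```javascript
// The clock's states and events declared as plain data, in the
// spirit of an XState chart (simplified, not the XState API).
const clockChart = {
  initial: "paused",
  states: {
    paused:  { on: { start: "running", reset: "paused" } },
    running: { on: { tick: "running", pause: "paused" } },
  },
};

// Pure transition function: given a state and an event, return
// the next state; unknown events leave the state unchanged.
function transition(chart, state, event) {
  const next = chart.states[state].on[event];
  return next !== undefined ? next : state;
}

let state = clockChart.initial;                 // "paused"
state = transition(clockChart, state, "start"); // "running"
state = transition(clockChart, state, "tick");  // "running"
state = transition(clockChart, state, "pause"); // "paused"
console.log(state); // "paused"
```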
SPEAKER: Yeah. So our clock extends the actor base class, and we instantiate our state machine. We say go to the initial state, which is that paused state. And then later on, imagine that we receive a message; our actor base class has a message handler: I have a message, what do I do? In this case we assume that the state is changing, so we use the state machine's transition to get from where it was to where it needs to be, so the message is driving the clock. And we inspect the new value. If the clock is running, we set a tick timeout for one second. If we tick, we increment the tick count. And then the clock will send itself a message. Now, it could call its own functions, but we tend to be a little bit -- we like it fair.
So we want to make sure that the
clock sends itself a message 
like every other app has to send
itself a message. 
SPEAKER: Like a message queue: it processes one message and goes to the next message. Calling our own function directly would be cutting in line; if we want to keep it fair, we queue the message and wait until the end like a good human.
SPEAKER: On pause, we cancel the tick; on reset, we reset the tick count and send a message to pause. And let's talk about, oh no, sending messages. So the clock is going to have to send a state update to the user interface so it can reflect the time going up, and this is the opposite of hookup. This is lookup.
SPEAKER: Hookup, lookup.
SPEAKER: Okay, hookup and lookup. So we look up the UI, and then we can send it a message.
SPEAKER: It is very important to note here that the handle, the UI variable, is not the instance. You cannot change a member variable of the class; it is an object with a send method, and that is the only way you are allowed to interact with any of the other actors.
SPEAKER: Exactly.  So in this 
case, we're going to send the UI
a message and the message is 
going to say what the time is 
and whether or not the clock is 
running.
We found that TypeScript is really helpful here, because the messages need to be well-formed and understood, and there needs to be a data contract. We found, from practical experience, that TypeScript is a good way of saying: this is a number, a string, another object, and so on and so forth. So just take that as what it is, really.
SPEAKER: Yeah, a recommendation.
SPEAKER: We found that useful.  
Let's talk about the UI a little
bit.
Interestingly, you can bring 
your own framework. 
SPEAKER: Yeah, in this model, we don't care about the framework; you can use React, Vue, lit-html, Svelte, whatever you are comfortable with and whatever makes sense in your scenario. The interesting shift
is that the UI framework is not 
your base, or your entry point 
anymore.  The center of the 
universe has kind of moved. 
SPEAKER: Yes, the bootstrap -- 
SPEAKER: The UI is one 
participant of many in the 
system of actors. 
SPEAKER: If we find it is not behaving well, we can swap it out. The only thing it has to do is listen to messages of a particular type. In this case, we use Preact. It works great: we import it, the UI extends the actor base class, and when it receives a message, we render using Preact into the document body.
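That swap-ability can be sketched by injecting the render function into the UI actor, so any framework can be plugged in. The UiActor name and the string-based "view" are ours; in the talk the render call is a Preact render into document.body:

```javascript
// A UI actor that re-renders on every state message. The render
// function is injected, so the framework is pluggable; here it
// just records the view string instead of touching the DOM.
class UiActor {
  constructor(render) {
    this.render = render;
  }
  onMessage(msg) {
    // In the talk this would be a Preact render into document.body.
    this.render(`time: ${msg.time}s (${msg.running ? "running" : "paused"})`);
  }
}

let output = "";
const ui = new UiActor((view) => { output = view; });
ui.onMessage({ time: 3, running: true });
console.log(output); // "time: 3s (running)"
```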
SPEAKER: Same again, we need to 
send messages back. 
SPEAKER: Absolutely. To do that, on the UI side, we find the clock actor, we look it up, and we send it a message. In this case, for this familiar example, sending it a message for start, reset, and so on.
And then, to talk about location independence a teeny bit more: one of the questions that Surma, Tim, and I ask ourselves is, does this actor need access to the main thread? And it has to do with the rush hour thing.
The general rule of thumb, there
are caveats we will mention, but
the rule of thumb is the UI 
actor is one that needs the DOM 
and ought to be on the main 
thread wherever possible.
There's an exception, that is 
that certain APIs for media and 
device identity are only 
available on the main thread 
today. 
SPEAKER: We think it is a bug, 
we are talking about exposing 
these APIs in a worker and 
somewhere else.  But that is 
just not the world we live in 
today.  So for now, that's a 
restriction. 
SPEAKER: Yep, and tools, not rules. And you might be thinking, I should move all of my actors away from the main thread. But if you have a really chatty actor that needs to talk to the UI actor, you might want to leave it alongside the UI actor on the main thread. As Jason and Shubhie were talking about, there's a cost to a thread hop, and that can be more expensive than just sending messages to an actor that stays alongside the UI actor.
SPEAKER: If you want to do that,
measure and see what the impact 
is. 
SPEAKER: Exactly. So for location independence, all of that notwithstanding: remember we were back here, where we started, with the four actors on the main thread, where we put them by default. So you might want to look at it more like this. And you are thinking, why did they say not main thread? Surely they meant web workers. And we kind of did.
When we build these apps, web 
workers feature heavily, we move
a lot of actors to workers if 
they are non-chatty. 
SPEAKER: We have tried or think 
it is sometimes useful to run 
the actor on the server side, 
you can incorporate the back end
into the architecture of the 
app.  It is another actor in the
system.  And the game that you 
have been playing all day is 
actually written in this model. 
So every player that is playing is an actor, the panel that the emcees use to control the app is an actor, the presentation view is another actor, and the Firebase storage is the shared actor.
SPEAKER: The mechanism by which 
they chat can be a fetch, web 
socket, it doesn't matter, as 
long as these actors can talk 
and they have a way of sending 
messages, you are set.  So back 
to the original question: Did we
actually help with rush hour, 
would this help? So we will 
review. 
SPEAKER: So one thing that we 
achieved is that we are kind of 
making it less likely to have 
big chunks of uninterruptible JavaScript, and more little chunks where the browser can stop in between and ship a frame.
That is one advantage that we 
have. 
SPEAKER: With location independence, many actors can run successfully away from the main thread: fewer cars at rush hour.
SPEAKER: And as a result, a lot of work can happen off the main thread. If you are processing a big API response, that can happen in a worker.
SPEAKER: And there are other 
benefits that Surma, Tim, and I 
have noticed. One is better testing: with the areas of ownership, it is easier to look at an actor and say, I know what you should do, you have a known message that I can send, and I can make sure that you do the right thing. So testing seems to become a little bit easier.
SPEAKER: And from the other 
side, you can mock another actor
by implementing the message that
the actor needs to receive and 
not do the actual work, but just
send pre-recorded messages back.
SPEAKER: You have a separation 
of concerns, and again, it is 
just -- it helps you in terms of
maybe dividing the work with 
your teammates, or even just 
deciding for yourself which 
actor needs to be responsible 
for this part of the system.
SPEAKER: You get code-splitting, because you have actors that can be hooked up to the system at any point in time; that allows you to split them up and load them lazily, for example with a dynamic import.
SPEAKER: And bring your own
framework: if you want to use a
particular library or framework,
you can. There is no prescriptive
way; if you want to use it one
way, that is great. There are
considerations in this world, in
this set-up that we described.
One is actor perf challenges.
If you imagine a UI actor that
wants to run long and not be
yieldy, you still have the problem;
the actor system is not going to
decide for it not to hog the CPU.
That is not going to go away. But
we think that the scheduler API
that Jason and Shubhie mentioned
in the previous talk is a huge part
of this story; it is a
great way for individual actors
to break their work into smaller
chunks. And you might say, I
don't know if I can actorize my
blog. We agree: this works best
when you have apps, where you can
ring-fence parts of the application
and have an owner for each of them.
SPEAKER: And there is definitely
a different mental model. It
shifts the center of the
universe away from a UI
framework to many center pieces:
all of the actors
communicating.
So where do we draw the line?
What is an actor and what is
part of an existing actor? What
messages should we send and how
granular should they be? So if it
seems weird when you are playing
around with this, that is to be
expected; it is a different way
of architecting a web app.
SPEAKER: So that was the rush
hour bit. And the
future-facing stuff, we have
thoughts about that, too.
SPEAKER: You talked about 
actors, I want to talk about 
cameras and I want to talk about
cameras because it plays into 
that story.  So we will -- 
before I drop it.
So the modern camera has two
bits: the camera body, the thing
that kind of holds the state --
SPEAKER: Like the business
logic; it takes the picture,
decides where to store it.
SPEAKER: When you are shooting 
the videos, where you are.  And 
similar to a web app, that's the
sort of -- the state of what is 
going on.  But you have 
different lenses for different 
tasks.  So that one would be 
something like portrait lens, 
this one might be a wide-angle, 
something of a landscape. 
SPEAKER: You make sure the mounts
are compatible. And with
actors, they speak the same
messages to each other.
SPEAKER: The mounts are
standardized, and everyone plays
to the same contract. And
other than that, you can do what
you like.
SPEAKER: Plug it in. 
SPEAKER: Off you go. 
SPEAKER: Last video, I promise.
  And how do camera lenses apply
to this story? 
SPEAKER: When we talked about
this earlier, I think naturally
we would have all thought of the
DOM, as in the chess game, or
this version.
But there's the freedom that you
get from the UI actor: as long
as it can speak the right
messages, it can be implemented
in different technologies. So you
could have a different actor
that does 3D.
And there you get that; it has
to send the same messages as the
standard DOM version. Or maybe
one for XR: it needs to be able
to do that, it needs to send the
right messages. Or voice as
well.
A similar kind of story.
So, as a by-product of the
actor model, we have these
swappable actors. And maybe you
want DOM and voice.
SPEAKER: And you can imagine a
set-up where you have a DOM
actor with a lot of effects and
visuals, and another DOM actor
implemented in the same app,
but with a much less intense
memory consumption, like
a low-end version of the
website.
SPEAKER: Yep. 
SPEAKER: And once you detect that
the device you are running on is
kind of struggling to keep up,
you can switch it out in the
middle of the app and downgrade
to the low-end visual version.
SPEAKER: Or a reduced-motion
version or something like that.
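Picking which actor variant to load based on device signals can be a small, pure decision function. A sketch — the thresholds and module names here are made up for illustration, not from the talk:

```javascript
// Sketch: choose which UI-actor variant to load from device signals.
// Thresholds and file names are hypothetical.
function pickUIActor({ deviceMemoryGB, prefersReducedMotion }) {
  if (prefersReducedMotion) return 'ui-reduced-motion.js';
  if (deviceMemoryGB !== undefined && deviceMemoryGB < 1) return 'ui-lite.js';
  return 'ui-full.js';
}

console.log(pickUIActor({ deviceMemoryGB: 8, prefersReducedMotion: false }));
// → "ui-full.js"
```

In a browser, the inputs could come from `navigator.deviceMemory` and a `matchMedia('(prefers-reduced-motion: reduce)')` check, and the chosen module loaded with a dynamic `import()`.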
SPEAKER: And technically, this
is something called multi-view
or multi-modal; there are a lot
of ways to interact with your app
and, as a by-product, we think this
model allows you to do it well.
If you are interested in the
base class, this is the place to
go if you want to take a snap of
that.
SPEAKER: This is not only the
actor base class, but it is a
boilerplate. So it gets you
started: it is Rollup
configured, it does the
code-splitting and the lazy
loading, so you can start quickly
writing actors and get a feel
for how it feels.
SPEAKER: And it is experimental 
and the stomping ground of what 
we have been using and we would 
love to have a chat with you so 
you can tell us what you think.
We're excited that something
from 45 years ago has come
full circle; it seems to be the
thing that --
SPEAKER: It has been hiding in 
plain sight. 
SPEAKER: It is. We brought it
over, not in a purist way; we
have our own take on these
things, but it respects the
single-threadedness of
JavaScript, it helps with rush
hour, and it enables us to go
multi-modal, which is exciting.
And, on that note, thanks.
SPEAKER: Thank you very much.
  [ Applause ]. 
SPEAKER: Thank you, Surma and 
Paul.  I have been told, my 
coemcee from last year, Monica, 
is in the audience.  
SPEAKER: Hi, Monica!  Everyone 
say hi to her and talk about 
machine learning. 
SPEAKER: Are you jet lagged? Do 
you want to take over? We can 
mic you up, it is fine.
How do you feel it took two 
white guys to replace one of 
you? I'm sure there is social 
commentary there.
[ Laughter ].
  It is lunch time.  Excellent.
So the same as yesterday, lunch 
is out, like, over by the forum,
and any specific dietary 
requirements, they are all 
catered for there as well.  And 
then come back here at 2:30.  
See you then!
From Low Friction to Zero
Friction with Web Packaging and
Portals — Kinuko Yasuda, Chrome
Engineer.
SPEAKER: No, it is going to 
happen until we all die, that's 
how I feel.  I wake up in the 
middle of the night hearing, 
nana, nana. 
SPEAKER: It is worse when you
are backstage and you can hear
it a little bit.
SPEAKER: Is it playing 
somewhere, or in my head now?
[ Laughter ].
  Anyway, lunch, awesome though 
it is, can make you feel a 
little bit sleepy. 
SPEAKER: Yes, so we thought we
need a way to wake everyone up.
I have seen conferences where
everyone stands up, jumps up and
down. No, we wouldn't do that to
you. But we will do a quick
round of the Big Web Quiz.
SPEAKER: We will get the screen 
up.  I'm super fond of this 
question, I -- 
SPEAKER: This is our favorite, 
isn't it.
So there's a couple of changes 
here, we're going to give you 
four seconds per question, a 
little bit longer than usual, 
and it is also two points, 
double points, because it is 
quite exciting.  And I will 
explain the rules to you now.
True, or false?
Off you go.
We are looking at low
confidence. So, not an array, 2
plus 2 as a string.
SPEAKER: isFinite of "0"?
SPEAKER: No.
SPEAKER: Does NaN equal
NaN?
SPEAKER: The first is the
array one.
SPEAKER: Is null equal to false?
SPEAKER: We have isFinite of zero.
SPEAKER: You are going to tell 
me that there's a different 
version. 
SPEAKER: We will find out. 
SPEAKER: Fair enough.  Oh, we 
reached the end of that round. 
Let's have a look. 
There are some -- [ laughter ]. 
Honestly, if there was a 
programming driver's license, I 
would say certain people might 
need it revoked. 
SPEAKER: We did say yesterday, 
these are silly questions, don't
worry if you are getting them 
wrong.  I think the first one 
really -- [ laughter ]. 
SPEAKER: It is a bit of a 
worrying sign, if I'm honest.  
SPEAKER: So all are false except
for the object one, which is -- hang
on, hang on. What is going on
with the array nonsense, the
array plus 2 plus 2? We will
move to the next screen.
So isFinite of "0" -- I love it
when it is almost 50/50.
SPEAKER: It will be true.
SPEAKER: That's true, yes.
SPEAKER: It's a finite
number.
SPEAKER: It is finite by itself,
and isFinite is happy to coerce
the string to a number.
SPEAKER: Okay, just so you know.
SPEAKER: I guess we saved that 
piece of knowledge for the next 
screen. 
SPEAKER: Keep that in your head.
SPEAKER: Oh, oh, oh. NaN is not
equal to NaN, good to know.
SPEAKER: And the typeof NaN is
"number", even though it is not a
number. And oh, because of
coercion, a number double-equals
a string. So wait, it is true?
What is going on here? Is that
true?
I don't know.
We need to double-check that
one.
SPEAKER: Yes, keep going. 
SPEAKER: You will find the 
points.  Null equals false, it 
is false. 
SPEAKER: Excellent, that is 
right.
And the final one, we have
Number.isFinite here. You say it
should be false -- is it false?
It will not coerce the string to
a number; it will say that is
not a number, so it is not
finite. And "false" as a string
is truthy, then not makes it
false, and not again makes it
true. So not, not, stringy
"false" is true, even though it
looks like saying the word false.
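For anyone playing along at home, the quiz answers discussed here can be checked directly in a console:

```javascript
// The coercion quirks from the quiz, checked directly.
console.log([] + 2 + 2);            // "22" — [] coerces to "", then string concatenation
console.log(isFinite('0'));         // true — isFinite coerces "0" to the number 0
console.log(NaN === NaN);           // false — NaN is not equal to itself
console.log(typeof NaN);            // "number"
console.log(0 == '0');              // true — == coerces the string to a number
console.log(null == false);         // false — null is only loosely equal to undefined
console.log(Number.isFinite('0'));  // false — Number.isFinite does not coerce
console.log(!!'false');             // true — any non-empty string is truthy
```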
SPEAKER: Sure. 
SPEAKER: I'm not feeling any 
more awake than I did before 
lunch. 
SPEAKER: And are you feeling 
deep sadness? 
SPEAKER: I always do. 
SPEAKER: And even more so now,
yes.
SPEAKER: Should we take a little
look at the leader board? 
SPEAKER: I think we ought to. 
SPEAKER: I like the animations 
you have done on this screen. 
SPEAKER: We are looking at CSS 
blend modes. 
SPEAKER: We have two people 
featuring from the leader board 
yesterday.  Liz from third to 
second, and Will has been first 
from the start, good luck 
catching up, and
Preet is in third place. 
SPEAKER: Can they stay there for
the ultimate prizes? 
SPEAKER: So we should get the 
next speakers up on the stage.  
SPEAKER: Here to talk about web
packaging, give a round of
applause for Rudy and Kinuko!
SPEAKER: Hi, everyone. I'm
Kinuko, working on the web
platform in Chrome.
SPEAKER: I'm Rudy, working on
Accelerated Mobile Pages at
Google.
SPEAKER: I hope you like it, we 
will dive into it. 
SPEAKER: This slide is showing
an old-school projector
slide.
Back in the day, slides were a
good way to communicate. To
view a slide's content, the
projector needed to position it
into place so the light source
would shine through it. If you
ever watched one of these
old-school slide shows, you
remember how tedious it was to
progress through the slides.
This makes me think about the
web today -- the same feeling.
And when you browse the web
today,
you can feel all the
navigations. Today, the web is
browsed a lot on the go: in
between meetings, in the
elevator, or on a poor
connection.
And when I have limited time
before the next distraction,
staring at a page for 5 seconds
or longer, waiting for it to
become interactive, is
incredibly noticeable. Many
of you will have observed that
we used exaggerated transitions
in this talk. We will put an end
to it soon. Those are one-second
fades; think of the cycle of
reading and waiting, reading and
waiting, that we have
grown used to.
It is time to do better. We will
help you create zero-friction,
seamless user experiences.
SPEAKER: So a seamless web
experience is not new. In 2011,
we launched a
feature, instant search, which
prerendered the search result.
The feature was limited for
various reasons, in particular
for privacy reasons: it only
worked for the search results
that the user had already
visited, and for which we had a
high confidence of user interest.
And then, in 2015, another
example of this user experience
was launched. I'm talking about
AMP. Rudy will take us
through the way that things
are headed.
SPEAKER: State-of-the-art page
loading is of intense interest
to us at Google and Google
Search. We point users to a lot
of web pages and, thinking about
the totality of the experience
that the user gets, we want to
be as fast and seamless as
possible, even as the user goes
off of search and into the whole
great world of content they are
looking to explore. When we
started AMP, we were intent on
using the full power of the web
platform that is available, but
we wanted to get here, and we
felt that what we could achieve
in a scalable way on the web got
us up to
here.
So we gave it thought and came
up with an architecture for
instant loading that works on
today's web. It is used today.
The first layer is the JavaScript
library, which ensures that the
experience is fast by default.
This is enforced by a validation
step that keeps the experience
fast as the site is getting
updated. The next layer came from
thinking about how server
response times can vary a lot
globally, and not every site is
situated on fast infrastructure.
And sticking huge images into
pages meant for mobile viewing
is common, so we added the
second layer of caching, where
we can ensure that the content
is pushed to the edges of the
network for faster delivery and
we can do common-sense
optimizations. And the best way
to get load time to near-zero ms
is prerendering content. This was
attempted in search before, and
we need to think about the
privacy implications of such a
design.
The cache helps us
complement the prerendering very
well, and Kinuko will explain
more of that in a moment.
So we worked through this, and
this is where we ended up. Most
of this is the AMP viewer;
that's the page you are
visiting. It is responsible for
displaying the content, served
through the AMP cache for speed
and privacy reasons. But the URL
still has google.com in it. To
help the user understand where
the content they are viewing
came from, we needed to add an
extra piece of UX to the content
area of the page. Instant
loading was achieved, but the
design constraints we faced and
the workarounds we built for
them ended up being put on full
display in the product
experience, and that wasn't great.
So we heard a bunch of feedback
on this, maybe from some of you.
So earlier this year, we started
down a path to make the urls for
AMP pages better.  After having 
AMP in the wild for two years, 
we took all that we learned and 
developed the necessary 
primitives in the web platform 
directly so we can make all 
content across the web benefit 
from this technology.  So this 
means for the cases where you 
clix  click on a link in search 
and it is a simple navigation, 
we want the publisher's url in 
the address bar, while having 
the instant or nearly-instant
loading experience.
SPEAKER: So we talked a lot
about how we are getting to the
goal of a highly optimized user
experience.
A lot of this special handling
is required because of a gap in
the web platform. We are now
taking inspiration from past
efforts, like AMP and instant
search, and are trying to
eliminate this gap by extending
the web platform and, by doing
so, we want to enable this
frictionless user experience
across all content on the web.
We are looking
at Web Packaging and Portals.
We will start with Web
Packaging. As the name implies,
it is meant for packaging a
piece of web content. It can be
used for many interesting use
cases, but I want to explain how
it can help instant navigation
for AMP and non-AMP content. So,
stepping back, we wanted to make
web content load instantly and
reliably. Here is why that is
hard. When you publish something
on the web, you have your
server, and when a lot of
visitors arrive there, the
server might be overloaded. Then
the content will load slowly and
the experience is
not good.
So suppose your content is
linked from a popular,
high-traffic site, and you want
the navigation to happen very
fast. You could prefetch
from your website --
but we can do better: the
referrer site can add a
cache here.
Then the referrer site can
bootstrap the page load in a
privacy-preserving manner,
because it allows the browser to
fetch this content from its cache.
Then it loads instantly. So is
this the holy grail? Not yet.
As explained, this design is on
full display
in the product: the URL shows
the cache's site -- that is
where the browser thinks the
content is coming from.
This is confusing to users.
The issue is that the web
platform doesn't provide a
proper way to get these
bootstrapped page loads: ones
where the critical resources can
be served from a shared cache on
your behalf, so that when the
user navigates, it works like a
regular page load from the
server, only faster.
So how can we achieve this? The
browser
needs a way to verify the
resources that are served by a
fast cache. This can be done by
adding a proof of origin to the
resources, which is exactly what
Web Packaging
provides.
Web Packaging is a concept from
a spec proposal. The first part
is the signed exchange: a
format that wraps a single
HTTP exchange, a pair of HTTP
request and
response.
It is signed so the browser can
verify the resource's origin.
And on top of that is something
called the bundled exchange.
This is a bundle of exchanges
and can carry many resources in
one package.
We think this will be an
enabling building block. And we
are shipping signed exchanges as
an origin trial in Chrome 71,
which is in beta now. You can
play with it locally by enabling
a flag, or join the experiment
to enable it for all of your
site's users; please visit the
bit.ly link. We would love your
feedback to help us iterate on
this feature more quickly.
So to create signed
exchanges for your
resources, you need a way for
the browser to find the
exchanges at their URLs. And
you can generate signed
exchanges for your resources by
using a tool to process them --
I will talk about these options
later in
this talk.
SPEAKER: So the origin trial
process is needed for sites that
link to signed exchange content;
that can be your own site, and
also referrers like Google.com.
We enrolled Google.com in the
signed exchange origin trial,
and we will show you a demo of
signed exchanges using AMP in
Google Search. I would like to
welcome Suma from
1-800-Flowers and Rustam.
SPEAKER: Thank you. We have been
investing in our platform -- in
user discovery, speed, and
engagement -- with an active
developer community rolling out
web components. Today we are
excited to demo an example of
Web Packaging live. Are you
ready? Check
it out.
Let's search Christmas
trees.
So notice in the search result
that the AMP badge is
prominently featured, so you
know what this is in the AMP
search unit. So I will click or
tap on it. And instantly, as
Rudy was mentioning, you see
that there is no Google in the
URL. You are
instantly -- you are natively on
the website, versus being on a
separate cache.
And importantly enough, you see
that there is no
viewer header, so you
know that you are on the page,
in this case, 1-800-Flowers.com.
And furthermore, having
attribution be so seamless adds
confidence that
there will be absolutely 100
percent attribution going from
the SERP to the native site.
And a big shout-out to the AMP
team and to the Google mobile
consultants team, who have been
pushing the boundaries of UI/UX
in making sure the web is taking
all the strides possible to get
to the next level. Rustam, do
you want to go through how all
this works?
SPEAKER: Sure, we will look at
how you deploy something like
signed exchanges. In the green,
you have the request flow from
the origin, through the
front-end proxy, to the user's
device. At the bottom, you have
the request flow into the AMP
cache. In between, you have an
AMP packager. This prepares the
documents for the cache and
signs them into signed
exchanges. At CloudFlare, we sat
down to think about how to use
our global, programmable network
to make this simpler. This is
what we ended up with. We took
the logic necessary to build a
signed exchange and built it
into a CloudFlare Worker. This
sits at the edge and supports
the cryptographic operations,
the packaging operations, and
the logic to sit between the
user and AMP cache request
flows. So what's a Worker?
Simply put, it is V8 running
on the edge.
And this allows you to write
JavaScript targeting the
ServiceWorker API,
deploy it to the edge, and this
demo is a great example of what
Workers are capable of. So in
addition to releasing the code
that supports this demo, so that
you can all build your own
workers to try signed
exchanges, we also plan on
building a full-fledged
CloudFlare feature to support it
at launch. Back to
Rudy.
  [ Applause ]. 
SPEAKER: Thank you. If you are
publishing AMP content, we
invite you to try a preview of
AMP content in search using
these instructions. You
can learn more about creating
packages and building the
end-to-end flow that you saw
for your own
AMP content.
SPEAKER: We have seen the
benefits that signed exchanges
bring to AMP publishers, and
they can benefit all pages on
the web, too. Now I will show
you an example: on one hand, a
regular navigation, where the
content loads slowly. On the
other hand, it shows how it can
be prefetched with Web
Packaging. The user navigates to
a page on a different site
instantly, since it is
downloaded from the cache of the
referrer site, in a
privacy-preserving
manner.
So, still, it feels like we
are flipping through pages
in a disjointed experience, not
a nice, seamless experience. And
you may be wondering how we can
improve this further. Let me
introduce the latest proposal:
portals.
So let's see what we mean by
navigation versus transition. It
is not too surprising: on the
left, this shows the regular
navigation; it loads slower
depending on connectivity. On
the right, when the user taps on
the article, an animation is
triggered, creating a sense of
continuity. The navigation
just happened.
So it is worth taking a closer
look, as you can see from the
address bar. You can see we are
on feed.glitch.me, and when the
animation finishes, it is a
cross-site transition. Combining
portals with signed exchanges
enables these types of user
experiences, while preserving
the user's privacy.
  Portals are not limited to 
cross-site navigations, let's 
look at how we can build the 
user experience of a single 
website with micropage 
architecture.
I am showing a web comics site,
with this early exploration for
their website.
When you reach the end of a
chapter, as you can see, it
takes time to load the next
chapter.
That's because the website is
using a multi-page
architecture, and it needs to
load a new page for each
chapter.
And now, let's see what that can
look like with portals. At the
end of the chapter, we can go to
the next chapter, and the
transition is seamless. Pretty
cool, right?
And this adds to the
seamless experience without
rearchitecting the app, which
would not be a trivial amount of
work.
So portals. Portals are like
iFrames. You can create one to
embed a preview of a page,
using the portal tag; it looks
the same as an iFrame. And then
you navigate into the element by
calling the activate API. When
that API is called, the element
detaches from the page and
becomes the new top-level page.
You can also add animation to
smooth out the transition.
So, what is the difference
between portals and iFrames? The
biggest difference is that
portals can be navigated into.
To re-cap
the benefits: portals enable
seamless page transitions, like
single-page apps, but without
having to rearchitect your site,
even across different origins.
So you can build your website
using multiple pages and you can
connect them with portals.
So here is an example code
snippet for portals. You can
create a portal as an HTML
element and then you can append
it to the page to have it
embedded. And then, when the
user taps the embedded portal,
you can show a nice
animation and call activate to
make the actual transition.
That's it.
Exciting, isn't it?
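A sketch of what such a snippet could look like, written as a function over a `document`-like object so the shape is visible; the `<portal>` element and its `activate()` method come from the Portals proposal, while the helper name `embedPortal` is ours:

```javascript
// Sketch of embedding a page in a <portal> and promoting it to the
// top-level page on tap, following the Portals proposal.
function embedPortal(doc, url) {
  const portal = doc.createElement('portal');
  portal.src = url;
  portal.addEventListener('click', () => {
    // A transition animation could run here before the swap.
    portal.activate(); // the portal becomes the new top-level page
  });
  doc.body.appendChild(portal);
  return portal;
}
```

In a supporting browser you would call it as `embedPortal(document, 'https://example.com/next-chapter')`; the proposal is still in flux, so treat the exact API shape as provisional.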
And then you probably want to
know the current status.
Well, we have an explainer
on GitHub;
bit.ly/portals has more. The
Chrome implementation is in
progress. We are aiming for a
release next year and are
eagerly awaiting
your feedback.
I have one more topic:
the bundled exchange. The
bundled exchange allows
multiple exchanges to be in one
package. You might be wondering
about the current status of its
development.
While the Chrome team has had
time to explore the
possibilities, we think this can
enable interesting use case
scenarios, like offline PWAs and
much more.
Here is an example of a
news-reader PWA. This is based
on an app built by an awesome
developer; it runs in a custom
Chrome build to use bundled
exchanges. The app allows the
user, by getting the
ServiceWorker to download and
save articles, to
later read the saved articles
from multiple sites, even while
offline.
And note this content is served
as if from its origin, and the
site keeps control over it.
Here is another example: loading
a lot of resources is costly,
and bundling them into one
JavaScript file, as with
Webpack, is a common technique.
We wanted to see if bundled
exchanges could be used to allow
the browser to cache individual
resources in the bundle without
executing the JavaScript. The
results looked like this,
and it looks promising, we
think. There is some potential,
and we want to know what you
think.
SPEAKER: So let's go back to the
main topic and wrap up this 
talk.  So we talked about two 
new proposals for zero-friction 
user experiences.  First, 
Webpackaging enables 
privacy-preserving instant 
navigations, and portals give 
seamless transitions between 
pages or sites,
combining to give transitions on
any web pages, even across 
origins. 
SPEAKER: And here is a look at 
the road map.  Our plan is to 
ship signed exchange on stable 
by the middle of 2019 and to 
start on origin trial for 
portals around then as well. 
SPEAKER: For Google Search, we
are excited about signed
exchanges and portals as a path
to building lower-friction user
experiences across the web.
Following in the steps of the
demo you saw earlier, we will
roll out signed exchanges for
AMP next year. And we're looking
at how we can use the same
technologies for highly
optimized user
experiences.
SPEAKER: And we are engaging
various partners because we want
to refine what we have, and we
are making sure that it helps
them achieve a highly optimized
user
experience.
For instance, content publishers
and web developers at
1-800-Flowers and the web comics
site, and CDNs, and DigiCert,
who are working across the web
on multiple apps.
SPEAKER: We are relying on your
feedback and are eagerly waiting
to hear what you think. Here
are the links where you can give
input. We will be at the Ask
Chrome area if you have
questions. We are excited about
the future of the web and about
enabling the experiences we
showed today, and we would
appreciate your help in moving
these technologies forward.
Thank you.
SPEAKER:
Thanks.
SPEAKER: We will do another Big 
Web Quiz, do the ready dance.
For your 
entertainment. 
So is window on window, or 
window on document? Is document 
on window or document on 
document? Implementation, does 
title belong to the window or 
document? 
SPEAKER: isSecureContext? And
the confidence is
fluctuating a lot.
SPEAKER: navigator, no no no.
SPEAKER: It is changing a lot.
SPEAKER: What else have we got
on here? webkitIsFullScreen.
A few seconds left, and low
confidence.
All right.
There's a lot of chattering in 
the room.
All right.
How to make yourself unpopular 
as an emcee, ask really horrible
questions.
Okay, title is on document.
Window is on window.
Oh, I think we did this
yesterday; you can have
window.window.window. And I
could do that for hours,
probably not on request.
Document, yes, window.document,
and document.implementation. Of
course it is, why wouldn't it
be. isSecureContext --
everybody is very split, they
are not sure of that one. I
have no idea.
Oh, okay.
devicePixelRatio, well, I'm on
the side
of the window, and WebKit --
webkitIsFullScreen
is on document.
SPEAKER: Is the WebKit one on
the window, not the document?
SPEAKER: Yes, that makes sense.
SPEAKER:
Oh, the web.
SPEAKER: I always get this one.
SPEAKER: getComputedStyle
is on window, and
document.all, that's a live node
list.
Honestly, I mean, window or 
document.  Who doesn't ask 
themselves that every single 
day? 
SPEAKER: Ah. 
SPEAKER: Everybody in this room,
that's the answer to that 
question.
Right, our
next speaker.
State of Houdini.
SPEAKER: You have seen him a 
lot.  Ladies and gentlemen, 
DasSurma.
  [ Applause ]. 
SURMA: Hello,
everybody, yes, me again.
Prepare for more bugs.
I'm trying to get the clicker 
working, but it is not.  I'm 
going to use the space bar
for now.
There we go.  Apparently my name
is Surma, good.
And I'm excited to be here, we 
reached a point where, with 
Houdini, I can talk about actual
APIs because they are starting 
to land and that is really 
exciting.  As with any talk, I 
kind of have to start with what 
Houdini is.  On my Twitter, I 
often see there's a lot of 
confusion, there's a software 
called Houdini, apparently a 
magician called Houdini, so I 
want to clear up what Houdini is
really about.
  So every browser has more or 
less four major stages in the 
rendering pipeline.  It starts 
with styles where the browser 
collects all of the styles in 
the document, and then figures 
out which element is affected by
which of these styles, and now 
that we know the width, the
height, whether it is Flexbox or
grid, we can do layout and
calculate how big the element 
is, align them on the page, and 
get boxes on the page, they are 
empty and transparent.  In the 
next stage, we can take that 
layout page and paint it, just 
draw it.
And we can draw it on the page, 
sometimes elements are on their 
own piece of paper, which is 
called layer, and then once we 
have done painting, respecting 
things like background color,
the text color, and the border
color, we can give all of that,
all of these pieces of paper to 
the compositor and put it 
together on the page that you 
see on your screen.  If 
something was its own layer, we 
can move the pieces of paper 
around and that's how animations
are made.  That is a shortcut, 
but you can see where I'm coming
from.  And going back to the 
question, what really is
Houdini? It is a standards
effort in the CSS working group
at the W3C to expose hooks into
the major stages of the
rendering pipeline to the
developer, to you, so you
have more control not only over
the visuals, but can write
polyfills and have more control
over how your page appears to
the user.
It is hard because these four 
stages are different at every 
browser, sometimes they are 
parallel, or maybe they are not 
that clearly separated.  We are 
working with the browsers to 
make sure that everyone can 
implement these APIs.
Houdini can be super 
intimidating at first, under the
umbrella, there's a lot of APIs 
and you don't immediately know 
what to do with what, but they 
kind of form a hierarchy.  So 
you have four high-level APIs 
for major APIs that basically 
represent those four major 
stages of the rendering 
pipeline, and then you have 
low-level APIs that form the
underpinning, the basis of
Houdini, and that make the
high-level APIs possible in the 
first place.  And, specifically,
worklets are really interesting.
Worklets are kind of the Swiss 
army knife within Houdini for 
performance.  So I wanted to 
make sure I take a second to 
explain these, we will use them 
for the rest of the talk and, 
more importantly, I wanted to 
distinguish them from workers.  
A lot of people confuse these 
two and I cannot blame them;
they sound very similar and they
have a lot of overlap, but
there are important differences,
and, to talk about that, we will
talk about the event loop.  This
is the event loop, if you don't 
know about it, it is fine.  I 
will explain everything you need
to know today, but if you want 
to know more, I recommend that 
you watch Jake's talk about it 
which you can find on YouTube if you type in his name and event loop, and I'm using a lot of visuals from his talk.
This is an event loop, it is a 
loop, it processes events, and 
it is called an event loop.
Whenever an event happens, the 
JavaScript engine checks if 
there's a handler for this event
in your code base and then takes
the code for that handler and 
queues it up, and every turn the
event loop takes something out 
of the queue and runs it, and 
the next thing, it takes 
something else out of the queue 
and runs it.  And that is super 
simplified and there is more
nuance to this, but that is kind
of how the event loop works.  
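The queue-and-run behavior just described can be sketched as a toy model. This is purely illustrative, not how any browser actually implements its event loop, and the function names here are invented for the sketch:

```javascript
// A toy event loop: handlers are registered per event name,
// events queue up tasks, and each "turn" runs one task.
const handlers = {};
const queue = [];

function on(eventName, handler) {
  handlers[eventName] = handler;
}

// When an event happens, the matching handler is queued as a task.
function dispatch(eventName, data) {
  if (handlers[eventName]) {
    queue.push(() => handlers[eventName](data));
  }
}

// Each turn of the loop takes one task out of the queue and runs it.
function runEventLoop() {
  while (queue.length > 0) {
    const task = queue.shift();
    task();
  }
}

const log = [];
on('click', (data) => log.push(`clicked ${data}`));
dispatch('click', 'button1');
dispatch('click', 'button2');
runEventLoop();
// log is now ['clicked button1', 'clicked button2']
```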
And in this case, the worker 
looks like this, it is a 
separate event loop.  It is an 
isolated scope with its own 
handler and events and they have
nothing to do with each other.  
They might put a task into the 
other loop's queue with
postMessage, but that is pretty 
much it.  And there is 
considerable cost to spinning up
and maintaining an event loop, 
and that is why you can't just 
spin up a thousand workers and 
call it a day, because that is 
quite costly.
Worklets are different.
They are also isolated 
JavaScript code and, you know, 
with their own scope.
But worklets don't have an event
loop.  Instead, they kind of 
attach to existing event loops. 
And that makes them a lot 
cheaper to create and maintain. 
You can even attach multiple worklets to an existing event
loop and, because most worklets 
are specified to be stateless, 
we can even migrate them in the 
middle of their lifetime.  If it makes more sense for your code to run in sync with another event loop, the browser can move the worklet over to that event loop.  This comes in
handy later on, but that is 
basically the big difference 
between those two.  And now that
we have worklets in our back 
pocket, we can talk about the 
very first Houdini API which is 
for the paintWorklets.
So, as I said, I will not go over these in order, but rather in order of availability.  For paintWorklets, the high-level API is the CSS Paint API.  All elements have to be painted sooner or later to appear on screen, and you can use CSS to customize how elements appear on screen, but only in the ways that CSS exposes.  So, for
example, if you want to do 
rounded corners, you can use 
border radius and you get this 
and it is kind of great.  But 
there are different ways to make
a box seem like it has rounded 
corners.  If you want to use any
of these other ways, you are 
kind of screwed nowadays.  What 
do you do?
So, for example, there's the squircle, which is, mathematically speaking, closer to a circle than a square.  If
you wanted to have this thing on
the web today, what would you 
do? Maybe an SVG background 
image, not really a border, you 
can use a Canvas image, but with
Houdini, you can teach CSS how 
to draw the exact look that you 
want to have on your page.
So how does this work? Step one 
with all worklets is that you 
have to load a JavaScript file 
into the worklet that the 
browser gives to you.  So in 
this case, we have the CSS 
namespace and all of the 
worklets that Houdini brings are
going to be in the CSS namespace
sooner or later.  In this case, it is CSS.paintWorklet, and every worklet has an addModule method that loads a JavaScript file into the worklet.  Let's take a look
inside the file.  In that file, 
we want to teach CSS how to 
paint something new with 
JavaScript.
But first it needs a name.  So there's a registerPaint function that takes a name and a class, to associate that name with the class; in this case, the name is my-paint.  And on every paint class, we need to define a paint method.
And this paint method gets the context, which is almost identical to the canvas context that you are hopefully familiar
with, the geometry object that 
tells you how much width and 
height that the element has that
you are supposed to paint, and 
the properties object, which 
allows you to read the styles of
the object you are painting, 
background color, text color, 
font size, all of these things 
are in there for you to read.  
And basically, I'm setting the fill style to hot pink and I
will draw the biggest possible 
circle in the middle of the 
element.  Not very useful, but 
in a way useful so I can show 
you what is going on.  And now 
that we defined how to draw this
appearance, how do we tell the 
browser to use this new 
appearance? So we do that in CSS.  I'm writing a new style; you can use paint worklets everywhere that CSS expects an
image.  In this case, I'm 
setting the background image not to a url() but to paint(), and I will use the name of the registered paint in here, so this is paint(my-paint), and this is
actually -- if it works, no.  
This is not good.
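[For reference, the pattern the demo was meant to show looks roughly like this. The API names (CSS.paintWorklet.addModule, registerPaint, the paint method) are as described in the talk; the guard around registerPaint is an addition so the file can also be loaded outside a worklet, and this is a sketch rather than the exact demo code:]

```javascript
// my-paint.js, loaded from the main thread with:
//   CSS.paintWorklet.addModule('my-paint.js');
// and used in CSS with:
//   .some-element { background-image: paint(my-paint); }

class MyPainter {
  // The browser calls paint() whenever the element needs repainting.
  // ctx is almost identical to a canvas 2D context; geom carries the
  // element's width and height; properties exposes the computed styles.
  paint(ctx, geom, properties) {
    ctx.fillStyle = 'hotpink';
    // The biggest circle that fits in the middle of the element.
    const radius = Math.min(geom.width, geom.height) / 2;
    ctx.beginPath();
    ctx.arc(geom.width / 2, geom.height / 2, radius, 0, 2 * Math.PI);
    ctx.fill();
  }
}

// registerPaint only exists inside the paint worklet's global scope;
// this guard is just so the file can be loaded elsewhere for testing.
if (typeof registerPaint === 'function') {
  registerPaint('my-paint', MyPainter);
}
```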
I told you, prepare for
more bugs.
No, the entire laptop 
froze.  
No!
Maybe it is coming back to life.  I will try something.  I
achieved something today.  I 
will try to get out
of here.
[ Laughter ].
  Um, what do you do in this 
kind of situation? I might be 
able to remedy this.
So I can see, maybe you can 
follow along.
Yay.
We are trying to kill this.
SPEAKER: Are you running --
>>AUDIENCE MEMBER: Are you running Slack?
[ Laughter ].
[ Applause ]. 
SPEAKER: Give me two seconds, we
are hopefully back up in couple 
of minutes.
This is like Supercharged, with live debugging on stage.  We are
hoping that this was just the 
browser having a hiccup, more 
than anything else.
I am not sure if this is going to work.  I'm really sorry
about this.
That might be the only chance I 
have, honestly.
Let's do a reboot.
So are you -- do you want to 
drop the other one into the 
other screen? 
SPEAKER: This entire thing is 
just not responsive.  There is 
not much I can do.
I have to -- you say just 
reopen, this thing is not 
responsive at all. 
SPEAKER: Oh, jeez. 
SPEAKER: So I will do a re-boot.
We can try, I have it in here as
well,
right? 
SPEAKER: [ Laughter ]. 
SPEAKER: All right, basically, 
give me a couple minutes, 
okay?
[ Applause ].  
SPEAKER: All right.  I'm already
thinking in my head which 
content I can cut to still stay 
in time.
Oh!  Do you know why I was 
lagging? The power was out.
Apparently it didn't charge, so 
my battery
was empty.
Great.
This is exactly the kind of 
experience I was hoping to give 
you at a Chrome Dev Summit.
[ Laughter ].
Now, the thing is, I can't turn 
it back on, because it is 
telling me the battery
is empty.
All right, we are almost there.
Now I'm going to make really 
sure that this power plug stays 
in.
This is like a speed run.  I always wanted to participate in a speed run, but not without me knowing it.
It is booting up, you just 
cannot see it yet.
So
we're getting there. 
>>AUDIENCE MEMBER: 
(Indiscernible). 
>>SURMA: I know, that is the 
point.  Why should I walk there?
I'm setting up my laptop,
you 
dance!
[ Applause ]. 
PAUL: Please boot up faster [ 
laughter ].  Did you make sure 
it is charged? Never going to 
let you forget this.
This is not how I expected this 
to go.  With friends like these,
do do do do, are you back up and
running? 
>>SURMA: Almost.  Keep going,
keep going.  
SPEAKER: Do you need moral 
support? 
SPEAKER: I actually dance at my 
desk.  
SPEAKER: You do.  
SPEAKER: Because I don't sit 
down at my desk, it helps my 
back to stand.  But there is 
nothing like having your headphones on and coding and being like, [snapping fingers].
Because everyone around you is 
like, what is he doing? Yeah, 
I'm debugging.
[ Laughter ].
Just so you know, that is how I 
-- I was told there's a problem.
Can
you dance? 
SPEAKER: What's happening? 
SPEAKER: This is your framework.
SPEAKER: That's why I'm worried.
SPEAKER: Your framework is 
actually doing quite well. 
SPEAKER: Brilliant, that's all I
care about.  That is fine. 
SPEAKER: Is it up and running? 
SPEAKER: So my favorite thing that happened in the last couple of days is when the
Big Web Quiz went down for you 
yesterday.  I was watching that 
Backstage on the live stream, 
and the live stream
is about 30 seconds behind.  And
the users are saying, this 
doesn't look right.  And then 
behind me a voice came and said,
what is going on? And then I 
turned around, it was you.  And 
on screen, it was you.  And I 
was like, Paul cloned himself, 
that seems natural.  That is 
fine.  
SPEAKER: It is a good way.
SPEAKER: That's a really 
interesting question, big web 
quiz.  I wonder how they will 
answer it. 
SPEAKER: So what happened? 
SPEAKER: Is it running 
yet? 
SPEAKER: Almost. 
SPEAKER: Is it recompiling the 
code? 
SPEAKER: We have an amazing team
Backstage, the editor will be 
like, oh, either that, or they 
will cut to me dancing, and then
they will go back to you. 
SPEAKER: I hope the edit that we put up is the correct one -- that is way more enjoyable than this.
SPEAKER: So the prize that we 
printed, yes.
So I ordered it from an online store, and I couldn't check out.
And I was just like, why are you
not letting me select the credit
card? There was a JavaScript error.
SPEAKER: Really. 
SPEAKER: Yeah. 
SPEAKER: They
were using back (indiscernible).
  There's a follow-up to this.  
They had a 1-800 toll free 
number, I had to call, I really 
need this on Tuesday, this was 
last Friday.  And they are like, that is no problem, we can give
it to you.  I need your account 
number and all of that.  And 
then there comes the visual. 
SPEAKER: Yes, because some might
think that we didn't plan the 
content of that poster very 
well, when you look at it, it 
looks like it has been maybe 
badly designed, just saying.
[ Laughter ].
  But actually, it is 
exquisitely designed to be 
awful.
[ Laughter ]. 
SPEAKER: Yes, that was our 
intent.  And so this poor lady was like, I'm seeing misspellings and things outside of the border area, are you sure that -- yay!
SPEAKER: Well, you are welcome, 
my friend. 
SPEAKER: Thank you. 
SPEAKER: Ladies and gentlemen, 
DasSurma. 
SPEAKER: You have 7 minutes 
left. 
SPEAKER: Um, you are -- we left off at the squircle.  Welcome to
the worst talk I have ever 
given.
  [ Applause ]. 
SPEAKER: So I'm actually --
[ Laughter ].
  SPEAKER: What did you do? 
SPEAKER: Nothing!  Absolutely 
nothing.  
SPEAKER: Jake, that is your 
framework.  So fix that.  
SPEAKER: This is Houdini, this 
is nothing to do with me.  
SPEAKER: [ Laughter ]. 
SPEAKER: Oh, just like the cake?
Yeah.  State of Houdini!
Look!
SPEAKER: Is there a way you can switch to videos?
SPEAKER: I have a couple of 
videos, that is true. 
SPEAKER: So what happened, did 
you restart and
Chrome updated? 
SPEAKER: If you are doing an 
experimental build, do a build 
of your own that can update overnight.
That is precisely what he needs 
right now. 
SPEAKER: I will try something, 
it is not ideal, but hopefully 
it will do the trick.  
SPEAKER: So now I'm on the edge 
of my metaphorical seat. 
SPEAKER: How much time do I have to fill with this instead of the talk?
[ Laughter ].
I feel, at this point, I'm going
to re-record the talk and put it
out as a video. 
SPEAKER: What's your plan? I'm 
interested. 
SPEAKER: Stable Chrome. 
SPEAKER: So the demo is not 
going to work. 
SPEAKER: I have most of them as 
videos. 
SPEAKER: So that seems like a 
good back-up plan.  You could 
have gone that way the first 
time around. 
SPEAKER: I have 5 minutes left, 
so that is good. 
SPEAKER: You are using my slide 
framework, right? 
SPEAKER: I am. 
SPEAKER: Enable experimental web
platform features. 
SPEAKER: Oh I need that? 
SPEAKER: Yeah, man. 
SPEAKER: 
Wow, amazing. 
SPEAKER: Everybody warned you 
for not using Jake's framework. 
SPEAKER: My framework is not to 
blame here. 
SPEAKER: I'm wondering if it is 
still the battery issue, because
it discharged because the power 
wasn't quite plugged in. 
SPEAKER: Oh, that was what it 
was.
This is in my diary for the next Chrome Dev Summit.
SPEAKER: So what are we waiting 
on now? 
SPEAKER: The beach ball of 
death, I will put it on screen 
so everybody can see it.
[ Laughter 
].
  It is beautiful, isn't it? 
SPEAKER: I have my laptop in 
case we need to do a big web 
quiz question. 
SPEAKER: I have my phone, we can
do it from here. 
SPEAKER: Yeah. 
SPEAKER: We will have it as a 
background.  
SPEAKER: We have a clipboard.
SPEAKER: What questions can we 
do that we have definitely 
checked the answers for? 
SPEAKER: All of them. 
SPEAKER: I mean, the -- oh, the 
CMS would be the next one.  We 
definitely checked that one.  We
can do CMS. 
SPEAKER: Yeah. 
SPEAKER: I think my laptop is a 
goner. 
SPEAKER: It is gone? 
SPEAKER: Oh, man. 
SPEAKER: This is, like, the 
worst.  
SPEAKER: I told you. 
SPEAKER: That's the word of 
encouragement that Surma needs 
right now. 
SPEAKER: At this point, the only
thing I can do is, like, go out,
try to fix it. 
SPEAKER: Should we do a quiz 
question, you have until the end
of the quiz question? Should we 
call it? 
SPEAKER: Put it on, I will try. 
SPEAKER: Okay. 
SPEAKER: So go to the quiz. 
SPEAKER: All right, look at the 
quiz.  Hooray.
So you are all on our side.
Hey, look at that!  CMSes.
SPEAKER: They are known to have 
a funny name. 
SPEAKER: They are. 
SPEAKER: And researching this 
round was, frankly, hilarious.  
Not as hilarious as this.
[ Laughter ].
  Let's start the round. 
SPEAKER: Should we introduce 
what the idea is? We can pick 
the ones which are real, which 
are fake.  And since every developer at some point in their career builds a CMS and gives it a funny name, it is a difficult round.
SPEAKER: I like easy-peasy 
content squeezy. 
SPEAKER: A-haha. 
SPEAKER: This is giving it
a far cry. 
SPEAKER: Feeling content. 
SPEAKER: That's a good one.  
SPEAKER: Ultimate content 
managerize. 
SPEAKER: That's a
wrestler. 
SPEAKER: Very confident about 
magnolia. 
SPEAKER: All right.  Let's 
reveal some
answers.
Ahh I'm disappointed that easy 
peasy content squeezy is not a 
CMS. 
SPEAKER: Now that you say that, 
somebody is publishing an npm module right now.
SPEAKER: Brilliant!  I will 
gladly use it if it is a good --
SPEAKER: Is magnolia
a popular one? 
SPEAKER: And Jake CMS
CMS -- so is the failure 
spreading? 
There's a common factor. 
SPEAKER: Far cry is an actual 
CMS. 
SPEAKER: What kind of content 
are we talking? The Far Cry I'm 
familiar with is a different 
kind of CMS, managing people 
from being alive to dead. 
SPEAKER: Or based on the person 
who developed it.  Writing a 
Content Management System is -- 
SPEAKER: Is there a history? 
SPEAKER: It is possibly an homage.  Tiddly Wiki.  61
percent think it is real.  I 
think they are right.
I had to giggle about that 
Backstage.
[ Laughter ].
And ultimate content 
managerizer, like easy peasy 
content squeezy, also not real.
Surma, how is it? 
SPEAKER: I feel like my laptop 
is legit broken now.  
SPEAKER: So you are saying, at Google every few years, you get a new laptop.  You were boasting that yours has to last until April.
SPEAKER: I'm without a laptop 
now.  Houdini broke my laptop. 
SPEAKER: If you want to try
Houdini -- [ laughter ]. 
SPEAKER: I see the next speaker 
is already waiting, so we might 
as well. 
SPEAKER: Might as well should. 
SPEAKER: I think we will call 
this one. 
SPEAKER: Our next speakers. 
SPEAKER: First of all, can we 
give a round of applause to
Surma?
[ Applause ].
SPEAKER: All of my worst talk failures have happened at Google events, and none of them were that horrific.
SPEAKER: You can be like -- 
there is always somebody worse 
off, and that somebody is Surma.
SPEAKER: Thank God that wasn't 
live streamed. 
SPEAKER: Right!  
SPEAKER: [ Laughter ]. 
SPEAKER: He will be okay. 
SPEAKER: Our next speakers
are  Chris Wilson and
John Pallett.
Building Engaging Immersive 
Experiences.
SPEAKER: What was that? Stay 
away from my laptop, Surma.  He 
has never been so happy to see 
me.  I'm Chris Wilson, here with
my colleague,
John Pallett.  
We're going to talk about the immersive web.  This is not running off my laptop, or the demos would not be mine.  So we talk about immersive a lot, and what do we define as the immersive web?
Everyone has heard of virtual 
reality.  Who saw Ready Player One? A small percentage; a good movie, it is good entertainment.
It is like that.  And virtual reality is all about immersing yourself in an alternate reality: putting the reality blinders on, replacing everything you can see and usually hear, and immersing yourself in this totally different world.
  That world may be a game, visualizing a data
set, a work space, my kids like 
to play a game where I put on a 
VR headset, how close can they 
dance to me before I notice they
are there, which is pretty 
close.  And this is my favorite place to go into VR, which is my desk.  You experience virtual reality through a tethered headset, or a smartphone VR system, like Daydream View, or Google Cardboard, my personal favorite.  And any of these
devices use a combination of 
head tracking, screen display, 
optics, and controllers to make
you feel like you are present in
a totally different world.
At Google, we have been working on exposing this to the web for really quite a long time on all of these devices, from high-end desktop headsets on Windows, to a polyfill.  And, in fact, you may not have a VR headset, or even 29 cents' worth of cardboard.
WebXR and the WebXR polyfill let you view VR worlds on a mobile device using the orientation API, so you can look around a 3D scene even if you don't want to drop your phone into a headset.
In addition to bringing VR to the web, my team also actually works on bringing the web into VR, at least on Daydream devices.
You can launch a VR version of Chrome from the Daydream home screen, and we make browsing the traditional 2D web a great experience.  But, of course, the cool part is when the browser in VR can be used to browse immersive worlds; you can hop back and forth between the 2D web and VR content that is hosted directly on the web.
And so this gives you a really 
great, easy experience and 
really actually totally 
immersive, you are just 
navigating inside that world.  
And having a browser inside the VR world turns out to be so useful that 83 percent of Daydream users also regularly use the browser in VR.  This was something that we added after the fact, Daydream shipped without a browser, and the fact that this is a regular occurrence for most users shows how important the content of the web is, even when you are living inside a VR world.  And enough about virtual
reality for a second, I want to 
talk about when you don't want 
those reality blinders on.  
Actually, I like to interact with my kids, and I want to be able to see them and not have them dance in front of me.
And the most exciting extension 
of the computing platform is 
augmented reality, not just 
stickers, or dropping objects 
into your reality, the key to 
understanding AR is the concept of the computer seeing the world around you, interpreting it, finding surfaces, and augmenting your reality with bits of user experience.  Instead of replacing your reality, we want users to blend
virtual and real experiences.  
So for AR, there are headsets, 
there are projection systems to 
display on real-world
surfaces, but most users will 
experience AR like I did, using 
a camera pass-through experience
on a mobile device, showing 
things like AR stickers.
And then you think of the things that the web is really, really good at: the long tail of software products you expect
software products you expect 
from the web, the experience 
that users will happily click 
on, but they wouldn't 
necessarily install on their 
devices, the massive success of 
the web as a commerce platform 
is a huge benefit.  You can 
start to see how enabling 
developers to build immersive 
experiences that are delivered 
in an ephemeral fashion is a 
fantastic idea.  You don't have 
to install an app to see how the
couch is going to look in the 
living room, you don't have to 
install the app to view an 
immersive video trailer.  The ephemerality of the web makes these experiences a fantastic match, and our mission
is to enable web developers to 
break out of that plane, the 
design world that we have been 
living in for so long, to 
deliver immersive experiences.  
And to enable that, we needed to
start with the baseline, like, 
being able to connect immersive 
displays and render to them.
And that's where the web XR 
device API comes in.  This 
replaces the old WebVR API to expose AR and VR functionality as a platform layer, the underpinnings only.  It lets us connect to the devices, render to displays, understand which way they are pointed, and interact with controllers, that kind of stuff.
And this is a really broad, 
multi-year effort by a bunch of 
different companies, Google, 
Mozilla, Microsoft, Samsung, 
Amazon, Oculus, a whole bunch of
others as well have been working
on this for a while.
And this has all been developed 
in the W3C, we have a brand new 
immersive web working group; I co-chair this with my colleague, Ada Rose Cannon, from Samsung, sitting here in the front.  And
we are tasked with taking the 
spec to a final status, and this
is why we created a working 
group, because we feel it is 
super important to actually land
this now and not just keep 
talking about how cool it could 
be in the future.  This 
showathize
-- shows the maturing of the API
because it is closer and closer 
to becoming a final standard and
we incubate new ideas in the 
immersive web with a community 
group.  If you want to 
experiment with WebXR today, you can enable it with Chrome's flags.  If you want to
try out AR scenarios, you have 
to enable the second flag, that 
is going away soon because we 
have done new mode work in the 
spec to make that work, too.
We have a currently-running 
origin trial, too, if you want 
to deploy this out to normal 
users.  And, of course, if you 
are willing to take the 
responsibility of making 
changes, as the spec and the 
implementation change.
And now, finally, I mentioned the WebXR polyfill a couple of times; this is something I wanted to give a little more detail about.  This is a polyfill JavaScript library that is maintained by the community group.
It helps developers in a couple of different ways.  First, it offers a JavaScript-only implementation that works for VR scenarios in any mobile browser, using orientation events.  So with mobile Safari, with Cardboard devices or flat displays, you can get a WebXR implementation through JavaScript.
And secondly, on browsers that implemented the older API, like Firefox and Microsoft Edge did, it can actually build WebXR on top of that, and you get the hardware speed of their former WebVR implementation.  You can make your content accessible to a wider range of users with just one script.
Now, with that, I want to bring 
out my colleague, John, who is 
going to drill down into the 
augmented reality possibilities 
in a bit more detail.  Thanks, 
John. 
SPEAKER: Thanks, Chris. 
SPEAKER: [ Applause ]. 
SPEAKER: So let's talk about 
augmented reality, or AR.  As 
Chris mentioned, augmented 
reality is largely about being 
able to over lay information on 
top of the real world.  If you 
tried out stickers, or put masks
on your face in a smartphone, 
you have seen augmented reality.
And the reason this matters is that there are hundreds of millions of phones and tablets right now that
support AR, and most of those 
devices have web browsers, 
there's a big opportunity for 
web developers here and the 
lowest-hanging fruit is the 
ability to add a new experience 
to an existing 2D web page.  It 
does not require a new site, you
can add the AR capability to an 
existing website.  There have 
been a number of partners that 
are experimenting with this 
using the API and turning on the
flag mentioned.  They are doing this in Chrome Dev and Canary;
one example is an augmented 
reality platform that allows 
businesses to put objects in the
real world.
On the left, you can see that 
users can learn about a product 
by getting information in 
context on the product itself, 
rather than having to go through
data sheets.  It saves shipping 
demo units to businesses that 
are thinking about buying 
machinery or heavy equipment.  
And looking at
objects from different angles is
helpful for fashion, and 
education, where students can 
explore what they are learning 
about.  You can do this using a 
3D model, but if you can put it 
into the real world, you get a 
better sense of context and 
scale.
And what is interesting is that,
from a user experience 
perspective, there's a lot of 
interesting things to learn 
about Augmented Reality, 
particularly how it fits on the 
web.  And it is a good reason to
start experimenting with it now,
if you are thinking about adding
it.  So, by way of example, West Elm, who sells home decor and furniture, did user testing.  They
picked four shoppers at random 
and showed them a prototype 
shopping website that 
incorporated AR.  And this isn't
a huge study, but they had 
interesting findings and gave us
permission to share them.  And 
what they learned is that, with these customers, the terms AR or augmented reality are not common vocabulary.  Basic
terminology, like view in your 
room, is a better way of telling
users what to do.  And without a
visual, that text can really get
lost with everything else that 
is visual on the site.  So what 
they are looking at now are ways
to add an icon and text so that 
the user has a call to action, 
they know what to do.  One 
approach is to have a rotating 
3D model so the user understands
that this is more than just an 
image on the page.  They learned
that users are confused without 
clear directions, there's a 
delay while the user is figuring
out what to do, do I move my 
phone around, what is this 
circle on the floor, I have 
never seen this before.  You have to guide users to the path of success; they use loading indicators so the users are not lost, and so they can see how they are placing furniture or an object into the real world.
  When a participant places an 
object, the most common request 
is the ability to move it, spin 
it, or remove it from the 
screen.  The original placement was not always where they wanted it, and it was not always clear how users could do that.  And another thing
that they heard from their test 
subjects is that getting 
validation that the size of the 
virtual object matched the real 
world is helpful.  If you are 
shopping for furniture, you want
to make sure it fits into the 
space.  So showing real-world 
dimensions with the model can 
help.  And finally, there is 
feedback on how real the assets 
looked.  And now, this study was particularly unique: West Elm did it in store, so they could put the virtual object next to the physical one and get real shopper
feedback.  And the feedback on realism here was 6/10.  You can
see that there are differences 
between the two.
So West Elm is looking at how to handle typical lighting scenarios and make sure the shadows under
the object are more pronounced and detailed.  So the key message here is that there is a whole lot that you can learn here, and augmented reality on the web has differences from apps, where someone installing an augmented reality app discovers it in a completely different way.  Despite the challenges, a telling
finding from this study was that
three of the four participants 
said they would, 10/10, use the 
AR, once they knew what the term
meant, to do furniture shopping.
Now, if you are like me, and you grew up with commercials talking
about three out of four dentists
recommend this toothpaste, you 
probably wondered, what does 
that fourth dentist think? Does 
it rot teeth? And the answer 
here is actually maybe.  Three 
people absolutely, one maybe.  I visualize that these people are shopping for home decor, and they are like, would you use this? Maybe, I'm shopping for a couch.
So three people would use it, 
and it is consistent from what 
we heard from users who see the 
value in visualizing objects in 
the real world.
  So if you are thinking about experimenting with this and adding it to your website, let's think about how you can build things.  Chris mentioned the flags you enable earlier in the presentation, but now let's talk about how you can add the immersive experience.
You can write raw WebGL code; if you are doing augmented reality, that is drawn on top of the camera feed, so that is one way you can do it.  But we recommend a library.  Three.js is a library for 3D graphics, and it does the heavy lifting so you don't have to use WebGL directly.  We will see how they work together.  We will not create a WebXR session here, but we will see how an object can be placed into a real-world scene with a narrow use case: putting a reticle onto a real-world surface.  And this helps the user know what to do.
So what we're going to do first 
in this case is take the reticle
and add it to the scene.  It is 
worth noting that we would like the reticle to be half a meter wide, and the WebXR Device API locks the coordinates of the virtual world to the real world.  That means that one unit in WebGL, or the virtual space, is one meter in the real world, and 10 units is 10 meters.
If the mesh is half a meter 
wide, it is half a meter in the 
real world.  And to render the reticle where the camera is pointing, on real-world geometry, we need to understand where the real-world geometry is.  The WebXR Device API has the capability to do a hit test: fire a ray into the real world and get intersection points with real-world surfaces.
So take a ray from my eye to the stage, and get the intersection of where I'm looking at on the stage, and a normal facing up, so I know where the surface is and which direction it is facing.  And in order to do that, you need a ray, and what three.js can help you with is a raycaster function, not shown here, that gets you the origin and the direction of the ray fired from the camera into the scene.  We will pass that
into the hit test API, which is part of WebXR.  We will take the return value, a list of hits, because there might be more than one surface behind the first one.  We will take the first one, the closest, convert the position into a three.js matrix, and set the position of the reticle, and then you are done.
The reticle is positioned so it 
will render over the real world 
object that is detected from the
ray.  It is worth reiterating that this works because the virtual coordinate system matches the real-world one.  I skipped a lot of steps here, but you can combine frameworks like three.js with WebXR if you know 3D programming basics.
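[The hit test itself runs against real-world geometry that the device detects, so it can't be reproduced outside a browser, but the underlying math, intersecting a ray with an upward-facing surface, is a plain ray-plane intersection. This is a toy sketch of the idea, not the WebXR API, and the function name is invented:]

```javascript
// Toy version of what a hit test computes: intersect a ray
// (origin + direction) with a horizontal plane at a given height.
// Units are meters, matching WebXR's real-world-locked coordinates.
function hitTestFloor(origin, direction, floorY = 0) {
  // A ray pointing away from the floor (or parallel to it) never hits.
  if (direction.y >= 0) return null;
  const t = (floorY - origin.y) / direction.y;
  return {
    x: origin.x + t * direction.x,
    y: floorY,
    z: origin.z + t * direction.z,
    // The surface normal faces up, like a detected floor.
    normal: { x: 0, y: 1, z: 0 },
  };
}

// A camera 1.6m up, looking down at 45 degrees, hits the floor
// 1.6m in front of it.
const hit = hitTestFloor(
  { x: 0, y: 1.6, z: 0 },
  { x: 0, y: -Math.SQRT1_2, z: -Math.SQRT1_2 }
);
```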
But some of you may not know 3D 
programming basics, and if you 
have tried to add a 3D model to 
your site, you probably know 
already it is not super easy.  
3D models can be complex to read and to display.  And even in the West Elm example, there are design considerations: if you
are starting from scratch, how 
do I allow them to rotate the 
objects and so forth? And 
responsive design, if you want 
it to work on mobile or desktop,
if you want a simple model and a
turntable view, you need to know how to handle resizing.  Do you need to
display a poster image to 
prevent download until the user 
wants it? And for augmented 
reality, ideally, you 
progressively take advantage of 
capabilities on platforms where 
they are available, even if they 
are not available in all browsers.  And 
as WebXR comes out and moves to 
stable, it is one more thing to 
learn and add.  If you have 
experimented with this before 
and found it a little bit 
tricky, you are not alone.  So 
the team has been looking at 
this problem.  We made public 
an early version of a 3D model 
viewer web component.  This is 
really early, but it does some 
things today that make life a 
little bit easier, and we 
released it to get your feedback.
In releasing this, the first goal 
is to let you add 3D models to 
your site without learning 3D 
programming or writing code.  We
want it to work well across 
browsers and form factors, with 
progressive enhancements to take
advantage of capabilities on 
browsers where they are 
available.  And then the third 
goal is that as the new API 
ships, the WebXR Device API, we 
want the component to take 
advantage of them so you don't 
have to keep up to date with all
of the changes that are coming 
out.
So, like I said, it is super 
early but we made some progress.
I want to give you a sense of 
what the component did today.  
We will run through a few 
examples.  The first case is a 
static glTF model.  glTF is a 3D 
file format; it is the required 
format for the model viewer, and 
it allows us to work across all 
browsers.  If we add attributes,
we can bring the model to life. 
We set the background color and 
tell it to auto-rotate.  With the 
controls attribute, we can allow
the user to spin it around, move
it, and take a look at what is 
going on and move from the back 
to front.  We added a poster 
image capability, you can delay 
the loading of the model so you 
are not consuming data on mobile
if that's what you want to do.  
And the attributes are dynamic, 
if you add a script that 
switches the poster image back 
and forth, you can animate it to
give the user a sense it is not 
an image, it is a 3D model that 
they can click on.  It works in 
that way, similar to an image 
tag.
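A minimal sketch of what using the component looks like. The attribute names here are from the early release described in the talk and may have changed since; the file names and model URL are placeholders.

```html
<!-- Hypothetical usage; attribute names from the early release may differ. -->
<script type="module" src="model-viewer.js"></script>

<model-viewer src="astronaut.glb"
              poster="astronaut-poster.png"
              background-color="#70BCD1"
              auto-rotate
              controls>
</model-viewer>
```

The `poster` attribute delays loading the model, and `controls` lets the user spin the model around, just as described above.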
The component also handles some 
forms of responsive design, so 
you can see here that it will 
scale up for desktop and it will
scale down for mobile, and it 
will manage the staging and the 
lighting and the rendering of 
the model properly.  It can also
manage multiple instances on the
same page, so it will take care 
of WebGL from that perspective, 
and it uses IntersectionObserver
to make sure it is not burning 
battery and GPU when you can't 
actually see the model.
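The IntersectionObserver trick can be sketched like this. This is a sketch of the idea, not the component's actual code; the function names are mine.

```javascript
// Sketch: pause an expensive render loop while the element is off-screen,
// the same idea the model-viewer component uses to save battery and GPU.
function makeVisibilityGate() {
  const gate = { visible: false };
  // Pure decision logic, separated so it is easy to test.
  gate.update = (isIntersecting) => { gate.visible = isIntersecting; };
  gate.shouldRender = () => gate.visible;
  return gate;
}

function observeAndRender(element, renderFrame) {
  const gate = makeVisibilityGate();
  // The observer fires when the element enters or leaves the viewport.
  new IntersectionObserver((entries) => {
    for (const entry of entries) gate.update(entry.isIntersecting);
  }).observe(element);

  (function loop() {
    if (gate.shouldRender()) renderFrame(); // skip GPU work while hidden
    requestAnimationFrame(loop);
  })();
}
```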
  Finally, the team is 
experimenting with more of the 
progressive enhancement 
capabilities.  In this case, you
can see that they are 
experimenting with the WebXR API
and incorporating that so you 
can add more attributes to turn 
on AR across different devices.
  And again, this is really 
early, the team is working on 
more features for user interface
and responsive design features 
to make it possible to add a 3D 
model to a web page.  There is a
lot of work on realism, AR, and 
interactivity, and we want you to 
try this out and give feedback 
on the GitHub repo.  If 
you are interested, go, try it 
out, let us know what you think.
So we covered the WebXR Device 
API, three.js, and we touched on
the new, early release, the 
model viewer web component.  If 
you are interested in more, this
is the slide to take a picture 
of. 
The links are on the screen.  If
you are watching at home, you 
can check it out later as well. 
With that, thank you very much 
for your 
time.
  [ Applause ].
SPEAKER: So we said at the start
of day two, that day two would 
be experimental.  And sometimes 
experiments fail.  Surma is 
back.
[ Applause ].
We would like to give him a 
one-shot.  We're going to roll 
into the break a little bit, 
there will still be a little bit
of a break, and we will pick it 
up and be back in as we planned 
to.  But we will give him one 
chance to finish the talk.  
Because Houdini is amazing. 
SPEAKER: And just before we came
on stage, I said, is it looking 
like it is going to be okay?
He said, no, we are having 
microphone issues now. 
SPEAKER: Is it getting better? 
SPEAKER: I don't know. 
SPEAKER: That sounds good. 
SPEAKER: So that's winning, I'm 
audible.  Should we? 
SPEAKER: Of course. 
SPEAKER: Give a massive round of
applause, ladies and gentlemen, 
it is 
Surma.
[ Applause.]
 SPEAKER: So the squircle.  As I 
was saying, [ laughter ], what 
do you do if you want to draw 
a squircle? We are still on the 
same page.
We have 15 minutes.
We load it, we have the file, we
have that.  We had this one, we 
draw a circle in the middle, and
then we will have the real, the 
paint function for the 
background image instead of the 
normal SVG image.  I'm going to 
press the button.
  [ Applause ]. 
SPEAKER: We have a paint circle!
I'm so happy.
By the way, everyone, thank you 
so much for the kind messages on
Twitter and the support in the 
room.  It could have been so 
much worse.  Thank you very 
much.  And I set the background 
image on a textarea, and this is 
me animating width and height; 
don't do this at home, kids, 
never animate width 
and height.  But the circle is 
pink and in the middle.  So what
is the advantage of using 
Houdini's paint API over a 
normal canvas? The first is auto
repaint: the browser decides 
when it needs to re-run your 
painting code.  It is auto 
sized: if you have worked with the 
HTML canvas, you know the number
of pixels on an HTML canvas is 
completely independent from the 
number of pixels the canvas has 
on the screen, which is super 
painful to work with and, in 
this case, you don't have to 
worry about it.  It is 
automatically set to the correct
size.  It is off main thread, 
the code that you write to do 
the paint operation does not run
on the main thread.  And that is
a lie.
Currently, in Chrome, we do run 
it on the main thread.
But worklets can be migrated, 
so as soon as we have the 
infrastructure to run them somewhere 
else, that will happen.  That 
means you don't use any of the 
main thread budget making sure 
the page is smooth.
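The circle demo from a moment ago boils down to a paint worklet roughly like this. A sketch: the class name and file name are mine, not the demo's, and the color is a guess from the "pink circle" description.

```javascript
// circle.js — loaded on the page with CSS.paintWorklet.addModule('circle.js'),
// then used as background-image: paint(circle).
// The canvas-like context is auto-sized: geom.width and geom.height always
// match the element's size on screen, one of the advantages listed above.
class CirclePainter {
  paint(ctx, geom) {
    ctx.fillStyle = 'hotpink';
    ctx.beginPath();
    ctx.arc(geom.width / 2, geom.height / 2,
            Math.min(geom.width, geom.height) / 2, 0, 2 * Math.PI);
    ctx.fill();
  }
}

// registerPaint only exists inside the worklet's global scope.
if (typeof registerPaint !== 'undefined') {
  registerPaint('circle', CirclePainter);
}
```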
No DOM overhead.  This is 
underrated; I see pages on the web 
that use an assembly of 
different elements to achieve a 
visual effect.  With this, you 
are using a virtual canvas, not 
extra DOM elements, to achieve 
the same effect.  And, for example, this 
is -- no.
No!
This is actually not
that bad.
All right, we got this.
This is just an aw, snap.  We 
can work around this.
Is this full
screen?
Sure, hang on.  We almost got 
this. I will move this over 
here.
I will move back.
And we're going to go to here.
All 
right?
[ Laughter ].
AUDIENCE:
Awww.
  [ Applause ].
So this, I'm literally, I'm not 
giving up.
For me, this just means that I'm
going to skip that slide.
So what I'm going to do is I'm 
going to go out of full screen 
and move this to a different 
slide.  I'm going to go back 
into full screen.
And we're going to go to this 
one.
All right.
[ Laughter ].
  [ Applause ]. 
SPEAKER: Wow.
All right.
My point -- [ laughter ] -- that
I was trying to make was that we
found that, on low-end devices, 
implementing effects like this 
one is more 
efficient in a paintWorklet 
than using DOM elements.  So 
this is why this is actually a 
performance primitive to make 
your app run buttery smooth, 
even on the low-end devices.  So
this is another effect that Una 
wrote that is very nice, and 
this points at a nice example to
show how the browser can decide 
when to paint and where not to 
paint.  So in the paint class, 
you can declare your 
dependencies.  You can say these
are the CSS properties that I 
rely on.  So the browser knows 
only when these properties 
change will the code have to 
run, otherwise it won't.
And so, in this case, you have a
couple of custom properties 
saying I want this number of 
stars, and this -- the different
sizes kind of thing.  And we 
will keep the animation, and 
then you end up with this effect
and it is actually efficient in 
the sense that it does not run 
on every frame, but re-paints 
when the animation tells you it 
is necessary.
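Declaring dependencies looks like this in a paint worklet. A sketch: the custom property names are hypothetical stand-ins for the star effect described above, not Una's actual code.

```javascript
// stars.js — a paint worklet that declares its CSS custom property
// dependencies, so the browser only re-runs paint() when those
// properties change, not on every frame.
class StarFieldPainter {
  static get inputProperties() {
    return ['--star-count', '--star-size'];
  }
  paint(ctx, geom, properties) {
    const count = parseInt(properties.get('--star-count')) || 0;
    const size = parseFloat(properties.get('--star-size')) || 2;
    for (let i = 0; i < count; i++) {
      // Deterministic pseudo-random placement, so repaints are stable.
      const x = (i * 97) % geom.width;
      const y = (i * 71) % geom.height;
      ctx.fillRect(x, y, size, size);
    }
  }
}

if (typeof registerPaint !== 'undefined') {
  registerPaint('starfield', StarFieldPainter);
}
```

Animating `--star-count` with a CSS transition then triggers repaints only while the transition runs.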
This is another effect, a simple
clock that people might write 
with SVG or a canvas, some 
people might try to make this 
happen with a DOM and a couple 
of DOM elements.  If you look 
closely, the hand has a trail 
and that suddenly makes it a lot
harder to do with SVG or the DOM
and maybe Canvas is more 
appropriate.  The nice thing is 
you can have a module, a clock 
CSS Houdini module and configure
it just with a couple of 
different custom properties, you
can animate the background 
color, the thickness of the 
hand, the circle at the end, the
length, you can show or not show
the individual stops on the 
clock, you can do all of these 
things at once, but it gets 
really stressful to look at.
[ Laughter ].
  So I would, I wouldn't do 
that. It is a trail, and the 
trail is a CSS transition so the
browser knows while the 
transition is going on, I need 
to repaint this every frame.  
The second it is done, it stops 
repainting.  So an easy 
performance win.
So far, we have been using CSS 
paint for background images.  
And we have gotten pretty 
far, I think, if you remember 
what I was talking about a 
couple hours ago, it feels like.
  You can use it anywhere you 
can use a CSS image, like in a 
mask or a border image.  In the 
land of border image, you can 
make this organic look where the 
border looks hand-drawn.  If you 
want that look, CSS paint makes 
it easy to achieve that effect.
This is an important progressive
enhancement, and one important 
note about the syntax, it 
detects support for paint 
worklet, not for one specific 
paint worklet.  So even if the 
name does not exist as a 
paintWorklet, it is evaluated, 
which is handy.
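In CSS, that feature detection looks something like this; the name inside `paint()` is arbitrary and does not need to be registered, which is exactly the point made above. Class and worklet names here are placeholders.

```css
.fancy-border {
  /* Fallback for browsers without the Paint API. */
  border-image: url('fallback-border.png') 30 round;
}

@supports (background: paint(anything)) {
  /* Only paint() support itself is detected, not this
     specific worklet name. */
  .fancy-border {
    border-image: paint(hand-drawn-border) 30 round;
  }
}
```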
And, for this talk, I want to 
introduce the Three Pig 
Stability Index, a little notion 
of how stable an API is, based 
on the story of the three pigs 
and the wolf.
  So in this case, the paint 
API, the spec is
a recommendation, W3C speak for 
the fact that it is stable.  
Safari and Chrome have it in 
development right now, and, with
that, I will call it brick 
stability.  That is supposed to 
be a brick emoji, which was 
standardized in June of this 
year, the fonts do not have it 
yet, but it luckily looks like a
brick.
  If you want to learn more, I 
will shamelessly plug the 
articles I wrote here.  If 
you have questions, you can hit 
me up.
And with that, I talked about 
the first Houdini API and I will
see if the browser can handle 
this.
Should I risk hiding the url 
bar? I will do it.
The next one is compositing.  
The compositor's main job 
is to do the animations, with the
layers that move around.  The 
corresponding API is called the 
Animation Worklet API.  We will 
talk about that.  If you are 
thinking about animations on the
web, you have three choices, or 
two and a half, you have CSS 
transitions, which allow you to
transition from the current 
value to a new value; you have 
keyframe animations, a declarative 
API; and you have the Web 
Animations API, the imperative 
version that allows you 
to nest timelines, but it is
badly supported.  It is behind a
flag, Chrome has the 
implementation, it is missing a 
lot of features, Edge doesn't 
have it, so it is not usually a 
good choice.  Even if it were, 
there are scenarios where the web 
animations API is not good 
enough.  And this is where 
Animation Worklet comes in.  So 
this is where you see a normal 
web animations animation.
It is done with the .animate() 
call.  This is the same thing, 
just a little more elaborate, 
using the Web Animations API, 
and the worklet version is similar.  I
will use WorkletAnimation, 
because we are associating this 
animation with the worklet, we 
need to provide a worklet name. 
Other than that, it stays the 
same.  We have the key frame 
effect, and we have two key 
frames you will use within two 
seconds.  And then we have the 
animation worklet on the CSS 
namespace, where we can call 
addModule, and within the animator
file, we can use JavaScript.
We have an animate call back, 
when I get the current time, and
the effect of the animation, and
now it is our job to set the 
local time of the effect 
depending on the current time.  
If we do it like this, where we 
don't think about it, it is like
a pass through and it behaves 
like a normal web animation API.
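Put together, the pass-through animator sketch looks like this. The API shape is from the origin-trial era and may have changed; the commented page-side usage and file names are my assumptions.

```javascript
// passthrough.js — loaded with CSS.animationWorklet.addModule(...).
// The animate() callback maps the animation's current time onto the
// effect's local time; doing it 1:1 behaves like a normal web animation.
class PassThrough {
  animate(currentTime, effect) {
    effect.localTime = currentTime;
  }
}

// registerAnimator only exists inside the animation worklet's scope.
if (typeof registerAnimator !== 'undefined') {
  registerAnimator('passthrough', PassThrough);
}

// On the page (origin-trial era shape, may have changed):
//   await CSS.animationWorklet.addModule('passthrough.js');
//   new WorkletAnimation('passthrough',
//       new KeyframeEffect(el, keyframes, { duration: 2000 }),
//       document.timeline).play();
```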
But this is JavaScript, so you 
can implement arbitrarily 
complex time mappings.  What 
does this mean? I will not go 
into all the details, but give 
you a taste of it.  If you want 
to know more, I will shamelessly
plug another article, I would 
welcome feedback.
  So what would you use this 
animation for? A year or two 
years ago, Safari proposed the 
spring timing function, and it 
is implemented in Safari and no 
other browser implemented so 
far.  What do you do if you 
want to use it, or 
any other timing function that 
does not exist? So we can write 
the bounce animator, if you 
animate the element from A to B,
you can move it from A to B, but
you can move it from A to B and 
a little bit back and a little 
bit less back and it looks like 
a bounce.  We have the 
constructor where we take the 
options, including an option for 
bounciness, which makes sense: 
how bouncy is it supposed to 
look.  In the animate callback, we 
use the bounce function, 
depending on the bounciness and 
the time, to decide where between 
the two keyframes we want to end 
up.  And the bounce function, I 
implemented with dodgy physics, 
in the end, you can think of it 
like implementing this kind of 
graph between two key frames.  
So if we do that and we run 
that, you can see that it is now
actually a bounce.  Keep in mind
this is literally just two key 
frames, we are bending time, so 
to speak.
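The time-bending can be written as a pure function. This uses my own dodgy physics, like the speaker's; the constants, the options shape, and the hard-coded duration are all assumptions.

```javascript
// Map linear progress t in [0, 1] to a "bounce" progress: the value
// approaches 1 while the overshoot decays away, like a ball settling.
function bounce(t, bounciness) {
  const decay = Math.exp(-6 * t);                      // bounces die out
  const wobble = Math.abs(Math.cos(t * Math.PI * bounciness));
  return 1 - decay * wobble;                           // 0 at t=0, ~1 at t=1
}

// In the animate() callback, bend time before handing it to the effect,
// so two keyframes are enough for the whole bounce.
class BounceAnimator {
  constructor(options) {
    this.bounciness = (options && options.bounciness) || 4;
  }
  animate(currentTime, effect) {
    const duration = 2000; // matches the two-keyframe, 2-second effect
    const t = Math.min(Math.max(currentTime / duration, 0), 1);
    effect.localTime = bounce(t, this.bounciness) * duration;
  }
}

if (typeof registerAnimator !== 'undefined') {
  registerAnimator('bounce', BounceAnimator);
}
```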
And this animation, because it 
is a worklet, runs off the main 
thread, on the compositor thread. 
If the main thread is busy, this 
animation will still run 
frame-perfect and make sure that 
your animations look really smooth.
So, so far, we have done this, 
the animation, if we look at 
this, I explicitly wrote out 
document.timeline; it is an 
optional argument.  With 
the animation worklet, you can 
get time somewhere else.  Not 
just the actual time.  You can 
get the scroll timeline, and 
even an input timeline.  We are 
talking about input timeline, 
but I have nothing to show here,
but I can show you scroll 
timeline.
So here, you see a Pac-Man that 
I have linked to the scroller at
the bottom.  I can basically 
scroll the scroller at the 
bottom and the animation jumps 
to the position of where I'm in 
the scroller, and that gives you 
a scroll-linked animation.  And 
this is kind of fun, but not 
that useful.  But you can 
conceive of more useful 
usages, like scroll-linked 
effects.  Here, three animations, 
the name and button and avatar, 
are all scroll-linked effects 
that assume their position in the 
animation depending on how far I 
scroll.  
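Hooking an animation to a scroller can be sketched like this. The ScrollTimeline options follow the origin-trial draft and may differ from the final spec; `'passthrough'` is assumed to be an animator already registered via the animation worklet.

```javascript
// Sketch: drive a worklet animation from scroll position instead of time.
function linkToScroll(element, scroller) {
  const timeline = new ScrollTimeline({
    scrollSource: scroller,
    orientation: 'vertical',
    timeRange: 2000, // scroll position maps onto a 0–2000 "time" range
  });
  new WorkletAnimation(
    'passthrough', // an animator registered via CSS.animationWorklet
    new KeyframeEffect(element,
      [{ transform: 'translateX(0)' }, { transform: 'translateX(500px)' }],
      { duration: 2000 }),
    timeline
  ).play();
}

// The underlying mapping is simple: how far through the scroller are we?
function scrollFraction(scrollTop, scrollHeight, clientHeight) {
  const max = scrollHeight - clientHeight;
  return max > 0 ? scrollTop / max : 0;
}
```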
The parallax effect is easier to
implement than it is currently 
on the web.  And in combination 
with CSS scroll snap 
points, you can have a lot of 
synergy here, you have smooth 
transitions between different 
sections of the app, you can see
the indicator and the images 
zoom into view, and then the -- 
for me, the really interesting 
thing is that animation worklet 
is the same thing as the paint 
worklet was for rounded corners.
If you don't like how scroll 
snap points work, animation 
worklet is low-level enough for 
you to implement 
your own version.  So we 
future-proof the web for 
whatever people will come up 
with in the future.
  So let's talk about the Three 
Pig Stability Index for this API, 
and we are in collaboration with
Apple, Microsoft, and Mozilla, 
and we're at a point where we 
feel fairly confident that all 
of the browsers are on board of 
a conceptual level of what we 
come up with.  So I would, all 
in all, give this wood 
stability.  We feel confident 
about it, we want to see how you
feel about it.  It is in canary,
we are going to an origin trial 
in 71, if you want to test the 
production, we would love if you
do, sign up here.  And the 
article will give you what you 
need to get started, if you 
cannot, contact me, I'm happy to
help out so we can get the real 
benefits of Animation Worklet. 
I don't know how much time I 
have left, I will go for it, 
because I can talk a little bit 
about the layout API.
So for the layout API, I'm going 
to start with the Three Pig 
Stability Index: it is complete 
straw.  
We refactored this two weeks 
ago, so there's a half-finished 
implementation in canary, you 
can play with it, don't expect 
the code to work next week.  But
there's so much potential in 
this editor that I wanted to 
give insight into this.  So with
the custom layout API, or the 
worklets, you can basically 
define your own display values. 
So I'm just going to have the 
main element, a couple of divs 
in there, and all of the other 
magic happens in the worklets.  
I have the module, and in there 
we have a layout callback.  I'm
not going to explain all of the 
parameters because, A, I don't 
understand them all, and layout 
is pretty complex.
But I'm going to keep this one 
simple so we can get a feel for 
it.  I will loop over all of the 
children and the child nodes on 
the custom layout elements, I'm 
going to lay them out in empty 
space, basically asking how big 
would you be if you had no 
constraints, and then I'm just 
going to give them a random 
offset, give them a random 
position within the rectangle 
that I occupy.
And then I return this list to 
the browser saying, I did my 
layout, please go forth and 
paint me.  And that's what the 
browser will do.  So if you look
at this, you can see a couple of
rectangles, and every couple of 
seconds I add or remove a 
rectangle to force a layout and 
I get a new position that is 
truly useless.  But this runs 
during the layout phase of the 
browser, something that was 
closed off to developers so far.
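The layout callback described above looks roughly like this. The spec was refactored shortly before the talk, so treat every name here as unstable; `randomOffset` is my own helper and the fragment property names are assumptions.

```javascript
// random-layout.js — loaded with CSS.layoutWorklet.addModule(...), then
// used as display: layout(random). API shape is in flux; this is a sketch.

// Pure helper: a random position inside the box we occupy.
function randomOffset(maxX, maxY, rng) {
  return { x: rng() * maxX, y: rng() * maxY };
}

if (typeof registerLayout !== 'undefined') {
  registerLayout('random', class {
    async layout(children, edges, constraints) {
      const childFragments = [];
      for (const child of children) {
        // "How big would you be if you had no constraints?"
        const fragment = await child.layoutNextFragment({});
        const { x, y } = randomOffset(
          constraints.fixedInlineSize - fragment.inlineSize,
          constraints.fixedBlockSize - fragment.blockSize,
          Math.random
        );
        fragment.inlineOffset = x;
        fragment.blockOffset = y;
        childFragments.push(fragment);
      }
      // Hand the fragments back: "I did my layout, go forth and paint."
      return { childFragments };
    }
  });
}
```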
And we are running different 
aspect ratios and the masonry 
algorithm, if you will, takes 
care of assembling these.  And 
the number of columns is just 
the custom properties that I can
increase and it will scale up 
and give you this nice little 
masonry look which, so far, you 
have had to do with position: 
absolute magic.  And this is just
layout, which I think is really 
exciting.
  If you go into the code, I 
don't have an article for the 
layout worklet yet, by the time 
I'm done with an article, it is 
out of date again.  Please go to
the repository I have maintained
with samples of all of the work 
I talked about today.  You can 
fork them, play around with 
them, or I would be happy if you
contributed to them.  I want to 
build a collection of off the 
shelf Houdini elements with 
popular effects.  If you want to
keep up with the development of 
Houdini, I made, 
isHoudinireadyyet.com, you can 
see the browsers, which API they
support, which ones are in 
development, which ones they at 
least want to implement, 
there are links to the articles,
to the demos, to the specs, 
hopefully I can make this the 
one-stop shop that you need to 
keep up with Houdini.  I can't 
believe that I made it to my 
last slide.
[ Laughter 
].
  [ Applause ].
SPEAKER: I think you should take
a bow. 
SPEAKER: I gave my talk.  They 
are the awesome ones. 
SPEAKER: Thank you so much.  
SPEAKER: It is time for a break.
Is it still? 
SPEAKER: Yep. 
SPEAKER: Good. 
SPEAKER: You did not completely 
take all the time.  We will be 
back in here at 4:30.  So have a 
break.  I think we can all take 
a break.
Absolutely, see
you then.
Using WebAssembly and Threads
SPEAKER:
So before we trigger layout, we 
will do what style and paint -- 
SPEAKER: Yes, I explained.  So 
style calculations is where the 
browser calculates the style. 
SPEAKER: Okay. 
SPEAKER: 
Haha. 
SPEAKER: Which is something like
the height, you can look at the 
cascade. 
SPEAKER: And layout is where it 
is figuring out the geometry of 
the elements, how wide and high 
and all of that kind of stuff, 
if one pushes all out of the 
way, and painting is where you 
fill in the pixels, and 
compositing is where you paint 
the layers together.  We have it 
all on the web, you should read 
about it. 
SPEAKER: And this question, yes.
SPEAKER: It is the -- does it do
the geometry thing, does it 
trigger layout, ready? 
SPEAKER: Let's go. 
SPEAKER: Box-shadow. 
SPEAKER: Does that
trigger 
layout? Perspective, that is a 
3D transform for one
background image.  Display, flex
display, inline.  All of the 
displays are going through my 
head. 
SPEAKER: Do you use display 
running? 
SPEAKER: I don't believe you. 
SPEAKER: All right.
So should we find out whether 
they trigger layouts? Box 
shadow?
No, it doesn't.
No, it doesn't, it changes the 
shadow, but not the width or the
height.  The outline is the 
same, it is like border, except 
it comes to a paint thing.  So 
it just adds an outline, but it 
actually does not change the 
size, border changes the size of
the element.  And contain, as 
soon as you put containment on, 
it will trigger layout.  
Well done for everyone that got 
it right on height.  And this is
a little bit more
evenly spread.
Should we reveal them? 
Perspective doesn't, but it can 
change where the element 
appears. 
SPEAKER: But it is only done in 
the compositing phase, we will 
do 3D transforms as a compositing 
step.  And this is the 
interesting one about transform,
you can trigger layout.  
SPEAKER: Most of the time, it is
a no, that's why it is a no.  
You can transform if it is to 
the right or the bottom of the 
screen.  
SPEAKER: There we go.  
SPEAKER: That's a full-on no 
isn't it for cursor.  I agree 
with that one.  
SPEAKER: Yep, and there they are
absolutely correct.
So the difference between outline 
and border, border is going to 
grow the size of the element. 
SPEAKER: Indeed. 
SPEAKER: And display, if you do 
flex to display in line, you 
have to figure out the geometry 
effect of that. 
SPEAKER: There we go. 
SPEAKER: Should we go away and 
introduce the next speakers? We 
should. 
SPEAKER: Here to speak about 
WebAssembly and threads, Alex 
and Thomas!
[ Applause.] 
Using WebAssembly and Threads
SPEAKER: I'm Thomas, the product
manager for WebAssembly. 
SPEAKER: I'm Alex, a software 
engineer on ChromeOS. 
SPEAKER: We're going to talk to 
you about WebAssembly, we will 
describe what WebAssembly is and
what you can use it for.  I will
show off the amazing new 
features that the WebAssembly 
team has been working on to 
deliver to you in this last 
year, I will showcase the 
amazing applications that we 
managed to build with 
WebAssembly and are shipping 
into production.
All right.  So what is 
WebAssembly actually? 
WebAssembly is a new language 
for the web.
And it offers an alternative to 
JavaScript for expressing code on 
the web platform.  It is important 
to note though that WebAssembly 
is in no way a replacement for 
JavaScript, rather, it is
designed to fill the gaps that 
exist in JavaScript today.  
WebAssembly is designed as a 
compilation target, you write in
higher-level languages, such as 
C++ and compile into 
WebAssembly.  WebAssembly is 
also designed to deliver 
reliable and maximized 
performance, which is something 
that can be difficult to get out
of JavaScript.
Most exciting is the fact that 
WebAssembly is now shipping in 
all four major browsers, making 
it the first language to be 
supported in every browser since
JavaScript was created 20 years 
ago.  What can I do with this? 
Because WebAssembly offers 
maximized and reliable 
performance, you can expand the 
set of things that you can 
feasibly do in the browser, 
things like video editing, 
complex application, codecs, and
many, many more 
performance-depending use cases 
can be supported on the web.
Secondly, WebAssembly offers 
amazing portability.  Because 
you can compile from other 
languages, you can port not only
your own applications and 
libraries, but the wealth of 
open source, C++ libraries and 
applications that have been 
written.
Lastly, and potentially the most
exciting to many of you, is the 
promise for more flexibility 
when writing for the web.  
Since the web's 
inception, JavaScript has been 
the only way to ship executable 
code on the web, and with WebAssembly 
you have more choice.  I will 
show you the features we have 
been adding in the last year.  
The first off is source maps, 
you all know how important 
source maps are when you are 
working with TypeScript or 
Babel, but it is more important 
when you are trying to debug 
your WebAssembly code.  Source 
maps let you turn something that
looks like this into something 
just slightly more friendly like
this.
With source maps, you can see 
the specific line of code where 
an error has occurred, and you 
can also set break points and 
have the program actually pause 
at the appropriate moment.
One of the big feature requests 
that we've heard from users is 
for better performance when 
you're starting up your 
application so that your module 
can actually get going faster.  
For that, we created streaming 
compilation.  In the past, when 
you compiled a module, you had 
to wait for it to be loaded off 
the network and only then can 
you compile it.  And with 
streaming compilation, you can 
compile each piece of the module
immediately before the other 
parts have finished downloading.
To show you what that looks 
like, we will show an example, 
the fetch for a WebAssembly 
Fibonacci module, we pass that 
into WebAssembly that 
instantiates streaming and it 
takes care of all of the 
underlying bits and pieces for 
you to deliver this experience.
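The fetch-plus-instantiateStreaming pattern can be sketched like this. `fib.wasm` is a placeholder URL from the example above, and the fallback branch is my own addition for engines without streaming support.

```javascript
// Sketch: compile a module while it is still downloading, so compilation
// overlaps with the network fetch instead of waiting for all the bytes.
async function loadWasm(url, imports = {}) {
  if (WebAssembly.instantiateStreaming) {
    // Streaming path: compile each piece as it arrives off the wire.
    return WebAssembly.instantiateStreaming(fetch(url), imports);
  }
  // Fallback: wait for all bytes, then compile.
  const bytes = await (await fetch(url)).arrayBuffer();
  return WebAssembly.instantiate(bytes, imports);
}

// Usage:
//   const { instance } = await loadWasm('fib.wasm');
//   instance.exports.fib(10);
```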
We did some profiling at 
different network speeds to see 
the impact of this.  At 50mbps 
network speed, the network was 
the primary bottleneck and the 
compilation was done as soon as 
the module was loaded.  It was 
not until you hit 100mbps speeds 
that compilation needed more 
time than downloading the 
module.  To 
make start-up time faster, the 
team built and launched a new 
compiler, The LiftOff compiler. 
This LiftOff compiler takes the 
WebAssembly bytecode that comes 
off the wire and executes it 
immediately.  In the background, 
off the main thread, the code is 
then optimized by TurboFan, and 
when that is done, it seamlessly 
replaces the LiftOff code, with 
no need for further 
developer action.  Unity did 
some benchmarking on the effects
that LiftOff had where they 
tried to load a large game, it 
went from taking 7 seconds to 
less than one second, which 
makes all of the difference when
you are waiting to get into a 
game experience.
And probably the biggest feature
that the team has been working 
on this year is WebAssembly 
threads.  WebAssembly threads 
lets you run fast, highly 
parallelized code for the first 
time.  It allows you to bring 
existing libraries and 
applications that use threaded 
code to the web.  This is such a
big feature that I will leave 
the explanation to Alex later 
on.  Before I get into that, I 
will show off the cool new 
applications that are being built 
and launched with WebAssembly 
this year.
The first off is SketchUp, 3D 
modeling software you can pick 
up quickly; unlike traditional 
computer-aided design tools, most 
people can learn it right away.  
It allows you to draw in perspective 
and push/pull things in 3D.  You 
can design a living room, or 
export a model for 3D printing. 
It is a lot of fun and you can 
check out this app right now 
just by going to 
app.sketchup.com.  It has been 
around as a desktop 
application, but the team's 
strategy has been to expand and 
broaden the market of people who
can use 3D modeling by 
making it simple and easy to use
and accessible to everyone.
Delivering the app over the web 
was a critical step in that 
strategy, and the team knew they
wanted to use the same code base
for the desktop applications 
because re-writing the entire 
application in JavaScript was 
not an option.  The team's 
approach was to use 
WebAssembly and the 
Emscripten compiler to bring it 
to the web.
It took two months to bring it to 
the web, which is remarkable 
when you realize how drastically 
it expanded the 
reach of the application.  The 
early focus for Sketch Up has 
been on the next generation of 
3D modelers, and today's sketch 
up is one of the most popular 
aches on the G suite for 
education marketplace.  The team
has increased the paying 
customer base by 10 percent, 
they see a growth in session 
time, returning visitors and 
active users.
  Moving on, the next 
application I want to mention is
Google Earth.  I'm happy to say 
that they have ported their application
to WebAssembly, including the 
new WebAssembly threads.  They 
have this threaded build working
in Chrome and Firefox, making 
Google Earth the first 
WebAssembly multi-threaded 
application to be running in 
multiple browsers.
Google Earth did some 
benchmarking comparing their 
single-threaded version to their
multi-threaded version.  They 
found that the frame rate 
almost doubled when they went to
the threaded version, and the 
amount of jank and dropped frames 
reduced by more than half.
All right.  So the last big 
company and launch that I want 
to mention is Soundation, a 
web-based music studio that 
allows anyone to make music 
online with a wide sample of 
instruments, samples, music, and
effects.  No hardware 
installation is
required, everything is 
available instantly and 
everywhere.  Their mission is to
accelerate music creativity in 
the world, by offering an 
affordable service
to the demographic of new 
producers.  It allows their 
users to enter competitions and 
even get record deals.  They use
audio worklets to deliver a 
smooth experience.  Audio 
worklets launched in Chrome 66, 
bringing audio processing to 
the web platform, and they work 
with other web technologies, 
such as WebAssembly and 
SharedArrayBuffer.  It is one of the 
first adopters of WebAssembly 
threads, and they use it to 
achieve fast processing to 
seamlessly make songs.  Let's 
have a look at the architecture 
and see if we can look at 
anything.  On the JavaScript 
side of the world, we have the 
application UI.  That 
application UI then talks to an 
audio mixing engine, and this 
audio mixing engine spawns off 
multiple different worker 
threads running WebAssembly, and 
each of them can talk to the same 
piece of SharedArrayBuffer 
memory, and this is then passed 
on to the mixing thread which 
passes it to the audio worklet 
which produces the final result.
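The architecture can be sketched like this. The worker file name and buffer sizes are hypothetical; WebAssembly threads are built on the same SharedArrayBuffer and Atomics primitives shown here.

```javascript
// Sketch of a Soundation-style setup: worker threads sharing one block
// of audio memory, with no copying between them.
const SAMPLES = 4096;

// One buffer, visible to every thread.
const shared = new SharedArrayBuffer(SAMPLES * Float32Array.BYTES_PER_ELEMENT);
const mix = new Float32Array(shared);

// A flag the mixing thread can wait on.
const ready = new Int32Array(new SharedArrayBuffer(4));

function spawnMixers(workerCount) {
  for (let i = 0; i < workerCount; i++) {
    const w = new Worker('mixer-worker.js'); // hypothetical worker file
    // The SharedArrayBuffer is shared, not copied, across postMessage.
    w.postMessage({ shared, ready, lane: i });
  }
}

// Atomics give the threads a safe way to signal each other.
function signalReady() {
  Atomics.store(ready, 0, 1);
  Atomics.notify(ready, 0);
}
```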
This is showing the improvements
on rendering a song, adding a 
single additional thread doubled
the performance, and by the time
they added five threads, they 
tripled the performance of the 
application.
That's a great visualization 
showing the performance 
improvements.  But since this 
is Soundation, I wanted to 
show you the experience without 
WebAssembly threads.  This will 
not be an entirely pleasant experience.
This is not what you want when 
you are trying to create 
beautiful music, but they 
succeeded in launching with 
WebAssembly threads and they can
deliver an experience that 
sounds just a little bit better.
[Music].
So, as you can see, not only is 
it a much more pleasant 
experience, but the CPU has 
additional cycles left for other work.
All right.  So I want to close 
off my segment by talking about 
some of the amazing community 
projects that we've seen people 
working on out there.
And the first of these that I 
want to mention is the awesome 
work by the Mozilla and Rust 
community to bring Rust through 
WebAssembly, they have a lot of 
tools that you can check out at 
rustwasm.github.io.  We are 
seeing other projects bringing 
other languages to the web via 
WebAssembly, like Perl, Go, and 
the .NET framework.  Many of these 
languages require garbage 
collection which is not 
supported in WebAssembly, though
we are working on it.  These 
languages come to the web by 
taking the garbage collection 
system and the run time bits and
compiling it down to WebAssembly
and shipping everything down to 
the application page.  This is a
great strategy for getting 
started and experimenting with 
languages on the web.  Because 
of some of the performance and 
memory characteristics, they are
not suited for production 
applications. The 
fully-supported languages today 
are C, C++, and Rust, and 
everything else should be 
considered experimental.  There 
are so many more amazing 
community projects that I don't 
have time to do justice.  We see people porting gaming emulators, codec libraries, digital signal processing, and operating systems, like Microsoft Windows 2000, now available in a browser tab, which is, if not a pleasant experience, definitely interesting.  You can check out these demos and much more at the forum, where we have multiple demos for you to try.
  And with that, I want to hand it off to Alex, who will talk about WebAssembly threads and how to use these features.
  [ Applause ]. 
SPEAKER: Thank you, Thomas.
One of the big themes at the 
conference here when we talk 
about the browser, we talk about
using the platform.  Quite a few
people think of the platform as 
a software stack that is inside 
the browser.  I want to think of it a little bit differently: as the hardware that is in the machine.
machine.  So here is an artist's
rendition of the inside of a 
desktop microprocessor, this is 
what you see if you take the 
plastic off the top.  At the top, the green bit is the external interface, the yellow boxes are integrated memory controllers, and all of the blue tiles here are cores.  Each of these is a CPU core in its own right.  That is a desktop microprocessor; in your pocket, if you have an iPhone or an advanced Android phone, you have six to eight cores in there, ready to do good computing
work.  So when you write a 
normal web application, you are 
looking at something like this.  You have one core running and all of this silicon doing nothing; you are not using the platform.  Some people spawn a web worker to do the hard work and keep the main thread for the UI, so they are running a double-threaded application.  That is using the platform a bit better, but still not doing everything you could.  Since we introduced 
WebAssembly threads, you can do 
stuff like this, use many cores 
for the application.  And there 
is visible improvement in the 
user experience.  So I would 
really like you to start 
thinking about how you can adapt
your application.
So when you create a web worker, you have to understand that it is a threading primitive: workers run concurrently, so you can compute in parallel.  On the left, we have the main thread, which we are all familiar with; it is the thread that interacts with the user and talks to the DOM. 
And the worker that we generate 
is the background thread, 
running in parallel, but it 
doesn't have the ability to call
web APIs and interact with the 
DOM and stuff like that.
But when you create workers, you normally create them with JavaScript, and each one gets its own V8 instance.  So these instances 
kind of sit on their own, on the
side, they are run in parallel, 
they don't get to do anything 
with the DOM.  So they run on 
the side.  So if you create one,
you get a V8 hanging off the top
of the main thread, you get an 
instance hanging off of the 
worker.  If you create a bunch 
of workers, you get a bunch more
V8 instances.
And now, each of these instances
consumes memory.  So that means 
that if you start spawning a lot
of
JavaScript workers, it might 
consume more and more memory and
you run out of room on the 
phone.  I will let you in on a secret: they don't talk to each other.
So you have separate bits of 
JavaScript running in all of 
these parallel threads, but they
don't communicate.  They don't 
have any shared state, there is 
another copy of V8 sitting in 
memory.  The way these talk to each other is with postMessage: I will send a message from this worker to that one, and it will sit and wait for the message to arrive.  There is no predictability; it is not a great experience for a multi-threaded application.  So when we implemented WebAssembly threads, it looks a lot like this.  This is an example of 
having three threads.
So, under the hood, we actually 
spin up the three web workers.  
But the difference here is that 
the WebAssembly module is shared
between the workers.  So there 
is one copy of the code, so you 
are not consuming nearly as much
memory.  And more importantly, 
there's a shared state and they 
share it through shared array 
buffer.  So if you are a 
JavaScript engineer, you know 
what a typed array is, you use 
it day-to-day.  And shared 
buffer is the same thing, but it
can be shared across workers.  
And so what that means is that 
the state of the execution of 
the application is visible to 
any of the workers in parallel.
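The difference can be sketched in a few lines (a hedged illustration, not Soundation's code; in a real app each view would live in a different worker, and the two named views here are hypothetical stand-ins for that):

```javascript
// A minimal sketch of the shared-memory model just described: one
// SharedArrayBuffer, two typed-array views of the same memory.
const shared = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const viewInWorkerA = new Int32Array(shared);
const viewInWorkerB = new Int32Array(shared);

// "Worker A" writes through Atomics so the write is safe across threads...
Atomics.store(viewInWorkerA, 0, 42);

// ..."worker B" reads the same memory directly: no postMessage copy.
console.log(Atomics.load(viewInWorkerB, 0)); // 42
```

Unlike postMessage, nothing is serialized or copied here; both views observe the same bytes.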
And now, if you fire off 
something into a pool of workers
and have it hanging off the main
app, it looks like this.  You 
have the main app for the main 
application that talks to the 
DOM, it sees the shared array 
buffer and that is being 
manipulated by all the parallel 
threads in the WebAssembly 
module.
Okay, and by now, you are 
probably thinking this is all 
well and good, how do I use this
stuff in the actual 
applications?  I will start with
a simple example.  We will do an
example, which is just a little 
Fibonacci program that runs in 
two threads, you have the main, 
the background, the WebAssembly 
module, talking to the shared 
array buffer.  We take source 
code, a bit of C code, we want 
to compile that into a form that
we can use in parallel 
threading.  And the way we do that is with the Emscripten toolchain.  We pass -s USE_PTHREADS=1, which turns threads on, and a second flag, -s PTHREAD_POOL_SIZE=2, which tells the compiler I want to allocate two threads to the application.  When I start the WebAssembly module, it will pre-create two threads and get going with that.
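Put together, the command looks roughly like this (a sketch; the flag spellings follow the Emscripten toolchain of that era, so check your emcc version's documentation, and fib.c is a placeholder file name):

```shell
# USE_PTHREADS=1 turns WebAssembly threads support on;
# PTHREAD_POOL_SIZE=2 pre-creates two worker threads at module start-up.
emcc fib.c -O2 -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=2 -o fib.html
```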
And now, this is kind of a visualization of what would happen: if you set PTHREAD_POOL_SIZE to two, you get the picture on the left with two workers; if you set it to eight, you get eight workers.  That happens at the start-up time of the web app.
You may be wondering why you 
care about the number.  The 
thing is that you should try to 
set it to the maximum number of 
threads.  If you say I want two threads and the application needs three or four, it is a bit of a problem.  So what happens is 
that the WebAssembly module has 
to yield to the main thread 
before it creates the worker.  
So if you are relying on the 
threads being there at the 
start-up, you need to set the 
number high enough.  If it is 
too high, you are wasting 
memory.
So in this case, they use five 
threads, it works for them.  So 
when tuning the apps, you need 
to think about it.  So if you 
want to try this stuff, which I'm sure you are all dying to, fire up Chrome 70, change the WebAssembly threads support flag from Default to Enabled, re-start the browser, and you can test it locally.  Once you build a 
working WebAssembly app, you 
want to deploy to the public.  
You do that by getting an origin
trial token.  And that is tied 
to your domain, and you get it 
from us.
It is basically a meta tag that you put on the page, and that tells the browser: hey, these people are trying WebAssembly threads, let's use it.  So if 
you want to do that, I encourage
you all to do so, go to the short link and there's a form that you can fill in, putting 
the url, the reason you want to 
use this stuff, and then go and 
start building something really 
cool.
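For reference, an origin trial token is delivered roughly like this (a sketch; the content value is a placeholder for the token you get from the sign-up form):

```html
<!-- The token can also be sent as an Origin-Trial HTTP response header. -->
<meta http-equiv="origin-trial" content="YOUR_ORIGIN_TRIAL_TOKEN">
```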
Now, of course, as developers, 
we spend most of our times in 
DevTools trying to debug things.
So in Chrome 70, which at the moment is the stable release, you can single-step instructions, and those are WebAssembly instructions.  It looks like 
this, not friendly as Thomas 
pointed out.  You can see the 
little box up here, showing you 
the threads, so this is a 
two-thread application running, 
and this box is the actual 
WebAssembly disassembled code.  
It is not the binary, it is a 
text form of the instructions 
that are in the module.  You can
single-step those, that is all 
well and good, but we don't 
really like that debugging 
experience.
So Chrome 71 brings source map 
support, as Thomas mentioned 
earlier.  It allows you to 
change what you saw before into 
something that looks like this.
So this is, like, the source 
code of the Fibonacci function 
sitting in DevTools, you can 
single step over instructions, 
you can set break points and do 
stuff like that.  If you want to
do this yourself and generate 
the source map, you need to set two flags on the emcc compiler command line.  The first is --source-map, and the other is --source-map-base, which says where to find the file.  So I'm using localhost on my own workstation.
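As a sketch, the invocation looks something like this (the exact spelling of the source-map flag varied across Emscripten releases, so treat the flags as an assumption and check your version's docs; fib.c and the localhost URL are placeholders):

```shell
# --source-map-base tells DevTools where the map and original sources
# are served from, here a local development server.
emcc fib.c -g4 --source-map-base http://localhost:8000/ -o fib.html
```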
  And so I will just recap on what we have talked about today, just so you can remember what we talked about.  The 
first is streaming compilation, 
which lets the browser compile 
the WebAssembly module as it 
comes over the wire, it is 
launched in Chrome 70 and speeds
things up.  The second is Liftoff, the tiered compiler.
And then we have, of course, the WebAssembly threads, and Chrome 71, out soon, contains source map support, making it easier to debug your code.
So I encourage all of you people
out there to start writing using
WebAssembly threads because it 
unlocks the power of the super 
computer that is sitting in your
pocket right now.
Thank you.
[ Applause ] 
SPEAKER: Is it time for another 
quiz? 
SPEAKER: Yeah, let's go for it.
Right.
Okay, I think this is my 
favorite one.  
SPEAKER: Yeah, you had a lot of 
fun researching
this. 
SPEAKER: Right.
You see a lot of names up on 
screen. 
SPEAKER: It is Chrome, but there
are other ones as well.  I don't
know if you heard.  Some of them
are going to be ones we made up.
SPEAKER: Yep. 
SPEAKER: Some of them are not.  
Some of them are from the past, 
so they are not all 
modern browsers.  You get a few seconds to guess.
So: Mosaic, Net Wrangler, Firebird, Konqueror, Dillo.  What are you thinking? 
SPEAKER: Firebird maybe. 
SPEAKER: Eww. 
SPEAKER: Fandango. 
SPEAKER: Cyber dog. 
SPEAKER: Web walker. 
SPEAKER: Jake 
browser. 
SPEAKER: Voyager. 
SPEAKER: That's how you 
pronounce it, what is the 
correct way? Okay.
People are confident about 
mosaic. 
SPEAKER: We mentioned it in our talk, that is pretty good.  And pretty sure that the others are fake.
Well... it is split.
So Jellycat we made up; Timberwolf is real, it is a Gecko-based browser.
Killer Net is a terrible British TV series from the '90s, so I threw that in there.
SPEAKER: People are pretty 
confident about this. 
SPEAKER: So the answers are...
Net Wrangler is fake.  That is the search engine they use on Dexter when they don't want to say Google.
So Konqueror has been around for
a while, and Dillo is a tiny 
browser for embedded systems.  
SPEAKER: I didn't know that. 
SPEAKER: 
Eww, [ laughter ].
That is a real one, yeah. 
SPEAKER: What is it? 
SPEAKER: It is the, well, it is an acronym, and it stands for the Emacs Web Wowser. 
SPEAKER: Of course, Emacs. 
SPEAKER: Fandango I made up, and IBrowse is the best name for a browser, what Safari should have been called.  IBrowse is a web browser for the Amiga, and Cyberdog is a browser by Apple.
  I didn't know any of this.
Web Walker, I made that up.
Jake Browser, I made that up.
And Mothra, that is a thing.
And Voyager, another Amiga one.
Only a couple more questions to 
go, and it is really tight.  
SPEAKER: Yes, be sure to join 
the game. 
SPEAKER: Yes. 
SPEAKER: There will be one more 
question to go after this. 
SPEAKER: So next talk, the whole
reason I am a developer is 
because I want to be lazy, and 
in this talk somehow he is going
to tell us how to be lazy.  
Please welcome to the stage 
Justin Fagnani!
(Applause).
The Virtue of Laziness: 
Leveraging Incrementality for 
Faster Web UI.
SPEAKER: I hope that everyone is having a great summit.  I work on web components, Polymer, and lit-html, and I will talk about the virtue of laziness.
Next slide, advance, today is 
the day.  We will look at how to
do less, be lazy, take breaks, 
and end up with a better web 
application for it.  When I say 
better, I'm talking about four 
overlapping goals.  We want to 
deliver great user experiences. 
And more than looking at fast 
apps, we want to make this easy,
so easy that it is the default 
because this is going to make 
our users happy, our developers 
happy, and happy developers make
better user experiences in the 
long run.  So with these four 
general goals in mind, I'm going
to walk through several 
techniques for leveraging 
asynchronous programming for 
building better UIs.
So we're going to look at 
batching work for better 
performance and developer 
experience, keeping our UIs 
responsive with non-blocking 
rendering, managing async state for a better user experience, and coordinating async UIs once we have asynchrony.
And I will give some background for the talk and jump into it.
  So a quick note: this is day two of Chrome Dev Summit, so it is a little bit more future-forward looking, and the stuff that I 
made here, the demo and the 
helper code is a little bit 
experimental.  But it is using 
current browser features, and so
all of these techniques still 
work today.
So now for a little background, 
I mentioned that I work on web 
components and lit-html.  So we 
will use these things as the 
basis for the demoes in the 
talk.
So if you haven't used web 
components before, web 
components lets you define your 
own html element tags.
  So it really refers to two 
specs here, custom elements and 
shadow DOM.  And it allows you 
to combine your own tags with 
custom implementation and UI.  
To create a custom element, you 
extend a built-in class, you add
the implementation and you 
register the class with the 
specific tag name with the 
browser.  You can use the 
element and the tag name 
anywhere you can use html, in 
the main page document, and in 
other frameworks.
And so, next: lit-html.  So lit-html is a way to write declarative HTML templates in JavaScript.  It uses tagged template literals, a feature that came out in ES6: they are strings denoted with backticks instead of quotes, and they can have a template tag in front of them.  We will use the lit-html template tag, which allows us to process the template to make it more efficient, and inside we can have plain JavaScript expressions.  Once we 
have it, if you want to render it to the DOM, you pass it to the render function along with a node to render to, and it will make the DOM appear there.  If you call the render function multiple times with the same template, lit-html will only update the expressions that changed, and not the rest of the DOM in the template.
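The mechanism lit-html builds on can be sketched like this (a hedged illustration of tagged template literals, not lit-html's implementation; the html tag below is a stand-in):

```javascript
// The tag function receives the static strings once per call site and
// the expression values on every call, which is what lets a renderer
// diff only the values, not the static parts.
function html(strings, ...values) {
  return { strings, values };
}

const greet = (name) => html`<h1>Hello ${name}</h1>`;
const first = greet('World');
const second = greet('CDS');

// The static parts are literally the same (cached) array both times...
console.log(first.strings === second.strings); // true
// ...only the expression values change between renders.
console.log(second.values[0]); // 'CDS'
```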
  And then if you take web 
components and lit html and 
combine them, you have 
LitElement.  So LitElement is a convenient way to write web components.  Because this is day two and a little forward-looking, I'm using JavaScript features here like decorators and class fields.  LitElement gives you two features.  One is the ability to declare observable properties; these decorators will create getters and setters for you.  The other is a render method that returns the lit-html result; when the element updates, it takes that result and renders it to the shadow root. 
And then we have a helper decorator to register the element with its tag name.  And then you can use it as HTML in the way you would expect.
So that brings us to the first technique, batching work.  If you go back to the element definition, you see the render method here; it is called automatically, and the question is when the method is called.  So 
we will look at an example here,
we will look at the element and 
this applies if you used it in 
the main html with the 
framework.  We will create the 
element instance, and then we're
going to set a property.
So the question is, should we 
render now?
We could render now, but we 
don't know that we're not going 
to set another property right 
after we set this property here.
And if we did render after every
property set, we would be 
rendering multiple times for 
every element as we update the 
data.  We don't want to do that, so instead we're going to enqueue a task, and in the future it will run and render the element.  And so you know when it has rendered and is complete, we add a promise hanging off the element here called updateComplete.  
This is going to resolve when 
the element has rendered, if you
wait for it, you have a fully 
rendered element.  You have an 
asynchronous pipeline under the 
hood in LitElement.  When a setter is called for a property, it calls the requestUpdate method.  That schedules an update task, but only if there is not one already pending; if there is, we reuse the same task, and that's where we get the batching.  Then we run the update on the element, and that is when the work is done to render to the DOM.  We do it for performance and developer ergonomics.  If we go back to the template, we see 
that it renders two different 
properties in the same template.
It is easier to reason about 
them if we don't worry about the
order in which the properties 
are set, or whether they are 
both set together or not.  So I would like to take all of the changes that are occurring for an element, batch them together, and allow you to write a simple declarative template to render the element.  An interesting 
implication of this is 
LitElement rendering is always async: you don't opt in to being async, and you cannot opt out of being asynchronous.  When we explain this to people, sometimes we get the question: won't the UI partially update?  The answer is no, and I built an animation to try to show this. 
We have a tree of elements, we 
will assume they are all 
LitElements, they are passing 
data down the tree via 
properties.  That's our 
component tree, and then right here we have the microtask queue, a queue of microtasks that the browser runs to completion.  The yellow box is the current microtask.  If we set a property on A, that causes a microtask to be queued.  When A's task is run, it sets properties on B and C, so their tasks are queued; B is going to set properties on D and E, C sets some on F, and we run the entire queue until it is empty. 
Once it is empty, then the 
browser can paint.
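The batching pipeline described above can be sketched in a few lines (a hedged sketch of the pattern, not LitElement's actual source; the class and method names are stand-ins):

```javascript
// Every property set requests an update, but all sets in the same turn
// share a single microtask, so the element renders once no matter how
// many properties changed.
class BatchedElement {
  constructor() {
    this.renderCount = 0;
    this._updatePromise = null;
  }
  set(name, value) {
    this[name] = value;
    this.requestUpdate();
  }
  requestUpdate() {
    if (this._updatePromise === null) {
      // No update pending: schedule one on the microtask queue.
      this._updatePromise = Promise.resolve().then(() => {
        this._updatePromise = null;
        this.render();
      });
    }
    // Resolves when the pending render is done (like updateComplete).
    return this._updatePromise;
  }
  render() {
    this.renderCount++; // the real element would render to its shadow root
  }
}

const el = new BatchedElement();
el.set('title', 'Hello');
el.set('body', 'World'); // same turn: batched with the set above
el.requestUpdate().then(() => {
  console.log(el.renderCount); // 1: two property sets, one render
});
```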
And to show this with a demo, I 
made a demo here of a tree of 
elements and each one takes an 
artificially long time to 
render.  And so normally, you know, you might expect, if you don't know how the microtask queue works, that these paint individually.  But if you click the render button, they all snap in at once.  So the whole thing takes 750ms, and we don't see 
the intermediate states.  This 
is great if the UI is painting fast and not taking 750ms.  But if you have a very complex tree and the UI is rendering slowly, then we have introduced jank, which we don't want.
And so this brings us to the 
next technique, non-blocking 
rendering to keep a responsive 
UI.
So we just saw that async rendering can still block paint and input.  We can have complex UIs that take a long time to render, and we need to render in less than 10ms to keep hitting the 60-frames-per-second target.  One way to 
look at this, we have all of the
microtasks in the blue 
rectangles and they fill out a 
complete task, and this task 
blocks rendering.  As long as 
the complete task fits within 
the 10ms budget, we are fine.
But as soon as the task exceeds the budget, we're going to introduce jank.  So the 
technique here is to break this 
up so that instead of having a 
whole bunch of microtasks and 
one long task, we give a task 
per component to render.  
Hopefully this fits in under 
10ms and we get smooth updates.
  So we will tap into this 
asynchronous update pipeline 
that LitElement has and we will 
customize the schedule step 
right here. And this brings us 
to the first experimental helper
that we are calling for the 
moment lazy LitElement.  And, 
under the hood in LitElement, there's a schedule-update step: it waits for a microtask and then performs the update, which renders.  Instead of waiting for a microtask, we wait for a promise that is resolved on setTimeout timing.  That allows the browser to paint and handle input before we render.  When we go back to the demo and turn on lazy rendering, everything is rendered on timeout timing and we can paint the intermediate steps as we go.  So we have reduced jank by showing some intermediate state.
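The scheduling swap can be sketched like this (a hedged illustration under hypothetical names; the real LitElement hook looks different):

```javascript
// Microtask timing runs the whole tree before any paint; setTimeout
// ("task") timing yields to the browser between component renders, so
// paint and input handling can happen in between.
const microtaskTiming = () => Promise.resolve();
const taskTiming = () => new Promise((resolve) => setTimeout(resolve, 0));

async function renderTree(renderFns, schedule) {
  for (const render of renderFns) {
    await schedule(); // with taskTiming, the browser can paint right here
    render();
  }
}

// Same render functions, different scheduling policy:
renderTree([() => console.log('A'), () => console.log('B')], taskTiming);
```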
  And so a lot of frameworks 
have been working on 
asynchronous rendering over the 
years and especially React 
recently.
And they have created a demo that I like, a Sierpinski triangle demo.  You 
have a large tree of components, and each is written to take an artificially long amount of time to render, and they have a data label; updating the labels re-renders the tree.  And while updating the labels, we also animate the tree, and for the animation to be smooth we need JavaScript to run every frame.  So when we take too long to update the tree, we get jank.  This is a nice demo because it highlights the subtleties of doing asynchronous rendering: we have the sub-tree render we want to complete, the continuous script-driven animation we want to keep smooth, and the inputs we want to handle.
So we have the render element with the microtask queue, and as the triangle increases in size, we get jank, which we want to avoid.  We can reimplement this by changing to lazy LitElement, and it is smooth.  
Next, we have the high-priority inputs, which brings us to the idea of urgent updates.  If you defer rendering, you have situations where you want to render sooner than you scheduled yourself to render.  So for urgent updates, we created in lazy LitElement a requestUrgentUpdate method, so the update can be done sooner.  It is a simple implementation, and I wanted to show it because it is a little bit interesting.  Instead of a promise with setTimeout, we do that, but we store the -- let me go back here.
Maybe not, okay.  Well, we store the resolve function on the instance of the element when we schedule an update.  And then we can go back and call that resolve function, which causes the promise to resolve earlier than scheduled.  We are jumping from the task queue to the microtask queue, and we are going to render as soon as possible.
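The resolve-early trick can be sketched like this (a hedged sketch with hypothetical names, not the actual lazy LitElement code):

```javascript
// When scheduling on setTimeout timing, keep the promise's resolve
// function around so a later urgent request can resolve it immediately,
// jumping the pending render from the task queue to the microtask queue.
class LazyScheduler {
  schedule() {
    return new Promise((resolve) => {
      this._resolve = resolve;              // store the resolver on the instance
      this._timer = setTimeout(resolve, 0); // normal lazy timing
    });
  }
  requestUrgent() {
    clearTimeout(this._timer); // the timer is no longer needed
    this._resolve();           // resolve earlier than scheduled
  }
}

const s = new LazyScheduler();
s.schedule().then(() => console.log('rendering now'));
s.requestUrgent(); // e.g. from a mouseover handler: render ASAP
```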
Okay, and so this is how you 
would use it.  So we have a partial implementation of our element here; these are some event callbacks that might be called from mouseover and mouseout.  They set the state that the rendering is based on, and then they call requestUrgentUpdate to kick us to the faster timing.
  And so once we do that, we can
see that we have the smooth 
animation, the labels update, 
and we get a fast hover effect here by calling requestUrgentUpdate.  I want to talk quickly about scheduling.
So in that demo, I did a very 
simple thing.  I said, instead 
of using a microtask, we will 
use a full task.  And I actually
did not expect that to work as 
well as it did when I made the 
demo, but it did work very well.
The browser ends up doing a very good job of executing as many tasks as it can before it has to paint a frame.  What this approach leaves out is priority queues, the ability to cancel work, and a way to cut off long chains of tasks.  
So that's where we would plug 
into a native platform scheduler
API that Shubhie and Jason 
talked about earlier.  The 
important thing is, with web 
components, we don't have an 
overarching framework that can 
coordinate and schedule the 
components for us and we can get
components from different 
vendors.  So being able to plug into a platform API for scheduling is going to help us tremendously here.
And so we will move on to 
managing async state.  So we have talked about being asynchronous on a per-component level: yielding to the browser and letting it paint in between components.  But sometimes we need to manage data that itself is asynchronous.  lit-html rendering is synchronous: when you give it the template, it is going to render right then.  So what if you want to render asynchronous data inside of a synchronous template?  We can look at how we handle data here.
If we have a string and a plain 
reference to that string, it is 
easy to use.  We just use it in 
a template and we get the 
rendering that we wanted.
And if we change this instead to
load off the network, it turns 
out that lit html handles 
promises already.  So what we 
will get is a blank screen here,
when the promise resolves,
we will render hello world.  And while this works out of the box, we might not want to render a blank screen as the initial state.  This gets us to directives, which are a way to extend how lit-html renders values.  And one of the useful directives that it ships with is 
called until.  And what until does is take a promise: it will render the result of that promise when it resolves, but it will render a placeholder until then.  You can see that here: we pass the promise and a loading placeholder; the placeholder shows first, and when the promise resolves, we render our content there.
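Conceptually, an until-style directive does something like this (a hedged sketch of the idea, not lit-html's implementation; the write callback stands in for committing a value to the template's DOM position):

```javascript
// Show a placeholder value now, swap in the promise's result once it
// settles. A real directive also guards against a newer render having
// replaced this value in the meantime; that check is omitted here.
function until(promise, placeholder, write) {
  write(placeholder);                    // shows immediately
  promise.then((value) => write(value)); // replaces it on resolution
}

const frames = [];
until(Promise.resolve('Hello, World'), 'Loading…', (v) => frames.push(v));
// frames: ['Loading…'] now; ['Loading…', 'Hello, World'] after resolution
```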
So this example is a little too 
simple, it assumes we have the 
promise available that we want 
to use.  And a lot of times, we want to run some task whose result we want to render, and the operation depends on the element's state.  So we have a filename property and we want to fetch data based on the filename.  We might be 
tempted to call fetch in line 
with the template and then 
render it, and this does work, 
but it has a problem where every
time you render the template, we
are going to call fetch.  You 
might be rendering the template 
because other properties change.
And in this case, we are going 
to flood the network with a lot 
of fetch requests, and you might show alternating pending and resolved states in the UI.  And yet it is almost the mental model that we wanted.  So what we really 
want to do is to be able to run 
a task that is dependent on some
data only when that data 
changes.  So that brings
us to the next experimental 
helper here, run async, and it 
performs the operation but only 
when the data it depends on 
changes.  And this is a 
directive factory.  So you give 
it an asynchronous function that
takes data and produces a result
and it returns a directive you 
can use inside of the lit html 
template.  If we want to reproduce the previous example, we can create a fetchContent directive by passing runAsync a function that takes a file name and calls fetch for us.  
We go and use it inside of the 
template and we pass the file 
name here and then we pass 
another template that is going 
to render when we have 
successfully resolved that 
promise.
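The core of the runAsync idea can be sketched like this (a hedged sketch under hypothetical names, not the experimental helper's actual code):

```javascript
// Wrap an async task so it only re-runs when its dependency changes;
// re-rendering with the same input reuses the existing promise instead
// of firing a duplicate request.
function runAsync(task) {
  let lastKey;
  let lastResult;
  return (key) => {
    if (key !== lastKey) { // dependency changed: run the task again
      lastKey = key;
      lastResult = task(key);
    }
    return lastResult;     // same key: same promise, no duplicate work
  };
}

// Usage sketch: a fetchContent "directive" that only does work when the
// file name actually changes, however often the template re-renders.
const fetchContent = runAsync((name) =>
  Promise.resolve(`contents of ${name}`) // stand-in for a real fetch(name)
);
fetchContent('readme.md');
fetchContent('readme.md'); // reuses the first promise
```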
  So this lets us get part of 
the way to our goal here.  We 
can render some asynchronous 
data, and it turns out that 
asynchronous data can be in a 
number of different states.  For
any async operation, you can be 
in initial state, pending, you 
can have accessibility 
completed, or ended in failure. 
It helps if we model and think 
about all of the states 
explicitly so we make sure that
we handle them.
And next slide here, having some
clicker problems.
Do we have another clicker? 
Today is the day of 
malfunctions. There we go.
Hopefully this works.  I will 
keep going until it breaks 
again, and I can at least be as 
good at this as Surma.
Sorry, Surma.
So here, we can see that the directive takes templates for all of the different states that our UI can be in.  So we have a 
success template, a pending 
template, an initial state 
template and the error template 
to make this a little bit more 
realistic, I made a demo that 
searches the npm package 
repository.  And this is a 
search-as-you type search demo, 
we have a search box and a 
results panel.  And there it 
goes again.
Do we have another one? Okay.
It is not just
me.
Okay.
We will keep going.  So to build
the demo, we will go in two 
steps.
Uh-oh.  Halloween is over, this 
isn't haunted anymore, is it? 
The screen -- 
okay.  Did I do that? This is 
all going to be edited out, it 
is going to be fantastic.
[ Laughter ].
So we're going to build this 
demo in two parts, hopefully.  
So first, we're going to define 
a search packages directive 
using run async.  And here is 
the initial starting point for 
this directive, the async task function here is going to generate a URL for the npm search service and then get the results by fetching it.  And 
then here we're going to handle 
the response and just do a 
little bit of due diligence and 
make sure that we have a 200 
response and return that, 
otherwise we throw the message 
we got back.
And I wanted to make this task able to trigger that initial-state template, and the way to do that in runAsync is to throw the initial-state error.  I will check to make sure we have a query to execute, and if not I will throw the error.
And it turns out it is difficult to get the npm registry to trigger an error; usually it just returns empty results.  So, to show the error state template, I will do pre-validation of the query and make sure it doesn't start with a dot or underscore.  And to make it more realistic: if you build a search-as-you-type UI, you will have requests that you initiate whose results you no longer care about.  So we have an abort signal to cancel the request; runAsync creates it for you, you get it in the argument here, and you forward it to the fetch API.  So this is the entire search-packages directive built with runAsync.  Next we need to use it.  This is a 
snippet of the demo UI.  We have
the input element here, which 
updates the query, and down here
we use the search-packages directive.  We pass it the query, give it a success template that iterates over the results and renders little cards, and give it the pending, initial, and error state templates here. 
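The cancellation wiring can be sketched like this (a hedged sketch with hypothetical names; startRequest stands in for the real network call):

```javascript
// Each new query aborts the previous in-flight request via an
// AbortController; the signal is what you would forward to fetch().
function makeSearcher(startRequest) {
  let controller = null;
  return (query) => {
    if (controller !== null) controller.abort(); // stale query: cancel it
    controller = new AbortController();
    return startRequest(query, controller.signal);
  };
}

// With the real fetch API, startRequest would look something like:
//   (query, signal) => fetch(searchUrl(query), { signal })
```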
And so when we go to use the 
demo, we saw we had the initial 
state template rendering, when 
we type, it turns into loading 
and we get the results back.  If
we enter a query that starts 
with an invalid character, you 
see the error template there and
that updates as you type.  If 
you realize your mistake and you
type in a new term, you will get
all of the intermediate async 
state templates as you type.  So
that's the demo, and now you did
see most of the implementation, 
it was easy to do with that 
directive.  And the key 
take-away is that we want to 
explicitly model our 
asynchronous operations state.  
If we do that, we are more likely to take care of all of the states we can be in.  If we build a UI for each state, we can let the users know what is going on with the application and they have a better user experience.  
Once we have the application and
the UI built up of the 
asynchronous components, we need
to coordinate them.  So if you have a lot of async children in your page, how do we ensure a consistent UI if we want to; or, how do we avoid a sea of spinners?  To demonstrate the spinners problem, I added a little feature to the demo.  When you search, the cards are going to do their own query to bring up the dist-tags of the npm packages, and you can see a lot of spinners on the page; this can be a distracting UI.  So we want to give developers an option to not deal with the sea of spinners.  And the way we will 
handle this is we will 
coordinate with events.  We will fire a promise-carrying event, where the promise resolves when some work is done: the async child component creates the event carrying the promise and fires it.  And that looks like this: A is the container up there, and E and F are async children that fire the pending-state event that holds a 
promise.  The container is going
to handle the event and wait for
the promise and the children, 
when the work is done, they 
resolve the promise and finally 
when all of the promises are 
settled, we're going to update 
the UI.
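That event protocol can be sketched as follows. In the real prototype the children fire DOM events that bubble up to the container; this standalone version (all names are assumptions) keeps just the promise bookkeeping:

```javascript
// Children announce pending work by handing the coordinator a promise;
// the container waits until everything announced has settled, then does
// a single UI update instead of showing a sea of spinners.
function createPendingCoordinator(onAllSettled) {
  const pending = [];
  return {
    // Called by an async child when it starts work (the "pending-state event").
    announce(promise) {
      pending.push(promise);
    },
    // Called by the container: wait for all announced work, then update once.
    async settle() {
      await Promise.allSettled(pending.splice(0));
      onAllSettled();
    },
  };
}
```

Promise.allSettled (rather than Promise.all) matters here: a failed child should still let the container finish its one coordinated update.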
So that brings us to the last experimental prototype here, PendingContainer. And PendingContainer takes care of all of this plumbing for you. It is a class mixin; you can apply it to a super class, like LitElement. It has two features. First, the hasPendingChildren property tells you whether there is async work happening below you; when this property changes, it causes a re-render of the element. Second, it handles the pending-state events and the bookkeeping for any async work below us. To use it, you apply the mixin to the super class here, to LitElement. Once you do that, you have the hasPendingChildren property available to use in the template. So
now we're going to add a spinner, a top-level spinner, to the UI that is triggered based on whether or not there are any pending children. The run-async helper fires the events for us, and the container mixin captures them. So we have a UI where we have a spinner, and it happened again. There we go. Okay, it might be a faulty button here. So what we want to add is a spinner at the top of the UI that is going to be going whenever there are pending search results coming back from the server, or we have children that need to update.
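A minimal sketch of what such a mixin could look like, assuming a plain class hierarchy rather than the real LitElement integration (hasPendingChildren matches the property named in the talk; the rest is illustrative):

```javascript
// Class mixin: any base class gains a hasPendingChildren property that is
// true while tracked async work is outstanding.
const PendingContainerMixin = (Base) => class extends Base {
  _pendingCount = 0;

  get hasPendingChildren() {
    return this._pendingCount > 0;
  }

  // Track one piece of async work; the count drops when it settles.
  trackPending(promise) {
    this._pendingCount++;
    promise.finally(() => { this._pendingCount--; });
    return promise;
  }

  // A template would branch on hasPendingChildren to show one top-level
  // spinner instead of many child spinners.
  renderSpinner() {
    return this.hasPendingChildren ? "<app-spinner></app-spinner>" : "";
  }
};
```

In the real prototype a change to hasPendingChildren triggers a re-render; here renderSpinner just stands in for that template branch.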
So now you see that we get the spinner as we type. We don't get spinners on the children, but we can see the top-level spinner is still going, and the results come in and the spinner stops. So that's the UI we are going for, and it was pretty straightforward to build with these directives. When you have an asynchronous UI, there are a lot of options for handling this: you can try to block the UI, you can show the raw incremental updates, have spinners on your page, or replace it all with top-level placeholders and spinners. It depends on the UX and UI that you are trying to build, but we want to give you the plumbing in the framework so you can build whatever you choose to build.
This is my wrap-up. We are very excited about some additional work that is going to be done in this area. Jason and Shubhie talked about the native scheduler API, which we are extremely excited about. Display locking is another proposal, where you lock a portion of the screen, update it incrementally, and then flip it to the new version. And Gray talked about virtual-scroller, where you can handle large amounts of data as well. On our end, we will work on libraries and examples for lazy-loading components, background rendering, and viewport-visibility rendering, so things render as they appear on the screen. I have a link here, and I will be in the Ask Chrome area during Q&A after this if you would like to ask any questions. Thank you.
[ Applause ]. 
SPEAKER: Final Big Web Quiz question of the day. Anybody use Webpack? You are going to love this one. Build tool: is it a build tool, or is it just the noise that a human makes?
[ Laughter ].
SPEAKER: [ Laughter ] That is correct, that is very true.
All right. Full English. Rollup. I love a bit of brunch. Surma.JS or Jake.JS, possibly real. Webpack, I think I may have heard of that one. Brussels sprouts. That is zucchini over here.
SPEAKER: No, eggplant. 
SPEAKER: Oh, I get so confused. 
And Build Gates, I saw that at the end there.
[ Laughter ].
  All right.
So let's see how you did.
Full English is fake.  Brunch is
real.  I would have put it as a 
fake.  
SPEAKER: The second one -- 
SPEAKER: I have no idea, that's 
a kind of vegetable. 
SPEAKER: It is like a -- 
SPEAKER: A pumpkin. 
SPEAKER: I saw one recently, it
was not an appetizing-looking 
thing.  Jake.JS, you are saying 
that is fake? It is real!  [ 
Laughter ].
I had to wait all day, it was 
actually real.  I'm losing faith
in all of it.
What are we doing as a 
community? What are we doing? 
Well, we have Parcel, not a Surma.JS. Sure, why not. Let's see here. Broccoli, yes. Brussels sprouts. Webpack, 100 percent.
SPEAKER: I feel that we established Webpack is a thing. In my introduction, I may have ruined that one.
SPEAKER: The last 
set, here we go.  Build Gates, 
I'm sad that that's not a thing.
Okay.
Where are we at? The final 
speaker is over there? Cool.  It
is time for the final talk.  
Round of applause
for Dan Dascalescu! 
Chrome OS: Ready for Web 
Development.
Dan Dascalescu, Partner Developer Advocate.
SPEAKER: I am Dan Dascalescu, I'm a developer advocate at Google, and we want to talk to you about why ChromeOS is an awesome choice as a web developer platform. There are two reasons why you should develop on ChromeOS. First, it is an unprecedented convergence of technology stacks: it runs web applications, and with Google Play support it can run Android apps, so you can install browsers and test web apps on them. And starting with Chrome 69, you can run Linux and bring your development workflow there.
And this is a sneak preview of what is coming in the talk: you see a terminal, and a PWA.
So the second reason you should develop on ChromeOS, and target it as an OS, is the variety of devices. You may have seen Chromebooks from a variety of manufacturers, and you may have seen convertibles from various manufacturers, and also the all-in-ones, like the OG Chromebase, and small form-factor PCs: this was the Chromebox, and other manufacturers followed suit. This is a mini form-factor PC; you plug it into the port of the display and it turns it into a computer, and you can attach peripherals via USB or Bluetooth. This is the commercial Chromebox, which manages signage displays. And this is the first tablet powered by ChromeOS, the Tab 10. Google has a variety of devices too: the Google Pixelbook, the flagship, which is 75 percent off for you guys, and the latest offering, the Pixel Slate, which was announced last month.
So on this slide, why ChromeOS: you heard that we have a large market share and an extensive presence in the EDU space; ChromeOS is popular there. And if you target ChromeOS, we have convertible form factors and devices that may or may not have a keyboard, a mouse, a stylus, or a touch screen. So this can future-proof you for devices that have not been invented yet.
And then you have the foldable 
tablet, the future is here 
already.
So then again, the reason why: diversity. You can develop apps on Linux and test them on a variety of Android and Linux browsers. So ChromeOS lets you bring your own development workflow, the one you are familiar with, with your development tools, to a variety of form factors, from mobile to tablet, and to browsers on Android and Linux. There are quite a few of them: Samsung Internet and others can be installed from Google Play as well, and this is Edge and Firefox running on the same ChromeOS machine. You can install desktop browsers too, so you can test on Firefox if you have the latest version of it. And this is Firefox, this is Epiphany, the GNOME Web browser, and you can install Docker, which I heard many of you are interested in on the forum. There's a thread on Reddit; if you search for "Docker not working", you will find it. Despite the title, it does work.
So how does it work? How does ChromeOS manage to ship experiences built on speed, simplicity, and security? How can it add web apps and Linux apps while staying fast, simple, and secure? It boils down to the containers architecture, which Steve will tell you more about.
SPEAKER: So when we were bringing Linux apps to ChromeOS, it was important that we maintain all of the things that make ChromeOS ChromeOS. So simplicity was first: it shouldn't feel like you are running a separate OS; instead, you should have the Linux terminal and GUI apps blended in with Chrome and Android apps. We managed to do this while keeping things fast: Android and Linux support do not do any emulation. By using lightweight containers and hardware virtualization support, your code runs natively. And security is always on our minds for ChromeOS, so Crostini uses both virtualization and containers to give you security.
So speaking of security, we are 
starting with a secure 
foundation and moving up on 
features from there.  Linux is 
isolated from ChromeOS, but we 
are working on the ability to 
share files and folders with it 
and we will be adding support 
for Google drive as well, you 
will keep all of your projects 
and important work safe in the 
cloud.
So we will take a look under the hood. The first time you launch a Linux app, it will start a lightweight VM and a container. The VM gives you a real Linux kernel and was designed specifically to run containers. And the container inside is where you do all of your work. This container is very tightly integrated with ChromeOS: launcher icons and graphical apps work like any other ChromeOS or Android app. The most important thing is you get a terminal. So how does it actually feel, what is it like? And the answer should be: like most Linux systems.
So Crostini is based on Debian stable, because many developers are familiar with apt package management and Debian-based systems. And for now, we are starting out targeting web developers: ChromeOS is a web-based OS, and we think it is appropriate for you to develop web apps on a web-based OS. To do this, we added nice integration features. We do port forwarding, so it does not seem like you are running a container: you get localhost to connect to, and it is treated as a secure origin, as it should be. If you want to treat your container like a separate system, you can; we provide a penguin.linux.test DNS alias.
And we do want to support more developer workflows than just the web, so we will be adding USB, GPU, and audio support, file systems in user space, and better file sharing in upcoming releases. And now Dan will talk about using Chromebooks for web development and show us what Crostini looks like in action.
SPEAKER: Thank you. So we know how Crostini works, and it lets developers do everything they need locally in development; most Linux apps work. You can even run a SQL server. So you search for Crostini in the settings and you can see it install; depending on the network speed, you will soon have Linux installed on the Chromebook. And this is a terminal. We have a terminal, so we will build a desktop app for Pixelbooks.
And this usually means shipping Chromium and Node with the app. That might be useful, but consider Carlo, a Google project that is a Node app framework giving you Chrome rendering capabilities. You don't have to ship Chrome or a rendering engine with the app: it connects to the installed instance of Chrome and exposes a high-level API for you to render in Chrome from your Node script. Or you can do something simpler and build a Progressive Web App. This is what Spotify did: you click on the install app button and, once I accept the install prompt, it becomes its own PWA. You launch it, and once you launch it, there is no more install button, because it is an installed Progressive Web App, and it is launchable from the shelf. These system-level features are accessible by Chrome: available on ChromeOS since Chrome 67, which is ancient by now; on Windows and Linux starting with Chrome 70, the current version; and on Mac with Chrome 72. If you want a sneak peek, check out the desktop PWAs flag. This builds on
ServiceWorker support, implemented by all major browsers, and they are working on advanced features: PaymentRequest, which Firefox is working on; they have add-to-home-screen now, and Safari is working on authentication APIs.
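Since support still varies by browser and feature, registration is usually guarded by feature detection. A sketch (the /sw.js path is illustrative, and the navigator parameter is only there to make the function testable):

```javascript
// Register a ServiceWorker only where the API exists; elsewhere the page
// simply works without offline support.
function registerServiceWorker(nav) {
  if (!nav || !("serviceWorker" in nav)) {
    return null;  // no support: degrade gracefully
  }
  return nav.serviceWorker.register("/sw.js");
}

// In a real page: registerServiceWorker(navigator);
```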
So we talked a lot; we will do a demo and see if anything blows up. I set up Crostini already. I will install Node and npm, and we will check out Squoosh, which you may have seen in an earlier talk, an image compression app. We will check out the code, run the web server, and open Squoosh from the Android browser on the same device. If things work, we will do some remote debugging.
So these are the instructions to install Node; I will let it render because it takes a bit. And then you can see that it tells you it serves on all local addresses. We will run Chrome, while the server runs in the container, and we will navigate to localhost:8080. And there is Squoosh. I don't know why it said failed, but it works; you can open images, or not. This is a live demo, after all.
The point is that you have access to localhost from the Linux container. And we will try running Chrome Dev from Play, and then choosing Chrome Dev here, to get the distinguishable icons. It looks like we need to update it; hopefully the update will not break anything.
[ Laughter ].
So I'm going to launch it before it gets a chance to update.
[ Laughter ].
Localhost here will not work; that is a known issue. Steve is working on it.
[ Laughter ].
We need to get, well, I don't need to put you on the spot, Steve. We need to get the IP address of the Android container, which is this one. There is this command, ip address show, which has some long output; I'm going to copy the address and paste it in Chrome Dev.
I thought I
launched it somewhere.
SPEAKER: I hope
I did not break anything.  Woah.
[ Applause ].
So this is Squoosh running in Chrome, and I will try something more dangerous: we will try to remote debug it with Chromium. These are different containers, and to do that, we need to put the device in developer mode and enable ADB debugging here, which I have done. And then we need to run this command, which is documented on our Android page, with the IP of our container, and we set up the bridge to it. And so if things are on my side, we will be able to go to chrome://inspect and see a number of targets here; there's 12 of them. We will open the Squoosh one and click inspect, and this appears to work surprisingly well for a demo.
[ Laughter ].
So I will resize the window and try something spectacular: I'm going to scroll. So this is live. Not an animated gif; this is actually remote debugging.
[ Laughter ].
And whatever I'm doing here, whether this app works or not, you can remote debug it with Chromium on Linux, debugging the Android browser running your Progressive Web App. Does that make sense?
[ Laughter ].
This is what I wanted to show; let's go back to the slides.
So these are the instructions for installing Node, nothing special here; you see it on GitHub, you download it using Git, the usual developer workflow, and maybe Steve wants to show this. We can run the commands to check out the code. So until we switch the demo, the screenshot shows what we are going to do. But, great, we are not doing it live now. So Steve is going to double tap that after he copies it to the Linux container. And in the Linux container, if you double tap the .deb file, you are prompted to install it as a Linux app; ChromeOS supports this out of the box. Once the installation completes, you should see Visual Studio Code in the launcher, and the installation prompt will say to find Visual Studio Code in the launcher. And this is not network-dependent, so it should be as fast as we rehearsed, though 58 percent is not terribly fast. 91, okay. Cool. 
SPEAKER: All right, one second,
or two seconds.
There it is. And there we go. VS Code.
[ Applause ]. 
SPEAKER: I have a manifest; that's why the Progressive Web App has a start URL. We will go to the slides for best practices. We will look at this once more: it is cool how you can draw these in sync, yeah. I had to brag about that. The way to set it up is, I linked to the Medium post this morning with instructions; there's about 17 steps you need to follow.
[ Laughter ].
So check out bit.ly/crOS-remote-debug or take a picture of the slide. How to
optimize PWAs for ChromeOS, which is not really a topic, more of a non-topic: you should use Lighthouse for any PWA. If you have one to optimize, check out Lighthouse and it will give you a checklist of what to do. And make sure that the app installs. This might be different on ChromeOS than on the versions of Chrome on mobile: your users are not prompted at the bottom to install the app, so you need to catch the event before the prompt shows, save it, and call its prompt method. So this is how you do that: you add an event listener for beforeinstallprompt, prevent the default prompt, save the event in a deferred variable, and show the install button (we set its display to block). And in the click listener, you call the prompt method on the saved variable, then check the userChoice property and its outcome field to see if the user accepted the installation.
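That flow looks roughly like this. beforeinstallprompt, preventDefault, prompt(), and userChoice are the real Chrome APIs; the button wiring is illustrative:

```javascript
// Catch the install event before Chrome would prompt, save it, and show
// our own install button; on click, trigger the saved prompt.
let deferredPrompt = null;

function onBeforeInstallPrompt(event) {
  event.preventDefault();   // stop any automatic prompt
  deferredPrompt = event;   // keep it for our install button
  // installButton.style.display = "block";  // reveal the button (UI code)
}

async function onInstallClick() {
  if (!deferredPrompt) return null;
  deferredPrompt.prompt();                        // show the install dialog
  const { outcome } = await deferredPrompt.userChoice;
  deferredPrompt = null;                          // it can only be used once
  return outcome;                                 // "accepted" or "dismissed"
}

// window.addEventListener("beforeinstallprompt", onBeforeInstallPrompt);
// installButton.addEventListener("click", onInstallClick);
```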
As I said earlier, the answer to this question is no; even with the app installed on ChromeOS, you should do feature detection. And the reason is, there's a wide variety of input devices and form factors that ChromeOS can run on. So you might have a touch screen, or you might not; some lower-end devices have no touch screen. There may be a track pad, or it may be a tablet like the Tab 10 I mentioned earlier. There may be a keyboard; if you can use keyboard shortcuts, it is good to have support for them. There might be a mouse, and there might be a stylus, useful for drawing apps. Make sure to
build responsive and take advantage of the screen real estate. This is an example of an app that, on a wide display, shows a number of days of the weather forecast; when it is resized to a phone-size screen, it shows less information, and it can support a rolled-up state if the user wants to glance at the weather continuously. So you can have the previous and next controls. I have that and I do that often.
And this is an example from Starbucks: they found that building responsive pages pays off, because users order on the desktop and use the mobile device to pick up the order. So build responsive. And it pays off to optimize your forms, nobody likes to fill them in; you can go to g.co/amazingwebforms.
And there are pointer events; these are a unifying model for input: touch, track pad, mouse, and pen. You have pointer event support in Chrome, Firefox, Opera, and Samsung Internet. You add a listener for pointerdown, pointerup, pointercancel, pointerleave, and more; see g.co/pointerevents. And pointerType distinguishes between pointing devices: you can check if it is mouse, touch, pen, or something that could not be detected by the browser.
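A sketch of branching on pointerType (the event and property names are the real Pointer Events API; the handler itself is illustrative):

```javascript
// pointerType tells you what device produced each event: feature
// detection per interaction, instead of guessing from the form factor.
function describePointer(event) {
  switch (event.pointerType) {
    case "mouse": return "mouse input";
    case "touch": return "touch input";
    case "pen":   return "stylus input";
    default:      return "unknown input";  // the browser could not detect it
  }
}

// In a real page, guarded by feature detection:
// if (window.PointerEvent) {
//   canvas.addEventListener("pointerdown", (e) => console.log(describePointer(e)));
// }
```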
So what is going to happen in the future? We are working on improving the desktop PWA support. One thing is keyboard shortcuts; another is badging for the launch icon, where you can display a number of notifications. And when you click on a link today, it is not captured yet; in the future, we hope to enable this, so when you click on a link, your app will open and handle it. For that, we need to define the scope parameter in the manifest. That is used to determine when the user has left the web app and the page needs to be opened in a tab.
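The scope parameter lives in the web app manifest. In a real app this is a manifest.json file; here it is sketched as a JavaScript object (values illustrative), together with the kind of in-scope check the browser performs:

```javascript
// Sketch of the relevant manifest fields. Navigations outside "scope"
// count as leaving the app and get opened in a tab.
const manifest = {
  name: "My PWA",        // illustrative values
  start_url: "/app/",
  scope: "/app/",
  display: "standalone",
};

// Roughly the decision the browser makes: is this URL still inside the app?
function isInScope(path, scope = manifest.scope) {
  return path.startsWith(scope);
}
```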
We are also working on the low-latency canvas context, introduced in the Chrome 71 beta. This is useful for interactive apps; it uses OpenGL ES for rasterization, and the way it works is that the pixels get written to the front buffer directly. This bypasses several steps of the rendering process: Chrome writes into the piece of memory that is used by the display subsystem and scanned out to the screen. If you do not need to interact with the DOM in your interactive app, it is useful to use it. This is how to set up the low-latency canvas context: you set the desynchronized parameter to true, and the canvas needs to be opaque. And this is the last
slide, I had no idea what to put
on it. But I figured that I should add that Chromebooks are converged machines that run Linux, Android, and Google Play apps natively, without emulation. So they run very fast. And you should totally take advantage of the 75 percent off discount, and please explore Chromebooks and give us feedback. We love
feedback: we have the chromium-os Google group and the Crostini subreddit. You can check out crbug.com, and you can check out the Crostini tag there. Thank you.
This looks a bit strange.
[ Applause 
]. 
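Picking up the low-latency canvas context from the end of the talk: a sketch of requesting one. desynchronized and alpha are real 2d context attributes in Chrome 71+; the helper name is illustrative:

```javascript
// Request a low-latency 2d context: desynchronized opts into writing the
// front buffer directly, and alpha: false keeps the canvas opaque, which
// the fast path requires.
function getLowLatencyContext(canvas) {
  return canvas.getContext("2d", {
    desynchronized: true,
    alpha: false,
  });
}

// In a real page: const ctx = getLowLatencyContext(document.querySelector("canvas"));
```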
SPEAKER: Thank you very much.
SPEAKER: Prizes, prizes, look at
this. 
SPEAKER: Do you like this? 
SPEAKER: This is the big winner,
right here.  
SPEAKER: You can put it on your 
desk and think about it every 
day, writing code.  
SPEAKER: And -- 
SPEAKER: Yes. 
SPEAKER: So that's the first, 
second, third place. 
SPEAKER: Yeah. 
SPEAKER: It is an acceptable way
to do it. 
SPEAKER: We have the Big Web Quiz up there; we can do the leader board. The tension is palpable.
SPEAKER: Do you have your music?
SPEAKER: Make your
own music. 
SPEAKER: Hey!  
SPEAKER: I feel like Masataka 
won two years ago. 
SPEAKER: So 200 points, what? 
SPEAKER: Yeah. 
SPEAKER: What? 
SPEAKER: You should hire these 
three people if you need someone
-- 
SPEAKER: If you need somebody 
with niche knowledge of the web.
SPEAKER: If there's a web quiz 
somewhere, they would be 
perfect. 
SPEAKER: So those three, come to
the front of the stage.
I was gesturing, come to the 
front of the stage later on and 
we will give you beautiful 
prizes. 
SPEAKER: That is the end of the 
two days that we've got for you.
SPEAKER: We made it!  
SPEAKER: If you have been with 
us, thank you so much for coming
along, if you have been watching
on the live stream, thank you 
also for tuning in.  Don't 
forget, you can -- 
SPEAKER: Try again, Paul. 
SPEAKER: You can get all the 
videos at 
YouTube.com/Chromedevelopers.  
We will catch you. 
SPEAKER: There's a lot of people
to thank for this.
So we will just start, you know,
getting a rolling round of 
applause going.  We need to 
thank the caterers, photography,
the live stream managers, the 
captioners, the runners, the 
organizers, the speakers, the 
content reviewers, and the 
music, and anyone else we have forgotten; they made this look really easy, and making it look easy is really difficult.
And, with that, this time for 
real, see you in 2019!  
SPEAKER: See you in 2019, thank you very much!
[ Applause ] 
