[MUSIC PLAYING]
ALEX RUSSELL: Hi,
good afternoon.
I'm Alex Russell.
I'm an engineer
on the Chrome team.
And to set expectations, this
isn't the talk I usually give,
so I hope you'll bear with me
through some difficult content.
These days, I usually
put this slide up
to talk about Progressive
Web Apps, which Francis
Berriman and I named last year.
Progressive Web Apps
are the culmination
of multiple years
of my team's work.
I've been working with Jake on
service workers for, I guess,
four years now.
And the team that I work on
has been designing and building
the core technology for
Progressive Web Apps, the stuff
that you've been hearing about
for the last couple of Chrome
Dev Summits, and all day today,
and probably all day tomorrow.
And I apologize a little
bit, but not really.
But as you can imagine,
building and maintaining
all of that stuff, and working
on the standards for it,
is a full time job.
But I'm not going to talk about
Progressive Web Apps today.
At least not directly.
You see, for the
past year, I've also
been working with
Thao and her team,
to partner with folks
who are about to launch
their Progressive Web Apps, to
make sure that they're really
high quality.
And this incidental
consulting work
has given me a broad
view into the practices
of many of the teams that are
building for the mobile web
today.
What I can say with only a
few exceptions-- exceptions
like booking.com
and the great work
that the Flipkart team
did-- is that most of us
don't really understand how
hard mobile actually is.
I haven't exactly been making
friends in the JavaScript
framework community by saying
that sort of thing out loud,
it turns out.
But, as the Polymer
team will tell you,
this is basically the
PG version of what
I've been saying to
them for something
like a year and a half.
We had some really
tense meetings.
I kept saying things like, it
needs to be more asynchronous,
or look at this
trace, you really need
to load a lot less JavaScript.
And 12k sounds good
to me, actually.
Or, you really need to
break up your script,
so you're not executing
this entire long block.
What the heck is going on here?
And at some point,
they said, we get it.
Just stop telling us
what to do and start
telling us what goal to hit.
And this was kind
of a breakthrough
in the conversation.
We'd been at
loggerheads for a while.
And so I put my finger
in the air and said,
it would be really great if you
could get me something that's
interactive in about three
seconds on a 3G connection
on first load, and interactive
in about a second when I launch
it from the home screen.
So the Polymers went
off, and they went
through the stages of grief.
We had some denial, I'll admit.
There was some denial.
Anger, yes.
Bargaining, absolutely.
Depression, luckily they
sit far enough away from me
that I couldn't actually see
them sobbing in their cubes.
But they finally accepted
the challenge,
and came back with the
PRPL pattern, which Sam is
going to talk a lot more about.
Meanwhile, the rest of
the JavaScript community
hasn't internalized the same
message to the same degree.
So I'm here to let you down.
Not easily.
This may be a little bit hard,
and I apologize in advance,
but we need to get to the bottom
of what mobile actually means.
When you see me tweeting
things like this,
this is actually
kind of desperate.
I have spent years of my
life working in TC39 to make
JavaScript a better language.
I have spent countless hours
persistently advocating
for extensibility, to give you
more power when you're writing
JavaScript in the web platform.
I've designed features
like service workers,
with Jake and a load of other
folks, that are entirely
predicated on JavaScript
in the first place.
Back in the day, I used to
work on a JavaScript toolkit
with Scott and Steve
from the Polymer team.
It's not like I hate
JavaScript, or I don't like it,
or I think you shouldn't
be using frameworks.
I don't hate them.
It's just that we're really
in the midst of a crisis,
and collectively,
we don't understand
how bad that crisis is.
If we did, we would have
already modulated our behavior.
So what I'm seeing when
I do reviews is almost
universally bad news,
specifically in the context
of the RAIL performance model.
A quick recap of RAIL.
Basically, R stands
for responding to input
in under 100 milliseconds.
A is animating at 60
frames a second, which
means that, because the browser
has to apply the things that we
hand it on every
frame, we probably only
have about 8 milliseconds
per frame to do our work.
When you think about WebVR,
it gets worse-- 60 frames
a second is hard; 120, oof.
I is for idle: when we're
doing background work,
we need to make sure that we're
breaking up that background
work into 50-millisecond
chunks, so that we
can respond to
subsequent input and stay
under that
100-millisecond budget.
And L is for load: we really
want to complete actions for
the user in under a second.
Because over a second,
research suggests
that users lose
focus on the task
that they're trying
to accomplish.
If we can't actually
complete it,
we definitely need to
acknowledge that work
in under a second.
Give users the sense that
we're doing something for them.
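To make that 50-millisecond guidance concrete, here's a minimal sketch of what chunked background work can look like. The `tasks` queue is a hypothetical list of small, app-specific work items, and `requestIdleCallback` is a reasonable alternative scheduler where it's available.

```js
// A sketch of RAIL-friendly chunking: run background work in slices of
// at most ~50ms, then yield so pending input can still be handled
// within the 100ms response budget.
function processInChunks(tasks, chunkBudgetMs = 50) {
  function runChunk() {
    const start = performance.now();
    while (tasks.length && performance.now() - start < chunkBudgetMs) {
      tasks.shift()(); // run one small unit of work
    }
    if (tasks.length) {
      // Yield to the event loop so taps and scrolls stay responsive,
      // then pick up where we left off.
      setTimeout(runChunk, 0);
    }
  }
  runChunk();
}
```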
So you've heard this
number all day today.
Darin had it in the keynote.
But the DoubleClick folks
went and did a bunch of work
to find out, what are the
bounce rates for sites,
and does performance matter?
And the answer,
of course, is yes.
53% of users bounce
from mobile sites
that take more than
three seconds to load.
You leave real money on the
table when your site is slow.
Yet most of the apps that
I've traced over the last year
have been performance
travesties.
My experience isn't an outlier.
That same report noted that
the average mobile site
takes 19 seconds to load.
19 seconds.
Collectively, we're failing.
Sam's got more data on
this, and I actually
don't have time to go
into it, because we're
going to talk about
some of the reasons why.
But I think one of the key
reasons that we are not
succeeding today,
to put it kindly,
is a lack of understanding and
respect for how hard mobile is.
I have faith that as
a community, if we
understood and
respected the limits,
we'd be doing much better.
Nobody here wants to make
bad user experiences.
I think part of
this is because we
are the only platform in the
world that tried to take all
of our desktop stuff with us.
You don't take a Java JAR
that was a Swing application
and run it on your
Android phone.
You don't take a
Mac universal binary
and run it on your iPhone.
You don't take a win32 app and
run it on your Windows Phone.
Everybody else
switched their tools
when their form factor and
their constraints changed.
We didn't.
We didn't make that switch.
And the proof is in the pudding.
It's in the traces that
I'm looking at every day.
But that's not the only reason.
Why are most of the
popular frameworks
and the tools that we're using,
the tool chains that we all
wind up setting
ourselves up with,
unacceptably slow by default?
Why are our tools producing
such abysmal results?
It's not like we're bad people
who want bad things for users.
I think, in part, it's that
you all aren't actually
developing on mobile phones.
Paul showed you some
really great DevTools.
They're awesome.
But I want to see hands:
who uses Chrome DevTools
to get their responsive
view, and understand how
things will look on a phone?
OK.
Now, keep your hands up if
you're using WebPageTest
for testing on real devices.
Oh, I like you.
Some of you are liars.
And keep your
hands up if you use
chrome://inspect over USB
to do real on-device debugging.
OK, who's done it
more than once?
That's what I thought.
It turns out that
DevTools emulation
is nothing like real devices.
Network throttling,
CPU throttling,
they're all kind of a fudge.
They're better than nothing.
Please use them.
Please set them on by default.
Please use them all the time.
But they're not the real thing.
Not even close.
Let me give you a quick example.
This is the I/O 2015 website,
which was a Polymer 0.5
Progressive Web App.
It was super leading-edge.
It had push
notifications, which I
think we had launched in Chrome
like a week or two before.
There were bugs.
It was amazing.
On desktop, it felt super
fast on this Wi-Fi connection.
We get DOMContentLoaded
and a meaningful paint
at about 700 milliseconds.
The load event isn't far
behind, and that
starts a nice swooping-out
animation, which
is smooth most of the time.
Overall, we spend
about a half a second
in script, which is
well under our budget
for getting a good,
responsive experience.
And we get interactive
content in about four seconds,
including that long animation.
This is a really
great experience
on a desktop-class device.
Which is to say, my MacBook Pro.
This is the same site, running
on the same Wi-Fi network,
on the Nexus 5X.
DOMContentLoaded doesn't
show up until two seconds.
The load event fires at the
six-second mark, which
is where that animation starts.
Part of that delay is down to
this huge honking script eval,
which locked up the main thread
for nearly two full seconds.
Script execution balloons
up to four seconds in total,
and for all of that
work, we still don't even
get smooth animations.
Look at all those long frames.
Content doesn't become
interactive until 7 seconds.
This is what TTI will
tell you in Lighthouse,
and this is not acceptable.
Ouch.
So what did we learn?
Traces from real mobile
devices are a harsh master.
And when I show folks
their apps on real devices,
most have the same reaction.
And in fact, I do this a lot.
They're shocked at how slow
the median mobile CPU actually
is-- nothing like the iPhones
in their pockets.
They don't really understand
the difference between desktop
and mobile disks and storage.
And most of us
are super ignorant
about how crazy mobile
networks are in the real world.
I think we theoretically
know, at some level,
that they're bad.
Doesn't begin to cover it.
They're so bad.
It's important to understand the
depth of the deficit, though,
so we can start to adapt.
I've been letting down a bunch
of engineers for the last year.
Some hard, some soft.
I've just sort of been easing
them into this grief curve,
and then hopefully we
get out the other side.
And it goes well, but you
have to put in the work.
But unless we do that
work, unless we change
the way we're working, the web
won't work for the next billion
users.
Not practically speaking.
So something I say in
basically every meeting
is that the truth
is in the trace.
And by that, I mean
DevTools and Chrome tracing
attached to real devices.
Nothing else cuts the mustard.
It doesn't get you there.
So this is what's sitting
on my desk on a typical day.
And aside from the Pixel XL
that's on the right hand side,
all those phones are
less than $300 new.
The folks at Konga report that
the most commonly used phones
in their market are in
the sub-$100 range new.
I carry most of these
phones around in this bag.
This bag is in my
bag, my other bag.
With me all the time.
And these are some slow phones.
And I carry them because
I have zero faith
that anything I trace in
Chrome, on the desktop,
is going to be like
the real world,
unless I've put it on the
phone and emulated 3G.
And we'll talk about 3G.
I don't trust that unless
I've done it on real hardware.
And you know what
else I don't trust?
The marketing numbers
from phone vendors.
So here are some of the
headline specs for the devices
that are in my bag.
All these devices have
flash-based storage.
Naively, there's
no reason to think
that script should run 10
times slower on my Nexus
5X than it does on
my MacBook Pro.
Especially if I looked at
these headline numbers.
2.8 gigahertz versus
1.8 gigahertz.
That's not a 10x difference.
Naively, there's
no reason for that.
If we just looked
at these numbers,
we wouldn't really
understand what's going on.
It bears repeating.
If you think the $700
iPhone in your pocket
is what people are
going to be adopting
in the next couple of years at
the median, you're delusional.
The average selling
price of smart phones
is going down, not up,
because the next set
of people who are
going to buy phones
are not replacing
their current phone.
They're buying a new
phone for the first time.
And all the rich people
already have smart phones.
The next billion users aren't
buying high-end devices,
they're buying at the margin.
And that margin
is a cheap device.
Worldwide, phones
are getting slower.
So is the average
network connection.
Your test device needs
to represent that reality
so that we don't wind up
building what Bruce Lawson has
called the Wealthy Western Web.
So this is MotionMark.
It's a benchmark that Apple
put together earlier this year.
It tests a bunch of graphics
performance scenarios, but it's
very JavaScript-bound in many cases.
On apples-to-apples hardware,
with Safari and Chrome running
the same version of OS X,
Chrome ties or beats
Safari in most cases.
Basically, this is not something
that we're actually slow at.
So here is the same benchmark,
on the same version of Chrome,
on a Nexus 5X.
The desktop version
is 25 times faster.
For as slow as
the Nexus 5X is, I
was able to change
just one thing
and get the MotionMark
benchmark running 15% faster.
15% for one change!
What the heck did I do?
Is it magic?
[LAUGHTER]
We're all adults here.
So I think it's safe to admit
that magic isn't actually
a thing.
Instead, I used a
little bit of science,
and I added this
makeshift ice pack
to the bottom of the phone.
[LAUGHTER]
I got this idea from my
colleague Victor, who'd
been looking into optimizing
this benchmark earlier
in the year and was seeing
massive variance across runs.
What the heck is going on here?
Going back to basics, recall
that computers are basically
just a bunch of wires.
Those wires have resistance,
and voltage and current
running through them, which
means that they dissipate heat.
And we dissipate some
more heat every time
a transistor flips from the
on state to an off state,
or vice versa.
That same process
gives us computation,
but it also sheds excess
power in the form of heat.
So a chip built on
the same process,
with roughly the
same architecture,
with the same number
of transistors
that dissipates more power
and turns it into heat
is the chip that does more math.
And doing more math
is how you go faster.
When it comes to computing,
power is literally power,
and power equals heat.
So these are the guts of a
regular desktop-class machine.
The chip in there
is a slower version
of the one in my laptop.
The square fins on
the top of that thing,
they're the heat sink.
And the job of the heat sink
is to dissipate the heat coming
off of the chip.
Now, that heat sink
is seated on top
of the metal lid of the
chip package,
with a little layer of
thermal paste between them,
so there's no air gap.
An air gap would let the chip
get so hot that it would
break.
There's a fan that's
running over the entire box,
extracting all that heat
that's getting dissipated out.
And the result is that a
desktop-class or high-end
laptop chip like this
can dissipate something
like 60 watts under load.
This is what 60
watts looks like.
I don't know about
you, but I haven't
chosen to hold 60 watts
dissipating in my hand
more than once.
And--
[LAUGHTER]
--this is the key reason that
mobile phones don't run as fast
as desktops or laptops.
Even if they can include
as many transistors,
or scale up to the
same frequencies,
these chips, in
these packages, just
can't dissipate 60 watts
without burning your hand,
as Taylor said earlier.
So let's look inside the
guts of one of these phones.
This is the remains
of the Nexus 5X
that I used as my daily
phone for a couple of years.
It gave off the
magical blue smoke
and stopped booting a
couple of weeks ago.
So now I get to dissect it.
And unlike
desktop-class machines,
where the GPU and
the memory might
be on different sections of
the board, or different boards
entirely, the whole
system-on-a-chip
lives on the
other side of this,
and that thing there
is the power supply.
On the flip side of the same PCB
is the entire system on a chip.
It's got an aluminum cover like
this, sort of a heat spreader.
So when we flip it over,
this is what you see.
There's no thermal paste.
No fans.
I took the shield
off, but that's all.
In fact, the CPU module isn't
even visible on this board,
because it's sitting underneath
that Samsung-made RAM chip.
Think about that.
To get heat off of this CPU, it
has to go through another chip,
and then through the
casing of that chip,
then to air, then to
a thin aluminum thing
to maybe spread some of
that, and then out what?
The screen?
The two layers of polycarbonate
plastic in the back?
Two layers.
Separate layers.
They aren't even connected.
Remember that polycarbonate
plastic dissipates
heat 1,000 times less
efficiently than aluminum.
If you can't evacuate
the heat, then you
can't really generate
a lot of it
without the core
temperature rising to levels
that damage the circuitry.
And then the magic
blue smoke escapes,
and your phone stops booting.
I wonder what I did to mine.
So chip designers
saw this coming.
They've been putting dynamic
voltage and frequency scaling
into chips for
more than a decade.
And, more recently,
they've started
enabling features that
allow OSes to turn off cores
entirely.
All this reminds
me of this paper
that I read a few
years back, from 2011.
If you have some spare
time, I recommend it.
While perhaps not
intended to be,
it reads like a prophecy
from half a decade ago
about the experience that
we're all carrying around
in our phones, where a huge
percentage of the silicon
in our devices isn't
actually available to be
used, thanks to the power
and thermal constraints.
And the power thing is real.
There aren't any heat
sinks on your phone,
although I'm pretty sure
that you wouldn't want a heat
sink and a fan in your pocket.
But imagine if you
could have one.
Why don't those exist?
Why can't I get a bulky phone?
The basic reason is that
this battery only contains
10 watt-hours of energy.
Think about that in
terms of a light bulb.
You'd only be able to keep it
lit for a couple of minutes
if you could drain the
battery that quickly,
which you don't, because
doing that causes
batteries to explode.
So just don't try.
Just FYI.
Not a fun experiment at home.
This is why mobile
phones are slow.
We can't dissipate power
because we can't carry power.
That battery has to deal
with all sorts of stuff.
It has to deal with the CPU, and
the GPU, and the Wi-Fi radio,
and the Bluetooth radio, and
the NFC radio, and the cell
radio, and the screen,
and the touch digitizer.
It has to power all
that stuff and keep
you satisfied for a day's worth
of use on a single charge.
On something that can't
keep that light bulb
lit for more than a
couple of minutes.
So to keep from
wasting power, there
is a lot more complexity
in modern phones.
Most of them today
use what's called
a big.LITTLE architecture.
And that means that
they try to move work
from high-power cores
to low-power cores
very aggressively.
The systems that move that work
around are called schedulers.
All kernels have schedulers,
and the [INAUDIBLE] schedulers
are a bit all over the map.
The big thing to
understand about them,
though, is that
your phone probably
isn't using what's called
symmetric multiprocessing.
That is to say, not
all of the cores
are spun up to the same
voltage and frequency,
running at the same
rate, all the time.
There are different levels.
Most of the phones that you
probably have have something
called a Global Task
Scheduler, which
moves work around
between those cores.
The systems in Linux that wind
up doing that work management
are notoriously hard
to tune, and they
use all sorts of
heuristics to do it.
Some do something
called touch boosting,
and what that means is when
you put your finger down
on the glass, they power up the
big CPUs in anticipation of you
doing some work, like animating
something or flinging it
around.
Some of them have
special heuristics
for launching applications, so
that when you launch something
from the home screen, they power
up the big cores so that thing
launches very quickly.
Now, that looks nothing
like the web workload.
The web workload, today,
looks like tapping on the URL
bar and waiting for the network.
And maybe those cores
got scaled down again.
And then your content comes in.
And then we start processing it.
Basically, the web is
fundamentally not aligned
with the way mobile phones
have been optimized to work,
because our workloads don't
look like their workloads.
Lastly, remember
that light bulb?
The light bulb is
why you shouldn't
believe any of the numbers
that you see in benchmarking.
The idea that your mobile phone
CPU is as fast as a desktop CPU
may be true in the limited case,
but not in the common case.
You're going to get scaled.
You're going to get throttled.
Things are going to move
to a low-power state
as aggressively as possible.
That's not how the world works.
We don't keep things spun
up the way we do on desktop.
So here's the chart again, but
I added the details of the CPUs.
It looks a lot different
now, doesn't it?
I'm not even going to
get into the details
of the huge differences that
come from caches, and pipeline
depths, and in-order versus
out-of-order dispatch,
and memory bandwidth.
But all of that matters.
The tl;dr is that you
actually get what you pay
for on a mobile phone.
Importantly, the MacBook Pro
packs a 100 watt-hour battery.
That's the FAA maximum
limit that you're
allowed to carry in a
single battery onto a plane.
Design constraint.
But as a result of having
that much power available,
and because of the heat sink
and the fan in my MacBook Pro,
I can keep those four
cores spun up under load,
and they can dissipate something
between 40 and 60 watts.
All the phones in the chart
are big.LITTLE devices.
And that means that
many of the cores
are powered down
most of the time.
That Moto 4G at the
bottom has eight cores,
but if you get more
than three of them
working for you at any point
in time, you are really lucky.
So mobile CPUs aren't exactly
what you thought they were.
I mean, since when would
a hardware vendor ever use
an opaque number to mask a
major difference in performance,
right?
[LAUGHTER]
And maybe memory pressure
and smaller memory footprints
on mobile devices don't allow
us to, on the browser side,
trade away space for
speed as aggressively
as we do on desktop.
And maybe the storage systems
are roughly as fast, though.
So maybe we could get
something back there.
I mean, they're just solid-state
flash devices on a Linux OS,
if you're running
Android, right?
Who has heard of
the term MLC Flash?
OK, it's like 10 people.
That's about what I expected.
MLC is multi-level cell flash.
Basically, they are chips on top
of chips inside the same chip
package.
And that is how you
make storage cheaper.
And it's a primary
reason why Nexus 5X gets
400 megabytes a second
of read throughput,
and my MacBook Pro gets
two gigabytes a second
of read throughput.
In SSDs, the way you get better
performance is parallelism.
In order to read or write a
block, you want to distribute
reads and writes-- for large
reads and large writes--
to as many different
chips as possible,
because the latency for
getting data to and from each
of those chips is constant.
So you have a controller in
front, it's got some memory,
and then it distributes that
work out to those chips.
Now, physical space is at a
premium on mobile devices,
but so is power.
And so vendors have tried
to consolidate those chips
as far as possible.
And that means
that they're using
fewer and fewer chips--
usually just one-- for all that
reading and writing.
And that means low parallelism,
which means low performance.
We don't get that
benefit that my MacBook
Pro does, of having many chips
in a row on a mobile device.
MLC flash is also just
a [INAUDIBLE] slower,
and file systems haven't
really caught up.
Basically, you should think
of the median mobile phone
as having a spinning
disk from 2008.
Think of it that way.
That's probably a
pretty good parallel.
OK, that's kind of
a bummer, right?
Spinning metal.
And if mobile
disks make you sad,
the state of mobile
phone networks
will make you wish
that mobile disks were
the problem you actually had.
If you haven't, I recommend
checking out Ilya Grigorik's High
Performance Browser Networking.
It's free.
You can read it online.
And it goes over a huge amount
of the total end-to-end network
stack that gets bits from
your server to your phone.
Highly recommended.
If you spend some
time with it, you'll
get to where I got to, which is
that mobile networks hate you.
Cell networks are basically
kryptonite to the protocols
and assumptions that
the web was built on.
Where TCP and the web were
built around the assumption
of relatively stable
underlying transport conditions,
cell networks gyrate
wildly, from millisecond
to millisecond.
Where TCP assumes relatively
constant packet loss
and constant RTTs,
cell networks
deliver anything
but, transitioning
from one network type to
another, or one subtype
to another, in real time.
And where the web's model
of hot-linking sub-resources
assumes reliable networks
for the duration of the page,
we kind of know how badly
that breaks down in practice,
don't we?
At least those of us who use
public transport, or ride
in cars, or,
basically, use phones.
So this paper is a pretty
good entry point, especially
its references, into why these
networks hate you so much.
And they do.
They hate you a lot.
When you dig in, it turns
out that what's really
killing you is the
variance and the volatility
in the underlying
network substrate.
Now, some of you
might be thinking,
isn't LTE going to save us?
Well, yes and maybe.
Here's last year's
performance for US LTE users,
compared to the year before.
These networks are
actually getting slower.
In fact, the variance
in mobile networks
is so massive that
it feels like a farce
to call something 2G or LTE.
Some of the largest
emerging markets
have median RTTs
north of 400 milliseconds.
When you open DevTools and
you do network throttling,
and you put in the
regular 3G mode,
it sets the RTT to
100 milliseconds,
which is par for the course
in the US, but wildly wrong
in other markets.
Especially when you think
about how many carriers wind up
throttling things
even further down.
The same network type may mean
dozens of different things
for your users.
For the TCP geeks in
the house, thinking
about what that sort of
round-trip time does to
the bandwidth-delay product
can really bring you down.
Channel capacity be damned.
That sort of latency eats your
transfer speed for breakfast.
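Here's a rough sketch of that math, assuming textbook TCP slow start with a 10-segment initial congestion window that doubles each round trip; real connections, with loss, proxies, and carrier throttling, only do worse.

```js
// Back-of-the-envelope: how many round trips does TCP slow start need
// to deliver `bytes`, and how long does that take at a given RTT?
// Assumes a 10-segment initial congestion window that doubles every
// round trip, and ignores packet loss entirely.
function slowStartTimeMs(bytes, rttMs, initCwndSegments = 10, mss = 1460) {
  let delivered = 0;
  let cwndBytes = initCwndSegments * mss;
  let roundTrips = 0;
  while (delivered < bytes) {
    delivered += cwndBytes;
    cwndBytes *= 2;
    roundTrips += 1;
  }
  return roundTrips * rttMs;
}

// The same ~170 KB of script at a US-style 100ms RTT versus a
// 400ms-RTT emerging-market connection:
console.log(slowStartTimeMs(170 * 1024, 100)); // 400 (ms)
console.log(slowStartTimeMs(170 * 1024, 400)); // 1600 (ms)
```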
But of course, this is mobile,
so it's worse than that.
As Ilya said to me recently,
a 4G user isn't even a 4G user
most of the time.
Cell radios are
magical things, sure.
They try to preserve
power too, though.
And they seamlessly transition
between their high-power states
and their low-power states
across different radio types,
different cell locations.
They do a ton of
work to make sure
that we never see what's
happening under the covers.
But that creates variance.
When users try to
connect, their phones
might be in a low-connectivity
or low-power state
that they weren't in
just a minute ago.
In those cases, the radio
resource control protocol
that Ilya's book goes into
some detail on determines
how the connection gets made.
For users in very-low-power
states on 3G connections,
it can take seconds to just
start the radio handshake
at the physical
layer, so that you
can start transmitting data.
If you want to get bits on
screen in three seconds,
you're in a really tough spot.
You can't do DNS, TCP,
TLS, or even start
sending those HTTP
headers down the wire
until all of that is complete.
Now, consider adding hundreds
of kilobytes of JavaScript
to the mix.
That's not theoretical.
The HTTP Archive is
showing that the top 1,000
sites put almost a megabyte
of uncompressed script
on their pages today.
On those networks,
on these CPUs,
this is a recipe for disaster.
No wonder users have
the pervasive feeling
that the mobile web is slow.
I think it's only reasonable
to be sad about all of this.
The tools and
techniques that we've
brought over from the desktop
era really aren't serving us.
To make great
Progressive Web Apps,
we need to do
things differently.
We need to load less code.
We need to load it
at the right times.
And we need to let the
browser do work for us
whenever possible.
"Use the platform"
isn't a nice-to-have.
On mobile, it's
the only way to go.
Sam's going to go into a lot
more detail about the depth
of the crisis that
we're in, but make
no mistake, if you're
using one of today's more
popular JavaScript frameworks
in the most naive way,
you are failing by default.
There is no sugarcoating this.
Except for the tiny club of
fast-enough-by-default tools,
like the Polymer App
Toolbox and Preact
with some good webpack-fu,
today's frameworks are mostly
a sign of ignorance,
or privilege, or both.
The good news is that we
can fix the ignorance.
So when we're armed with data,
we can make better choices
and avoid those
slow-by-default tools.
I've talked to a
lot of teams who've
gotten a long way into
the PWA development story,
and they've got very heavy
client-side JavaScript apps.
Their apps feel like
Gmail, basically.
They get a loading bar,
or something like it,
while a ton of script
starts to execute.
And then, they get a fast UI.
All the subsequent
interactions wind up
being fast because they've
paid all that cost upfront.
But many developers find
that this is kind of slow.
It feels slow to use.
Once everything is loaded, it's
great, but as you saw earlier,
JavaScript execution
on phones makes
this strategy kind of a loser.
JavaScript execution
is single-threaded.
Sure, we can parse and
compile off-thread,
but we can't use
the preload scanner
to grab sub-resources if
they're embedded in that script.
We can't speculatively build
DOM, or parse CSS, or apply it.
When you go with one
of these tools that
doesn't use the platform
well, you bet the farm
on a single core,
on a phone that
might be thermally throttled
or in a low-power state.
Good luck.
So what we're seeing
now is something called
server-side rendering, a.k.a.
isomorphic rendering,
a.k.a. universal JavaScript.
I haven't looked in
a couple of months.
Is there a new term for this?
[LAUGHTER]
I'll take that as a no, OK.
So the idea is to run
JavaScript on the server--
the same JavaScript
that you run on the
client-- and then
send down a pre-computed
snapshot of the HTML
that you were going to render.
And then you load the
gargantuan JavaScript bundle
and hope that it all works out.
On my MacBook Pro, or an
iPhone with a fast connection,
or my Pixel,
something like that,
this works out pretty well.
But for the vast
majority of users
with less-expensive phones
or less-good connections,
you get this crazy
uncanny valley.
When the JavaScript
arrives, the main thread
locks up all the same.
Until that script finishes
executing,
the content might
be displayed, but it
isn't meaningfully interactive.
Now there's a
debate that starts here.
Some folks think that,
because maybe you
can start scrolling this stuff--
because browsers are magic
and they do threaded
scrolling-- it counts.
But scrolling actually
isn't interaction.
If I can't put my finger
down and tap on your UI,
and have it start responding
and doing work for me
under 100 milliseconds,
it isn't loaded.
It's broken, OK?
What we really want is
progressive interactivity,
and this is what the PRPL
pattern that Sam will go into
delivers.
The insight here is
that you should only
load the code that you need
right now, if possible,
for the views that you're
actually sending to users.
And combined with service
workers and HTTP/2 Push,
it's possible to achieve
this without over-bundling.
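As a sketch of that idea, here's what route-level code splitting can look like with a bundler that supports dynamic import(); the route table and module paths are made up for illustration, and the Polymer App Toolbox achieves the same effect with granular HTML Imports instead.

```js
// A sketch of "load only the code the current view needs": each route
// maps to a dynamically imported module, so the bundler can split each
// view into its own chunk. Routes and paths here are hypothetical.
const routes = {
  '/':       () => import('./views/home.js'),
  '/list':   () => import('./views/list.js'),
  '/detail': () => import('./views/detail.js'),
};

async function navigate(path) {
  const loadView = routes[path] || routes['/'];
  const view = await loadView(); // fetch + evaluate only this view's code
  view.render(document.body);    // assumes each view module exports render()
}

navigate(location.pathname);
```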
PRPL and the Polymer
App Toolbox represent
what's possible, once we take
that mobile-by-default thing
seriously.
And it's night-and-day from
where most popular tools are.
So this is the shop app
that you've seen before.
The Polymer team
released it at I/O.
You can visit it right now
at shop.polymer-project.org.
And here it is running
on a desktop browser.
We get to interactivity
very quickly on Wi-Fi,
and it only takes a few
milliseconds of script overall.
So what?
You've seen this story before.
But what about mobile?
Again, Nexus 5X, same
Wi-Fi connection.
Despite the slower
CPU, the app sends down
an appropriate
amount of script, so
that we get
interactive performance
in under two seconds.
There's nearly a second and
a half of script execution
overall, but thanks
to the granular use
of HTML Imports and
HTTP/2 Push, in contrast
to monolithic bundling,
most of the components load
with tiny execution
slices, which
means that the content that's
already on-screen stays
interactive.
This is what
mobile-first looks like,
and it's radically
different in a good way.
The other thing that
the PRPL pattern adds
is a service worker.
You might be thinking
that service workers are
about handling
offline, and while they
do allow you to
do that, that's
not the primary benefit
for most end users.
Service workers
matter because they
let you deliver
reliable performance,
because they can handle
the top-level resource
and always return
something from the cache.
You can dramatically improve
the performance of your apps
when you use service
workers this way.
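Here's a minimal sketch of that cache-first, app-shell approach; the shell file list is illustrative, and a real build step would generate and version it.

```js
// sw.js -- a minimal cache-first app-shell service worker sketch.
const SHELL_CACHE = 'shell-v1';
const SHELL_FILES = ['/', '/shell.css', '/shell.js'];

self.addEventListener('install', (event) => {
  // Pre-cache the shell at install time.
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_FILES))
  );
});

self.addEventListener('fetch', (event) => {
  // Cache-first: the shell never waits on the network; anything not in
  // the cache falls through to a normal network fetch.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```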
That huge variability
in network conditions?
It evaporates when
you've done this.
So this is a chart that
Eric Bidelman gathered
from this year's I/O site.
It's also a Progressive Web
App, but with Polymer 1.0.
But as you can see from the
giant dark green spike, when
the service worker is active,
the distribution of load times
moves hard to the left.
And that's a good thing.
This is very literally
what "faster" looks like.
I'm seeing a lot of teams
try to add service workers
as some sort of transparent
pass-through thing,
using a network-first approach.
Don't do that.
Please don't do that.
Use the PRPL pattern,
and make sure
that your top-level app shell
never depends on the network.
If you do that, you can
compete with native apps
on the experience
that you deliver.
If you saw Darin's
keynote this morning,
you saw exactly that with
the CNET Tech Today PWA.
If you don't do
that, though, you
will never match
their performance.
Until recently, it's
been difficult to verify,
in an automated way, that your
service worker is installing
and that the rest of your
Progressive Web App properties
are actually met.
You've heard a lot
about Lighthouse,
but please run it
from your CLI, put it
in your continuous
integration system,
and let it tell you
how you're doing.
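One way to do that, sketched below, is to shell out to the Lighthouse CLI from a Node script in your CI job. The URL is a placeholder, and the exact JSON report shape varies between Lighthouse versions, so treat the field handling as an assumption to adjust.

```js
// ci-lighthouse.js -- a sketch of running Lighthouse in CI.
// Assumes the Lighthouse CLI is installed (npm install -g lighthouse).
const { execSync } = require('child_process');

execSync(
  'lighthouse https://staging.example.com ' +
  '--output=json --output-path=./lighthouse-report.json --quiet',
  { stdio: 'inherit' }
);

// Inspect the saved report; exact fields differ across Lighthouse
// versions, so check the report schema for the version you pin.
const report = require('./lighthouse-report.json');
console.log('Lighthouse report written with',
    Object.keys(report).length, 'top-level fields');
```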
So I think it's safe to say
that mobile is much, much harder
than we've collectively
understood it to be.
To make good apps
in this environment,
we need to change our
outlook, our tools, and most
of all, our priorities.
And the fastest
way I know to get
in touch with that,
that ground truth,
is to test on real hardware.
So please, if you don't already
have a circa 2014-ish Android
phone, go out and buy
something like a Moto G4.
If you can use one of these,
with chrome://inspect and DevTools,
you'll find yourself in
touch with how it really
feels to be at the median.
If you can, get something worse.
Like, this is an Android
One from last year.
You probably can't buy one.
But get something
worse if you can.
And if you can't
afford any of those,
please use
webpagetest.org to select
from the list of real
mobile devices that
are sitting in a rack, at your
disposal, to test your URLs.
And whatever you do, implement
as much of the PRPL pattern
as you can.
Sam will fill you
in on the details,
so stay tuned for that.
And lastly, Lighthouse
and Chrome telemetry
are potent weapons for
catching regressions,
for making sure that you're
doing the right thing.
I want to apologize for being
a bit of a downer today.
Usually, I'm telling you
about how good PWAs are,
or how great the experience
is that you can deliver.
And that's all true.
And there is good news.
It's that modern
web technology makes
it possible to build
truly amazing experiences.
But it will require ditching,
or radically reworking,
the slow-by-default tools
that we're using.
Addy's got a whole talk tomorrow
about what kind of elbow grease
you really do have
to put in if you've
bought into one of
today's major frameworks,
but I think you now see that the
challenge is much larger
than you probably
thought it was.
Now that you know,
I'm actually very
confident that the folks in
this room and on the live stream
are going to internalize
this, and use it to make
really great experiences.
So thank you.
[APPLAUSE]
[MUSIC PLAYING]
