[MUSIC PLAYING]
NATE SCHLOSS: Today, we're going to talk about performance patterns for building inclusive web experiences. Let's kick things off with Addy.
ADDY OSMANI: Cool, so any user
can have a slow experience.
It could be the site.
It could be their hardware,
or it could be the network.
Now, have you ever watched someone who has poor network service? They're usually staring at their phone.
And then, their arm just slides
to go up, and up, and up,
and up--
almost as if they're trying to pierce some force field that will give them 4G.
Now, I personally think that I
probably have the worst network
service here.
If I go from one part of
the stage to the other,
if I just twirl
around a little bit,
I go from 4G to edge, edge--
edge being both my network
connection type, but also
my mental state.
[LAUGHTER]
Now, we've all had
user experiences
that are fast and also
plenty that are slow.
So to support a web ecosystem that's inclusive of users on both low-end and high-end devices and networks, as developers, we need to start doing something.
We need to start respecting
the user's hardware and network
constraints.
You see, these
characteristics really matter.
The user's hardware
and their network type
can massively impact
the experience
that they're going to
have with your site.
So let's talk about why
that is, what are components
that can contribute to that?
Well, what's in a
modern smartphone?
We've got a CPU, memory,
storage, a screen, and battery.
Phone hardware can
vary quite a lot.
The hardware that's in your pocket right now probably differs quite a lot from the hardware that your users are accessing your sites on.
L1, L2, L3 caches--
all of these things
can have an impact on
the overall experience
in pretty noticeable ways.
Now, let's quantify this
problem a little bit.
Here is a visualization of
the performance gap on mobile.
What we can see here are the top 10 highest-selling smartphones for the first half of 2019.
And what you'll
see is that there's
a huge gap between high-end
devices and everything else.
It's about a two or three times slowdown if we're talking about the other hardware.
Now, device characteristics
matter quite a lot
because one of the
things we're doing
is putting JavaScript
increasingly into our sites
today, a lot of it.
And JavaScript is inherently single-threaded, more single-threaded than the rest of the platform, so this stresses things like single-core performance.
And so we need to care
about things like the CPU.
So if you're making
sites that are only
going to work on
high-end hardware,
you might be excluding
some of your users.
So this is something we've
said for multiple years--
if you want to build a fast
site, use slow hardware.
Alex Russell has been
on the stage plenty
of times saying the same thing.
But I just want to remind you
that that's one of the things
that we need to just make
sure we're constantly doing.
Now, we talked
about smartphones.
But this problem of variance
applies equally to desktops
as well.
There's a huge performance
gap on desktop.
Over here, we can see the CPU performance. This is using Geekbench CPU performance data, same as the mobile chart.
And it shows us the
highest-selling laptops
on Amazon.
And at the very bottom, I have
a modern popular developer
laptop, a MacBook Pro.
And what you can see is
that the devices that we're
building our experiences on are
so much faster than the devices
that people are actually out there buying en masse.
You can have old hardware that sticks around for years and years; people tend to have longer refresh cycles for desktop hardware.
And there's just generally
a huge performance disparity
between low-end and high-end.
So a question that Nate and I
would like to pose to you today
is, do we need to deliver
the exact same experience
to every user?
We think that the answer is no.
In a world with widely
varying device capabilities,
a one-size-fits-all experience
doesn't always work.
Sites that delight users on high-end devices can sometimes be unusable on low-end ones,
particularly in emerging
markets and on older hardware.
Now, we think that responsive
design was a really good start.
But we think we can maybe
increment on it a little bit
and improve it.
So today, we'd like to introduce
this idea of adaptive loading.
Now, adaptive
loading is this idea
that we build to support low-end
devices first and progressively
add high-end only
features on top of it.
This allows users to get a
great experience best suited
to their device and
network constraints
with fewer frustrations.
So everyone gets one core experience, but people on high-end networks and devices get something that's just a little bit better.
Let's chat about a few
ideas in this space,
and how we can make
it easier for everyone
to give users a good
experience on low-end devices.
We're going to kick
it off with a demo.
So let's switch over real quick.
Let's see if this
is working, cool.
So Paul Irish and
Elizabeth earlier today,
they mentioned this really neat
YouTube lazy loading element
that Paul had built for
improving performance.
And I thought it'd
be neat for us
to actually try integrating
that into an app
and show you a few ways
that we can improve on it.
Before that, how
many people here know
what styled console logs are?
That's like 30%,
40% of the audience.
So they basically
look like this.
They let us create these
nice fun, funky console logs.
One of the really nice things
about styled console logs
is that we can also abuse them to create our own console messages-- so in my case, speaking after Paul Irish and Elizabeth Sweeny.
But we're going to try to give
you a decent demo of this idea.
So here what we've got is basically a reimplementation of YouTube.
It's using live YouTube data.
And I'm going to keep the
Network Panel open over here.
We're going to
navigate to a video.
So let's go to this
one real quick.
What we see is that 514
kilobytes worth of scripts
are loaded for this experience.
Now, imagine we were to swap out
the Core Video experience here
for something a
little bit lighter.
Now, I want to do something
real quick to show you
what we're going to do.
So we've got this little
window debug thing.
What we can see in green
is the core content
for this experience.
In red, we've got all
this extraneous stuff.
We've got Recommended Videos.
We've got Comments.
And I thought it'd be interesting to think about: what if you're on a slow network or a constrained device, something with low memory, or if you receive [INAUDIBLE]? What if we were to do something different? If we navigate back to the main experience, we can emulate this.
Let's go into fast 3G.
And if we now go to the video
page, what we see instead
is that we've actually used that
lazy loading element from Paul
Irish earlier.
And it's only loading up
three kilobytes of scripts.
But we're being very intentional, only shipping this down to users who are in the worst conditions.
So we're going to talk a
little bit about these ideas.
We're going to switch back
on to the slides right now.
Now, there are three
or four key signals
we'll be looking at for
adaptive loading today.
First of all, we've got
network for fine-tuning things
like data transfer to
use less bandwidth.
We've got memory for reducing memory consumption on low-end devices, and CPU for limiting costly jobs' [INAUDIBLE] execution and reducing CPU-intensive logic.
And we'll talk a little
bit about client hints.
And we'll talk about the JavaScript APIs we're using for some of this stuff.
Now, to make all of
this easier, today we're
releasing a new experimental
set of React Hooks
for adaptive loading that
you can go and check out.
If you're using React to
build experiences today,
whether it's React on its own or Next.js,
you can use these
Hooks for everything
from network, memory, CPU
to employ some of the ideas
that we're going to be talking
about in just a second.
And by the way, all
of these are built
on top of web platform APIs.
And so if you're using
a different framework,
if you're using Angular, or Vue, Svelte,
Lit, any of these
things, you can still
employ these techniques.
It's just that we're going to
be focusing on React for now.
AUDIENCE: Woo!
ADDY OSMANI: Well, thank you.
[LAUGHTER]
Thank you, one person.
So let's kick things off
with adaptive media loading.
Now, this is the idea that
we serve low-quality images
and videos to users,
reducing bandwidth and memory
consumption.
So picture, I've got a site
where maybe I'm shipping down
videos to everybody.
But do I need to?
Maybe I could be shipping down low-resolution images instead, if the user's network can't handle it, if their device can't handle it.
So you could picture a
photo gallery application
and shipping those
low res images,
or using less
code-heavy carousels.
You could imagine a
search application
where maybe you're limiting the
number of media-heavy previews.
You can imagine a news-oriented site where you're omitting some popular categories that maybe have preview images in them as well.
And the way that we can
determine network connection
information on the platform is
using the network information
API.
So the Network Information API summarizes the performance of the user's network connection. And on the web, it's what allows us to deliver experiences based on how slow or fast the connection is.
Now, you can use this
API via the web platform.
And you can also
use it for things
like conditional
resource loading,
using the React Hooks
that I just mentioned.
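A rough sketch of this decision in plain JavaScript, written as a pure function so it also runs outside the browser; the variant names and cutoffs are illustrative, not taken from the talk's demo:

```javascript
// Pick a media variant from the effective connection type reported by
// the Network Information API ('slow-2g', '2g', '3g', '4g').
function selectMediaSource(effectiveType) {
  switch (effectiveType) {
    case 'slow-2g':
    case '2g':
      return { kind: 'image', quality: 'low' };  // tiny placeholder image
    case '3g':
      return { kind: 'image', quality: 'high' }; // full-res image, no video
    case '4g':
      return { kind: 'video', quality: 'high' }; // full video experience
    default:
      return { kind: 'image', quality: 'high' }; // API unsupported: safe middle ground
  }
}

// Browser usage (navigator.connection isn't available in every browser):
// const { effectiveType } = navigator.connection || {};
// const source = selectMediaSource(effectiveType);
```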
So let's actually take a
look at a quick demo of this.
We're going to switch back
over to the other machine real
quick.
And here we have an
experience called React Movie.
By the way, for all
the demos today, we're
integrating adaptive loading
on top of existing apps built
by the community.
None of this is stuff that was written from scratch.
You can employ these
ideas in your apps today.
So here we have an app
called React Movie.
And this is a movie
discovery app.
I can see all movies that
are out at the moment.
I can click through.
And I can browse
thumbnails for them.
But what you see is that this
core experience is currently
shipping 2.7 megs worth of images if I'm a casual user.
We can employ adaptive
media loading techniques
and actually deliver
an experience
where if you're on slow 3G-- so
let's actually clear this out.
This might take a hot second
to load up given it's so slow.
But the idea that we're trying to represent here, if it ever loads up, is that you can still offer users an experience with slightly lower-resolution imagery in a lot fewer bytes.
So it's taking its time. It's getting there.
But these are all
lower-resolution images.
The overall payload size is
significantly smaller than what
we were showing you before.
And it actually didn't take
a lot of code to do this.
We're just using two or three
lines of additional code
after importing in
that network Hook.
And everything works
the way you'd expect.
So the next thing I wanted to
show you-- let's switch back
over to the slides, please.
So the next thing I
wanted to show you
was Data-Saver aware
resource loading.
So the Save-Data Client Hint is a request header that lets you deliver lighter experiences to users who opt in to data-saving mode in their browser.
And when a user has
this feature on,
the browser can request
low-resolution images.
It can defer loading
some resources.
And this is available
as a JavaScript API but
is also something you
can use via Client Hints.
So once again
here, you see we're
using our React Hook in order
to achieve conditional resource
loading.
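A minimal sketch of the same decision without the Hook, as a pure function (in the browser, the flag comes from navigator.connection.saveData); the settings shape here is made up for illustration:

```javascript
// Derive lighter media settings when the user has Data Saver enabled.
function mediaSettings(saveData) {
  if (saveData) {
    // Low-quality image placeholders, and no video autoplay.
    return { imageQuality: 'placeholder', autoplayVideo: false };
  }
  return { imageQuality: 'full', autoplayVideo: true };
}

// Browser usage sketch:
// const { saveData } = navigator.connection || {};
// const settings = mediaSettings(saveData);
```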
A company that's using a custom data-saving mode quite effectively today is Twitter.com.
So Twitter is designed to
minimize the amount of data
that you use.
They've got a really
nice Data Saver mode.
And when you opt into it,
you can get anywhere up
to 80% reduction in overall data
usage for images on the web,
and anywhere up to 96% if
you're including things
like disabled video autoplay.
I thought it would
be neat for us
to try re-implementing something
like the Twitter data feed.
So we're going to
go and take a look
at another very quick demo.
So here, we have
the Twitter feed.
It's a simplistic version of it.
I can scroll through this feed.
I see plenty of Tweets,
plenty of resources.
And the overall
payload size of this
is something like 6.9
megabytes overall.
This is including
high-resolution images, videos,
anything else that's supported.
Now, using this Hook, or
using just the web platform
APIs for data saving, I can
go and I can toggle this.
And what you'll
see is that we've
switched out those
high-resolution images
for low-quality
image placeholders.
I can scroll through
this feed pretty quickly.
I don't have to be fetching
the original images
at full resolution.
And if I want to see the
full-resolution image,
I can just tap and get
that same experience.
Now, scrolling
back up here, there
is actually a video that
has its autoplay disabled.
This is by Mr. doob.
And I thought I'd
play this for you.
So this is basically
what it looks
like when we, as developers,
have a nice payday.
You're just like
doing all your--
I love that so much.
My version of this is
unfortunately a lot worse--
oh, oh, great, OK.
Live stage fail-- let's try this
out again and see if it goes.
OK, this is me.
[LAUGHTER]
Cool, very, very accurate.
So let's switch back up to
the slides for a second.
Now, for a while, people have been asking for a media query for Save Data.
And although one doesn't exist just yet, there is an active proposal about introducing this idea of a user preference media query-- prefers-reduced-data-- that would let you design data-reduced variants of your site for users who express that preference.
If you're interested
in something
like this existing
on the platform,
there's a link
here where you can
get involved in the discussion.
I'd personally love to see something like this exist.
Next up, let's
talk about memory.
So the Device Memory API adds navigator.deviceMemory, which returns how much RAM the device has in gigabytes, rounded down to the nearest power of 2.
Now, this API also features a Client Hints header, Device-Memory, that reports the same value.
And similar to before, it's
relatively straightforward
to use the memory Hook in
order to conditionally load
different types of experiences.
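A sketch of that conditional load as a pure function; the 4 GB cutoff and the variant names are assumptions for illustration, not values from the talk:

```javascript
// navigator.deviceMemory reports RAM in gigabytes, rounded down to the
// nearest power of two (e.g. 0.5, 1, 2, 4, 8). Decide between a heavy
// interactive component and a lightweight fallback.
function pick3DExperience(deviceMemory) {
  if (deviceMemory !== undefined && deviceMemory < 4) {
    return 'static-image'; // skip the multi-megabyte 3D model
  }
  return 'model-viewer';   // full interactive experience
}

// Browser usage sketch:
// const experience = pick3DExperience(navigator.deviceMemory);
```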
Now, I thought I'd
show you a demo that's
slightly different using this.
Let's switch over to the other
machine real quick once again.
I discovered this really awesome website called Dixie. Dixie does a bunch of consumer electronics, and one of the things they do is sell mechanical keyboards.
And on their site, they have this really nifty use of model-viewer, where you can go and check out-- this is nice.
You can spin it around.
It's really pretty.
But one of the
downsides to this is,
if we go and we load
up our DevTools,
we go to the Network
Panel, and we
try to reload this
experience up,
if we organize things by
size and go to the very top,
you'll see that this
3D model is actually
almost five megabytes in size.
Now, in addition
to that, it also
uses quite a lot of
memory on low-end devices
to get something
like this running.
On high-end devices, on
desktop, it's perfectly fine.
But for users who are on
those low-end devices,
what if we were to do something like-- let me reload this page really quick-- what if we were to use memory signals to decide whether or not to just send them down a static image?
We'd save on multiple megabytes
worth of resources being
sent down to those users,
while still giving users
who are on high-end
devices a really, really
slick experience.
I personally love these 3D
models-- love model viewer.
Let's talk about JavaScript.
So adaptive module serving is
something I'm excited about.
And this is the idea of shipping a light, interactive core experience to all of your users and progressively adding high-end features on top-- if a user's device characteristics and network can handle it.
Now, it's this
device awareness that
takes progressive
enhancement to the next step.
So in high-end devices,
we can conditionally
be loading more
highly-interactive components
or more computationally-heavy
operations.
You could imagine servers of the future being smart enough to use Client Hints and other signals that come from the web platform to decide what code to send down to their users.
So bundles that are the core
experiences versus bundles
that are a little bit heavier.
In this example, we're
looking at something
like an e-commerce site, where
the core experience represents
the product images,
the cart experience.
And the higher-end ones can include things like zooming into images, related products, videos, and an AR version of the experience-- go crazy.
I wanted to demo
a slimmer version
of this idea, adaptive code
splitting and code loading.
Actually before we go
into that, some of you
might be familiar with
React.lazy and Suspense.
These are basically
primitives that
help you do things like add
code splitting to React apps
and then define fallbacks
for that content
as it's loading up.
And you can, in fact, extend React.lazy to get network-aware, memory-aware, or Data-Saver-aware code splitting.
So in this pattern, what we're doing is basically a check on the user's Network Information effectiveType value.
And depending on
those values, we're
able to generate different
chunks for the people
who are on 3G,
people who are on 4G,
maybe a light experience, a
slightly heavier experience.
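A sketch of network-aware code splitting; the module names are hypothetical, and the chunk choice is pulled out into a pure function so the decision itself is testable:

```javascript
// Map the effective connection type to the chunk we should load.
// Module paths here are illustrative.
function chunkFor(effectiveType) {
  return effectiveType === '4g'
    ? './HeavyMagnifier.js' // zoom/magnifier dependency included
    : './LightImage.js';    // plain image component only
}

// With React.lazy this might look like (sketch):
// const Viewer = React.lazy(() =>
//   import(chunkFor((navigator.connection || {}).effectiveType)));
```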
And I wanted to show you an
example of this real quick.
So eBay are a company
that are exploring
this idea of adaptive serving.
And they're able to
conditionally turn on and off
features like zooming if a
user's hardware or network
conditions don't necessarily
support them well.
So let's switch back over here.
We've decided to implement
a version of this.
And what you can see is, this
is a lot like eBay on desktop.
And if I hover over
this product image,
I can see this in a
very high resolution.
I've got this nice additional
magnifier dependency
that's being pulled in.
But overall, we're shipping
down about 62 kilobytes
of JavaScript to our users.
Now, picture that I wanted to look at what this might look like on a narrower viewport.
So let's imagine we're
in this situation.
And I'm loading this back up.
Now, in this case,
we're actually
only loading 45 kilobytes
worth of overall scripts.
We don't have that same
magnifying experience
on mobile.
What we're doing is just shipping users an experience that shows them the image. And at most, maybe we show them a modal.
So people who are on those
higher-end situations,
they can get the slightly
more enriched version of this.
And next, let's talk about CPU.
So desktops and smartphones can have multiple physical processor cores in their CPU. And each core can run more than one thread at a time, so on a four-core CPU you might have eight logical processors.
And in order to determine this insight from the platform, you can use navigator.hardwareConcurrency.
Now, there is a Hook
available for this
as well that allows you to
use conditional resource
loading, very similar
to some of the others.
And one of the values of this is that you can use it to do things like determine the optimal worker thread pool size if you're using Web Workers in your application.
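One possible way to size that pool, sketched as a pure function (the "leave one core for the main thread" heuristic is a common convention, not something prescribed in the talk):

```javascript
// Size a Web Worker pool from navigator.hardwareConcurrency, leaving
// one logical processor free for the main thread.
function workerPoolSize(hardwareConcurrency) {
  const logical = hardwareConcurrency || 2; // conservative default when unsupported
  return Math.max(1, logical - 1);
}

// Browser usage sketch:
// const pool = Array.from({ length: workerPoolSize(navigator.hardwareConcurrency) },
//   () => new Worker('worker.js'));
```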
The platform does have, however,
limited information about CPU.
And I think it's interesting to consider: should we have more? Could that unlock other use cases?
Another pattern is adaptive
data fetching and prefetching.
So whether it's on the
client or the server,
reducing the quantity
of data that you're
sending down to users can
decrease overall latency.
And adaptive data
fetching can use
signals like the slow network
to send fewer results down
to your users.
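The adaptive-fetching idea can be sketched like this; the page sizes and the endpoint are illustrative assumptions:

```javascript
// Request fewer results per page on slower connections to cut latency.
function pageSizeFor(effectiveType) {
  const sizes = { 'slow-2g': 5, '2g': 10, '3g': 25, '4g': 50 };
  return sizes[effectiveType] || 25; // middle ground when unknown
}

// Usage sketch (the endpoint is hypothetical):
// fetch('/api/feed?limit=' + pageSizeFor((navigator.connection || {}).effectiveType));
```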
And we've been talking about
a bunch of different patterns
today.
And you might be wondering, OK, well, we've seen a few folks who are using these in production-- but is anybody using most of these in production?
And one example of a company
that is, is Tinder Web.
So Tinder Web and
Tinder Lite are
using a number of these
patterns in production
to keep the experience
fast for everyone.
If a user is on a slow network
or has Data Saver enabled,
they disable video autoplays,
they limit route prefetching--
so prefetching the additional
routes the user might navigate
to across the experience--
and they're also
able to do things
like limit loading the
next image in the carousel.
So they just load one at a
time when you're swiping.
They've seen some really great stats off the back of this-- great improvements in things like average swipe count. So for Tinder Lite, they saw 7% more swipes in areas like Indonesia [INAUDIBLE] of using some of these signals.
And finally, we've got
adaptive capability toggling.
Now, this is the idea that instead of serving animations to all of our users, for people who are on lower-end hardware, maybe we consider not shipping those animations at all, or throttling the frame rate in some way.
So what we're going
to do is demonstrate
using this with Client Hints.
Now, Client Hints are something
I've mentioned a little bit
in other parts of this talk.
But they're a mechanism for
proactive content negotiation.
The client advertises a set of supported hints via HTTP request headers, and the server adapts the resource it serves based on them.
They can be extended to a
number of different use cases.
Now, one of them is helping
automate the negotiation
of optimal resources based
on the client's Data Saver
preferences.
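On the server, that negotiation might look like this sketch; the header names follow the Save-Data and Device-Memory client hints (Device-Memory requires the server to opt in with an Accept-CH response header), while the thresholds and variant names are invented for illustration:

```javascript
// Choose a resource variant from Client Hint request headers.
// `save-data: on` is sent when the user enables Data Saver;
// `device-memory` reports RAM in gigabytes.
function variantFor(headers) {
  if (headers['save-data'] === 'on') return 'lite';
  const memory = parseFloat(headers['device-memory']);
  if (!Number.isNaN(memory) && memory < 1) return 'lite';
  return 'full';
}
```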
And I've got a quick demo
using Next.js and Client
Hints I'm going to switch
to right now just to show
you this idea in action.
So here, we've got
adaptive animation.
Imagine that we've got a blog
site or an e-commerce site
of some sort, where we've got
a number of different cards
worth of content.
Now, me on my high-end
device, lots of memory--
I can probably handle
things like nice navigation
transitions pretty OK.
And they look relatively smooth.
But if I'm on a low-end device-- I've tested this out on a Moto G4-- you can end up with pretty choppy experiences.
Maybe it doesn't make sense
to animate on those devices
instead.
So one thing we can do is emulate those conditions, which Client Hints allow us to do.
And if I now try
transitioning, I
just get a very simple,
basic navigation,
the same type
you're probably used
to seeing in many
single-page applications.
But we're still able to
give everybody an experience
that best suits their hardware
and network characteristics
in these cases.
So I'm really
excited about that.
Now, I'm going to invite Nate
to the stage in just a second.
He's going to talk a little
bit about how Facebook uses
these patterns in production.
And one of the
areas that we don't
have a great solution
for on the platform just
yet is this idea of
device class detection.
So you might have noticed
across some of these ideas,
we've been basically
bucketing things
into you're on a slow device,
or you're on a fast device,
you're in a slow network,
you're in a fast network.
Now, one thing we could do to
build like an ultimate solution
around this stuff
is have a setup,
where we're taking a look at the user-agent string and determining what hardware we think you're on.
We could connect that up to
Geekbench performance data.
And then, we could decide
based on thresholds,
is the combination of your RAM,
your CPU, and your CPU score
considered low-end or high-end?
Now, this is very difficult to duct-tape together in a way that's very convenient today.
But I'm really excited, actually, for Nate to talk a little bit about how Facebook tackles this problem in a slightly better way.
So please join me in
welcoming to stage Nate.
[APPLAUSE]
NATE SCHLOSS: Thanks, Addy.
So one thing Facebook
recently announced
is a redesign of the website.
We're calling this FB5.
One of the cool things about going through and redesigning the site is we've been able to take a lot of the things we've learned over the last few years about adaptive loading and different types of hardware, and how to make sure the site responds correctly to them, and really integrate that into the core of FB5.
One of the core
principles we considered
when trying to go
through and design FB5
is, we didn't want to
just build a site that
responded based on screen size.
We wanted to build a site that actually adapted based on the user's actual hardware-- actually changing what loads and how the site runs based on what hardware it's on, not merely responding to changes in the screen size.
There are a few steps that we took to implement this.
The first step is,
we actually needed
to define consistent buckets
for how we were talking
about different
types of hardware
and how we were
considering a hardware
classification across the
site, across different teams,
across different products.
The next step was
integrating these buckets
into all of our logging--
looking in at our
performance logging,
our general metric collection
logging, our engagement
logging--
and really making sure we were
able to see a holistic picture
of how things were working
based on these different types
of hardware.
Next, once we actually can
see the full complete picture
and understand what's going
on for different users
in different situations,
we can actually
adapt loading and change how
the site runs, how it loads,
what happens based
on the hardware.
So on mobile, grouping
hardware is not so complicated.
The mobile UA actually just
tells us what this device is.
And then, there's
tons of public data
sets where you can actually plug
in which type of device it is
and get information about
the CPU clock speed,
how many cores,
things like that.
And then, we can use
this predefined concept
of Year Class.
Year Class is a very
popular framework on Native.
You can use it basically to figure out in what year this device would have been considered groundbreaking.
So by looking at Year
Class, and by looking
at the public information about
a device, and specifically
what this model is,
and how fast it is,
we're able to have a general way
to talk about different devices
across both the web and Native.
And that's pretty powerful.
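A hypothetical sketch of the Year Class idea; the real facebook/device-year-class library uses curated per-device data, so the RAM cutoffs and years below are invented purely for illustration:

```javascript
// Map a device's RAM (in gigabytes) to the year such hardware would
// have been considered high-end. All values here are made up.
function yearClass(ramGiB) {
  if (ramGiB >= 6) return 2018;
  if (ramGiB >= 4) return 2016;
  if (ramGiB >= 2) return 2014;
  if (ramGiB >= 1) return 2012;
  return 2010;
}
```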
So on mobile, we can just look
up exactly what this device is
and get all of its hardware
and performance information.
However, on desktop, things aren't so obvious. Sure, the user agent tells us: is it a Mac? Is it a PC? Is it 64-bit? What browser is it? But most of the time, everything is 64-bit nowadays. It doesn't really tell us much about the actual hardware that the user is on right now.
So what do we have?
Well, we have navigator.hardwareConcurrency, which tells us generally how many CPU cores there are. And many browsers also give us navigator.deviceMemory, which tells us how much RAM.
So maybe on desktop, there's a
way we can use these two fields
and figure out some
generalized buckets that we
can apply consistently
across different devices
and different metrics.
So the first step
for doing this is
to actually log hardware
concurrency and device
memory everywhere.
Once we actually have
these in our tables,
we can build metrics
and understand
how things are different
based on these devices.
So once we have done that, the next step is to actually group by hardware concurrency, device memory, and OS when looking at different metrics, and basically come up with charts to really understand what the full picture is like.
Once we did this, we started to see natural bands for different types of hardware, how they're performing, and where the natural barriers are.
So at Facebook,
once we did this,
we came up with five different
classifications for devices.
And the heuristics we
used for them actually
vary across different
OS and browsers.
So this is something that we
did some analysis to figure out
where the natural bands were.
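The bucketing step might look something like this sketch; the five class names and the thresholds are invented here, since the talk notes that Facebook's real heuristics vary per OS and browser and came out of their own data analysis:

```javascript
// Classify desktop hardware from navigator.hardwareConcurrency (cores)
// and navigator.deviceMemory (GiB). Thresholds are illustrative only.
function hardwareClass(cores, memoryGiB) {
  if (cores === undefined || memoryGiB === undefined) return 'unknown';
  if (cores <= 2 || memoryGiB <= 2) return 'low';
  if (cores <= 4 || memoryGiB <= 4) return 'mid-low';
  if (cores <= 8 || memoryGiB <= 8) return 'mid';
  return 'high';
}
```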
Once we figured out the
buckets based on the groupings,
we're able to apply it
deep in our data sets,
and log it everywhere,
and actually
the consistent way to talk
about different performance
and different device types
across different metrics
and across different teams.
And this is pretty cool because
when integrating performance
logging, this
hardware class reveals
a much more complete picture.
We can actually see
everything that's going on
and understand how
the experience varies
for different types of users in
different situations much more
holistically.
So take, for
example, this chart.
So if you're just
looking at the average,
it looks like basically your
performance stayed the same--
maybe it got slightly
worse, but overall, it
looks like things are fairly
consistent in performance.
But when we break stuff up by hardware type, we can see that maybe on the 6th, an improvement shipped for low-end devices. However, there was a large regression for mid-range devices.
The way low-end devices
and mid-range devices
are going to load your site
is going to be very different.
Low-end devices are often going to be blocked on parsing and actually executing the JavaScript, while mid-range devices might be blocked on the network or other bottlenecks.
The way users interact
with your site
is going to be very different
on both types of devices too.
Low-end users might
engage with your site
in different ways
and different spots
than mid-range users might.
So you might see your metrics shift as your mid-range users end up in a slower performance class, and you won't actually know why.
So by breaking stuff up by
different hardware type,
you're actually able
to see and pinpoint
where your regressions
are happening.
The other thing that
this can help you with
is shifts in user population.
Let's say, that a low-end
device suddenly goes
on sale in an emerging market.
And all of a sudden,
you have a lot
of users on your low-end device.
If you're just looking
at the overall average,
you might think, oh, I have
this big regression right now.
When in reality,
there's just more users
taking advantage of the
promotion on a low-end device.
So by breaking stuff
up, you can actually
see if you're loading
consistently or not
and counteract changes
in just populations
on different types of devices.
So once we have these core metrics, and we're able to break them up and have a consistent understanding of the different types of hardware, we can actually consider this in our core frameworks.
One of the first things we did
is we looked at animations.
Animations take time to render. The browser will attempt to paint a frame; then if it cancels, it just throws all that work away. And every frame the browser spends attempting to paint something and render an animation is a frame it could have spent doing something else and actually helping you load your page.
So on low-end devices, animations looked like this. They were somewhat janky. They would render a frame, wait a while, show another frame. And eventually, stuff would load. But this kind of animation is not a good experience.
One of the first things
we did is, we just
stop shipping animations in many
instances on low-end devices.
This enabled many more users to finish loading the page and actually engage with the site much more, because they weren't getting much benefit from an animation that wasn't really loading anyway. Now that wasted work is no longer happening, they're actually able to see the page much quicker.
Another thing we do is on our mobile website: we have a totally different site for Android phones and iPhones, which have touch screens and powerful CPUs that can run a lot of JavaScript, versus feature phones, which maybe don't have as powerful CPUs and can't run JavaScript as well.
Our feature phone site is
mostly static HTML, a little bit
of CSS, very few images.
It's really optimized for
this low-powered device case.
Even when the feature
phone's screen is big,
even when the feature
phone has a touch screen,
this is an instance where we're
not just scaling the site based
on screen size; we actually
have two totally different
experiences based on
the underlying hardware,
each really optimized
for that hardware.
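Server-side, that split might look something like the sketch below. The user-agent substrings checked here are illustrative only; real feature-phone detection is more involved than this.

```javascript
// Route feature phones to a static-HTML "lite" experience and
// smartphones to the full JavaScript-heavy app. The substring checks
// are illustrative, not production-grade UA detection.
function pickExperience(userAgent) {
  const ua = userAgent.toLowerCase();
  if (ua.includes('kaios') || ua.includes('series40')) {
    return 'lite'; // mostly static HTML, a little CSS, few images
  }
  return 'full'; // touch-optimized app with lots of JavaScript
}
```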
One of the cool things we're
doing too, especially
on the FB5 site,
is actually
taking advantage of the fact
that there is this trade-off
right now on the web between
loading quickly and responding
quickly.
When you're loading
your site, you often
get advice to break up
the JavaScript that's
needed to load the site
into different chunks
and yield to the browser
in between each chunk,
so the browser can dispatch
any events that may happen.
So if the user
clicks on something,
you don't actually have to wait
for the entire load to finish.
You can respond to that event
as soon as the click happens.
On high-end devices, yielding
to the browser after each chunk
is fairly cheap.
The browser will quickly
see there's no event
and just go back to
running your JavaScript.
However, on low-end devices,
this can be somewhat slow.
And it can often take
quite a bit of time.
So there's a
trade-off right now,
where you want to chunk up your
JavaScript into small chunks.
But on low-end devices, if
you make the chunks too small,
you're actually going to slow
down the overall experience
by quite a bit.
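The chunk-and-yield pattern, with a chunk size that adapts to the device class, can be sketched like this. The chunk sizes are illustrative, not tuned values.

```javascript
// Plan how loading work is chunked: yields are expensive on low-end
// devices, so use bigger chunks there and yield less often. The
// 50-vs-5 sizes are made-up examples.
function planChunks(taskCount, lowEndDevice) {
  const chunkSize = lowEndDevice ? 50 : 5;
  return { chunkSize, yields: Math.ceil(taskCount / chunkSize) };
}

const yieldToBrowser = () => new Promise((resolve) => setTimeout(resolve, 0));

// Run tasks chunk by chunk, yielding between chunks so pending input
// events (clicks, scrolls) can be dispatched mid-load.
async function runChunked(tasks, lowEndDevice) {
  const { chunkSize } = planChunks(tasks.length, lowEndDevice);
  for (let i = 0; i < tasks.length; i += chunkSize) {
    for (const task of tasks.slice(i, i + chunkSize)) task();
    await yieldToBrowser();
  }
}
```

With 100 tasks, a low-end device yields twice; a high-end device yields 20 times, staying more responsive at a small loading cost.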
So one thing we
have been able to do
is in React's Concurrent
Mode, which Addy
talked about earlier,
one of the core things
is this concept of a Scheduler.
This is an experimental API.
It's almost definitely
going to change.
But one thing we're
doing right now
is, if it's on a low-end
device, the Scheduler
has this concept of
forcing a frame rate.
Generally, the Scheduler
in React is
going to try to
schedule each frame
and run at whatever the
browser is currently
running at-- so 60 FPS, 30
FPS, something like that.
However, by forcing
a frame rate,
we're able to basically
tell React, all right,
ignore what the browser's
trying to do right now
and just take longer.
Run at 15 FPS and actually run
more JavaScript each frame.
So the user can actually
load more of the site
before we check for each event.
So yes, some event interactions
become slightly slower.
But overall, it's a much
better experience for users
because most users are just
waiting for your site to load.
And this can happen
now much, much quicker.
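A sketch of how that decision might be wired up. The targetFrameRate helper and its thresholds are made up for illustration; unstable_forceFrameRate is the experimental Scheduler API and, as noted above, is likely to change.

```javascript
// Pick a frame-rate target from a rough hardware signal. On weak
// hardware, longer frames mean more loading work done per frame
// before yielding. The 4-core cutoff is illustrative.
function targetFrameRate(cpuCores) {
  return cpuCores <= 4 ? 15 : 60;
}

// In the app, this would feed the experimental Scheduler API
// (subject to change):
// import { unstable_forceFrameRate } from 'scheduler';
// unstable_forceFrameRate(
//   targetFrameRate(navigator.hardwareConcurrency ?? 2));
```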
One interesting thing here too
is that hopefully, eventually,
this trade-off goes away
altogether with this new API,
isInputPending.
Eventually, we hope
isInputPending
will ship everywhere.
And it'll just be
a quick, cheap way
to check: is there
input right now?
So then, this
trade-off will be gone
because we can just run all of
our JavaScript during loading.
And we don't actually
have to break it up into chunks.
We can still be interactive.
Hopefully, this gets
integrated into React too.
So if you're using the latest
React, once this ships,
you should be able
to get this for free.
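A sketch of a loading loop built on that idea: run tasks back to back and yield only when input is actually waiting. The API lives at navigator.scheduling.isInputPending and isn't available everywhere, so the sketch feature-detects it.

```javascript
// Run all tasks without fixed-interval yields; pause only when the
// browser reports pending input. Where isInputPending is missing
// (older browsers, Node), the loop simply never yields.
async function runUntilDone(tasks) {
  for (const task of tasks) {
    task();
    const inputPending =
      typeof navigator !== 'undefined' &&
      navigator.scheduling?.isInputPending?.();
    if (inputPending) {
      // Yield only when there's a real event to dispatch.
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }
}
```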
So by using
consistent definitions
for our bucketing
and our logging,
and adapting based on those
definitions consistently,
we're actually able to share
this understanding of how
the site works across different
teams, across different orgs,
and really figure out the
overall picture
that we're seeing.
So when metrics change based
on something one team does,
we know that it's based on this
consistent hardware definition.
And it's a lot easier to
pinpoint changes and see
what's going on.
Now, I'm going to invite
Addy back to take it home.
[APPLAUSE]
ADDY OSMANI: Thank you, Nate.
I had some better network
service in the back.
So that's it for
adaptive loading.
Today, we talked about adaptive
media loading, code serving,
data fetching.
In general, a lot of these
ideas have some promise.
And we're very
excited about them.
They do have some
potential drawbacks
that are worth being aware of.
Adaptive loading
does rely on this idea
of often point-in-time
information about the user's
device and network constraints.
And you do want to
keep in mind what
impact that's going
to have
on things like HTTP caching.
So just be very careful with
adopting these techniques.
I do think they can
have a lot of promise.
But that nuance is probably
useful to talk about as well.
And adaptive loading isn't this
groundbreaking, huge thing,
right?
It's an incremental practice.
Over many, many
years, we've been
talking about this idea of
trying to increasingly become
more lazy first
with our content.
And so adaptive
loading is really
just an incremental pattern
on top of those things.
And so in general, even if
you take nothing else away
from this talk, try to
reduce, defer, and coalesce
the resources that you're
serving down to your users.
Ultimately, what we're trying
to do with these patterns
is build experiences that
have inclusivity in mind.
Build the core
experience, ideally one
that works great for
everybody, and toggle or layer
on features that can
make it even more
awesome if a user has enough
memory, CPU, or a fast network.
So that's it for
adaptive loading.
Thank you.
[APPLAUSE]
[MUSIC PLAYING]
