ADAM BARTH: My
name is Adam Barth.
I am the TL for Flutter.
Prior to working on
Flutter I worked on Chrome
for about seven years, mostly
on Chrome's rendering engine,
which started out as WebKit
and actually became Blink,
and then I've been working on Flutter
for about two years.
And this talk is going
to be about the rendering
pipeline in Flutter.
So just to sort of
orient ourselves, this
is a sketch of the overall
architecture of Flutter.
So at the bottom,
there's an engine,
and the engine exposes
a very low-level API.
And if you saw
Ian's talk, he talks
a bit more about all
these different layers
and how they relate.
So the engine is
very smart about text
and also knows how to
take vector graphics
and draw them onto the screen.
Above that there
is the framework,
which is what this talk is
going to talk more about,
which is itself composed
of several layers.
This talk is about one
of the lower levels in the
framework, called the rendering
layer, which is responsible
for organizing the screen,
so allocating
space on the screen
for various different widgets
and then actually making
those widgets appear on-screen.
And then above the
framework is where
you write your application.
So the full pipeline in Flutter
has a lot of steps in it.
And so first you
get some user input,
like the user
touches the screen.
Then maybe you have
some animations
running so they start ticking.
And then you get a
chance to build widgets.
So you get to build whatever
you want to have on your screen,
like a button or a drawer,
or that sort of thing.
And so Ian went into
detail about how
that build phase
works, how it is
that you end up building
things and constructing
widget objects.
And then after that you
could go to the rendering
phase of the pipeline which
consists itself of three steps.
So first is the
layout step, which
is about positioning and
sizing elements on the screen.
Then there's the
painting step, which
is about figuring out what those
elements actually look like.
And the compositing step
basically stacks them together
in draw order so that
they can be composed
on the screen as one thing.
And then finally,
the last step there
is rasterization
where you actually
go from that abstract
representation of what you're
going to draw to the
actual physical pixels that
are going to appear on-screen.
So this talk is going to
focus on layout, painting,
and compositing.
So the thesis or
the design principle
behind the rendering
part of the pipeline
is that simple is fast.
So basically, if we use simple,
straightforward algorithms
with very well
understood properties
then we could make
them go fast by taking
advantage of those properties
and optimizing them.
So for example, both
layout and painting
use a one-pass, linear-time algorithm.
So we walk the tree
from top to bottom,
and then we return up
the recursive structure
and that's it.
So that's in contrast
to other systems,
for example, where they'll
do multi-pass layout,
where they'll go to
a point in the tree,
they walk down to
gather some information,
and then walk down again to
adjust the sizes of things.
And if you imagine
nesting that, you
can see very quickly how
that becomes N squared work,
because you keep recursively
walking down the tree and back
up.
And so in this system, we
want to have a one-pass layout
that just walks the tree once,
touching every node once on the way down
and once on the way up,
and from that figures out how big
everything should be and where it should go.
Along the same lines,
we use a very simple
constraint model to do layout.
Other systems, for example,
use a complicated linear constraint
model and a whole general-purpose
constraint solver to figure
out where to position things.
And that's-- that
has some benefits,
but we thought what if we could
do something much simpler.
So our constraint
model is basically
a box with a min width
and a min height,
and a max width
and a max height.
And that constraint domain
is very easy to solve.
If I give you two
of those constraints
and ask you to unify
them, pretty obvious how
to unify them.
And so, you can write a very
simple constraint solver.
And our thesis is
that that's enough to generate
very expressive layouts.
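To make that concrete, here is a minimal sketch of the box constraint model in Python (Flutter itself is written in Dart; the class and method names here are illustrative, not Flutter's actual API). Unifying two constraints just means taking the tighter bound on each side, and solving means clamping a desired size into the allowed region:

```python
class BoxConstraints:
    """A sketch of the min/max width and height constraint box."""

    def __init__(self, min_width=0.0, max_width=float("inf"),
                 min_height=0.0, max_height=float("inf")):
        self.min_width, self.max_width = min_width, max_width
        self.min_height, self.max_height = min_height, max_height

    def unify(self, other):
        """Intersect two constraints: keep the tighter bound on each side."""
        return BoxConstraints(
            min_width=max(self.min_width, other.min_width),
            max_width=min(self.max_width, other.max_width),
            min_height=max(self.min_height, other.min_height),
            max_height=min(self.max_height, other.max_height),
        )

    def constrain(self, width, height):
        """Clamp a desired size into the allowed gray region."""
        def clamp(v, lo, hi):
            return max(lo, min(v, hi))
        return (clamp(width, self.min_width, self.max_width),
                clamp(height, self.min_height, self.max_height))
```

The "very simple constraint solver" falls out of the domain: unification is four min/max operations, and satisfying a constraint is a clamp.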
And finally, we do all of
our repainting structurally.
So instead of tracking which
rectangles on the screen
are invalid and need
to be repainted,
we do it structurally
in the same structure
that the overall tree has.
So we can say this sub-tree,
it needs to be repainted,
as opposed to keeping track of
which rectangles on the screen.
And that turns out to be a
very big performance win.
It takes advantage of some
of the hardware capabilities
in modern mobile devices.
They're very good at
compositing things.
So the first phase I want
to talk about is layout.
How does layout work?
So the base class for everything
that participates in
layout and painting
is called RenderObject.
So RenderObject itself is
a pretty abstract concept.
Basically, it has
an owner, which
is the object that's going
to drive the pipeline.
And each RenderObject
knows what its parent is.
But in general, a RenderObject
doesn't know anything
about its children.
All it knows how to do
is visit its children,
which means different
RenderObjects are free to have
different child models.
So for example, you can
have a RenderObject that
has a single unique child,
or a RenderObject that
has a list of children, or a
RenderObject that has several
named children.
And from the perspective of
all the rest of the algorithms,
we don't care what
the child model is.
It's totally up to
the RenderObject.
But it does know how to lay out
and paint in abstract ways.
And importantly,
there's this concept
called parent data, which
is a slot on a RenderObject
in which the parent
RenderObject can store data.
So if you're familiar with
other systems like the web,
you aren't allowed to put
an inline inside of a block,
for example, because block
needs to store information
on its children that inline
doesn't have slots to store.
And so, you get these
anonymous RenderObjects
in your render tree on the
web to basically convert
the data structures.
And so, we avoid that just by
having this parent data slot
that's managed by the parent
instead of by the child.
And that'll become important
when we talk about positioning.
So importantly
about RenderObjects,
there's no concept of any
coordinate systems or anything
like that.
It just knows that it exists
in a tree and has a parent.
And it defines this data
flow that I talked about.
So this is the
one-pass data flow.
So we walk the tree in
a depth first traversal.
And we pass constraints
down the recursive walk.
From RenderObject's point of view,
the constraints are arbitrary.
In practice, most people use
a thing called a render box.
So those boxes will be this box
constraint that I talked about.
And then up from the bottom
of the tree comes the size.
So I say here are the
constraints on how big you
are supposed to be.
And you say thank you
for those constraints.
I'm going to go talk to my
children for a little while.
And when I'm done,
I'm going to go
respond and say oh,
I figured out I want
it to be exactly this size.
Make sense so far?
Yes, good.
I'm seeing lots
of people nodding.
So that's sort of abstract,
but more concretely
it turns out that a very useful
coordinate system to work in
is Cartesian coordinates, so
x and y, and width and height.
So there's a specialization of
a RenderObject called RenderBox
that is much more opinionated
about how things are sized
and positioned.
In particular, it
has a size which
is a width and height, as
opposed to a RenderObject
can be an arbitrary thing
like a sector on a circle
or something.
It also adds some intrinsic
sizing information,
which comes up
in some esoteric cases.
So a box has the
idea that we're going
to use a particular kind
of constraints, these box
constraints that I've mentioned.
So a box constraint is basically
what's depicted on this slide.
So in the width
dimension there's a min
and a max, and in the height
dimension there's a min
and a max.
And the rule is, if the parent
gives you these constraints,
you have to be somewhere
in this light gray region.
You aren't allowed
to be too small
and you aren't
allowed to be too big.
And what's
interesting about this
is that you can actually express
a lot of different layout
algorithms using this
simple box constraints.
So for example, the simplest
kind of layout algorithm
is where the parent determines
the size of the child.
Imagine you only had
downward information flow.
Each parent says, OK, you're
going to be exactly 100
pixels by 200 pixels,
and the child says OK.
The first child, you're
going to be 50 by 50,
and you're going
to be 50 by 100.
So this is commonly
used, for example,
in window managers
in operating systems.
The window in a desktop
operating system
has no opinion about how
big it's going to be.
The window manager says you're
going to be exactly this big.
And you can actually model
that with box constraints.
What you do is you make
the constraints tight.
So you set the min and the
max width to the same value,
and the min and the max
height to the same value.
So the child is
basically dictated,
you have to be exactly this big,
because that's the only value
that satisfies the constraints.
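A tight constraint can be sketched in a couple of lines (again, an illustrative Python sketch rather than Flutter's actual API): whatever size the child prefers, clamping it into a constraint whose min equals its max always produces the same answer.

```python
# Tight constraints: min == max in both dimensions, so exactly one
# size satisfies them, regardless of what the child would prefer.
def constrain(constraints, preferred_width, preferred_height):
    """Clamp a child's preferred size into (min_w, max_w, min_h, max_h)."""
    min_w, max_w, min_h, max_h = constraints
    return (max(min_w, min(preferred_width, max_w)),
            max(min_h, min(preferred_height, max_h)))

def tight(width, height):
    """Constraints that admit exactly one size."""
    return (width, width, height, height)
```

Under `tight(100, 200)`, a child asking for 50 by 50 and a child asking for 999 by 999 both come out at 100 by 200; the parent has fully dictated the size.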
So what this implies is that
any object in the system
has to be prepared for its
parent to dictate exactly how
big it is.
So for example, the
checkbox widget, normally
you'd think a checkbox
widget has a fixed size.
It can only be
exactly this size.
But it turns out in this
system, since the parent can
force it to be an
arbitrary size,
it has to think about
that and understand
what would it mean for me to
be twice as big as I expected.
And so the checkbox
does something simple,
like he centers
his little checkbox
in that available
space, but he's
able to occupy arbitrary space.
So another layout paradigm is
called Width-In, Height-Out.
So this is, for example,
what the web uses.
This is a very useful
paradigm for text.
So basically you say, I want you
to be exactly 200 pixels wide.
How tall would you like to be?
So if you could imagine
you have a bunch of text
and you set the width and
you start flowing the text
to make different line breaks.
And then you see how
many lines you got,
and that's how much
height you have.
And so, that actually arises
quite naturally in this model
where you just set the
width constraint to be tight
and you set the height
constraint to be loose.
And then the parent
is essentially
specifying the
width and the child
gets to report the height
that he wants to be.
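The text case can be sketched like this (a Python illustration with made-up fixed-width font metrics, not a real text engine): the parent supplies the width, and the child flows its words into lines and reports back the height it needs.

```python
# Width-In, Height-Out, sketched for text: given a width, greedily break
# words into lines, then report the resulting height. The character and
# line metrics here are invented for illustration.
def text_height(words, width, char_width=8.0, line_height=16.0):
    """Return the height needed to lay out the words at the given width."""
    lines, line_width = 1, 0.0
    for word in words:
        w = len(word) * char_width
        if line_width > 0 and line_width + char_width + w > width:
            lines += 1          # word doesn't fit: wrap to a new line
            line_width = w
        else:                   # fits on the current line (plus a space)
            line_width += (char_width if line_width > 0 else 0.0) + w
    return lines * line_height
```

Narrowing the width increases the line count, and the reported height grows; that is exactly the dependency a tight width and loose height constraint express.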
What's interesting is actually
because this model treats
width and height, and basically
x and y symmetrically,
you get the opposite.
Height-In, Width-Out
also arises naturally.
You could ask yourself, why
would I care about this?
Why does this make sense?
And the longer I
work on this project,
the more I realize that whenever
you have a horizontal use case,
there's always a
vertical use case that
arises for the same thing.
And so, later in the talk
we'll see this actually
arise naturally from something.
So I promised you I was going
to tell you about parent data.
So what's interesting, if you
notice about RenderBox here,
he knows his size but he
doesn't know his position.
So this is in contrast to other
systems like Cocoa, where each
UIView in Cocoa knows
its rect,
and a rect combines
both size and position.
So here you know your
own size, but you
don't know your position.
Your position is
controlled by your parent,
in this opaque parent
data field that you hold.
So what that means is when the
parent gets the sizes for all
those children, he is then
free to reposition them
without talking to them again.
So without touching them
he can move them around,
and that turns out to be
quite powerful for things
like scrolling where you
want to scroll a widget
and move it around
without touching it.
You just want to do the
minimal amount of work
to translate things around.
So I want to walk
through an example
of how layout works, and I
was going to do a flex layout.
So a flex layout is a very,
very common layout paradigm.
So the idea behind
a flex layout is
you're either going
to lay things out
in a row or a column,
and here we're
going to do a row
for simplicity.
And some of your children
have a strong opinion
about how big
they're going to be,
so they have some
preferred or intrinsic size
that they want to be.
And the other
children are flexible.
Basically, they're going
to expand to fill however
much space is left.
And actually there's
a little detail
about this, which they have
different flex factors.
If you think of
them like springs,
they have springs with
different strengths.
So in this case,
this yellow guy,
he's got a flex
factor of two, which
means he likes to
expand twice as much
as the pink guy who's only
got a flex factor of one.
So these-- if you
like, you could
think of the green and red
guys as little wood blocks,
and the yellow and pink guys as
springs with different spring
constants.
So this is a very
common layout paradigm.
And I'm going to show you how
this works in this one-pass box
constraints approach to layout.
So the inputs to the algorithm
are the overall min and max
width and the min
and max height,
the constraints we
got from our parent.
And the outputs are, we've got
to figure out our overall size
to tell our parent, and
we have to figure out
the size and position
of each of our children.
So in this scenario,
for simplicity, I've
set the min width and
height to basically zero,
so we have flexibility.
And this gray box represents
the min and max height
that we're allowed to be.
And then I'm showing
you the answer,
but we're going to build
up the answer in steps.
So step one is we have to lay
out our inflexible children,
so the wood blocks in this
block and spring model.
We have to ask them how
big would you like to be,
since they're allowed to have
an opinion about how big they'd
like to be.
So what constraints
should we give them?
Well, we're doing a row,
so for their height
it's pretty easy.
They could either
be zero height,
or they can be as high
as I'm allowed to be.
If they're taller than
I'm allowed to be,
then I'm in trouble because
I can't fit them inside
of myself, so that's
sort of natural.
And then the width,
well, they're
allowed to be as small
as they want to be.
That's up to them.
And then actually
we let them be as
wide as they want to be all
the way out to infinity.
And why infinity?
[? Forest ?] has a
puzzled look on his face,
and that's a very good question.
And so, actually in the
first version of the system
we didn't have an infinity here.
We gave them the incoming max width,
so our own max width,
which seems natural.
But it turns out that causes
a lot of subtle problems.
So if you imagine a
child who doesn't really
know how big he
wants to be, he's
looking for some
guidance from you.
If you give him a max width,
he'd be like, sounds great.
I love that max width.
I'll be exactly that wide.
And what if you have
two of those guys?
Now you're out of space, so
you can't really fit them.
And any value you
pick here is either
going to be too small, meaning
if they all pick that size you
wouldn't fill up the space,
or too big, meaning if they
all picked that size you'd overflow.
So there's actually no
good value to give here.
So we don't want to give zero,
because then they would all
have to be zero width.
So we give them infinity,
which says I have no opinion.
You have to tell
me how big you are.
So we give them these
constraints and they come back,
and the green guy says
I want to be this big
and the red guy says
I want to be that big.
Great.
So we write down
that information,
we add up their widths,
and we ask, well,
how big should we be?
So rows have the
opinion they're just
going to fill up all the
space, because that's usually
what you want.
So they fill up the space.
To figure out how much space is left
after we've allocated space for
all these inflexible children,
I just take my overall
width and subtract out
the sum of my children's widths.
And so now I have a
bunch of free space
left that I'm going to allocate
to these flexible children.
So I just take the free
space and I divide it
by the sum of
those flex factors,
and that tells me
how much space I'm
going to allocate for
each unit of flex factor.
So a child that has two
units of flex factor
will get twice that
amount of space,
and a child that has one unit
of flex factor will get one.
So when I lay out
the flexible children
I give them these
sorts of constraints.
So for their width, I tell them
exactly how wide to be.
I say you're going to be
exactly the size that fills up
your share of the free space
left in my layout.
And the height,
well, it's up to you.
You can be as short as you
want, or fill up all the way
to my max height, whatever
you like.
So they come back and
they say, oh, I agree.
You told me exactly
how wide to be.
I'm exactly that wide,
and here's how much height
I need for that width.
So they follow the Width-In,
Height-Out model, if you will.
So that's great.
So now I've got sizes
for all of my children,
and I have to figure out
how to position them.
Well, the positioning
algorithm is pretty simple.
I just go through in
order and I position
the first one in the first
space and then increment
by its width, and the next
one, next one, next one.
And I've given them constraints
such that their widths add up
to exactly the width
that I was expecting,
and that's how big I am.
And for the height,
there are many choices
for how to position things.
For a flex or a row there's a choice:
do I want to align them all
to the top, or the bottom,
or the center, or
whatever you like?
So here I've
center-aligned them.
So I have all their
heights, so my height is
the max of all their heights.
And then, for each one I know
exactly how much to offset it
because I know its height.
But notice here I couldn't
figure out their position
until I knew all the
sizes of all the children.
And then, once I knew all the
sizes of all the children,
I could position them
without touching them.
So in no way could their size
depend on their position,
because they don't even
know what their position is.
This is in contrast
to other systems
like the web where on the
web the size of an object
depends on its
position on the screen.
Yep, and then I'm done.
I've laid out my flex just
the way I said I would do it.
AUDIENCE: [INAUDIBLE].
ADAM BARTH: Yeah, so you could
align to the top or the bottom.
So then, instead of
centering them vertically,
I would just give them all
zeroes as their y coordinate.
[? Forest, you had ?]
a question.
AUDIENCE: So how come
[? your flexible children ?]
[INAUDIBLE]
ADAM BARTH: Yes, this is
a fascinating question.
So it turns out that just
because you're given--
AUDIENCE: Repeat the question.
ADAM BARTH: Oh, sorry.
The question is what happens
if the inflexible children are
too big.
I told them they could
be infinitely big
and they all decided
to be huge and I
don't have space for them.
What should I do?
Yeah, so it's fascinating.
What the box constraints
really say is here's
how much space you're allowed
to occupy during layout.
So that doesn't say how big
your children need to be.
So for example, if your
children are bigger,
that just means that
they extend off the side.
So I could either paint them out there,
since I don't have to paint
within my bounds, or,
what we actually do, is clip them.
We say, OK, you're too big.
I'm going to only draw the
parts of you that are actually
visible.
So they occupy that
much space, but you can't
see them, because
they're clipped away.
AUDIENCE: [INAUDIBLE]
ADAM BARTH: Yeah,
[? so he said ?]
[? if the green ?] guy is too
big you just won't see the pink
guy or the red guy.
That's accurate.
AUDIENCE: [INAUDIBLE]
ADAM BARTH: So in
this layout we do.
We have fancier layouts
that are smart and do things
like that, that avoid
unnecessary layout
and building.
But flex is sort of-- at
least as presented here
it's pretty simple.
So yeah, there's actually
a debug mode you can turn on
that will draw a little red box
whenever you overflow your flex,
because you probably don't want that,
so you might want to be told about it.
So you'll notice that we have this
Width-In, Height-Out property
for the flexible children.
We told them exactly how
wide they were going to be,
and then we asked them how
tall they were going to be.
So imagine just
rotating this thing
to be a column; it's
totally reasonable
to have a flex
layout that's vertical.
Now consider the flexible children
in a vertical layout.
You tell them their height
and they tell you their width.
So it turns out we needed
Height-In, Width-Out,
even though when we
first saw that it
seemed like a weird thing.
But it actually arises
quite naturally, just
in vertical flex layouts.
And it turns out this
constraint-based simple
algorithm is
sufficient to generate
a lot of different layouts.
In fact, we have a
complete implementation
of material design and all
of its visual and layout
properties just done
with this algorithm.
It's kind of remarkable.
So when we first
started the project
I was a little skeptical
that these simple constraints
would be enough.
And that's one of
the reasons why we have
RenderObject as this very
general-purpose thing.
We thought, oh, we
might need to specialize
it to do some other kind
of complicated thing.
But it turns out,
no, you can actually
do everything you want just
with this simple algorithm.
So what's neat about the
algorithm being simple is now
we can reason about it
and exploit its properties
to make things go fast.
So as an example, you
notice at some point
I might give a child a
tight constraint, which
means the child has to be
exactly a certain size.
And what that nicely does is
it provides a cut in the data
flow of the layout algorithm.
So if you imagine
that this edge here
that's labeled as a
tight constraint says
the child has to be
exactly a certain size.
Then whatever happens down
in that sub-tree with respect
to layout, can't possibly
affect the rest of the tree,
because his only communication
with the rest of the tree
is his size that he reported
back up the algorithm.
But since his size has to
be exactly the one that
matches the constraint,
there's no choice for his size.
So whatever crazy layout
thing is happening there,
that information can't propagate
to the rest of the tree.
This creates what we call
a re-layout boundary.
And we compute these
implicitly, just from watching
the constraints as the
algorithm executes. So it
basically says, if somebody in
this sub-tree wants to change
his size or position,
that change is contained
in the sub-tree.
So when we produce
that next frame,
we only need to
consult this sub-tree.
We don't even need to touch
the rest of the entire tree,
and so that makes things
much, much more efficient.
So I said it was a
linear algorithm.
Actually, because
of these properties
it's actually sub-linear,
because you don't even
touch the parts of
the tree that are
isolated from the parts
that undergo a layout.
And actually there are
several different cases.
So tight constraints
are one case.
Another case that
we recognize is
when a parent asks
a child to lay out,
he supplies a flag
that says whether he's
going to use the child's size
in the rest of the computation.
And if he says no, that also
creates a relayout boundary,
because then if the
child changes size
it doesn't affect anything
else, because the parent
didn't listen to the size.
It was irrelevant from the
parent's point of view.
And that actually comes up
in [AUDIO OUT] [? cases. ?]
And another case
is where a child
can report that his
size depends only
on his incoming constraints.
For example, a child
that always expands
to fill his constraints
is sized by his parent.
Whatever constraints
his parent gives him,
he immediately knows
what his size is.
What his children
do doesn't matter,
and that also creates
a relayout boundary.
And just from these three
simple observations about
the constraint solver, the incremental
layouts in this system turn
out to be really quite small,
just as you naturally write
widgets and naturally build up
applications.
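Those three observations can be sketched as a small predicate plus a dirtying walk (a Python illustration with made-up field names, not Flutter's actual API): when a node's layout is invalidated, you only walk up as far as the nearest relayout boundary, and only that sub-tree is laid out again next frame.

```python
# A sketch of the three relayout-boundary conditions from the talk.
def is_relayout_boundary(node):
    return (node["constraints_tight"]      # only one size satisfies them
            or not node["parent_uses_size"]  # parent ignores reported size
            or node["sized_by_parent"]       # size depends only on constraints
            or node["parent"] is None)       # the root is always a boundary

def mark_needs_layout(node, dirty_roots):
    """Walk up from a dirtied node to the nearest boundary; only that
    boundary's sub-tree needs to be laid out on the next frame."""
    while not is_relayout_boundary(node):
        node = node["parent"]
    dirty_roots.add(node["name"])
```

This is what makes layout sub-linear in practice: the dirty set is bounded by the boundary, not by the whole tree.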
So I wanted to touch
on one more point
before moving on to painting.
So you'll notice, what
order did we visit
the children in during layout?
Well, first we visited
the inflexible children,
and then we visited
the flexible children.
So we first visited
the green guy and then
the red guy.
And then I came back
and I did the yellow guy
and the pink guy.
So that's in contrast
to the order in which
I'm going to paint my children.
So I paint my children in order
from left to right in the order
they exist in the tree.
But in layout, I visit
them in a different order.
So this is motivation
for why you
want painting to be a separate
tree walk from layout,
because you're going
to visit the children
in a different order.
So that's in contrast
to other systems
that unify the layout
and painting algorithms
into one walk of the tree.
They end up having to do these
careful shenanigans to deal
with the fact that the
paint order is not always
the same as the information
flow order for the layout.
So here we do them as
separate walks, and each
one is one linear walk
of the whole tree from top
to bottom, conceptually.
AUDIENCE: And that
matters if they're
going to be overlapping
or transparent
or something like that?
ADAM BARTH: Yeah, so for
example, in this layout,
they're all next to each
other, but another layout
besides a flex is a stack.
So a stack, [? it ?] just puts
them all on top of each other,
and so it really matters
what order you paint them in.
Also, a stack has positioned
and non-positioned children,
so it similarly has to visit them
in a funny order during layout.
So now the painting phase. We've figured
out where everything is
and how big it
is, but we haven't
figured out what it
looks like, which
is only half the
battle, as G.I. Joe would say.
So how do we paint?
Well, we say, oh,
paint's really easy.
You just walk the whole
tree in depth first order,
and you pass around your offset,
so where you are on the screen.
And then you tell each thing
to just paint itself there,
because we already know where
it is and how big it is.
There's not that much choice.
It just has to draw.
Simple, one slide.
Not quite.
So the complication
with painting
is that we have to
deal with layers.
So if you were painting
everything to one buffer,
then that would be
the end of the story.
But it turns out that
painting things to one buffer
is very constraining.
So for example, suppose this
yellow thing in here was a video.
So it's something
that's going to be
drawn by some other part of the
system that you don't interact with.
There's some
hardware video codec
that's just going to
write video textures
and then you're
going to draw them.
And you want to draw some
things behind the video
and some things on
top of the video.
It means you have to divide
up your drawing into two
different pieces,
the part that's
below the video and the part
that's above, so later, when
you're compositing the video,
then everything looks correct.
So for example, you could
draw a play button on top
of [? the video. ?]
So the tricky thing in
painting is basically
figuring out in which layer
the painting command should go.
So conceptually, you can
think of these layers
as buffers of pixels.
We don't actually make
pixels out of them.
We just keep them as
vectors, but you don't have
to worry about that too much.
So during the paint phase we
walk the tree in depth-first order,
and then we paint
into these layers.
So here the green bubbles are
painting into the green layer.
Number four here, he's a
video or like a child view,
or something that
needs to be composited
in order to paint correctly.
So he gets painted
into his own layer.
And then everything that comes
after number four in paint
order gets drawn
into the red layer.
So the interesting
thing to observe
is that on the second
row on the left,
he's got some green and
some red aspects to him.
So what that means
is, you should
imagine that I painted number one,
then the background for number two,
and then two painted
three and four.
And then on the
way back up he decided
he wanted to paint
some more things,
and that's when the fifth
painting happened.
So this-- when he paints
after his children,
his painting commands go
into a different place,
a different layer than when he
painted before his children.
So they end up in the red layer.
And then when we
go up to the top,
the top has no more
painting to do.
He goes down to his
child, and his child also
ends up after the
yellow in paint order.
So there's this funny
thing where a given render
object isn't allocated
to a unique layer.
His painting can actually be
split across multiple layers.
So this is in contrast
to basically every other system;
I'm not aware of any other
system that does this.
So for example, in Cocoa there's
a one-to-one correspondence
between UI views and CA layers.
You can't split a UI view
into multiple CA layers.
Similarly on the web, you can't
split a single render object
into multiple layers.
That kind of painting
doesn't work there, and there are
plenty of bugs because of that.
But in this system
we basically do that.
So the way we do it is,
it's not just offsets
that we're sending
down the tree.
There's actually
stuff that comes
back up in our one-pass walk.
In particular, the
target layer, so
which layer you ought to
draw into is something
your children tell you
as part of painting.
So you tell your child go
paint yourself over here.
And he tells you, hey, you
should continue painting
in this other layer.
So if you were a
functional language person,
you would think of this as
continuation passing.
He passes back the
continuation of where
you should continue painting.
And in that way, the computation
of the compositing strategy,
so which things are
painted into which layer,
and the actual recording
of the painting commands
is unified into one walk that's
done in this simple, one-pass,
down-up traversal.
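That continuation-passing walk can be sketched like this (an illustrative Python sketch; the node structure and names are made up, not Flutter's API): each node paints its before-children commands into the current layer, a child that needs compositing hands back a fresh layer, and the node's after-children commands continue in whatever layer its children returned.

```python
def paint(node, layers, current):
    """Paint a node; return the layer painting should continue into."""
    if node.get("needs_compositing"):
        layers.append([node["name"]])  # e.g. a video gets its own layer
        fresh = []                     # everything after it paints here
        layers.append(fresh)
        return fresh
    current.append(node["name"] + ":before")
    for child in node.get("children", []):
        current = paint(child, layers, current)  # child may switch layers
    current.append(node["name"] + ":after")      # may land in a later layer
    return current

def paint_tree(root):
    first = []
    layers = [first]
    paint(root, layers, first)
    return layers
```

Running this on the talk's example (node 4 is the video) shows node 2's painting split across two layers: its background lands before the video's layer and its after-children commands land after it.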
So that's nice, but now
you see there are all
these funny, non-local effects.
The fact that this yellow
guy had to be composited
had an impact on this red guy
on some totally other part
of the tree.
So that would make
painting very complicated,
because any effect in
one part of the tree
could have an effect on some
other radically different part
of the tree.
And so, if this guy said, oh,
I want to change my painting,
in principle you'd have
to repaint everything
in the whole universe to
make that change happen.
And so, while we have this
clever idea from the layout
that we should introduce
these relayout boundaries,
what if we did something similar
for painting and artificially
introduced repaint boundaries?
So what a repaint
boundary does is basically
say I'm going to artificially
pretend that this child needs
its own composited layer.
And what that means
is that the effects in that
sub-tree are then contained.
They don't affect other
parts of the tree.
So now this blue guy has to be
painted into the blue layer,
regardless of whether this
yellow guy exists or needs
his own layer, or
anything crazy like that.
The repaint boundary basically
stabilizes the algorithm
so that the non-local
effects are contained in that
sub-tree.
I'm getting a little
skeptical look,
so maybe I'll-- I
have another-- oh see.
Yeah?
AUDIENCE: Why is an
author [INAUDIBLE].
ADAM BARTH: Yeah, so
that's a good question.
He pointed out that the
relayout boundaries
were computed
automatically for you
by looking at the data flow,
but for these repaint
boundaries, I didn't tell you
how we compute
them automatically,
and he might be
suspicious that that
means we don't know how
to compute them automatically.
And that's actually true.
So we can place these
repaint boundaries anywhere
in the tree.
It's a very flexible concept.
But we don't know where the
optimal place to put them is.
So imagine you
put every different render
object in its own
repaint boundary--
they all use composited layers,
but you get a really big stack
of composited layers.
And that might be
inefficient, because now you
have to manage all these layers,
or if you texturize them.
So you turn them into
actual pixels on the GPU,
then you have a lot of pixels,
many more pixels than you
had on the screen.
On the other hand, you
don't want to have zero
repaint boundaries, because then
everyone is coupled
into one painting pass.
So the optimal repaint
boundaries for your app
is somewhere between
everything and nothing.
And where to draw that
boundary is actually--
it has a large effect on
the performance of the app,
and it's something that's
difficult to compute
automatically.
So you should think
about the structure of your app
and say, when this part
of my app repaints,
what parts of the app
always repaint with it?
Or are there parts of the
app that repaint
for different reasons?
So a good example of that is
like a scrolling component
of your app.
When this thing scrolls, suppose
you have a scroller on the left
and a smiley face on the right.
When the scroller
scrolls, the smiley face
doesn't need to repaint.
So somewhere between the
scroller and the smiley face
you should have a
repaint boundary
to contain the effects
of its painting.
So this diagram is intended
to show how the repaint
boundary changes the structure
of the compositing layer tree.
So on the left was our
original layer tree.
We had the green layer,
this-- the yellow layer
and the red layer.
So they're painted here in a
pre-order traversal.
And on the right, we
have what the tree
looks like after
you've introduced
the repaint boundary.
So you get this dark
blue or black layer
that is an artificial
layer that we introduced
to contain the effects.
And now we have this
extra blue layer
to paint after the black layer.
Yeah?
AUDIENCE: [INAUDIBLE] so
I presume that the widget
framework [INAUDIBLE].
ADAM BARTH: Yeah, so the
question is, do you really
have to add all these
repaint boundaries manually?
That sounds like a big pain.
And the answer is no: a
lot of the basic widgets that we
provide know where you
should put repaint boundaries.
For example, the scrolling widget
has a repaint boundary in there
because it's a common case.
And it's when you're building
more complicated things,
or you're building
your own scroller
or your own scroll-like
interaction or something
that you might have to
think about where to put
in the repaint boundaries.
So that was painting.
So we generated
all these layers.
What do we do with them?
Why do we even have them?
So one benefit you get
from breaking your scene up
into these composited
layers is you
can update your visual
appearance very fast.
So if all you're doing is
moving around these layers
or changing their
offsets or transforms,
then you don't have to do
any of the rest of the work
that we've talked
about up to this point.
Because you have everything
split apart into pieces,
you just need to draw
those big pieces again.
So if you want to move the
yellow layer to the right,
you don't have to
touch anything else.
You just move it to the right and
then re-composite your layers.
So a good motivating example
for why you want to do this
is scrolling.
So here, imagine that you have
a list that's going to scroll.
So the gray things are
the different items
on the list and the dark gray
boundary is the viewport,
so that's the part of
the list that we can see.
So as we scroll up here, if
you didn't do anything clever,
you would have to at least
repaint the entire viewport
every frame of this scroll.
Because this pixel changed
from white to gray,
that pixel has to repaint.
So we go explore the tree until
we find a repaint boundary,
and then we repaint
that whole thing.
Well, that turns out to be less
efficient than it could be.
And scrolling is a very taxing
operation on the system.
You want to basically
have scrolling
be as efficient as possible.
So what you do is you
use a separate layer
for each of the items
in the scrollable list.
So here when I move from
the first part of the scroll
to the second part of
the scroll, all I did
was shift those boxes up.
I didn't have to repaint them.
I didn't have to relayout them.
I didn't have to do anything.
I just took either their already
recorded drawing
commands, or, if they've
been turned into pixels,
just their pixels,
and spewed them back
onto the screen.
And as I scroll up I
reveal this new item.
So the only amount of
painting I have to do
is when I reveal a new
item I have to go create
a layer for him, paint him.
But now I have him,
and as I scroll I
don't have to do anymore work.
I just have to slide him around.
And when this green guy slides
off the top, I can reclaim him.
And that way I get this
nice recycling list
view almost for free
out of the whole system.
So as these buffers or layers
become available on the top,
they can appear on the bottom.
And you only ever have
a finite number of them
as you scroll
through this system.
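The mechanics described above can be sketched as a toy model (the names and numbers here are invented for illustration; the real scrolling machinery is in Flutter's framework): each item owns a layer painted once, scrolling only rewrites offsets, and at most the one newly revealed item gets painted.

```python
# Toy model of composited scrolling with a finite pool of item layers.

class ItemLayer:
    def __init__(self, content):
        self.content = content   # recorded painting, done once
        self.offset = 0

ITEM_HEIGHT = 100
VIEWPORT = 300

class ScrollView:
    def __init__(self, items):
        self.items = items
        self.scroll = 0
        self.layers = {}          # item index -> ItemLayer
        self.paint_count = 0      # how many times we actually painted

    def visible_range(self):
        first = self.scroll // ITEM_HEIGHT
        last = (self.scroll + VIEWPORT - 1) // ITEM_HEIGHT
        return range(first, min(last + 1, len(self.items)))

    def scroll_to(self, offset):
        self.scroll = offset
        visible = set(self.visible_range())
        # Reclaim layers that slid off-screen.
        for i in list(self.layers):
            if i not in visible:
                del self.layers[i]
        for i in visible:
            if i not in self.layers:
                # Only newly revealed items are painted.
                self.paint_count += 1
                self.layers[i] = ItemLayer(self.items[i])
            # Everyone else just gets a new offset; no repainting.
            self.layers[i].offset = i * ITEM_HEIGHT - self.scroll

view = ScrollView([f"item {i}" for i in range(100)])
view.scroll_to(0)        # paints items 0..2
view.scroll_to(40)       # reveals item 3; items 0..2 just move
```

Note that items never see their own offset, which is exactly the invariant that makes moving them without talking to them safe.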
This also connects
up to earlier, where
we talked about how each
of these items in the list
doesn't know what its offset is.
So their behavior or
appearance can't possibly
depend on their offset,
because they don't
know what their offset is.
So then I know that
I can just move them
without talking to them.
And so the amount of work
I have to do to do a composited
scroll is essentially
very, very little.
And so on like
three-year-old devices
we can do composited scroll
in about one millisecond.
That's pretty fast.
AUDIENCE: How does
this compare to what
other systems [INAUDIBLE],
this scrolling?
ADAM BARTH: It's
basically that everyone uses
the same underlying
commands on the GPU
to do scrolling like this.
So it's only a question of
how much work it is for the author.
In this system we've
built up the
abstractions so that when
you use one of these things
it just feels totally natural.
So for example, on Android you
have this recycling list view
guy with a delegate with six
methods you have to implement.
It's very complicated.
But in this system,
all you did was you
just had a widget
with a build function,
and then we wrapped it in
a repaint boundary for you.
And then, because of the
invariants about the offsets
and stuff like that, everything
just works out perfectly so
that composited scrolling is
the optimal path just
by default. [? Forest? ?]
AUDIENCE: So [? will you ?]
talk a little bit about
[? what you mean by ?]
[? composited here. ?] Are you
actually-- do you
render all [INAUDIBLE]?
ADAM BARTH: Yeah, yeah, OK.
That's good.
I glossed over this, and I
actually have four minutes.
I can actually cover
it, which is good.
So [? Forest ?] asked what
do you mean by compositing?
I'm a graphics guy.
This doesn't look like
compositing to me.
AUDIENCE: [INAUDIBLE].
ADAM BARTH: I'm just--
I'm punching up.
Yeah, so traditionally
compositing
means I had pixels
recorded in a texture.
And then what I'm going
to do is I'm going to blit
that texture onto
the screen in order.
And so we actually
do that sometimes,
but we don't always do that.
So each of these
layers can either
be represented as a vector,
so like a display list,
so a list of drawing
commands to execute.
Or we can bake that
display list into a texture.
And then once we
have the texture
we can blit the pixels
directly to the screen.
And so the question
is, when do we decide
to texturize these layers?
And so we have--
so in other systems
they make very strong
commitments about this.
So Cocoa says every CALayer
is a texture.
We're going to have
lots of GPU memory.
It's going to be OK.
Android framework
says the opposite.
It says I never ever
want to make textures.
I don't have a lot of memory.
I'm going to redraw my display
lists from scratch every frame
and I'm going to make
that really efficient.
So this system actually
takes a middle approach
to these things.
So what happens is, when
we draw a layer three
times as a vector, we're like,
we keep drawing this same layer.
I bet it's worth making
a texture out of it.
And the third time we'll first
draw it to a texture and then
blit it from the texture,
and from then on as long as you
keep drawing it we'll just draw
it directly from the texture.
And it turns out that three
is kind of a magic number.
So if you picked
one, then that means
you would always draw
indirectly through textures.
And that would be not
efficient in some cases,
like imagine a circular progress
indicator in material design.
So it's arc that keeps changing
size and keeps rotating around.
It never draws the same
frame twice, like ever.
So there's no point in
drawing it indirectly
through a texture.
You might as well just draw
it directly from its command.
But imagine like a drawer
that's sliding out.
That thing is identical.
All that's changing
is its offsets.
So if you capture the drawer
in its own repaint
boundary and you can translate
it in the compositor,
then after you've done this a
couple times, you're like, hey,
I bet this is going
to stay like this.
And so you can
actually just texturize
the drawer as a whole thing
and then move it out.
And so why three?
I don't know.
You could try four.
You could try one.
You could try two.
Three is actually-- it
seems to be pretty good.
There's actually an
observation across many,
many systems in computer science
and electrical engineering
that two-bit
saturating counters
turn out to be pretty good.
So that's where the
three comes from.
It's a two-bit
saturating counter.
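The "texturize on the third draw" heuristic can be sketched as a saturating counter keyed by display-list ID (the class and threshold here are illustrative; Flutter's real raster cache layers more heuristics on top, as discussed below). Because display lists are immutable and identified by ID, "same drawing" just means "same ID":

```python
# Toy raster cache: count repeat draws of an immutable display list and
# switch from replaying vectors to blitting a baked texture at three.

class RasterCache:
    THRESHOLD = 3   # a two-bit saturating counter tops out quickly

    def __init__(self):
        self.counts = {}       # display list ID -> times drawn
        self.textures = set()  # IDs we've baked into pixels

    def draw(self, display_list_id):
        n = min(self.counts.get(display_list_id, 0) + 1, self.THRESHOLD)
        self.counts[display_list_id] = n
        if n >= self.THRESHOLD:
            # Worth baking: blit from the texture from now on.
            self.textures.add(display_list_id)
            return "texture"
        return "vectors"

cache = RasterCache()
# A stable layer (same ID every frame) gets texturized on draw three.
results = [cache.draw(27) for _ in range(4)]
# An ever-changing layer (new ID every frame, like a spinning progress
# indicator) never reaches the threshold and stays in vector mode.
spinner = [cache.draw(100 + frame) for frame in range(4)]
```

This captures why a drawer sliding out gets texturized while a circular progress indicator never does.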
Yeah?
AUDIENCE: How
[? long ?] [INAUDIBLE]
ADAM BARTH: Yeah, OK, so
when I said three-- so
he said what if we
have lots of tiny stuff,
and it seems like a waste
to texturize all that. Yeah,
so three is not the only answer.
There's some more
heuristics that
decide when we should texturize
and we should not texturize.
And I suspect as
the system matures,
we'll need to tune
those heuristics.
So there's a heuristic
that says, hey,
this layer isn't
really complicated.
It's really just a big
frigging rectangle.
There's no point in
storing pixels for it.
We might as well just draw it.
Or there's a heuristic
that says, hey, it's
got a lot of empty space in it.
Texturizing it would be
really inefficient,
because I'd be storing a lot of
transparent pixels.
It's kind of useless.
So there are various
heuristics that we
use to decide whether
to texturize something.
But the nice thing
is, as the author,
you don't have to
worry about any of that.
That's all done
by the compositor.
We probably should
expose some sort of control
levers for that to let
you tune it yourself, I guess,
but we don't do that yet.
Yeah?
AUDIENCE: [INAUDIBLE] Is that
equivalent to automatically
making a layer?
ADAM BARTH: Yeah, so Andrew
was saying if we auto-texturize
after three things, why
can't we just auto-layerize
after three things?
Yeah so, we should
probably investigate that.
So if you run in debug
mode, we actually
keep track of all the
repaint boundaries
that you put into your app.
We keep statistics about them
about how effective they are.
We'll say like this repaint
boundary was awesome.
99% of the time the
child and the parent
painted at different times.
Or it will say this
one was terrible.
Basically, it was
always the case
that the parent and child
had to repaint together.
There was never a time
when this repaint boundary
actually separated two
different painting operations.
And so we maybe could
use that information
to automatically generate
repaint boundaries,
but we haven't really
investigated that too much.
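The debug-mode bookkeeping described above could look something like this toy sketch (the class and numbers are invented; the real diagnostics live inside Flutter's framework). Per frame, you record whether each side of a boundary was dirty, and a boundary is effective when the two sides repaint at different times:

```python
# Toy model of repaint-boundary effectiveness statistics.

class BoundaryStats:
    def __init__(self):
        self.frames = 0        # frames where anything repainted
        self.separated = 0     # frames where only one side repainted

    def record_frame(self, parent_dirty, child_dirty):
        if parent_dirty or child_dirty:
            self.frames += 1
            if parent_dirty != child_dirty:
                self.separated += 1

    def effectiveness(self):
        # Fraction of repainting frames where the boundary saved work.
        return self.separated / self.frames if self.frames else 0.0

stats = BoundaryStats()
for _ in range(99):
    # e.g. scrolling: the child repaints, the parent doesn't.
    stats.record_frame(parent_dirty=False, child_dirty=True)
# e.g. a resize: both sides repaint together.
stats.record_frame(parent_dirty=True, child_dirty=True)
```

A boundary scoring near 1.0 is "awesome"; one scoring near 0.0 never separated two painting operations and is pure overhead.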
AUDIENCE: About the
repaint boundaries--
so the repaint boundaries
are [? training ?]
how much of a tree you
have to [INAUDIBLE].
ADAM BARTH: That's right.
So the question is basically,
what is the actual trade-off
involved in a repaint boundary?
So the actual trade-off
is the trade-off
between the amount of time
you spend in the paint phase
recording commands versus
the amount of memory you take
and the amount of
management overhead
you have for these layers.
So if you had infinite memory
and really good tools
for managing these things,
you would make
everything a repaint boundary.
And you could imagine you could
make every pixel on your screen
a repaint boundary.
And then you'd be like,
we got really good
at managing the pixels.
We built some specialized
hardware for it.
We call it a GPU.
So it's sort of about moving
work from one part
of the pipeline to other
parts of the pipeline.
So if you have a really
beefy GPU that could just
take every command you
fire at it and draw it,
then you would never want
any repaint boundaries.
It wouldn't make any sense.
But in reality,
the pipeline that your app
goes through to render has
different constraints
on it-- a CPU
and a GPU, which have
different relative strengths.
You have different
amounts of CPU memory,
different amounts of GPU
memory, and so implicit
in these
different knobs you can turn
is how you re-balance your
workload across this diverse
set of resources.
And so what we've done
is we've basically
picked an approach that is
optimized for mobile devices.
That's in contrast to
other systems that
were designed for, for
example, desktop devices,
where GPUs didn't even exist
when those systems were designed.
So then we designed
the whole system
to be roughly optimized
for mobile devices.
And then there are
a few little knobs
you can tune to basically
say, my particular workload
or my particular app, how do
I make efficient use of all
the resources that are available
at each stage of the pipeline?
AUDIENCE: [INAUDIBLE] All
the time you're saying,
oh, well now we're going to
add a repaint boundary here
[INAUDIBLE].
ADAM BARTH: Yeah,
so his question
was, with this auto-texturization,
doesn't it add a lot of noise
to the pipeline,
which might cause you
to miss your frame deadlines?
Yes, I was worried
about that too,
but it turns out not to
actually be that bad.
So the reason that-- my
hypothesis is they're
not all synchronized.
So they don't all hit-- it's
not like we say, OK, this frame
we're going to
texturize everything.
That would cause a big hiccup.
But as you see as you
scroll by, physically what
happens is this dark blue
guy appears on-screen
for one frame, two frames, then
he texturizes, then he goes.
So as long as the-- it's
not like you're drawing them
all at the same time.
So it turns out not to
actually add that much noise
to the pipeline.
So there are
lots of diagnostics.
If you go into Observatory
and you record a
timeline, you have
a time-oriented view
of what's going on, and
you can see each phase of the
pipeline, which we've labeled.
And you see the order
that they execute,
how much time they take
relative to each other.
And you can see things
like texturization show up,
and things like
how much layout you're doing--
whether you
even visit the layout
phase of the pipeline at all,
or whether your frame can purely
be produced by
painting or purely
be produced by compositing.
Yeah?
AUDIENCE: So do you
always turn the layers
into pixels before
you composite them?
Because you were saying after
three you store the pixels,
or is it--
ADAM BARTH: Yeah, so his
question is--
AUDIENCE: [INAUDIBLE]
ADAM BARTH: Yeah,
so his question
is do we always turn them
into pixels or do sometimes
we draw them as vectors.
No, so when we're
drawing them as vectors,
we just draw them in
immediate mode
as vectors.
We just issue a bunch of--
like if you have a path,
we'll issue all of the
triangles for the path.
AUDIENCE: [INAUDIBLE]
ADAM BARTH: How do you determine
whether frames one, two,
and three are the same?
That's a good question.
So the display lists
are immutable. So once
you've recorded a display list
there's no way of altering it.
All you can do is tear it
down and record a new one.
So they just have unique IDs
and so we just keep track.
AUDIENCE: [INAUDIBLE]
ADAM BARTH: Well, the
display lists just have unique IDs,
and so we just remember the
ID that we drew last time.
So say we drew
display list 27 last time,
and this one is display list 27.
It's immutable, so it
must be the same thing.
And we also record the
matrix, so we'll always
draw exactly, perfectly on
the pixel grid of the device.
So if you change the matrix
in a way that changes the
projection from the layer
to the screen,
then we'll say, OK,
that doesn't count as drawing
the same thing because we want
to hit the exact pixels.
So for example, in other
systems like Cocoa,
if you take a UIView
and you transform it,
you won't always hit
exactly the pixel grid,
which means you'll get a
little bit of aliasing.
So that's a trade-off
for performance.
So if you-- that
means that they're
able to draw from
textures more often
but you don't get
pixel perfect output.
And I expect eventually we'll
want to have that capability
in the system, and we
definitely can,
but right now the system is
tuned for pixel-perfect output.
And then, if that's
too slow, then we'll
reduce the quality to get
performance, if necessary.
Yeah, so just to orient:
we talked about these three
phases of the pipeline.
Yeah, I want to thank
everybody for coming.
[APPLAUSE]
I can take more
questions if you have them.
I guess you guys asked a
lot of questions already.
AUDIENCE: [INAUDIBLE] So one
of the widgets is capable
of [INAUDIBLE] your
children, so does
it end up having N squared
behavior in order to do that?
ADAM BARTH: Yeah.
AUDIENCE: Is that why he says
this is expensive [INAUDIBLE].
ADAM BARTH: Yeah, so he
asks whether widgets
that take their size
from their child
introduce
N squared behavior.
And so the answer
is slightly subtle.
So the simple answer to
that question is no, in
general it doesn't introduce N
squared behavior.
Because remember my parent
gave me my constraints.
I was allowed to
talk to my children,
and then I reported my size.
So if I want my size to
exactly match my child,
all I have to do is
ask him, hey child.
What's your size?
They'll tell me,
and then I can just
tell my parent that was my size.
So in general, if you want
to shrink wrap your children,
that's basically free.
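That one-pass protocol, constraints flow down, sizes flow up, can be sketched like this (the class names are illustrative, not Flutter's RenderBox API). Shrink-wrapping is a single visit: ask the child its size under my constraints, report that same size to my parent.

```python
# Toy model of one-pass layout: constraints down, sizes up.

class Fixed:
    """A leaf that wants a particular size."""
    def __init__(self, width, height):
        self.size = (width, height)

    def layout(self, max_w, max_h):
        # Clamp to the constraints the parent handed down.
        self.size = (min(self.size[0], max_w), min(self.size[1], max_h))
        return self.size

class ShrinkWrap:
    """A parent that sizes itself to exactly match its child."""
    def __init__(self, child):
        self.child = child
        self.size = None

    def layout(self, max_w, max_h):
        # One question to the child, one answer to my parent:
        # no second pass, so no N squared blowup.
        self.size = self.child.layout(max_w, max_h)
        return self.size

box = ShrinkWrap(Fixed(80, 40))
size = box.layout(200, 200)          # parent's constraints
clamped = ShrinkWrap(Fixed(300, 40)).layout(200, 200)
```

Each node is visited once per layout, which is why shrink-wrapping is basically free.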
But there are cases
where you want
to do something slightly
different than that.
So they don't actually
come up that much,
but when they come up
there's sort of no other way
of solving these problems.
So a good example of that
is the-- a pop-up menu
in material design.
So how wide should a pop-up
menu in material design be?
So the answer is he should be
as wide as the widest
line of text contained
in the menu, rounded up
to an integral multiple
of eight pixels.
OK, that's what they
wrote in the spec.
It sounds great.
So if you just asked each child
to please lay out at your size,
and then I'm going
to size myself
to the max of you
plus eight pixels,
that wouldn't
actually be correct.
And the reason it wouldn't be
correct is because of Arabic.
So in Arabic, instead of
writing from left to right
you write from
right to left, which
means the menu item
in Arabic should
be-- all of the text
of the menu item
should be aligned vertically
on the right edge.
Which means,
when you lay out the children,
you have to tell them how
big the menu is actually
going to be in order to get
the correct text layout.
And so how do you know?
It's like a chicken
and egg problem.
So this is the case
where you actually
need those intrinsic sizing
functions that I sort of put up
on the slides but didn't
really tell you much about.
So intrinsic sizing
lets you ask your child,
hey, how big would you
be if you-- well, you
get four different
questions to ask,
and what they are
is sort of subtle.
But the one you
want in this case
is you ask the child how big
is your longest line of text,
effectively.
So what is your width beyond
which, if I made you wider,
you wouldn't get any shorter?
Or it's an abstract
way of saying
I don't want you to
take any line breaks.
Take as few line breaks
as possible and tell me
how wide you are. [INAUDIBLE].
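The pop-up menu example above can be sketched like this (a toy model with an assumed fixed character width; real text measurement is far richer). The intrinsic question is "how wide with as few line breaks as possible?", and the menu rounds the widest answer up to a multiple of eight, per the spec quoted above, before laying children out at that width:

```python
# Toy model of the max-intrinsic-width question for a menu.

def max_intrinsic_width(lines, char_width=10):
    # Width if no line ever wraps: the longest line wins.
    return max(len(line) for line in lines) * char_width

def menu_width(items, char_width=10):
    widest = max(max_intrinsic_width(item, char_width) for item in items)
    # Round up to an integral multiple of eight pixels.
    return ((widest + 7) // 8) * 8

items = [["Copy"], ["Paste"], ["Select all"]]
width = menu_width(items)
# Now every child can be laid out at `width`, so right-aligned
# (e.g. Arabic) text lines up on the menu's right edge.
```

Asking this question recursively is what can, in the worst case, revisit the same text many times, which is why these intrinsic passes are kept rare.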
In the worst case you can
get N squared behavior,
because if you keep asking that
question recursively as you
go down the tree, then
you could be always asking
that same text at the bottom.
Hey dude, how wide are you
if you only had one line.
You could ask them that
question N squared times.
It is possible to get that.
So those are very rare.
So for example,
with the stocks app,
which is the kitchen
sink app of all widgets
that we ever
thought of, combined
in the craziest ways possible.
It has like two.
So it does occur, and you
need to do it in order
to get correct behavior.
Oops, I didn't want to do that.
But for the lion's
share of cases,
this one-pass simple
constraint basis
is sufficient to get
the correct layout.
Yeah, [? Yager. ?]
AUDIENCE: Any interaction
between the [INAUDIBLE].
[? In this ?] case
[? I'm thinking of ?] as a
result of layout
there's [INAUDIBLE].
ADAM BARTH: Yeah, so
[? Yager's ?] question is Ian
gave this talk that explained
the build phase in detail,
and I gave this talk
that explained the layout
and [? paint ?] phase.
And I told you it
was a pipeline.
What if my build wants
to depend on my layout?
And so, when I said it was a
pipeline I sort of actually
lied.
So the build phase
and the layout phase
are allowed to intermix
with each other.
So in the middle of layout
you can do some more building.
You can't do layout
in the middle of building,
though.
So I guess you're allowed
to have build phases inside
of your layout phases.
And this actually follows
from a very important property
of the system that
I discussed, which
is that a render object
doesn't know anything
about its children.
So what you do is you
have this-- another kind
of render object, another
kind of render box.
So this child model
is lazy in some sense.
So it basically says, when I
need to go lay out my children--
because no one
else has ever been
able to talk to them, because
no one else knows what my child
model is-- I can create
them just in time
in order to do their layout.
So for example, we have
this lazy scrolling guy
who will only build widgets that
are actually in the viewport.
And the way this works
is during layout he says,
oh, I don't have children
to fill my viewport
and to go build one
more, lay him out.
Oh, I still don't have enough.
I got to go build another one.
OK, let him out.
And because you've never visited
those children before,
and you'll never have
to visit them again,
you never have to
get more information
from some other
part of layout that
tells you, oh, I
need more children
to deal with this problem.
Then you get this very clean
infinite scrolling mechanism.
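A minimal sketch of that build-during-layout loop (invented names; Flutter's lazy viewports are more involved): the viewport asks a builder for children one at a time, during layout, only until they fill its height, so an "infinite" list only ever builds what's visible.

```python
# Toy model: build children lazily, during layout, to fill a viewport.

ITEM_HEIGHT = 50

class LazyViewport:
    def __init__(self, height, build_item):
        self.height = height
        self.build_item = build_item   # build phase, invoked from layout
        self.children = []

    def layout(self):
        filled = 0
        index = 0
        while filled < self.height:
            # Still not enough to fill the viewport: build one more
            # child just in time, then lay it out.
            self.children.append(self.build_item(index))
            filled += ITEM_HEIGHT
            index += 1

built = []
def build_item(i):
    built.append(i)          # record which items were ever built
    return f"item {i}"

viewport = LazyViewport(height=180, build_item=build_item)
viewport.layout()
```

Even if the underlying list is conceptually infinite, only the four items needed to cover 180 pixels ever get built.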
I want to give a whole
talk about that thing,
because I think it's
frigging awesome.
To really understand it, you
have to go in a little bit more
depth, but--
AUDIENCE: Thanks, [? Todd. ?]
ADAM BARTH: Great, thanks.
