Thank you everybody for coming.
We're gonna get started.
So today's speaker is Sylvia Herbert.
a PhD candidate in electrical engineering
from UC Berkeley, go bears,
advised by Claire Tomlin.
She received her bachelor's degree
in mechanical engineering from Drexel,
is one of the rising stars in EE
at UC Berkeley, and an outstanding GSI.
So we're looking forward to your talk.
An author of five journal papers,
12 conference papers,
and has already given
almost 20 invited talks.
How many PhD candidates in the room
can brag about that?
Today she'll be talking about
safe real world autonomy
in uncertain and
unstructured environments.
So please, join me in welcoming her.
[crowd applauds]
So my motivation for this talk
is fairly straightforward.
It is that we want our autonomous systems
to operate safely in
real-world environments.
Now traditional methods
of handling safety
try to guarantee the safety
of every possible trajectory
of a system.
And this works well when you
have these lower-dimensional
systems, but we're really excited now
about introducing these
complicated, high-dimensional,
non-linear systems
into interesting environments
that may be unstructured
and have things that are very
difficult to predict.
Such as, for example, humans.
And in a lot of these cases,
it's too difficult to reason
over the safety analysis
in the traditional way.
And so instead, companies end up
resorting to heuristics
to try to ensure safety.
And this isn't going very well.
So here are a couple of examples
of headlines from the news.
"Driver says Tesla car gets confused
"and crashes on highway."
"Time to panic, a robot
killed a factory worker."
And so really, there's this big need
to work towards this goal of safe,
scalable, and adaptive autonomy.
So I've tried to approach
reaching this goal
from three different directions.
One is control theory.
Next is cognitive science.
And the third is learning.
And I worked on a variety of projects
across this span.
All trying to work towards
the safe, scalable,
and adaptive autonomy.
I don't have time to get
into all of these projects
so today I'd like to talk about
three major thrusts of my work.
First, I'm going to talk about
scalable safety analysis.
So as I mentioned, traditional approaches
of guaranteeing safety struggle to handle
high-dimensional, non-linear, complicated
models of systems.
I have developed scalable
techniques to give guaranteed,
theoretical solutions for
these complicated systems
that can reduce computations
potentially by orders of magnitude.
This will set the foundation
for then trying to do safe
decision making in real time.
Because if we want our
autonomous systems to operate
well, in real-world environments,
we need to be able to make
decisions in real time
while reasoning about safety.
So I've developed this
framework called FaSTrack,
or Fast and Safe Tracking,
That is able to robustify
a very simple and fast
planning algorithm,
while maintaining safety
with respect to the
high-dimensional system.
This has now been applied in various labs
across universities and government labs.
And has also gotten
some industry attention.
And so once we build up that framework,
we're going to apply it
to safety and human-centered robotics.
Because if we wanna start thinking about
real-world environments,
one of the struggles is
dealing with things like humans,
which are very difficult to predict
and hard to give safety
guarantees with respect to.
And so those are the three directions
that I'm gonna talk about today.
And then we'll get into some future work.
So let's begin with
scalable safety analysis.
And to do this, I'm
going to first introduce
our traditional method of doing
safety analysis in my lab,
Hamilton-Jacobi Reachability Analysis.
I will then build off of this
using my work on System Decomposition
and Warm-Start Reachability Analysis.
So let's begin with understanding
Hamilton-Jacobi Reachability.
In this, we assume that we're given
a model of our system.
Where the change in the state over time,
is a function of its state,
its control or input into the system.
And potentially a disturbance.
You could think of this as, for example,
wind working against you
as you're trying to reach your goal.
And we also assume that you have some goal
that you want to reach
in your state space.
Or alternatively some unsafe area
that you're trying to avoid.
In this case, let's
assume that it's a goal.
Now, the point
of Hamilton-Jacobi Reachability
is to build up this reachable set.
This is the set of states,
where if I start in this set,
I am guaranteed to be
able to reach my goal,
despite worst case disturbances,
within the time horizon that I care about.
We also provide a
corresponding controller.
To make sure that you maintain
these safety guarantees
and are able to reach your goal.
So let's talk about how we build this up.
I'm gonna build up three
lines of math for this.
And it's important because
the rest of my work
builds off of this.
I promise that we'll
get through it together.
So first let's introduce
the cost function.
So this cost function is lifting our goal
to a higher dimension and
reasoning over the cost
of trajectories of our system
with respect to the goal.
So you can just think
of it as negative cost
inside of the goal.
Which is good.
And positive cost, outside
of the goal, which is bad.
And our goal here is going to be
to try to minimize the cost over time.
So this is what that looks like here.
Where our cost function is
negative inside of the goal.
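As a concrete sketch of this idea (my own illustration, not code from the talk), a common choice of such a cost is the signed distance to the goal set, here a circle:

```python
import numpy as np

def goal_cost(z, center, radius):
    """Signed distance to a circular goal set:
    negative inside the goal (good), positive outside (bad)."""
    return np.linalg.norm(np.asarray(z) - np.asarray(center)) - radius

# Center of a unit goal circle: cost -1.0 (inside, good).
# Three units away from the center: cost 2.0 (outside, bad).
```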
Next we're going to build
up a value function.
And the point of this value function,
is to tell me the minimum cost
of trajectories of the system
starting from state z and time t.
So I wanna find the
minimum cost, or closest,
that trajectories ever get to the goal.
And so here, minimum cost over time
of trajectories of our system.
Here I'm gonna introduce our trajectory
notation here.
Which typically says if I
start at state z and time t,
apply my control u, and disturbance d,
this will output the state that I get to,
along the time horizon.
Okay, there's one more part
to building up this value function.
And that is the fact
that we don't really care
about any trajectories of our system,
we care about the optimal trajectories
of the system.
So if the control is doing its best to try
to get inside of this goal,
to minimize this cost, and we
assume that the disturbance
in the worst case is
actively working against us
and acting adversarial,
and trying to maximize this cost.
We wanna introduce this
min over u, max over d notation over this.
So this is what the value function is
that we wanna build up over time.
And this value function will tell me
again the closest that
the optimal trajectory
starting from state z and time t,
will ever get to my goal.
If it is negative, that means
that it was able to achieve
negative cost over time.
And so I'm able to reach the goal.
If it's positive, then I'm
not able to reach the goal.
Now solving this optimization problem
can be very difficult.
And so what we do in our lab,
is we do essentially dynamic programming
over the system.
We're going to start at the goal,
and work our way backwards in time
to find that reachable set.
And to do this, we're going to
discretize our state space.
Oh, it's warped.
Oh my gosh.
[audience laughs]
Sorry, hold on a second.
[audience laughs]
I was told to prepare for anything,
that wasn't what I was expecting.
That's actually, with that
said that's a great question.
So that was a disturbance
that you didn't want?
How do you guarantee you
can model all your disturbances?
That's, and we will get to that.
So if I don't mention that
please make sure that I come back to it.
But you're absolutely right
that currently in the
traditional formulation
we are assuming that we have known bounds
on the disturbance.
And so we know the worst possible thing
that the disturbance can do.
But hopefully in 10
slides we'll get to that,
and hopefully we'll get
there at a normal pace.
[laughs]
Okay, and so the update equation.
So we've gridded up our state space
and now we're going to update each point
in our state space at every
single instant in time.
And we're going to use
this update equation
called the Hamilton-Jacobi-Isaacs
partial differential equation.
And what this is going to do,
is say, let's assume that I'm going to fix
a particular point in the state space.
The change in value backwards in time
at that particular point is essentially
trying to do gradient descent.
We want our control to move in directions
of negative gradient.
But it must obey the laws of physics,
and so it must obey the
dynamics of its system.
So instead, we look at the Hamiltonian.
We look at the dot product
between the flow field of our system
and the gradient, and try
to minimize over that.
The disturbance is doing the opposite.
It's trying to move us away from our goal.
So this takes care of our
min over u, and max over d.
There's one more step.
Which is to take care of
this minimization over time.
And so I have to throw in one more
minimization term to handle this.
Now this equation is going
to be my update equation
that I use to do dynamic programming.
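To make the dynamic programming concrete, here is a minimal 1D sketch of my own (not the lab's toolbox, which uses proper upwinded numerical schemes): a system with dynamics x-dot = u, |u| <= 1, a goal |x| <= 0.5, and no disturbance. We grid the state space, start from the cost function at the end of time, and step backwards with the update equation, including the extra minimization over time.

```python
import numpy as np

# 1D toy: dynamics x_dot = u with |u| <= 1, goal |x| <= 0.5, no disturbance.
x = np.linspace(-3.0, 3.0, 301)
dx = x[1] - x[0]

l = np.abs(x) - 0.5          # cost function: negative inside the goal
V = l.copy()                 # terminal condition at the end of time

dt, horizon = 0.01, 1.0
for _ in range(int(horizon / dt)):
    grad = np.gradient(V, dx)
    H = -np.abs(grad)        # Hamiltonian: min over |u| <= 1 of grad * u
    V = np.minimum(V, V + dt * H)   # backward step, plus the min over time

# The zero sublevel set {V <= 0} is the backward reachable set: after a
# 1 s horizon at unit speed, roughly |x| <= 1.5 can reach the goal.
```

The central-difference gradient here is a simplification; the guarantee-preserving solvers in the talk use more careful level-set numerics, but the growing sublevel set behaves the same way.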
Okay, we got through the math.
Now let's look at what happens
when we actually propagate this over time.
So again I'm starting at the end of time
and I'm working backwards
to find the set of states I can start at,
such that I'm able to reach my goal.
And this is what it looks like.
So here, the time horizon
you see is moving backwards.
So this is four seconds out,
five seconds out, etcetera.
And you see my value function growing.
In this particular case,
I cared about a 10 second time horizon.
Now this is my value function,
that we wanted to compute.
And remember, the part
of the value function
that I care about,
is the part where the states
produce a negative value.
'Cause this indicates places
from which trajectories
are able to reach the goal.
So therefore I wanna slice this,
and just look at all the places
where I'm able to obtain a negative value.
And this, will be my reachable set.
The set of states where if
I start inside of the set,
I am guaranteed to reach my goal
despite worst case disturbances.
So this is really nice.
Because traditional
Hamilton-Jacobi reachability
analysis provides safety guarantees
and corresponding control policies.
And that is because we have
this value function here.
That you can think of as a generalized
control Lyapunov function,
whose gradients inform you
of the direction that you need to go in
in order to guarantee that you
are able to reach the goal.
So this has a lot of benefits
in terms of very strong safety guarantees
and we worked very hard
on developing toolboxes
that are fairly user friendly
in order to be able to make
this a general approach.
But there are some issues with this.
It's that, it struggles
with complex models
in uncertain environments.
So let's discuss why that is.
Well when we talk about complex models
the reasoning for this is
because we are essentially
doing dynamic programming.
Which means that we are suffering
from the curse of dimensionality.
With every extra dimension
I'm adding to my system,
I am adding exponentially more grid points
that I have to reason over.
And so that means that
we have to use fairly
simple models of a true system.
So we might turn a real quadcopter model
into a point that's moving up and down,
a car into a simple Dubins car,
or assume that a human body is a single
or double pendulum.
Which clearly does not capture
what human bodies can actually do.
And so this can struggle
when we're trying to
produce safety analysis.
Next, let's talk about
uncertain environments.
So, I mentioned before, that
we can handle disturbances.
However, we are assuming
that we know bounds
on this disturbance.
We know what the disturbance might be.
If you find out later,
that you were wrong
about these disturbances,
re-computing this from scratch
could take a really long time.
Because of the curse of dimensionality.
And so handling uncertain disturbances
can be very challenging.
Additionally, something that is very cool
is that we can handle obstacles.
So here I'm introducing an
obstacle into my system.
It's in red.
And I'm introducing a second
cost function, also in red.
That is now positive inside the obstacle.
So high cost (bad) inside,
and negative cost outside.
If I know the exact
behavior of this obstacle
over time, ahead of time,
I can actually account for
this in my safety analysis.
So I can actually incorporate
this into my computation
of the value function.
Which is very cool, in my opinion.
I think this is my favorite
video of all of them.
And we end up with this nice
reach-avoid set,
which is the set of states
where I'm able to reach my goal
while avoiding the obstacle at all times.
However in order to compute that,
I needed to know exactly how that obstacle
was going to move over time.
And again, if you learn about updates
to your environment, because of the curse
of dimensionality, it's very
hard to update the analysis
in real time.
So, given that, I'm going to talk a bit
about my work in system decomposition.
And warm start reachability analysis.
So for system decomposition, the motivation
is fairly straightforward.
The idea is that if we can take
some high-dimensional
coupled system, and kinda rip it apart
and study each component separately,
we can reduce the dimensionality
and recombine this back into a higher
dimensional safety analysis.
However, this can be kind of challenging
for a lot of these
non-linear coupled systems.
So I'm gonna use kind
of the toy model here,
the Dubins car, where
the change in my x
is a function of my speed
times cosine of theta,
change in y is speed
times sine of theta,
change in theta is omega,
my rotational velocity,
and change in speed is acceleration.
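Written out, the dynamics being described are:

```latex
\dot{x} = v\cos\theta, \qquad
\dot{y} = v\sin\theta, \qquad
\dot{\theta} = \omega, \qquad
\dot{v} = a
```

with controls omega (turn rate) and a (acceleration).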
Now let's think about how these
states relate to each other.
So x here, depends on v and theta.
Y similarly depends on v and theta.
And theta and v each
depend on the inputs here.
So it's not very clear how to decompose
the system immediately, because it's very
coupled, as you see here.
So we asked, what if we just
forcibly rip this apart?
What if we go ahead and
break up these edges,
take the system apart,
and now analyze these separately.
Well the obvious issue
is that you now have missing information.
Here I'm missing v and over
there I'm missing theta.
And so we reasoned, since
we wanna do safety analysis
and worst-case analysis,
let's assume that this
missing information takes
the worst-case value
for what I'm trying to do.
So essentially we treat
it as a disturbance
that we incorporate into
our safety analysis.
So now I have two 2D systems,
which exponentially reduces
the cost of computing these.
And when I combine them back together,
I'm going to say the value at a particular
x, y, theta, and v is going to be the max,
or worst case.
Between the two values of
each separate sub-system.
What this looks like, is that if I have
a goal in my four-dimensional space,
that I projected down onto x theta,
and similarly projected down onto yv,
I will then grow my reachable
set in each sub-system.
And then lift it back up into
the four-dimensional space.
I will then take the max, or worst case,
or intersection between these two systems,
and if I compare it to
what I would have gotten
doing the full formulation,
I know that I will have
a guaranteed conservative
under approximation.
Because I'm replacing
that missing information
with disturbances.
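As a small sketch of the recombination step (my own illustration with made-up arrays, not actual solver output): lift each subsystem value function into the full state space by broadcasting, then take the pointwise max, i.e. the worst case between the two subsystem values.

```python
import numpy as np

# Hypothetical subsystem value functions on small grids (illustration only):
# Va lives on the (x, theta) grid, Vb on the (y, v) grid.
rng = np.random.default_rng(0)
Va = rng.standard_normal((10, 8))   # shape (nx, ntheta)
Vb = rng.standard_normal((12, 6))   # shape (ny, nv)

# Lift each into the full (x, y, theta, v) grid and take the pointwise
# max, i.e. the worst case between the two subsystem values.
V = np.maximum(Va[:, None, :, None], Vb[None, :, None, :])

# The conservative under-approximated reachable set is where V <= 0.
reachable = V <= 0
```

Note this is where the exponential savings come from: you only ever store and update the two small grids, and the 4D array here is just to show the recombination rule.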
So this is what it looks like in practice.
Here, since we can only see in 3D,
I'm showing x, theta, and y.
And x, y, and v.
In green, is the full
formulation of the reachable set.
And in gray I'm showing our
decoupled approximation.
And here, we're able to
recover 20% of the volume
of the reachable set in 86 seconds,
compared to the half a day
that it took me to do
the full formulation.
So we're only getting 20%.
But this is a 20% that is guaranteed
if you start in it to be
able to reach your goal.
And we're able to get it
on the order of seconds
instead of on the order of hours.
Here's another computational
time comparison.
Here's number of grid points.
Here's the full formulation.
The decoupled approximation
with reconstruction.
And the decoupled approximation
without reconstruction,
with the idea that you
could reconstruct locally
around the states that you care about.
Okay, so this is nice because
it's a very general approach
of how you could just rip apart any system
and replace the missing
information with disturbances.
However, it can be fairly conservative
depending on how you split this up.
Introducing these disturbances.
So one of the questions that I asked,
was can we instead just replace this
with the actual state information?
So can I put v in here, and theta in here?
Now I'm reasoning over two 3D systems.
So it's slightly worse
in terms of computation.
But I no longer have a disturbance.
So it's recombined in the exact same way.
And see what it looks like.
So here now I'm showing an unsafe set
in my x, y, theta space.
If I project this down,
onto the x theta space,
and the y theta space,
this is what it looks like
when I lift it back up.
And now I'm going to process
these both simultaneously.
And keep an eye on the intersection
between the two.
So in blue here, I'm showing
the direct computation.
And in the hash marks I'm showing
the decomposition computation.
We're able to recover it exactly.
And moreover, because we
reduced down a dimension,
it was an exponential
reduction in computation time,
which in this case
was 254 times faster.
So decomposition can be really beneficial
in order to handle these
higher dimensional models
that we couldn't handle before.
Next I wanna talk about
warm start analysis.
So there are a lot of cases
in which you may want to
update the safety analysis
of your system.
So cases where you have
changes in your actual model,
or your autonomous system.
Changes in the external
disturbance that you had assumed.
Or changes in the
behavior of other agents.
Like this mall robot
that's being terrorized
by children when their parents leave.
And in a lot of these cases,
it would be really nice to be able to use
warm start techniques.
So this is used a lot
in value iteration and optimization.
And it's beneficial to
initiate your computation
with a guess of a correct solution.
With the idea that it'll
converge more quickly,
to the correct solution.
And this works very well in practice,
but when we care about safety analysis
and this minimum cost over time,
it can be challenging
to ensure the correct
convergence of these solutions
if you warm start from anywhere
other than the initial cost function.
However, in very recent work
that I'm going to be
presenting next month at CDC,
we found that if you
initialize with something
overly optimistic, so you
assume that the world is better
than it actually is.
So here I'm initializing with this blue
value function, when the world is actually
that gray one.
So we are unsafe, with
our optimistic assumption.
We are guaranteed to converge
to the correct solution.
If instead you do the opposite, you assume
that the world is a lot
worse than it actually is
and have an overly
conservative assumption.
And then you find out that the world
actually isn't as bad.
We are guaranteed to converge
to a less conservative solution.
And so it may not recover
the exact solution,
but it'll at least be better
than what it originally was.
So there's a lot of interesting
implications of this
for safe learning and
updating safety analysis
as we learn information over time.
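The convergence guarantees just described are specific to the reachability setting, but the computational appeal of warm starting can be sketched with a toy example of my own: discrete shortest-path value iteration on a chain, where warm starting from a previous solution after a local change to the world converges in far fewer sweeps than a cold start.

```python
import numpy as np

def value_iteration(cost, V0):
    """Synchronous Bellman sweeps on a 1D chain with the goal at state 0.
    Returns the converged cost-to-go and the number of sweeps used."""
    V = np.asarray(V0, dtype=float).copy()
    sweeps = 0
    while True:
        Vn = V.copy()
        Vn[1:-1] = cost[1:-1] + np.minimum(V[:-2], V[2:])
        Vn[-1] = cost[-1] + V[-2]
        Vn[0] = 0.0                      # goal state
        sweeps += 1
        if np.array_equal(Vn, V):
            return Vn, sweeps
        V = Vn

n = 50
cost = np.ones(n)
cold_init = np.full(n, 1e6)
cold_init[0] = 0.0

V_old, _ = value_iteration(cost, cold_init)

# The world changes locally, far from the goal: the last state gets pricier.
cost2 = cost.copy()
cost2[-1] += 5.0

V_cold, cold_sweeps = value_iteration(cost2, cold_init)   # from scratch
V_warm, warm_sweeps = value_iteration(cost2, V_old)       # warm started

# Both reach the same solution; the warm start needs far fewer sweeps.
```

This toy only illustrates the speedup; the warm-start reachability result is about when such an initialization still converges to a correct, safe solution.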
And there's also some
pretty exciting theory
to explore with Professor
Dimitris Bertsimas,
at MIT, I'm trying to unify this concept
of approximate dynamic programming
for reachability and also
for discrete systems.
So, the story here is
that theory and tools
for scalability make safety
analysis more tractable.
So we're able to take systems that
before we couldn't compute at all,
and compute them on the
order of minutes to hours.
Systems that took hours, and compute them
on the order of seconds to minutes.
But it still might not be that beneficial
for when we have to do
real time decision making
while maintaining safety.
And so this motivates my
next section, FaSTrack,
which is Fast and Safe Tracking.
So using reachability analysis,
I am able to do slow
and accurate planning.
If you give me all the time in the world,
I can give you the optimal trajectory
for your system to reach your goal
while avoiding obstacles.
On the other end,
if I need to make a
decision very very quickly,
there are approaches like, for example,
rapidly exploring random trees
that can very quickly generate a path
from the start to the goal.
The issue here is that this may not be
dynamically feasible or
robust to disturbances.
And so when we actually try to track it,
we may not be able to track it exactly.
So can we find a balance
between these two methods?
Well, this is where we try to
get inspiration from people.
So people are remarkably good
at making decisions in real time.
Because we have to.
And we're pretty good
at being able to navigate through a space.
Especially compared to some of our current
autonomous systems.
And so how do people do this?
Well, we talked with
researchers in cognitive science
and learned that there's this model
that assumes that people
use very simplified models
to plan with.
With an understanding of
the noise around that plan
and how it maps back to their
high dimensional system.
So for example, when you
are pouring a cup of coffee,
you're not thinking about
how to move every single
joint in your body.
You use a lower dimensional
system to plan with,
with an understanding
of how that maps back
to the policy that you should use
of in your high dimensional body.
Can we do something similar
with autonomous systems?
Instead of planning with my full dynamics,
can I plan with a simple, for
example, point mass model,
with an understanding of the error
that I will accrue by using it,
and how it maps back to
the safety constraints
in my high-dimensional system?
So, what we wanna do here
is still plan using the simple methods.
But have a bound around this.
Where I know that my
high dimensional system
will be guaranteed to
stay within this bound.
This is what we're going to try to get to:
something like this,
where this plane is
tracking a simple path,
and we know the bound around this path
that the plane is
guaranteed to stay within.
And this path isn't computed ahead of time
or anything, it's using
real time planning.
But we know that we can
stay within a bound,
regardless of what the planning algorithm
decides to do.
A quick aside: keep in
mind that adding this bound
around the planning algorithm is equivalent
to augmenting the obstacles,
and then planning using
those augmented obstacles.
With the understanding
that as you track this,
you may violate the augmented obstacles
but you will not violate
the true obstacles.
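A small sketch of my own of why this works: if the planner keeps the planned point outside the augmented obstacle, then the tracker, which stays within the bound of the plan, stays outside the true obstacle by the triangle inequality.

```python
import numpy as np

def augment(obstacles, bound):
    """Inflate each circular obstacle (cx, cy, r) by the tracking error bound."""
    return [(cx, cy, r + bound) for (cx, cy, r) in obstacles]

def clear_of(point, obstacles):
    """True if the point is strictly outside every circular obstacle."""
    return all(np.hypot(point[0] - cx, point[1] - cy) > r
               for (cx, cy, r) in obstacles)

bound = 0.5
true_obs = [(2.0, 0.0, 1.0)]                  # one circular obstacle
planned = np.array([2.0, 1.6])                # plan avoids the augmented obstacle
assert clear_of(planned, augment(true_obs, bound))

# Any tracker position within `bound` of the plan then avoids the true
# obstacle, by the triangle inequality.
for ang in np.linspace(0.0, 2.0 * np.pi, 100):
    tracker = planned + bound * np.array([np.cos(ang), np.sin(ang)])
    assert clear_of(tracker, true_obs)
```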
Okay so how do we find this bound
and this corresponding controller?
Here I'm showing those
two trajectories again.
And you can think of these as essentially,
two different instantiations
of your true system.
One is this low dimensional system
that you're going to use for planning.
And this other high dimensional system
that you're going to use
to try to track this plan.
And so accordingly we call
them the tracking agent
and the planning agent.
For today I'm going to use a fairly simple
tracking system model,
since again we can only see in 3D,
but normally this would
be high-dimensional.
Here I'm just using a 3D Dubins car.
For the planning system I'm
going to use a 2D point that
directly controls its velocity.
Now what is going to be the
behavior of these two agents?
Well we know that the tracking agent
is going to try to do its best
to follow the planning agent.
And we have control over what
the tracking agent should do.
The planning agent is being controlled
by some sort of planning algorithm.
Whether it's model predictive control,
or rapidly exploring random trees,
A*, whatever it is.
And we don't know ahead of time,
'cause we don't know the environment.
What exactly the planning
algorithm will want to do.
So again, because we have
this missing information,
we're going to replace
that missing information
with a worst case assumption.
So we're going to assume
that the planning agent is
actively trying to evade us.
So we have set up a pursuit evasion game,
between the tracking agent
and the planning agent.
And given this game, what
we wanna try to solve
is what will be the
largest relative distance
that occurs over time.
So what will be this bound
around the planning agent
that we will stay within?
Note the largest relative
distance over time,
will be equivalent to the worst possible
tracking error over time.
Which will be equivalent to
our tracking error bound.
So this is what we wanna try to compute.
So because we care
about relative distance,
we care about relative
state, because we care about
relative state, we care
about relative dynamics.
So the first thing we
need to do is determine
the relative dynamics
between these two systems.
To do that, we set our
planning agent at the origin.
And we look at the dynamics
of the tracking agent relative
to the planning agent.
In this case, it's simply
taking the Dubins car dynamics
and subtracting out the point dynamics.
So now we've defined our dynamics.
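For this Dubins-car-versus-point example, subtracting the planner's velocity out of the car's motion gives relative dynamics like the following (a sketch with a hypothetical function signature; the actual FaSTrack derivation is more careful about the choice of relative coordinates):

```python
import numpy as np

def relative_dynamics(rel, u, b):
    """Relative state rel = (x_rel, y_rel, theta):
    3D Dubins-car tracker minus 2D point-mass planner.
    u = (v, omega) is the tracker control; b = (bx, by) is the
    planner's velocity (treated as adversarial in the analysis)."""
    x_rel, y_rel, theta = rel
    v, omega = u
    bx, by = b
    return np.array([v * np.cos(theta) - bx,
                     v * np.sin(theta) - by,
                     omega])
```

For instance, if both agents move right at the same speed, the planar relative state does not change; only the heading evolves with the turn rate.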
The next step is to
define our cost function.
So our cost function here,
is going to be pretty simple.
And it's simply going to be
the distance to the origin.
Where remember, the origin
and the relative state space
is where the planning agent is.
So this cost function is
distance to the planning agent.
And we want to determine,
what will be the maximum cost
of optimal trajectories of
the relative system over time?
The maximum cost here means
maximum relative distance
of the two agents.
So we're going to try to
compute the maximum cost
of our trajectories.
Now we know that the tracking agent here
is going to try to minimize cost.
Because it's trying to
track the planning agent.
It wants to reduce the relative distance.
And we are assuming in
the worst case situation,
the planning agent is actively
trying to get away from us.
It is trying to maximize this cost.
And so we introduce this min-max
over their two controllers.
And now we've set up this value function.
And we can plug this into our
reachability analysis toolbox.
And we end up propagating it over time
and it looks something like this.
So let's dig in to what
exactly this means.
So here, I'm gonna reset
to the cost function again.
And I'm taking three different
slices of this function.
And I'm showing them over here.
And you can think of these slices
as candidate tracking error bounds.
Here I'm showing the slices in 2D.
So this is x and y, with
a relative angle of zero.
So this is with the tracking
agent moving towards the right.
Here I'm showing what it looks like
in the full 3D space.
Where I care about distance in x and y,
I don't care about distance in theta.
And so it ends up looking like a cylinder.
Now, let's think about
what should be happening
inside of these candidate
tracking error bounds.
Well, as I mentioned,
we're at an angle of zero
for this slice, so the plane
is pointing to the right.
And we know that it has a fixed radius
of curvature, the tightest
that it can turn over time.
We also know that the planning
agent will be at the origin
for each of these times.
And I want to know, given a
candidate tracking error bound,
what are the set of states
where if I start inside of that set,
I am guaranteed to stay
inside of this bound.
And so you can just think about
placing this at different points.
If I put it over here on the right,
well I know that in order to come back
and try to track this planning agent,
I'm going to have to exit this bound.
And so this is not a viable
state that I can start within.
Instead I would have to
kind of push this back,
until it just barely stays
inside of that bound.
If I start on the edge, well this is fine
as long as my radius
of curvature is tighter
than that of the candidate error bound.
And if I start instead at this point,
well that's great because I
can go straight into the bound.
So the set of initial states
might look something like this.
So this is the set where if
I start inside of this set,
I am guaranteed to stay
inside of this error bound.
Now, as we go to smaller
and smaller candidate error
bounds, you can imagine this
gets tighter and tighter.
So we have to move this further away.
We might end up with something like this.
And eventually we get to the point
where even if we start over here,
we might have to exit the
bound in order to come back in.
And so it is not a viable set.
So now with this intuition,
let's watch this again.
So we propagate this
value function over time,
and you see that we're
pulling away from the right
as we expected.
And at a certain point,
this becomes invalid:
a tracking bound of 0.5
meters, in this case.
Once we've finished computing,
we realize that these two slices
that I kind of arbitrarily took,
are valid possible tracking error bounds,
where if I start inside of this set,
I'm guaranteed to stay inside here,
and same thing over here.
In fact, if this converges over time,
this becomes even stronger.
If I start inside of this red set,
I'm guaranteed to stay
inside of this red set
for all time.
You can see what it
actually looks like in 3D.
So this is if I were to
stack and rotate this
for different angles.
You end up with this cool twisty shape.
So, we've computed our value function.
And we can take various
slices of this value function
and receive a good candidate
tracking error bound.
Because we typically care
about minimizing error,
we're going to take the value
of the smallest invariant level set,
plus a little delta for noise,
and look at what that is.
So this is going to be the
error bound that we choose.
And this is what it looks like
in the full three-dimensional space.
However the planning agent
is not aware of 3D, it's
only aware of x and y.
So I need to project this
down onto the xy space.
And this is what the error bound
will look like to the planning agent.
The intuition behind here is that
the planning agent is
at the origin, remember?
And the tightest bound that you can get
with respect to this,
is one in which you are doing donuts
around the planning agent.
Okay, so we've gotten
our tracking system model
and planning system model.
We've computed the relative dynamics
that have given us our
tracking error bound
that we'll use for the
online planning algorithm.
And our tracking controller
that we will use for online control.
'Cause again remember,
we have a value function
whose gradients inform us
of how to stay inside
of this invariant set.
So what happens online?
Well I initialize where my environment is.
Let's say I haven't
sensed the obstacles yet.
First I figure out where I am,
the tracking system state.
Next I use that to initialize
where I want the planning
algorithm to start,
so that we are inside
of the bound initially.
I will then input that planning state
into whatever path or trajectory
planner you want to use.
Again, model predictive
control, A*, etcetera.
And I'll also input
locally sensed obstacles.
Where I'm going to augment
the locally sensed obstacles
by this error bound.
Then this path or trajectory planner
is going to do whatever it is that it does
to tell me where it wants to go next.
Based on this, I'm simply going to look
at the relative state
between me and where it is.
And use this to look
up what my precomputed
control should be to
try to chase this state.
And so I apply the tracking control.
And I may not be able to track it exactly
but I know I'll never deviate by more
than this error bound.
So I then just rinse and repeat.
So you see here, that
really what's happening
is I'm taking this
whatever path or trajectory
planner you wanna use,
and just adding two small steps.
One is augmenting the obstacles,
which we often do in robotics anyway.
And the second is looking
at the relative state,
and using that to look up
my precomputed controller.
And so this is very
lightweight architecture
that allows us to robustify
these simple planning algorithms.
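(A rough sketch of this online loop, with hypothetical stand-ins for the planner, the sensed obstacles, the gains, and the precomputed controller lookup:)

```python
import numpy as np

# A minimal sketch of the FaSTrack online loop.  The bound, gains,
# obstacle, and planner below are illustrative stubs only.
TEB = 0.3  # hypothetical precomputed tracking error bound (radius)

def augment(obstacles, margin):
    """Step 1: grow each sensed disc obstacle (cx, cy, r) by the bound."""
    return [(cx, cy, r + margin) for (cx, cy, r) in obstacles]

def plan_step(p_state, goal):
    """Stand-in planner: greedy step toward the goal.  In the real
    framework this is any planner (MPC, A*, RRT, ...) that avoids
    the augmented obstacles."""
    d = goal - p_state
    step = min(0.1, np.linalg.norm(d))
    return p_state + step * d / (np.linalg.norm(d) + 1e-9)

def tracking_control(rel_state):
    """Step 2: look up the control from the relative state.  The real
    lookup uses the gradient of the precomputed value function; here
    it's a simple proportional chase."""
    return -5.0 * rel_state

tracker = np.array([0.0, 0.0])   # where the tracking system is
planner = tracker.copy()         # initialize inside the bound
goal = np.array([1.0, 1.0])
obstacles = augment([(0.5, 0.55, 0.1)], TEB)  # sensed, then augmented

for _ in range(50):              # rinse and repeat
    planner = plan_step(planner, goal)
    u = tracking_control(tracker - planner)
    tracker = tracker + 0.1 * u  # apply the tracking control
```

With the real value-function controller the tracker provably never leaves the bound around the planner; the proportional chase here only illustrates the two added steps: augmenting obstacles and looking up control from the relative state.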
Here's the first simulation
that we did of it.
So here I have this 10 dimensional
near hover quad-copter model,
and I wanna get it from a start position
to an end position.
And there are hidden obstacles in here.
If I tried using standard
reachability analysis
on this using the techniques
that we use in our lab,
computing this would take the time
that the universe has existed
or something to actually
be able to compute.
And moreover it wouldn't
be able to take into account
the hidden obstacles there.
If we instead do decomposition
and split this into three sub systems,
well now we can compute it on
the order of minutes to hours.
But again we won't be
able to take into account
the obstacles ahead of time.
If instead we use that decomposition work
to precompute this bound
between the high-dimensional system
and a very simple 3D point model,
we can then plan using this simple model.
Using in this case RRT,
rapidly exploring random trees.
So it senses obstacles.
The green is where the
planning algorithm is.
And it'll re-plan whenever
it senses obstacles.
The blue is the 10D system
that is trying to chase this plan.
And the box shows the 10 dimensional
tracking bound projected
down onto 3D space.
And you see, even when the algorithm
does something unexpected like this
sharp 180 degree turn, because
we accounted ahead of time
for worst case planner behavior,
we are able to stay
safe with respect to it.
So this is happening in real time now.
Instead of taking minutes,
hours, the universe,
in order to compute.
So this was a really fun project
that started to get used
a lot in other labs.
Here's a project in Professor
Shankar Sastry's lab.
Where they're working on this AR/VR project
where humans tell robots
to go to different places
and go through hoops and stuff.
And they're using FaSTrack
as the underlying controller.
Here's a project with Professor
Marco Pavone at Stanford,
trying to move past
grid-based solutions
and using sum-of-squares
optimization instead,
to try to get to higher
dimensional systems.
And here's a project in my lab
trying to do a similar thing
where you train a classifier
to learn the tracking policy.
Rather than doing the grid
based solutions directly.
So those were those three labs.
There's another project in my lab
that I'm not on that's a follow-up on this
to do recursive safety and feasibility
using FaSTrack.
Additionally, there's
been a couple projects
in Professor Sanjit Seshia's lab.
Combining this with
formal methods techniques.
Just last week,
Professor Vasudevan at Michigan,
incorporated this successfully
into his autonomous driving work.
Which means it's gonna go in a real car.
Which is exciting.
There's a project at TU Munich combining this
with some of the Gaussian process work.
Professor Lydia Kavraki and Mark Moll
are interested in incorporating
this into their open
motion planning library.
And Gary Hewer and his
colleagues are using this
at the Naval Air Weapons Station at China Lake.
So I think that the draw to this
is that it's fairly simple:
FaSTrack allows us to robustify
fast and simple planning algorithms.
And it's very modular.
With respect to the planning
algorithm that you use.
And this is really great
for a lot of different applications.
One of the issues with
introducing this modularity
is that we can sometimes
end up with fairly
conservative solutions.
So for example, because
we're assuming worst case
planner behavior, I'm saying,
squeezing between these two obstacles
is not possible because
if the planning algorithm
does a sudden 180, I won't
be able to stay safe.
But generally the
planning algorithm is not
trying to actively mess with us.
It is just trying to get to its goal.
So there's a question of,
how can we reduce conservativeness,
while maintaining this
modularity with respect
to the planning algorithm?
So while I was working on this project,
on the side I was reading this book,
Thinking, Fast and Slow by Daniel Kahneman.
He's a Nobel Prize winner
and a famous psychologist.
And he suggested that
humans are essentially
a hybrid system.
That works with two modes.
The first is a fast thinking mode
that continuously scans the environment,
it's fast but error prone.
And it works automatically
and effortlessly.
And then we adaptively
choose when to switch
into the slow mode, that
we use only if necessary.
It takes effort and
it's slow but reliable.
And I asked, can we do a similar thing
regarding our planning
algorithms and planning models?
So I have still my high-dimensional
tracking system model,
but here I'm going to give it choices
over multiple planning models.
In the simplest case, let's assume
that it's the exact same planning model.
But just moving at different speeds.
So here, the slow mode
would be as if you were
trying to plan using me.
I am very slow.
And so it's very easy to keep up with me.
But I might navigate the
space at a slower pace.
And so the resulting error bound
will be tight, it'll take longer
to get through the environment.
If instead you're using Usain Bolt,
he's very fast at navigating
through your environment
but he's hard to keep up with.
And so you'll have this fast
but hard to maneuver error bound.
So what this means is now
we have multiple options
of what to choose when we are
planning using our system.
So I would like to
default to my fast mode.
'Cause why not?
But if my sample point is in
collision with the fast mode,
I can ask, well if I had
switched into my slower
and more careful mode, would
this still be in collision?
And so then you can choose
when to switch between these
two modes automatically.
So note that we are able
to navigate very quickly
and then slow down, to squeeze
through these two obstacles.
Note that switching safely is non-trivial.
So switching between these modes
also requires some pre-computation.
But given that, we have a proof
of guaranteed safety
throughout the entire process.
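(A minimal sketch of that mode-selection logic, with hypothetical bound sizes; as noted, the real framework also needs a precomputed safe-switching set, which this sketch omits:)

```python
# Sketch of the adaptive mode selection: default to the fast mode,
# and fall back to the slow mode (with its tighter precomputed error
# bound) only when the augmented sample is in collision.  Bound
# radii and obstacles are hypothetical numbers.
FAST = {"name": "fast", "teb": 0.4}
SLOW = {"name": "slow", "teb": 0.1}

def in_collision(point, obstacles, margin):
    """Disc obstacles (cx, cy, r), each grown by the mode's bound."""
    return any((point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= (r + margin) ** 2
               for (cx, cy, r) in obstacles)

def choose_mode(sample, obstacles):
    """Try the fast bound first; only switch down if needed.  (The
    real framework also checks a precomputed safe-transition set
    before switching, which this sketch omits.)"""
    if not in_collision(sample, obstacles, FAST["teb"]):
        return FAST
    if not in_collision(sample, obstacles, SLOW["teb"]):
        return SLOW
    return None  # reject the sample entirely

# A narrow gap between two obstacles: only the slow mode fits through.
obstacles = [(0.0, 0.35, 0.2), (0.0, -0.35, 0.2)]
assert choose_mode((0.0, 0.0), obstacles)["name"] == "slow"
assert choose_mode((2.0, 0.0), obstacles)["name"] == "fast"
```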
Here's what it looks like in practice.
So the same set up with these obstacles
and trying to reach a goal.
But here it has multiple bounds
that it can adaptively choose between.
And when it realizes
that it's in collision,
it tries to sample using a slower mode.
And if it's successful,
then it goes ahead and
switches into that slow mode.
So it's automatically deciding for itself
when it needs to squeeze
in between obstacles
versus when it can relax
and go into a larger bound.
And again, this is happening in real time.
And also we see this in hardware.
Here we're hanging some Chinese lanterns
across the room and planning
using our quadcopter.
And you see the error bounds here,
and once it senses the obstacles
it switches into this slower mode
that has this tighter error bound
in order to squeeze between the obstacles.
Okay, so that is safe
decision making in real time.
So that is our FaSTrack framework
and we're now going to
use this to try to think
about how to apply this to
more realistic environments.
Because you're not always
just gonna have balls
hanging in the space that
you need to navigate around.
Real world environments
are much more complicated.
And so moving beyond static obstacles,
we wanted to ask,
what are some of the most
challenging obstacles to deal with?
And we found that human pedestrians
fall into that category.
So how do we predict over humans?
If I introduce a human here
and I know exactly how this
human will move over time,
well we know from before
that I can just incorporate
this directly into my
reachability analysis.
So I can incorporate their movement
and get this reach-avoid set.
However, it's hard to
know exactly how humans
will move over time.
So traditionally in my lab,
the way of handling this
would be to assume that
humans might go anywhere
or do anything.
And so you have to just take into account
any possible behavior.
Which ends up looking more like this.
And so you build up what is called
their forward reachable set.
And if I now plan using this
growing cost function instead,
I end up with something like this.
Which provides me with very
conservative reach-avoid sets.
And so, is there a way that we can balance
between making really strong assumptions
over what humans do?
Versus just assuming that
humans are totally crazy
and might do anything at any time.
So again, we wanted to draw inspiration
from how humans reason over other humans.
And so we approached
Professor Anca Dragan's
Human-Robot Interaction Lab.
And she informed us of some work
in cognitive science and AI
that suggests that humans,
again, use very simple models
of other pedestrians in this space.
And reason over where they might go.
Assuming that they have intent
that they wanna go towards.
But that this intent is noisy.
And so you have this prediction
that has a distribution over
where they're going to go.
You can think of this for yourself.
If you're walking down the sidewalk
you might not be trying to reason
over every single possible trajectory
that a person down the
sidewalk could take.
You might instead employ simple models
like humans tend to move in straight lines
with somewhat fixed velocity
when they're on sidewalks.
And so what this means in math,
is we're going to use a noisily
rational Boltzmann model.
So the probability of a particular action
of the human at a given time
is going to be based on
the state of the human
that we've seen so far.
This theta term is going to dictate
learned human objectives,
and this beta term controls the variance
of our distribution.
And we're going to say that
humans are exponentially
more likely to take actions
that lead them towards their objectives.
And so we propagate this
through their dynamics
and we end up with a
probability distribution
of where we believe
they will be over time.
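(Here is an illustrative sketch of that pipeline on a small grid: a Boltzmann action distribution with a hypothetical distance-based objective standing in for theta, propagated through simple dynamics into an occupancy distribution:)

```python
import numpy as np

# Sketch of a noisily rational (Boltzmann) pedestrian model on a
# small grid.  The distance-to-goal objective and the grid size are
# illustrative assumptions, not the learned objectives from the talk.
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]

def action_probs(state, goal, beta):
    # Q-value of each action: negative distance to goal afterwards;
    # actions toward the goal are exponentially more likely.
    q = np.array([-np.hypot(state[0] + dx - goal[0], state[1] + dy - goal[1])
                  for dx, dy in ACTIONS])
    p = np.exp(beta * q)
    return p / p.sum()

def predict(state, goal, beta, horizon, size=11):
    """Propagate the action distribution through the dynamics to get
    an occupancy distribution over the grid after `horizon` steps."""
    dist = np.zeros((size, size))
    dist[state] = 1.0
    for _ in range(horizon):
        nxt = np.zeros_like(dist)
        for (x, y), p in np.ndenumerate(dist):
            if p == 0.0:
                continue
            for (dx, dy), pa in zip(ACTIONS, action_probs((x, y), goal, beta)):
                nxt[min(max(x + dx, 0), size - 1),
                    min(max(y + dy, 0), size - 1)] += p * pa
        dist = nxt
    return dist

# High beta: a confident, peaked prediction; beta near zero: the
# distribution spreads toward the full forward reachable set.
peaked = predict((5, 5), goal=(9, 5), beta=10.0, horizon=3)
fuzzy = predict((5, 5), goal=(9, 5), beta=0.0, horizon=3)
```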
Here's what this is going
to look like in hardware.
When humans act as we expect them to act.
So here, I am the human
being subject here.
And I'm going to move across this space.
There's also going to be a quadcopter
moving across the space.
Here's a top down 2D visual of that.
The robot's using FaSTrack
to get to its goal.
And then you see the distribution over me
that's predicting me and assuming that
there's a modeled human goal.
That it's literally just a point
in the state space in this case.
And now if we got this totally right,
if humans are nothing more than points
that are just trying to
move towards other points.
Then we're able to use this distribution
and predict humans very well.
So in this case when humans
are perfectly rational,
it's great.
However, clearly this will
not always be correct.
Our model may be incorrect
at different times
for various reasons.
One potential reason is an
unmodeled obstacle in the space.
So the robot may not be
aware of this obstacle,
but it's between the human and her goal.
And so she's going to
have to deviate around it.
However, this is going to
seem irrational to the robot.
And so it's always going to assume
that the most rational
thing she's going to do
at any point, is make
a quick 90 degree turn
and go straight towards her goal.
And so when this model is incorrect,
we end up making bad assumptions.
And bad predictions.
And we may end up in a collision.
So how again do humans deal with this?
We're pretty good at not just constantly
colliding with each other.
Even when people start acting erratically
and maybe not matching with
the models that we expect.
We assert that humans
do a Bayesian update
over the variance of this distribution,
based on their confidence in how well
the model matches what they expected.
So you might, in the extreme,
revert back to this full
forward reachable set.
Again think about this on the sidewalk.
If people are acting normally,
you might have pretty tight distributions
over where you believe
that they will be going.
Once somebody starts
acting somewhat erratically
on the sidewalk, you become
less certain about their future actions
and you may become more conservative
in your trajectory around them.
So in the math,
what this means is that
we are going to reason
over this beta term
that controls the variance
of our distribution.
When we set it to something really low,
then it's going to revert back
to the forward reachable set.
When we set it to something really high,
then we're gonna have a
very tight distribution
over what we believe.
And so we are going to maintain
a belief over this beta,
that we are going to update
at every instant in time,
based on asking: how well does
the human action that we saw
match what we expected to see?
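(A minimal sketch of that confidence update, maintaining a belief over a hypothetical discrete set of beta values; the Q-values are illustrative assumptions:)

```python
import numpy as np

# Sketch of the Bayesian confidence update: maintain a belief over a
# discrete set of candidate beta values and reweight it by how likely
# each beta made the action we actually observed.
BETAS = np.array([0.1, 1.0, 10.0])   # low beta = low model confidence

def boltzmann(q, beta):
    p = np.exp(beta * (q - q.max()))  # subtract max for stability
    return p / p.sum()

def update_belief(belief, q, observed_action):
    likelihood = np.array([boltzmann(q, b)[observed_action] for b in BETAS])
    post = belief * likelihood
    return post / post.sum()

q = np.array([0.0, -1.0, -2.0])      # action 0 is the "rational" one
belief = np.ones(3) / 3              # start uninformed

# The human keeps taking the rational action: mass moves to high
# beta, so predictions tighten.
for _ in range(5):
    belief = update_belief(belief, q, observed_action=0)
assert belief.argmax() == 2

# The human takes a seemingly irrational action: mass shifts back
# toward lower beta, so predictions spread out again.
belief = update_belief(belief, q, observed_action=2)
assert belief.argmax() != 2
```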
So this is what it looks like.
So again, same exact scenario
but now we're using this
adaptive beta reasoning.
So as I move around this obstacle,
you see the distribution spreads out.
The robot takes a
conservative route around me,
and then as I start going back on track,
it tightens up that distribution again.
So the point here is that
we can't always find a perfect
model, especially of humans.
And we may not even want
to use a perfect model.
Because we wanna use simple enough models
that we can reason over very quickly.
Bayesian reasoning allows us
to use structure when it exists,
and to revert to cautious predictions
when it does not.
We've also found that
this reasoning works well
when we try to scale
up to multiple people.
So here, let's assume
that we have two people.
Each trying to go to goals
that we are currently just
assuming in this space.
You would really like to reason
over their joint state space.
So understanding that
humans react to other humans
and don't walk through each other.
However, this becomes again,
very computationally challenging
as you introduce more and
more humans into your space.
If instead, you do the dumb thing
of just assuming humans
act totally independently,
are not aware of each other, are going to
walk through each other,
then you can very quickly
reason over the prediction
of where they're going to go.
When they don't in fact
walk through each other,
you simply become more confused
about what they're doing.
And act more conservatively.
Until they resume the
direction they're going in.
And then you can tighten
that distribution again.
So additionally on the multi-robot side,
we wanted to scale up here as well.
We worked with some previous work
on sequential trajectory planning.
That assumes that you have a priority
ordering over these robots.
And each robot has to plan with respect
to the higher priority robots.
Now previously you had to
compute this all offline.
But now using FaSTrack you can generate
very quick and easy trajectories,
with a guarantee that the robots
will stay within a bound
relative to them.
And so when we actually plan,
we can update this in real time
as we learn more about the human.
So this is what it looks like.
Here, Himay is going to go to his goal.
I'm going to go to my goal.
We're gonna have to kinda
move around each other.
And we have two robots
that are also planning
in real time around us.
First let's just look at what
this looks like in practice.
In the real world.
Himay insisted on dancing the whole time,
I don't know why.
[laughs]
But there we're able to get to our goals
and then the quadcopters
move towards those goals.
Now let's look at this happening
in the top-down view.
So here's me moving towards my goal.
Here's Himay moving towards his.
The robots are going to move accordingly.
And you'll see that it
becomes more confident
about what I'm going to do.
Never really becomes
confident about Himay.
But when we move around each other,
the confidence will degrade
and one of the robots will
actually have to stop and wait.
So you see, here we approach each other.
Confidence degrades as we
take different actions.
This robot had to stop and
pause before the end again.
So that's safety in
human-centered robotics.
And so these are three of
the approaches I've taken
all trying to work again,
towards this goal of safe,
scalable, and adaptive autonomy.
Now this is still my goal
as I look towards the future
and I'm trying to determine
what path I wanna take.
And I would like to go over
a few of the directions
that I'm excited about.
First, there's a lot of
really interesting work
on developing performance-based controllers
and trajectory optimization
that don't necessarily guarantee safety,
but that are really scalable and computable
for really fast systems.
So trajectory optimization,
control barrier functions,
and reinforcement learning are very good
at developing control policies
for these high-dimensional
systems that work well in practice,
but may not guarantee safety at all times.
And so I'm really interested in asking,
can we blend theory and
tools for high-dimensional
trajectory optimization with safety?
And if people are interested
in talking more about this
at the reception I would be really eager
to talk about some ideas for that.
So that's scalable safety analysis
for learning and adaptation.
Next, I'd like to talk
about cognitive science
for autonomous systems.
So you'll note that in
a lot of these projects,
I relied on understanding human models
of decision making, in order
to inform how the autonomous
system should balance speed and safety.
And I think that it's
very interesting to study
cognitive science because
originally they assumed that
humans were perfectly rational agents
that have all the time in
the world to reason.
Then there's this discovery that no,
in fact humans employ this wide variety
of biases and heuristics
that work very well
in practice but may not work all the time.
And this is because humans have
low computational abilities.
And so we have some limits
on our cognition.
And now there's an understanding
that maybe there's something
in between, where humans
are trying to do some sort
of principled computation,
but have limited cognitive
resources to do it,
and are subject to computational constraints.
And this leads to a lot of
the behaviors that you see
in these biases and heuristics.
And I think there's a similar thing
in autonomous systems, where
we have some approaches
that assume the autonomous system
has all the time in the world
to optimize.
We have other approaches that use fast
and simple heuristics and robotics
that work pretty well in
practice but may not be safe.
And there's a lot of
interesting areas to explore,
that ask: what happens when
we try to do something principled
but have limited computational power?
Can we use our understanding
of how humans do this
to inform how we should
design systems to do the same.
I think studying cognitive
science can also help us
with understanding human behavior.
So on the cognitive
planning model spectrum,
we can start with that
assumption from before
that humans are basically just blocks
moving down the sidewalk
with a constant velocity.
On the other extreme,
you could try to actually
map out every possible neuron
on the human and
understand them completely.
And in between we have
things like assuming humans
are greedily rational or boundedly rational,
and models where the human is
aware that the robot is aware
of the human, etcetera.
And I think there's a very
interesting question to ask here
of how complicated a model is
necessary for a given task?
And what do we lose by simplifying
to get more computationally
efficient predictions?
I think also that cognitive
science and autonomous systems
can help each other.
So here's a project
we're just starting
with the psychology department
at Berkeley, with Jack Gallant:
if we put humans
inside of a driving simulator,
and we measure their pupil size,
eye tracking, and heart rate,
we can get an estimate
of their cognitive load.
As they perform different
tasks and reason about
different driving scenarios.
They also have an fMRI machine
that's going to measure
different parts of the
human brain that are active
at this time.
And we're interested in
looking at the relative
cognitive load while performing
different types of planning tasks.
To try to inform lightweight architectures
for our autonomous systems,
to do simple tasks.
So that's cognitive science
for autonomous systems.
Lastly, I'd like to talk about safety
throughout and beyond robotics.
So here I introduced a lot
of potential applications
that I'm very interested in.
And in some of these applications,
safety is kind of what
we've been talking about.
Which is trying to make sure
that your autonomous system
doesn't intersect with any obstacles.
But in some cases,
safety means something a bit more nuanced.
So in the case of,
for example, surgical robotics
or human-robot interaction,
what safety means may not boil down
to just avoiding collisions.
And so I'm very interested in exploring
what does safety mean in
these different contexts?
How do we formally define
this, and how do we
try to analyze over it?
I also think that we
can think about safety
beyond just these autonomous
system applications.
For example, Professor
Sankaranarayanan's project
on the artificial pancreas,
I think is a really great motivation
for how to try to maintain
safe insulin levels
in diabetic patients.
And there's so many more
different biological systems
that we could try to explore.
Such as how drugs get
digested throughout the body.
Such as exploring the
progression of cancer
over time and applying chemotherapy drugs
to keep patients within safe levels.
I also think that it's
interesting in broader
context of understanding
that humans are different.
And different humans may have different
requirements for safety.
There's this one project that was recently
in the startup challenge here at Boulder,
from several undergrad students
that designed this walker.
Where, if you lean down on the walker
and you're in an unstable position,
it gives you vibrational
feedback to stand back up.
Similarly, same thing
if you lean too far out.
And these kinds of assisted devices
that help ensure that you stay
within this safe trajectory
zone of your body, something that I think
would be very valuable for
the elderly and disabled.
And then lastly,
I think we could go
even beyond this.
I know that there's a lot of
strong research in Colorado
exploring things like the
propagation of forest fires,
the reachable set of those.
And similarly, with exploring for example
glaciers over time, and
what controls you can input
into this to observe the changes
in those trajectories over time.
So that is safety throughout
and beyond robotics.
With that, that is my future
work, my current work.
I'd like to really quick give a shoutout
to all my collaborators
that I've published with.
This is by no means only my work.
And I had some really fantastic people
that I've gotten to work
with over the years.
And with that, if you have
any ideas or questions
please feel free to ping me.
And here are our repositories on GitHub.
Thank you.
[audience applauds]
We have plenty of time
for questions [mumbles].
[laughs]
Yes so we've [mumbles].
So one thing I wanted to ask
about your FaSTrack algorithm:
a lot of it seems like the kind
of tube-based tracking that already exists.
So what kind of additional
ideas can you give
beyond, let's say, the L1 controller
that Jonathan How and.
Yeah absolutely.
So you're right that here
we just made our cost functions
fairly simple,
which is strictly,
in this case, the L2 distance
to the planning agent.
In general this cost function
can be more general.
Depending on, for example, if you not only
want your system to stay within
a certain physical space,
but also its orientation or velocity
must stay within certain bounds.
So for example, if you're
planning using a model
predictive control
algorithm that has a notion
of orientation, you may want to ensure
that your high-dimensional system
stays within that particular orientation
or within that particular velocity.
We can additionally
look at trying to relax
these assumptions, so
on one extreme you have,
we don't know what the
planning algorithm might do
at any time.
On the other extreme, we
can do things like defining
certain motion primitives that fix exactly
what the planning algorithm would do
and build them up together.
I think there's a lot of interesting space
in between of asking, how do
we break up the trajectory
space of what we assume that
the planning algorithm may do?
And in fact, your student
who is now at Berkeley,
is actively working on a project on this.
For the goals of humans and sort of like a
[movement drowns out sound]
based on humans not going
towards their estimated goal,
how are you judging those goals?
And that the goal is
actually, something else?
Right, so what if the objective
that you're assuming about
the human is incorrect?
So for example, I have
assumed that that human, me,
wanted to move towards
that particular fixed point
in the state space.
And that's where our common goal is.
That was your question?
What if that's wrong?
Yes, how would you update it I guess.
Yeah so there's a lot of work in AI
on trying to update the
objective of the human.
And so as you learn that
the human is actually
taking directions towards,
for example, unmodeled goals.
Incorporating that into
your hypothesis space
over what you might reason
that the human might do.
And so in our project, we didn't do this.
We just kind of assumed that
you had some simple model.
And that you were just going
to reason over your confusion
when it didn't match with
this model and this objective.
But this could very easily be joined
with work that reasons
over how to add new goals
to your hypothesis space.
And how to reason over
which ones are most likely.
And in fact the person sitting next to you
is Zach Sunberg, would
be a really useful person
to talk to about exploring
when there are new goals
in this space that we've
talked about for this.
I have a question about
splitting the dynamics.
Have you looked at, is there a limitation
on breaking up the system
dynamics like that,
there's some information
that you end up throwing away?
Yeah, absolutely.
And we actually did experience this
in the first time that I worked on this.
I split up these dynamics
and then I thought
that something was wrong
because my reachable set
wasn't growing at all.
And it turned out it was
because I split it up
in such a way that these new disturbances
had more power over the
system than my control input.
And so you're absolutely right
that we need to think about
principled ways of
decomposing these systems.
Such that we try to
reduce the number of edges
that we're breaking.
And also such that the missing information
is kind of further down
the integrator chains.
So it's better to have
something like acceleration
be a disturbance.
Than something like position,
where you're allowing
this adversarial element
to physically move your
system away from the goal.
In general, this is not an area
that I've actively explored.
But it's something that I
hope to explore in the future.
In the paper, there is some
work that we did on kind of
decomposing the grid over the disturbances
that we're assuming, in order to regain
some of this space that
we're losing by assuming
this adversarial nature.
But then there's this trade off
between how much are you
willing to break up your grid
and break up the
trajectories that need to go
from one part of the grid to another.
I'd be happy to talk more
offline if you're interested.
But I'm really curious about talking
with some of the networking people here
who do these kinds of
rigorous decompositions,
to see how we can inform this work.
Thanks very much.
So at that part, you
used the low-fidelity model
to generate the nominal trajectory.
So then you said you
compute these safety regions,
given the high-fidelity model,
such that you always
guarantee you are safe
and you can track this point.
But then your assumption was
you would get the worst-case
scenario, that it can
actually go in any direction.
My question is, at that point,
how strong is the low-fidelity dynamic?
It actually cannot go just anywhere,
because it should respect
the low-fidelity dynamics.
Yeah you're absolutely right,
I must have misspoken.
It may do anything
within its control space.
Yeah, okay, the low-fidelity dynamics.
Right, right.
But it still obeys the
model of its system.
So in principle you would assume then
that you need to use planning algorithms
that plan using this low-fidelity model.
In practice, something like
rapidly exploring random trees,
you can kind of interpret
this path that's generated
as one that could be generated
by a low fidelity model.
But you're absolutely right
that we do place assumptions
over the low-dimensional system.
And use those assumptions
when actually planning
using the planning model.
Yeah, so the other question
is going from high-fidelity
to low-fidelity, so you may
not use a control strategy
for the low fidelity.
But may exist for the high-fidelity
because of the complex dynamics.
So do you do any sort of [mumbles].
Up close is there high-fidelity
and low-fidelity [mumbles].
Based on the low-fidelity?
Yeah that's a great question.
That's something that I'm
really interested in exploring.
So to back off, in the
cases where you just care
about avoiding an obstacle in a 3D space,
just using a model of that
only reasons about x, y, and z
is perfectly acceptable.
If you want your system
to instead do for example,
parkour with cars,
then you need to have
a better understanding
of how you need to plan.
Like let's say I wanted to do a backflip
with my quadcopter,
doing a backflip by just
reasoning over x, y, and z
is not going to be sufficient.
I need to have a finer grain model
that can actually generate
these trajectories.
And the answer is no;
so far, the planning model,
the lower-fidelity model,
has been user-defined.
I think that there's a really interesting
area to explore of, given the objectives
of your system, and the
environment that you have,
and the configuration
space that you care about,
what is the lowest
dimensional planning model
that you can use that still
maintains the trajectories
and the behavior that you
want to be able to pursue.
But so far we haven't done that.
So since you're mentioning
low-dimensional things,
what's the highest-dimensional
models that your tools
are able to work with?
You know, without any additional.
Sure, so we haven't tried to.
Oh without any decomposition?
Right.
So just using the grid-based
approach: we have a MATLAB toolbox
and we also have a C++ toolbox.
With the MATLAB toolbox, it's like four.
Four-dimensional systems.
With the C++,
we can get up to five.
So we can handle exponentially
more grid points.
But only by one dimension.
So that, when I joined
the lab and I was excited
about applying reachability
to all sorts of different systems,
I very quickly learned this fact.
And that was a very strong motivator
for doing that decomposition work.
And how many grid divisions
are you usually doing per state?
It depends on how high the fidelity is,
as we get to four-dimensional
systems, for example.
In the kind of limited MATLAB
setting, I would be doing
about 200 points in each dimension.
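(As a back-of-the-envelope illustration of that exponential growth, with the byte counts purely indicative:)

```python
# Rough arithmetic behind "exponentially more grid points": with n
# points per dimension, a d-dimensional grid stores n**d values, so
# each added dimension multiplies the cost by n.
n = 200                      # points per dimension, as above

grid_4d = n ** 4             # 1,600,000,000 grid points
grid_5d = n ** 5             # 320,000,000,000 grid points

# At 8 bytes per double, storing the value function alone takes:
gb_4d = grid_4d * 8 / 1e9    # 12.8 GB
gb_5d = grid_5d * 8 / 1e9    # 2560.0 GB
```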
So I'm gonna interrupt,
that's all the time we have.
But we have another half hour after this
where we can ask informal questions
so please feel free to
stick around for that.
But let's thank our speaker again.
[audience applauds]
And please feel free to contact me
if you have any follow up questions.
