We're ready.
>> Okay.
>> Welcome everybody.
>> So I'm Peter Carr, and this is the last Zoom seminar in the Brooklyn Quant Experience seminar series.
>> And today we're very pleased to have Professor Igor Cialenco from Illinois Institute of Technology.
Igor got his PhD in applied math from the University of Southern California, after which he joined as a permanent faculty member in the Department of Applied Mathematics at IIT, where he is now. His primary research interests are in math finance, stochastic control, and statistical inference for SPDEs. He currently serves as elected Program Director for the SIAM Activity Group on Financial Mathematics and Engineering.
And I understand you guys are doing some Zoom talks; perhaps you could tell us a bit about that. He's also managing editor for the International Journal of Theoretical and Applied Finance and on the editorial boards of several scientific journals, including the SIAM Journal on Financial Mathematics and Applied Mathematical Finance.
>> He and I both attended a conference at Stony Brook earlier this year. Unfortunately, I missed his talk, and I felt really bad about it.
>> So I invited Igor back.
>> I originally planned for him to travel to Brooklyn, but of course the coronavirus prevented that. So he's graciously agreed to present via Zoom.
>> Igor, please take it away.
>> Yes.
>> Peter, thank you very much, first of all, for the invitation.
>> In the fall I was really looking forward to coming and visiting, but it didn't happen. Still, I'm super excited to give the talk. And thanks for, you know, not bailing out given these realities.
>> Everybody should mute except for Igor and me, and I'll make comments during your talk. Just unmute to comment. Is that going to be okay with you, Igor?
>> Yep, sure, sure.
>> And I mean, I don't know how you want to run it. You can interrupt with questions at any time, Peter. Yeah.
>> I think that if you don't mind, that'd be great.
>> I mean, I'll try to keep it less formal. And yes, as Peter said, this is related to the talk I gave at Stony Brook, and I decided to give more or less the same talk, although half of the talk will actually contain some new results that we obtained recently.
In the meantime, we've been working on this. I will talk about adaptive robust stochastic control, of course with applications to finance.
>> This is joint work with Tomasz Bielecki and Tao Chen.
>> Okay?
>> The motivation actually starts some good years back with the question that I'm circling right here, the last bullet point. We had been working on a problem of model selection, trying to minimize the hedging error for some models. At that time, we'd been working on credit risk.
>> And we wanted to use what was available at that time, and the state of the art was robust finance. When we started working on that, we realized that something very fundamental is missing in the whole stochastic control theme with model uncertainty. The control setup itself is of course correct and makes sense, but through time there is no reduction of uncertainty.
>> So the model uncertainty, or the uncertainty about how the model behaves, is not incorporated in any of these classical approaches, which I'll mention today.
>> Secondly, if you start implementing the robust framework, the famous robust methodology, you will see that it's quite conservative.
>> Conservative to the point that, for the problem that I discuss today as an application, optimal investment, it is literally telling you: put everything in the bank account. It's clear why, and I hope that will also be clear from what we discuss.
Yes, these problems in general, stochastic control with Knightian-type uncertainty, are important not only to finance, but of course to engineering, management science, economics.
>> You name it, and you'll see that the problem is formulated quite generically. So that's the motivation for what we're going to do.
We will propose a new method that has two keywords: one is adaptive, one is robust. It solves Markovian-type control problems subject to Knightian, or model, uncertainty. We'll apply it to a portfolio selection problem. I'll talk about one of these applications and mention what other applications are viable within this framework.
What is important is that we develop an algorithm. And when I say an algorithm: we have the theoretical part, which is the Bellman principle of optimality, or dynamic programming principle, but it also allows us to solve these problems numerically, in feasible time given current technology. So initially we started with classical-type problems which are so-called time consistent, and there is a separate discussion of what time consistency is.
>> More recently, and this is what I was mentioning, we also deal with time-inconsistent problems. For a time-inconsistent problem, think mean-variance type of optimization.
>> Okay, so what has been done regarding this?
>> We have a couple of papers now. There are several other papers, not by us, but people are picking up and working on this theory. I will focus on the first one and, as I already mentioned, on the mean-variance portfolio selection problem.
>> But in the framework of adaptive robust that we introduced in our original paper.
>> Okay.
>> So the talk will be structured quite in the usual way. I will start with some preliminaries and review the existing methods.
>> And I believe this is very important: since this is a completely new framework, to place it appropriately relative to what is already done in classical methods and compare it, at least heuristically, with where we are. Then I will formulate, as precisely as I can, our formulation of these problems. Then I'll show the main result, which is the DPP for this type of control problems. And eventually I'll cover a simple example.
>> Well, simple to the point that it's easy and well understood.
>> On the other hand, it's not a trivial extension of the typical one-step mean-variance criterion, but rather in a stochastic control framework.
>> Alright, so let me start with notation. We will, of course, have a probability space on which everything is defined, and a finite time horizon.
>> Everything will be in a discrete-time setup. The first object that we will work with, and which will go through the entire talk, is this process Z, which is the main stochastic driver.
>> So all the randomness in the system is driven by Z_t, and it is fully observed by the controller, by us. Think of it as stock prices, log-returns, interest rates, or any stochastic factor; it could be multidimensional, that's not a problem. The randomness is driven by Z, and the filtration is generated by this process.
>> Next, we'll assume that, although Z is observed, we will not know precisely the parameter that describes it; we will not exactly know the law of Z, which is denoted by theta star. Rather, we have a family of laws.
>> So we have a set of probability laws that describe the distribution of this process Z, the behavior of Z at each time t.
>> And they are parameterized by a parameter theta. So we take the usual statistical setup, where we assume that the set of probability measures is a parameterization of the process.
>> For now we're assuming that this is a finite-dimensional parameter in R^d, so it's a parametric setup, but it can also work in a nonparametric setup, and we know how to deal with that too.
>> Okay, so now—
>> Just a clarifying question.
>> The set of plausible laws contains the true law. Is that right?
>> That's right, that's right. We don't know what this theta star is, but the set contains all of these plausible laws. So, for example, mean and variance both belonging to some intervals; that would be one scenario that would work.
>> Okay. Thank you.
>> You could always take the whole set to be the entire R^d. But mathematically, technically speaking, you want this set to be at least compact, to avoid any technicalities, because otherwise you will not have the minimum or the maximum.
So that's the only reason. On the other hand, you also want to have a smaller set, else things will blow up.
>> Okay?
>> So the problem is the classical stochastic control problem that we have in mind, that everybody usually has in mind when there is no model uncertainty: assume we know the parameter.
So what kind of problems are we trying to solve? Problems of the form where you have a control, which is a stochastic process phi; A will denote the set of admissible controls, of course adapted to the information. X is the controlled process, so X will depend on Z and on phi. Z is the driver, phi is the control. In our language, the easiest way to think of it: phi is a trading strategy, and X may actually be the portfolio value.
>> If we knew the parameter, then we would just take a loss function.
>> Any type of loss function: it may be a risk measure, it might be the negative of a utility. We measure somehow the loss over the whole thing, not necessarily of the terminal value only, and that will be a number. Then take the expectation, average it out, and minimize: find the best control for which you have the smallest loss on average. So it's inf over phi of the expected loss. The application in finance, in this general setup, as I said, is the portfolio selection problem.
That's what is typically taught in classes and probably, practically speaking, the most used approach. But not only that: also optimal liquidation, which is, by the way, where optimal control came into play in the last 15 years or so. Even older is pricing and hedging, which in many ways one wants to do either through utility or indifference pricing, or through minimizing the hedging error.
>> Contract theory is more complicated, when you have inf-sup and so on.
>> So there are many more problems from finance related to this, but those would be the bullet points where you would find this generic setup.
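To make the generic setup concrete, here is a minimal backward-induction sketch of inf over phi of E[L(X)] in discrete time. All ingredients (the two-point driver Z with known probability p, the additive dynamics, the quadratic loss, the control grid) are hypothetical toy choices for illustration, not from the talk:

```python
# Toy ingredients (all hypothetical): driver Z_t is +1 or -1 with probability
# p (known here -- no model uncertainty yet), wealth dynamics
# X_{t+1} = X_t + phi_t * Z_{t+1}, and terminal loss L = (X_T - target)^2.
p, T, target = 0.6, 3, 2.0
controls = [0.0, 0.5, 1.0]           # admissible set A

def value(t, x):
    """Bellman recursion: V_t(x) = min over phi of E[ V_{t+1}(X_{t+1}) ]."""
    if t == T:
        return (x - target) ** 2     # terminal loss
    return min(p * value(t + 1, x + phi) + (1 - p) * value(t + 1, x - phi)
               for phi in controls)

v0 = value(0, 0.0)
print(round(v0, 4))
```

With the parameter known, this is exactly the classical dynamic programming principle the talk refers to; the model-uncertainty versions below modify the expectation step.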
>> I want to move on.
>> A question for you.
>> So the control is the process phi, right? There could be other sources of risk in the evolution of X beyond what drives Z, I assume, right?
>> Not in our setup, but in principle, yes. In our setup we are assuming that all randomness comes from Z, for simplicity. But in principle you can enlarge this: take a larger space that drives the randomness.
>> So phi could be, let us say, some unknown deterministic function of X, Z and t, for instance?
>> Correct, correct, correct.
>> Yes, exactly.
>> Exactly. Exactly. So, right, the problems I'll formulate in a couple of slides are precisely like you said.
>> So in the problems that we solve, the initial one is maximizing the expected utility of terminal wealth. This X_T is the terminal wealth, phi is the self-financing trading strategy, U is a utility function. Classically, you take the sup of the expected utility across all possible self-financing trading strategies, maybe with some constraints. And we initially developed our theory for this type of problem; not exactly this problem, but a little more complicated, adapted to model uncertainty. That would be one of the problems that we worked out. This one is time consistent, which means it will satisfy a nice Bellman type of equation.
>> What we did lately, we worked on a version of mean-variance portfolio selection.
>> So you again maximize, but we are maximizing the expectation of the terminal wealth minus the variance times gamma, this risk-aversion type of parameter. Mathematically, this is a much harder problem to solve dynamically, because it's not so-called time consistent. So the decisions, the controls you make through time, do not remain the same; you change your mind as time goes.
>> Could you explain time consistent, please? I mean, the term time consistent.
>> Let's take time consistent and time inconsistent as something vague for now, and I'll define them properly and spend time explaining more what this time consistency is. But roughly speaking, the strong form of time consistency means that I'm solving the problem today, which means I know how to behave through time from today till the end. And no matter what happens in time, I'm sticking with the decisions that I made at time 0.
>> However, in a time-inconsistent problem, what happens?
>> Mean-variance is one of those: when you arrive at time one, the next period, you solve the stochastic control problem again and you start having a different type of behavior, not consistent with what you had been planning to do one step ahead, and then you have to do something else.
>> Not that there is anything actually wrong with that philosophically.
>> We wrote with Tomasz a couple of papers on different forms of time consistency and why people should not be scared. But when you try to solve the problem numerically, if this is not satisfied, then you are out of luck. And I will have equations on that. But that would be the description of the time consistency and inconsistency issue, and why people are in fact spending lifetimes discussing how to solve this.
>> Okay? So now, what happens with model uncertainty? With model uncertainty, here is the thing.
>> If you don't know theta, this theta over here is not known, how do you incorporate that into the stochastic control problem?
>> The way to do it, and it may sound very simple, the most direct way, is this: instead of solving inf over phi of the expectation of the loss under the theta that I know, I'll look at all possible models. That means first I will take the sup across all possible models, which means the worst model, the model that produces the worst loss on average. And then I'll find the best control.
>> So it's an inf-sup problem. That means I am looking for the best strategy phi across all possible models, thinking of the worst-case scenario across the models. And this Theta is a set that we fix a priori.
>> I think, the way I am explaining it, it makes perfect sense.
>> It started in the 90's: the so-called min-max approach of Gilboa and Schmeidler, and then Hansen and Sargent developed their robust control. The keyword here is robust. This approach is robust control, to the point that both Hansen and Sargent picked up Nobel Prizes around this type of problems in economics.
>> So that's popular. The only thing, which can already be understood intuitively, is that it will be conservative: looking at the worst possible model means I am so risk averse that, to minimize my losses, I pick up the worst possible model across all times.
>> And translated, if we apply this to portfolio selection, the solution will be: put everything in the bank account, or most of the funds in the bank account.
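A one-period toy computation shows the conservatism the talk describes. All numbers (the return model, the drift interval, log utility) are hypothetical stand-ins, not from the talk; the point is only that maximizing the worst-case expectation over a drift set containing negative values pushes the allocation to zero:

```python
import numpy as np

# One-period toy (hypothetical numbers): wealth 1, fraction phi in [0,1] in a
# risky asset returning mu + sigma*Z with Z = +1/-1 equally likely; bank pays 0.
# Robust rule: maximize the WORST-case expected log-utility over mu in a set.
sigma = 0.2
mu_set = np.linspace(-0.05, 0.15, 41)   # uncertainty set for the drift
phis = np.linspace(0.0, 1.0, 101)

def exp_log_utility(phi, mu):
    up, dn = 1 + phi * (mu + sigma), 1 + phi * (mu - sigma)
    return 0.5 * np.log(up) + 0.5 * np.log(dn)

worst = [min(exp_log_utility(phi, mu) for mu in mu_set) for phi in phis]
phi_robust = phis[int(np.argmax(worst))]
print(phi_robust)   # prints 0.0 -- everything in the bank account
```

Because the uncertainty set never shrinks, this worst-case answer persists no matter how much data is observed, which is exactly the criticism motivating the adaptive robust method.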
>> Now, another version is the Bayesian approach. In the Bayesian approach, you treat models differently according to a measure nu naught, which is the prior.
>> So you put a density on all possible models, and then we're averaging out across models according to the prior nu naught.
>> That also makes sense.
>> It's a huge literature. So again, when I say this is a classical approach, that means there are hundreds and hundreds of papers and books written.
>> And I just added here a couple of classical books.
>> And sure, this is the formulation of the problem; the solution would go with priors and conjugate priors, the whole Bayesian setup. It is adaptive in the sense that once you solve the problem, going step by step, as typically happens in a Bayesian approach, you update nu naught from prior to posterior.
>> And you go on, and the density is changing with time.
>> But this Theta is not changing; the support remains the same, same as in the robust approach, where the parameter lives in the set and the uncertainty set remains fixed through time.
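The prior-to-posterior updating mentioned here can be sketched with the simplest conjugate pair. This is a generic normal-normal illustration under assumed values (theta_star, prior, known variance), not code from the talk; note that the posterior density moves and tightens, while its support, the set Theta, stays the same:

```python
# Recursive prior-to-posterior update for the unknown mean of Z
# (normal-normal conjugacy, known observation variance).
import random

random.seed(0)
sigma2 = 1.0                 # known variance of each observation Z_t
m, v = 0.0, 10.0             # prior nu_0: N(m, v) on the unknown mean theta
theta_star = 0.7             # true parameter, known to nature, not to us
for _ in range(1000):
    z = random.gauss(theta_star, sigma2 ** 0.5)
    # posterior after one observation is again normal:
    v_new = 1.0 / (1.0 / v + 1.0 / sigma2)
    m = v_new * (m / v + z / sigma2)
    v = v_new

print(round(m, 2), round(v, 5))  # posterior concentrates near theta_star
```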
>> Now the adaptive: here the keyword is adaptive.
>> You're adapting the information in your parameter: not in the set, but rather in the parameter itself. And the way you do it: using the past information, you just make a point estimate.
>> So you build an estimator, theta hat.
>> Think of the sample mean, the sample variance, or maybe a maximum likelihood estimator. It doesn't matter which one, as long as you can compute it, preferably in a recursive way. Then at each time t you apply the control that corresponds to the parameter that you learned up to time t.
>> So time goes on; at the next step you'll learn that the parameter is probably something else. Ideally, of course, theta hat t will converge to theta star, which means you'll be closer and closer to the true parameter.
>> And as time goes on, obviously, if the time horizon is long, you will do better and better and make more relevant decisions; in the short term you could be quite far from the true parameter, and you can be off, which means you can have bad losses.
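The adaptive (plug-in) approach can be sketched as a loop: re-estimate recursively, then apply the control that is optimal for the current estimate. The myopic rule phi = clip(mu_hat / sigma^2, 0, 1) and all numbers here are hypothetical stand-ins, not the talk's model:

```python
# Adaptive point-estimate control: recursive sample mean + plug-in control.
import random

random.seed(1)
sigma = 0.2
theta_star = 0.05            # true (unknown) mean return
mu_hat, n = 0.0, 0
wealth = 1.0
for t in range(2000):
    phi = min(max(mu_hat / sigma ** 2, 0.0), 1.0)   # plug-in control
    z = random.gauss(theta_star, sigma)
    wealth *= 1 + phi * z
    n += 1
    mu_hat += (z - mu_hat) / n                      # recursive sample mean

print(round(mu_hat, 3))
```

Early on, mu_hat can be far from theta_star and the control correspondingly bad; that short-horizon risk is exactly the drawback noted above.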
Strong robust control is a version of robust; I'll show you in the picture. So these, roughly speaking, are the big approaches to stochastic control with model uncertainty.
>> None of them, as I already said, none of them actually talks about reducing the uncertainty about the parameter, which means this capital bold Theta remains the same through time.
>> You may concentrate a little bit, maybe in the Bayesian approach, around where the parameter most likely is; in the adaptive approach you can approach the parameter in the long run. In the robust approach, everything is treated equally through the full time.
And we thought, you know, that incorporating learning makes sense, that we will learn something about, well, reduce this uncertainty.
>> Yeah, a question. So theta star is known by nature, but not by us. And you talk about reduction of uncertainty. I'm imagining that at the beginning there's a measure of uncertainty, and as time goes by, the measure gets closer to 0. So what measure of uncertainty is there?
>> You know, if you had been close to us some five years ago, you'd be a co-author, because that's exactly what we came up with.
>> How are we learning about the uncertainty through time, and what is that measure of uncertainty? The natural way of doing it is, of course, through confidence regions.
>> Okay?
>> So the whole thing is that, indeed, we are looking at it as a game, and nature picks.
>> Nature knows that we are learning about the parameter set. The way we are learning about the parameter set is through confidence regions, which means
>> we construct through time the confidence regions. And with high probability, we know that the true parameter is inside those confidence regions. Then we assume that nature picks the parameter in that set. And that uncertainty set, this Theta now labelled by t, will get smaller and smaller in time if the confidence regions are done in the right way, and the problem is still inf-sup, so it is adaptive and robust.
>> We adapt the model uncertainty set.
>> We still treat it as robust. Namely, we solve across all possible models, except that now the parameter lives in the confidence regions.
>> Are these confidence regions defined with a single probability measure or more than one?
>> For now, just one single. That's a very good question.
>> So for now, think about it like this: there is a reference probability measure, and all other measures are absolutely continuous with respect to that probability measure.
>> Eventually you would of course like to allow for things where you have more probability measures, maybe even singular to each other to some extent. The way you are probably thinking is that the volatility, in a continuous framework, belongs to an interval, which is not part of this. But in discrete time that's typically not the case, because all the measures are more or less absolutely continuous; it's hard to come up with measures which are singular. So there is a reference measure P, and everything is absolutely continuous with respect to P.
Now, the set Q of probability measures.
>> So, as Peter said, instead of labeling the models by the parameter theta, the way you formalize it, you define again an inf-sup of the expectation of the loss, but now over different probability measures. These probability measures, if we want to incorporate the confidence regions, have to be defined on the canonical space, the space of trajectories. It's a long construction, but you will have to follow a little of what I'm saying and trust that the construction is properly done.
>> I will construct this Q in a second.
>> If we were in a continuous GBM world, we'd have two parameters, drift and volatility, right? If there was one density function, bivariate, for those two parameters, you could solve the problem: you could find phi, the best strategy that maximizes whatever you have right here, okay? Right?
>> But here we have a set of distributions, like bivariate distributions.
>> We don't know which one is the real one.
>> And I can think about it that way, right?
>> I mean, yeah, that's exactly right.
The only subtlety with GBM is that the volatility is then sort of known, since you observe one trajectory.
>> But in principle, yes, that's the message, the way we always think: the drift and the volatility belong to some set.
>> And we don't know what the true mu is and we don't know what the true sigma is.
>> Yes. And just to clarify: you don't put probability mass on mu or sigma.
>> You only know that, say, mu lies between some bounds, and that goes into the confidence region. Not a bivariate density function.
>> Right, Peter. Alright.
>> So not a bivariate density function, just the support.
>> And that's it.
>> Yeah, okay.
>> Or, well, bivariate in a sense, but not exactly; on the space of trajectories it is maybe bivariate.
>> But for now, you start at time 0.
>> As Peter said, it's in between two numbers, a lower and an upper bound. And then you learn about these somehow, through the confidence regions that I will construct in a second. But we don't put a prior; we don't know what the distribution of mu and sigma actually is.
>> Okay?
>> So, the cartoon that I like to show when giving these talks; the description should now be more or less clear, at least in words. If t is time, then on the y axis I have my uncertainty.
>> And if there is no uncertainty, if the parameter is already fixed and picked, there is nothing to change about it.
>> So the robust paradigm of Hansen and Sargent and the min-max is the following.
>> You have the time, and as you can see, this would be my set Theta, bold Theta, and you solve the entire problem through the entire time. And you pick up one parameter, I call it robust, which is fixed through all times.
>> Now, it could be close or it could be far from the true one, and it depends on what you are minimizing or maximizing. Overall, it is on the conservative side.
>> The strong robust, which I didn't introduce, is similar, except that the parameter is allowed to be changed by nature through time, and we assume that we don't learn anything about theta star.
>> Our picture is the following.
>> You start with the set Theta here, which is large, and then you adapt it.
>> And at time t we are using this Theta t, which is the confidence region, or confidence interval if you want, for the parameter theta.
>> And at each time we solve a robust problem, which means at each time we pick up the worst model, but on a smaller and smaller set.
>> And if the strategy is constructed well, that's precisely what's going to happen.
>> And nature knows that we're doing that. So that would probably be the punchline description of our method, without going into the math itself.
>> So now I'll move on to introduce properly what this set Theta t is, and give you, of course, the examples. To put more structure on it, I will take Z, my stochastic driver, and assume that it is a sequence of IID random variables; generally speaking, it could be just a Markov process.
>> And the process X, the controlled process, is determined by a deterministic function f that takes the current state, the current decision, and the next value of the driver, and spits out the next realization.
>> So, for the wealth process example, if you want, V would be the value process, Z the stock price, phi the investment trading strategy, and f whatever dynamics we are assuming.
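The abstract dynamics X_{t+1} = f(X_t, phi_t, Z_{t+1}) read, in the portfolio example, as follows. The concrete form of f below (fraction phi at risky return z, the rest at a bank rate r) is one hypothetical choice consistent with the description, not necessarily the talk's exact model:

```python
# One concrete (hypothetical) choice of the dynamics f in the wealth reading:
# V_{t+1} = f(V_t, phi_t, Z_{t+1}).
def f(x, phi, z, r=0.01):
    # invest fraction phi at risky return z, the rest at the bank rate r
    return x * (1 + phi * z + (1 - phi) * r)

v = 100.0
for phi, z in [(0.5, 0.10), (0.5, -0.04)]:   # two periods of (control, return)
    v = f(v, phi, z)
print(round(v, 4))   # 103.9175
```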
>> So now, as I said, there is this set Theta by which the probability law of Z is parametrized. And as you can see here, I am in the Markovian setup: here I'm assuming that the Z's are IID, but in principle it could be any Markov chain.
>> That's the general dynamics. Now I will step a little bit aside and make a claim here about the setup I just described.
>> So you said Z is IID, and the probability measure it's IID under is P theta.
>> Okay?
>> And theta is in an interval.
>> Okay?
>> Okay? So we know a fair bit about the dynamics, because they're IID, but we don't know the parameter, right?
>> Okay. Okay.
>> Right. And I specifically put IID instead of a complicated Markov chain, because that's a nice way to think, exactly like you describe.
So we know something, and we still want to be robust, but we also want to be adaptive, and actual numbers show that this is the right thing to do. As I said, generally speaking this process Z could be a Markov chain. And then, because it's a control problem, you need a recursive construction of these confidence regions or confidence intervals, which are the C's here.
>> It's a nice little statistical problem.
>> Can you construct confidence regions or confidence intervals recursively? And by confidence regions I literally mean what is written here: sets such that the probability that the true parameter belongs to the set is equal to 1 minus alpha. But you want to construct them in a recursive way. And this is important, because without this we would not be able to formulate the problem, and especially to solve it numerically.
>> And the answer is yes, you can. I'll show you for the mean and variance at the end.
>> Generally speaking, it's not trivial. It's solved in this paper that we've done; quite an interesting mathematical problem in itself, a statistical problem. But the answer is yes: you can actually construct estimators which are consistent and recursive. By recursive I mean: you take the previous value and the new observation, apply a deterministic function of them, and it spits out the new value.
>> And then for the sets you can do the same. So it's possible, and in the right way, so that the set, the confidence region, converges to the true parameter.
>> So my shrinking picture is actually a correct picture. Okay?
>> I'm not going to discuss the details of this construction; it is quite interesting, but it's possible to do for the mean and variance.
>> You don't need this paper; you can do it by hand. And I will show you exact formulas for the sample mean and sample variance of IID normals. You can do it precisely, but it's much more general than that.
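The recursive form described above ("previous value plus a deterministic function of the new observation") can be written out explicitly for the sample mean and sample variance. This is a standard Welford-style sketch of the kind of estimator the talk refers to, not the paper's exact formulas:

```python
# Recursive updates for the sample mean and (biased) sample variance:
# new state = deterministic function of (previous state, new observation).
def update(state, z):
    n, mean, m2 = state
    n += 1
    delta = z - mean
    mean += delta / n                # recursive sample mean
    m2 += delta * (z - mean)         # running sum of squared deviations
    return n, mean, m2

data = [1.0, 2.0, 4.0, 7.0]
state = (0, 0.0, 0.0)
for z in data:
    state = update(state, z)
n, mean, m2 = state
var = m2 / n                         # biased sample variance
print(round(mean, 6), round(var, 6))   # 3.5 5.25
```

Because the whole estimate is carried in a small state (n, mean, m2), it can be appended to the controlled process, which is exactly how the augmented state Y appears later.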
>> Excuse me, can we say that theta completely characterizes the moment generating function of Z?
>> Is that a valid way of thinking about it?
>> I'm not sure about characterizing the moment-generating function. Maybe, yes: if you know theta, you know the law; but again, whether it characterizes the moment-generating function...
>> In a sense I'm thinking about the method of moments: does it fully determine the moment-generating function, or does it determine the full distribution of Z? Is that right?
>> It actually does, because it converges. So theta hat is a consistent estimator, which means theta hat t will converge to the true theta star, which means it will determine it precisely. The only thing that I want to emphasize:
>> these are not the maximum likelihood estimators or the other estimators that you typically know from statistics, because those are generally not recursive. If you want those, then you have to work a little bit, and then it's a version of approximate MLE, approximate maximum likelihood estimators.
Okay? So I will show you this again, but think about theta for now as the sample mean and sample variance. It turns out you can write them in this recursive form, and the confidence region would be, well, an ellipsoid. I'll tell you that for the sample mean and sample variance the confidence region is an ellipsoid.
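An ellipsoidal confidence region for theta = (mu, sigma^2) of IID normals can be sketched via asymptotic normality of the estimators: the set of theta with n*(mu_hat-mu)^2/sigma_hat^2 + n*(sigma_hat^2-sigma^2)^2/(2*sigma_hat^4) below the chi-square(2) quantile. This is a generic asymptotic construction under assumed parameters, not the paper's exact recursion; the 95% quantile 5.991 is hard-coded:

```python
# Asymptotic 95% confidence ellipsoid for (mu, sigma^2) of IID normals,
# checked empirically: coverage of the true parameter should be near 0.95.
import random

def in_region(n, mu_hat, var_hat, mu, var, q=5.991):   # chi2(2) 95% quantile
    stat = (n * (mu_hat - mu) ** 2 / var_hat
            + n * (var_hat - var) ** 2 / (2 * var_hat ** 2))
    return stat <= q

random.seed(2)
mu_star, var_star, n, hits, trials = 0.0, 1.0, 400, 0, 200
for _ in range(trials):
    zs = [random.gauss(mu_star, var_star ** 0.5) for _ in range(n)]
    m = sum(zs) / n
    v = sum((z - m) ** 2 for z in zs) / n
    hits += in_region(n, m, v, mu_star, var_star)

print(hits / trials)   # empirical coverage, close to the nominal 0.95
```

As n grows the ellipsoid shrinks at rate 1/sqrt(n), which is the shrinking-set picture from the cartoon.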
>> Okay? So I'll have to move a little bit faster.
>> So how do we define these probability measures Q, such that eventually we pick up the parameter belonging to the set in the right way?
>> The way you do it: you take the process Y, which is the original state process X enlarged by the estimator.
>> Now, since the estimator has its own dynamics, meaning a recursion for theta hat, I have a recursion for this bigger, enlarged process, or state space.
>> For Y, I call the dynamics G; it combines the dynamics of X with the recursion for theta hat. So the history is now the augmented history: it is X and theta hat. And the confidence regions, of course, are where we learn about these parameters.
>> Now, how we incorporate that is a natural question.
>> So I know X is Markov; is Y Markov?
>> Yes.
>> Okay, yeah.
>> So now, as the
actual Peter
mentioned that at
the beginning of the talk,
it's good to
look at now at,
at this min max problem as
a game between
the controller,
me and adversity or nature.
And the rules are
that me is as
a controller is speaking up
the control fee based
on the history,
which means the original
state process X,
but also they
estimators that I use
with valleys in
my controls.
Or the values of admissible
set eight trading
strategies could be,
for example,
weights between 01.
And nature also knows what I'm doing.
>> Based on that, it plays against me and picks, based on the histories. Now, this is the critical point: we are postulating that nature picks the parameter for the next step in the confidence region at time t, which we are estimating, so we know the rule for that.
>> In contrast with what was done before, this theta t is not even mentioned.
>> It's fixed in the robust setup, and in the strong robust version it is assumed that we don't know anything and we don't learn about the parameter, which is counter-intuitive.
Again, the way we've been talking about drift and volatility, or mean and variance: if we observe something about the past, then naturally we know something about this parameter, but also where this parameter lives with high probability, which is, in our sense, the confidence sets. So once this is fixed and this is set up, then — a clarifying question.
>> So nature is playing a history-dependent strategy.
>> Earlier I asked about whether Y was Markov and you said yes.
>> So is nature able to, let's say, take a worst-case scenario that uses, you know — if I'm at time t — uses what happened more than one step ago?
>> Sheer history-dependence of nature — that's exactly what we don't know. And since we don't know how nature is choosing this, we just play the worst-case scenario, assuming that the worst model happens, with this theta t over here.
>> Yeah.
>> So now, with this, what typically — how it is done — you construct... Once you know our rules and nature's rules of play, you construct these transition probability kernels, or transition probabilities: given the current state — ours and nature's, I mean our state — what would be the next value of the extended process? And then you substitute and glue all of these probabilities together through all the times. The way they're constructed — we can go through a couple of them. You can see delta 0: it's some uninformative, if you want, prior at time 0. We don't know anything, so we can pick one parameter randomly, or fix one parameter that we choose; that doesn't really matter. And then this is what nature picks.
This is what we pick, assuming these are our choices during the previous times; this is the probability of what will happen at the next time; and if that is the case, then the next time, and so on. And we glue all of these probabilities together. After that you have the probability, which we call Q; phi is our choice, c is the play by the adversary, and this would be b0, c0, b1, c1 — the values of the process at each time on the canonical space.
>> Okay?
>> So it takes a little bit of time to digest, but essentially this will mimic what I described in the picture.
>> And that's actually correct.
>> And now the problem is again precisely formulated in the same way: it is an inf-sup with respect to the models that belong to a class of models. But the models are picked and designed in such a way that nature is choosing them, so the parameter at time t is something that belongs to this confidence region. And since this is the expectation of L, and L is a deterministic function, this is again a time consistent problem. And when I say time consistent problem, that means nothing else but the fact that you can use the classical Bellman principle of optimality.
And in fact, you can prove, with all of this construction at hand, that the solution to this problem — the solution to this inf-sup — can be computed recursively, in backward fashion. You start from the terminal time and solve it recursively backward, just as you would solve optimal investment by maximizing utility in discrete time; you can solve it directly.
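The backward recursion just described can be sketched in a few lines. This is only a toy illustration, not the construction from the paper: a hypothetical two-point return model, a small grid of admissible weights, and a fixed two-element uncertainty set standing in for nature's confidence region.

```python
from functools import lru_cache

# Toy robust backward induction (an illustrative sketch, not the paper's
# algorithm): wealth evolves as W_{t+1} = W_t * (1 + phi * r), the return r
# is +U or -D, and "nature" adversarially picks the up-probability p.
T = 3
CONTROLS = [0.0, 0.5, 1.0]   # admissible investment weights in [0, 1]
P_SET = [0.4, 0.6]           # nature's uncertainty set (static, for simplicity)
U, D = 0.10, 0.10            # up / down return sizes

def terminal(w):
    # L: a deterministic function of the terminal wealth (here just the wealth)
    return w

@lru_cache(maxsize=None)
def value(t, w):
    """Robust value function, solved recursively backward from time T."""
    if t == T:
        return terminal(w)
    best = float("-inf")
    for phi in CONTROLS:                      # controller maximizes ...
        worst = min(                          # ... against nature's worst case
            p * value(t + 1, round(w * (1 + phi * U), 8))
            + (1 - p) * value(t + 1, round(w * (1 - phi * D), 8))
            for p in P_SET
        )
        best = max(best, worst)
    return best
```

With the worst-case up-probability below one half, any positive weight loses money in expectation, so the robust value of 100 initial dollars is simply 100: the recursion picks phi = 0 at every step, and solving period by period agrees with solving the one big problem, which is exactly the time consistency being discussed.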
But again, the key feature was that this is the expectation of a deterministic function of the terminal wealth. And that is what people would call a strongly consistent, or time consistent, problem. In other words, you solve it once, backward. And once you solve this problem backward, starting from time T and going through T minus 1, T minus 2, down to 1, you are finding the optimal solution and the optimal strategy — which means, as time goes forward —
Q tells you, depending on what happens, which strategy to use, or how to make your investment, if you want. And if you were to restart and re-solve the problem, ignoring everything, from any time — W t, W t plus 1 here — if you restart and solve the problem again, you'll come up with the same strategy as you decided to use at time 0, and that makes the time consistency.
Another definition would be that the solution can be written as a backward induction, and that holds true. And of course, right here you can see that at each time t, I'm solving an inf-sup problem across all my possible choices at time t, which is where I can pick the controls and the model. I know that at time t the model parameter is in the confidence region.
>> This tau is meant to be the confidence region.
>> Okay? So of course we have a proof for that.
Now things are becoming trickier when we have problems of the form: expectation of a function of the terminal wealth plus a function of the expectation.
>> Okay?
>> These are time inconsistent, which means there is no Bellman principle.
So one can show that, for example — again, the mean-variance: think of this as just the mean, and this as the variance — the mean-variance problem that I described initially is not time consistent. Which means: the phi that I solve for at time 0 and the one I solve for tomorrow are now completely different processes. Which one is optimal? Somehow I am changing my mind about how I act in the future. And these are actually very hard problems to deal with even without model uncertainty, just as usual stochastic control problems.
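A tiny numerical illustration (my own, not from the talk) of why the variance term breaks the Bellman principle: by the law of total variance, Var(X) = E[Var(X|Y)] + Var(E[X|Y]), so today's mean-variance value and the average of tomorrow's mean-variance values disagree whenever the conditional mean moves.

```python
from itertools import product
from statistics import mean, pvariance

# X = Z1 + Z2, two independent fair +/-1 coin flips; risk aversion gamma = 1.
outcomes = list(product([-1, 1], repeat=2))      # four equally likely paths
X = [z1 + z2 for z1, z2 in outcomes]

# Today's mean-variance value of the terminal payoff
static = mean(X) - pvariance(X)                  # 0 - 2 = -2

# Tomorrow's value, computed after observing Z1, then averaged over Z1
nested_vals = []
for z1 in (-1, 1):
    cond = [x for (a, _), x in zip(outcomes, X) if a == z1]
    nested_vals.append(mean(cond) - pvariance(cond))
nested = mean(nested_vals)                       # -1

# The gap, nested - static, equals Var(E[X | Z1]) = 1: law of total variance.
```

Because the two numbers differ, a strategy optimal for the time-0 criterion need not remain optimal tomorrow — the "changing my mind" phenomenon just described.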
And there are various attempts at ways of dealing with this time inconsistency. I usually like to call it not just time inconsistency but strong time inconsistency, because there are problems which are still time consistent in a weaker sense.
But there are about four approaches for dealing with this. One of them is the approach by Bjork and Murgoci, from the paper on the theory of Markovian time inconsistent stochastic control problems in discrete time. As you can see, it's relatively recent — 2014 — with no model uncertainty, just the time inconsistency; this is the way you work it out. Philosophically, you play not against nature, but against a copy of yourself. In other words, we are forcing this problem to be time consistent. Technically this becomes tricky, but that's the way to do it.
And if I'm not wrong, last week, I believe, there was a talk on robo-advising, and that approach also had a stochastic control problem which was time inconsistent, and he also used Bjork and Murgoci. That was the reason I was asking, actually, at that talk.
So that's the way — Bellman — and we cut it a little bit short here, just saying that indeed we are working with what are called subgame perfect strategies. And subgame perfect strategies, as you can see here, mean that if I solve the problem today — which means I'm picking phi t and psi t — then, to solve it, I use the solution that I will have tomorrow, which is phi t plus 1, psi t plus 1, and solving it again for one period of time, I will get the same result. So it's written here. And once we fix this, it will not solve the original problem, but it will be close to a solution to this problem.
And the name means that you are formulating the so-called subgame perfect control setup. On the other hand, I can say that you are literally enforcing, within a reasonable framework, the Bellman principle to hold true. And once everything is done properly — it's still not easy to prove — but then, yeah, we can show that we have this backward recurrence, and that's critically important. Without this, there is no chance to solve the entire problem numerically for, I don't know, 100 days or 250 days.
But now, instead of solving one big problem for all periods of time, you solve it period by period. And solving it period by period is the same as solving the entire problem, as the dynamic programming principle usually says. The theorem says that, yes, indeed you can do that even for the time inconsistent problem, in the subgame perfect framework. There are some additional assumptions that I'm skipping. In particular, for example, you can see the maximum here: we are assuming the sets are finite. We still don't know how to deal with a general set.
A compact one doesn't work. We also have some other assumptions, of course — measurability conditions — that allow the strategy to exist. This also shows, and it is actually quite non-trivial to show, the existence of the subgame perfect strategy, which means the existence of an optimal selector. But all in all, the theory is solid and it's valid for the mean-variance type of problem.
So now, I believe I have about five minutes left. I'll wrap it up with this example that I was mentioning at the beginning. It's the mean-variance criterion. So I'll have one risky asset and the risk-free bank account, and I'll invest optimally according to the mean-variance criterion in these two assets. I will assume the simplest, yet not trivial, setup: that my log returns are normally distributed, with distribution Z. So Z are the log returns; they have unknown mean and unknown variance.
So the mean and variance belong — the only thing that I know is that the mean belongs to, let's say, an interval, mu underbar to mu bar, and sigma belongs to another interval, say sigma underbar to sigma bar. And I know these bounds. So the mean between 5% and 20%, and maybe the volatility, sigma, would be between 10% and 40%, or whatever we choose — larger or smaller, depending on how much uncertainty we want to put, a priori, on the model.
The dynamics is written here; it is clearly self-financing. And then one question that may come up is: why mean-variance? Well, the reason for mean-variance is not because it's the simplest; it is actually the most used criterion, practically speaking. And yet it is the simplest time inconsistent problem that can be solved.
And here is the answer to the question about the confidence regions, for the estimators: sample mean and sample variance. So again, mu and sigma are the mean and variance, or standard deviation, of the normally distributed random variable, and the estimators are the sample mean and sample variance. The recurrences are here. For the mean it is easy: it is, of course, the previous estimate plus one over t plus 1 times the innovation from Z t plus 1. For the variance it is a little bit longer, but together they are recurrent: the previous value gives me the next.
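The recursive estimators can be sketched as follows — a standard Welford-style update, written with my own normalization (dividing by t rather than t minus 1); the exact formulas on the slide may differ.

```python
def update(mu, s2, t, z):
    """Fold observation number t+1 into the running sample mean `mu`
    and (population-normalized) sample variance `s2` of t observations."""
    mu_new = mu + (z - mu) / (t + 1)                  # previous estimate + innovation
    s2_new = (t * s2 + (z - mu) * (z - mu_new)) / (t + 1)
    return mu_new, s2_new

# Sanity check against the batch formulas:
data = [0.4, -1.0, 2.5, 0.1, 0.8]
mu, s2 = 0.0, 0.0
for t, z in enumerate(data):
    mu, s2 = update(mu, s2, t, z)

batch_mu = sum(data) / len(data)
batch_s2 = sum((x - batch_mu) ** 2 for x in data) / len(data)
# mu matches batch_mu and s2 matches batch_s2, up to floating-point error
```

The point of the recursion is that each new observation updates the pair in O(1), so the estimator itself can be carried along as part of the enlarged state process.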
Well, the confidence region is an ellipsoid. It's a pair — mu and sigma — and it's an ellipsoid because of the central limit theorem. So again, statistics 101: if the mean and variance, or standard deviation, of a normally distributed random variable are both unknown, we know that we have to use the chi-square distribution to test and to build the confidence regions. And the confidence region is an ellipsoid.
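A textbook-style sketch of such a region (assuming the usual large-sample asymptotics; the talk's exact construction may differ): asymptotically, t times [(mu_hat - mu)^2 / var_hat + (var_hat - var)^2 / (2 var_hat^2)] is approximately chi-square with 2 degrees of freedom, and the chi-square(2) quantile has the closed form -2 ln(alpha).

```python
from math import log

def in_confidence_region(mu, var, mu_hat, var_hat, t, alpha=0.10):
    """Check whether (mu, var) lies in the level-(1-alpha) asymptotic
    confidence ellipsoid centered at the estimators (mu_hat, var_hat)."""
    q = -2.0 * log(alpha)     # chi-square(2) upper-alpha quantile, closed form
    stat = t * ((mu - mu_hat) ** 2 / var_hat
                + (var - var_hat) ** 2 / (2.0 * var_hat ** 2))
    return stat <= q

# The region is an ellipse centered at the estimators, shrinking like 1/sqrt(t):
print(in_confidence_region(0.0, 1.0, mu_hat=0.0, var_hat=1.0, t=52))   # True
print(in_confidence_region(1.0, 1.0, mu_hat=0.0, var_hat=1.0, t=52))   # False
```

Because the quadratic form is built from the recursive estimators, the ellipsoid itself updates recursively from one time step to the next, which is the point emphasized below.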
The interesting part is that this ellipsoid is also computed recurrently, and rigorously. This works well because theta hat, the estimator, is recurrent, so the ellipsoids are also recurrent. And we have the full formulas for this; there is no need for the general theory here, but actually this is quite general.
Now, the dynamics are whatever they are; you put them all together. It's a complicated, long formula, and the Bellman equations are even longer. I'm not writing them out, but I had them a couple of slides back.
The results are the following. We take a particular set of parameters — the interest rate, and so on. The 52 is because there are 52 weeks, so everything is weekly. Why not daily? We are still in the mode of making our code — our actual numerical parts — run in a reasonable time; it takes a long, long time. I wanted to mention that this type of problem — max-min or min-max, it doesn't matter — is numerically challenging to solve.
>> In the theorem that you described, nature is picking the worst history, you know, and then you're countering that. I'm trying to understand it in this context.
>> Is my question clear?
>> So, how nature picks — that we don't know, and it's essentially irrelevant for us. What we know is that nature picks it, yeah.
>> But let's see if I'm doing okay.
>> Our control is how much to invest in the risky asset, right? And I guess the worst thing that nature can do is to make the mean be really low and to make the variance be really high. Am I correct in thinking that that would be really, really bad if nature does that?
>> And we are exactly trying to avoid that, by saying: okay, according to my criterion, I will try to take this minimum at time t, to minimize somehow my expected mean-variance criterion across all models. The only thing I know is that, whatever nature is doing, it picks the parameter within the set — the confidence region.
>> Yeah, so I know you've made that assumption very clear. And as we're moving through time — I think you had that nice picture where theta t is a narrowing cone — your assumption, which is very explicit right there, is... yeah, this is great. So let's say we're at this time t, the circled one; nature is picking a value at this time that is in the yellow confidence region.
>> Yeah.
>> Yeah, yeah. Okay. So I mean, in this particular diagram, you have one theta, which I understand is one-dimensional, of course. So since it's one-dimensional, let's just say that, you know, maybe it's just the mean, when I talk in the context of this diagram. So maybe it's just the mean that we don't know, and let's say we actually know the standard deviation, so we stick with one unknown, because that's what's in the diagram.
>> So nature could pick the lowest mean — I'm thinking nature would pick for the mean the lowest allowed. I mean, I'm thinking like the lower boundary of this yellow cone.
>> Yeah.
>> Yeah. Okay. Yeah. Okay.
>> What nature can do — I mean, well, if this is the mean, it will sort of minimize. And according to our criterion — well, mean-variance, and the variance is fixed — then we minimize or maximize with respect to the mean. We'll pick maybe the min; we'll also think that this is the worst case that can happen, and nature will pick that one too.
>> If nature picks another mean, somewhere here — if that is the mean, we'll think that this is our problem and we'll pick it, because we are optimizing, saying: well, the worst possible model — we don't know what nature picks, but we'll minimize. And if that is the mean, I'll think the worst that can happen is the smallest mean, right?
>> Right.
>> Okay. That was helpful. Thank you. Yep.
>> Yep.
>> I like this, yes — but yes, a question. So you talked at the beginning, intuitively, about what time consistency means. Now, if you're using this function, capital R — you showed an example with the log-normal — I mean, R enables you to update the point estimate of theta once you've observed the next increment of Z, right? When you observe Z at t plus 1, you update the point estimate, right? Doesn't that clearly mean that it must be time inconsistent, because you're constantly confronting, at each new t, a new range of values for the parameters, right?
>> Right. So that — that bit is time consistent.
>> What is time consistent here?
>> Here's my decision. So my decision is this phi. And again, this time consistency, as I said — I mean, the way you are thinking, and I'm thinking, is: are my decisions still time consistent? Unfortunately, for people from stochastic control, time consistency has a very, very clear definition. Mathematically, it means not what we mean economically; mathematically, it means that this theorem holds true, and that's it.
So that means that my decisions — actually these phis that I compute, maybe phi (inaudible) — if I compute them at time 0, I'm getting this entire process, which depends on the state, up to T. And if I then compute them at a time, let's say s, which is bigger than t —
>> They have to be the same thing, right? Right.
>> In this mean-variance case —
>> Unfortunately, they are not. And that is primarily because of the variance: the linearity is lost, and it's the square of the expectation rather than the expectation of a square. And then, essentially, Jensen's inequality, if you want, is screwing things up.
>> So even though you're updating, at every new time t, a range of possible values, correct?
>> Yep.
>> Of the range.
>> The dynamics — the dynamics is good, for the state process. The dynamics for the set is good. The criterion itself is bad, because it is expectation minus variance. But what actually happens is that it is still time consistent, but in a different language.
It's what we're saying — not in this paper, but in another discussion. A more formal way of thinking of it financially is that I'm making maybe not a worse decision, but maybe a slightly more risk-averse decision in the future, which means I'm improving my decisions; that also makes me time consistent. I'm kind of becoming smarter as time goes. That is not allowed in this setup. And as I said, the X is time consistent — sort of; my Q is time consistent. But the criterion is what makes things not work: the squared term.
And mathematically, as I said: here is the expectation of X squared; here is the expectation, squared. And then, simply, when you start writing the dynamics, the square doesn't go through.
So, to wrap up, what I wanted to mention is that actually solving these problems numerically, even step by step, is challenging. In the first paper, the time consistent one, we did it — it was not easy. We did the one-dimensional case and the two-dimensional case with a very limited number of steps. Then a machine learning approach was developed to solve this in a smart way, and we adapted it to this time inconsistent setup.
So the numbers I'm taking: the initial value is 100% — sorry, 100 dollars. I have two values of the risk aversion parameter, 0.2 and 0.9. The 52 is because of 52 weeks. Mu star and sigma star are given by these two numbers, on a weekly basis. We can convert them to yearly if it's easier, but it's less relevant, at least for now. Mu bar — this is where I believe my (inaudible). So overall, that is a rectangle for mu and sigma, and we ran our process.
So this is how this confidence region actually looks. You may not see it, but maybe you can. The square is this rectangle. Of course, the confidence region at time t, as I said, is an ellipsoid, so you have to intersect it with your support, where you believe the parameters live a priori. So it decays; it becomes smaller. This is an ellipsoid — some parts maybe are less visible — but numerically you can clearly see this shrinking ellipsoid intersected with the rectangle where the support is. And this is at 10% — the confidence regions at the 10% level. The true parameter — the true model parameter — is in the middle, the solid line; the dots are the estimators, which in this case coincide with the MLEs.
And that's almost the end. I'm running really over time, but those were good questions. I didn't expect that an online seminar could be so engaging, so thank you — you made my day today.
So here: adaptive robust is our result; strong robust is the classical one. The mean is the same. What matters, of course, is that we try to minimize the variance, and our strategy, as you can see, gives significantly smaller variance — really significantly. All the other statistics are the same, but the variance is what pops out. And we checked this on various sets of parameters, and we are getting the same. That would be my last slide.
>> Let's stay on the last slide before this one.
>> So the variance is smaller.
>> And that's because you're using an adaptive approach, which takes into account the realizations of the price process and, let's say, updates the confidence region based on the observed prices, correct?
Great. Okay. And so the variance you're calculating is the variance of wealth when you're pursuing the strategy.
>> Is that the second row here? That variance — this is the variance of the terminal wealth. The wealth, again — V is a function that will be the mean minus gamma times the variance. But the first row is the mean; X is the terminal wealth. It should be W, actually — in the paper it is W — but I wanted to keep the same notation.
>> Okay? Okay, thanks. And this is the histogram.
>> The dark one is adaptive robust — our realizations, again, of the terminal wealth. As you can see, I didn't change the picture, so it's W T, as in the paper. But that's the realization of the terminal wealth. And the light gray is the strong robust, which is a version of robust.
And I believe I can conclude. The conclusions are clear: we have a new framework that incorporates learning under model uncertainty and can be applied, generally speaking, to many problems. We're still very excited about it and working on it. And I really thank you for the seminar; it has been lively. Thank you.
>> Yes.
>> It was lively, I agree.
>> All right, let's open it up.
>> I mean, if anybody would like to ask a question of Igor, feel free to chime in.
>> Can I ask a question here? >> Go ahead.
>> Yes.
>> Yes.
>> So I think — if in nature the thetas have some kind of time dynamics and are non-stationary, even if it's a more gradual one, would it still be true that the AR will have a lower variance and will actually be better than the classical result?
>> So in these numbers — yes, yes, correct, comparatively. Though, again, I think I missed the main point of the question.
>> What if the thetas have non-trivial time dynamics — say they're non-stationary, right?
>> Yeah. So that's a hard question. The answer is: I don't know. So what you are saying now is that both X and theta are processes themselves? Tentatively, yes — but then it is not a finite-dimensional setup.
That would be an infinite-dimensional setup. And it's a matter of — you know, I believe the answer is still the same, as long as some stationarity, or some reasonable assumptions, hold for these parameters. But the problem itself is not yet in our framework. So we are clearly still, mathematically speaking, in the finite-dimensional setup.
And the reason is that it is related to simple questions like: why does this min-max problem make sense, and why does there exist a control which is measurable? And the moment you ask this — what I put under the rug here, the existence of the subgame perfect strategy — it becomes a non-trivial topological question, because of what you are playing with. And in fact, I can tell you, for these problems which are not time consistent, we had to use semi-analytic functions — weak topologies on sets of semi-analytic functions. So it became nastier than even I wished.
>> I had a question.
>> So the table you just showed us on the very last slide — can we go back to it? Yes. This was computed using 52 weeks, you told us, and I'm just wondering what the second row in particular would look like if we changed 52 to infinity. So in other words, if we just kept learning and learning and learning, I assume the variance in the AR column would go to 0.
>> I don't know.
>> I don't know.
>> Nobody knows, probably. Right.
>> Okay. Well, that's what I understood from a much earlier slide.
>> It was the picture you showed where nature kept narrowing the cone. Correct me if I'm wrong.
>> So the only thing I know for sure is that the confidence region — this whole ellipsoid — will shrink to this one dot.
>> Yeah, okay. And then, let's say — so that means that at the end, at least in theory, I will still have —
>> Have some uncertainty in my wealth at the end.
>> So variance being 0 — that would mean that at the end, at terminal time, you know exactly what mu and sigma are; is that what you're saying? But think of this: suppose, let's say, you're long the risky asset, okay?
>> And so you still have variance of your — well, I mean, we can think maybe in a different context.
>> So let's say infinite-horizon optimization. Well, if we have a stationary process, then of course it will converge to the stationary distribution. If it's a single point, there is no variance — there's nothing. If it's still a stationary distribution, which means it will converge to the invariant measure, I'll still have variance building up — that's what I would think, intuitively.
>> Okay. Thank you. Okay. All right. So the X sub t's you are showing here — that's wealth at 52 weeks. And let's say, in the AR column, those numbers are based on doing, I'd say, the robust strategy — nature picking the worst possible thing, you know, while that's allowed for nature — while we learn, let's say, as we move through time, more about where the mean and variance are, via narrower confidence regions.
>> Is that a fair summary of AR?
>> Yeah, I would say yes.
>> Okay. Okay.
>> And then the gamma is the risk —
>> More precisely, the mean-variance tradeoff, correct?
>> So these gammas, yeah — the bigger the gamma, the more risk averse: more penalty on the variance.
>> Exactly. Yeah.
>> Okay.
>> Precisely, yes.
>> Okay. Now, I appreciate that you might go into that — let's see that last slide again. Okay? Right. So going from gamma 0.2 to 0.9, we're penalizing variance more, and you get a lower variance, because I'm not allowing — I'm sort of more risk-averse in this instance.
>> I don't like the variance, so the variance will be smaller.
>> Okay? Okay.
>> Yeah, because this is the variance of your optimal, correct?
>> Correct. Makes sense.
>> This is the variance of X of t.
>> So yes, nonetheless — okay, a sanity check that works.
>> That's good. I mean, we did the sanity checks — many of them. I don't remember whether we thought about checking this one, but we had quite a few of them. And as I said, every single run still takes time, even now with the machine learning — which is sort of a Gaussian process surrogate, as used in their paper. It still takes time, but we can now deal with two or three dimensions easily. And that's for any utility function, or mean-variance — not necessarily something very specific.
>> All right.
>> One more question.
>> If you don't mind — in relation to your last example: let's say I sell a call option on a stock that I believe has discrete geometric evolution, right? And my phi is the hedge ratio: how many shares I should be long against being short one call, right?
>> So, I mean, will this lead you to getting the Black-Scholes delta?
>> So, yeah, that's an excellent question. The answer is: I don't know; we didn't check. Actually, that's where we started. As I said, that's pretty much how we began: looking at hedging a basket, or portfolios, of CDSs, trying to pick the right model and then minimizing the hedging error. When we don't know the model, we face the adversary, and we started building up the whole theory. We didn't check that — maybe in the future, maybe. But that's definitely a question for us.
I think, in the paper on the numerics of this — as you can see, the title mentions hedging — I'm not sure they did the hedging problem; I don't remember exactly what model they used. But I would expect, yes, something very, very close to the delta hedging — but I don't know.
>> We didn't check.
>> Thank you. Thank you.
>> Okay, let's say you can now applaud the speaker over Zoom, using the reactions button on the lower right. Yes.
>> Uh-huh. Yeah.
>> Okay. Okay. So for our SIAM series of talks that we've been working on, we can borrow this idea. Yeah, okay.
>> Right. Actually, the people listening to your talk might be interested in the SIAM talks too. So could you give a quick description of what those are?
>> Yeah, sure.
>> So, as Peter said, I'm part of — actually, together with Jeff, who is visiting you, and the colleague who gave the last seminar — the officers of the SIAM activity group on financial mathematics and engineering. So we started a series of virtual talks, because all of us are now stuck at home: essentially, online seminars given by people from the mathematical finance community.
The talks are happening every other Thursday at 1pm New York time. Registration comes through the SIAM mailing lists, but we're also distributing it through the Bachelier Finance Society and the private emails that are going around. If anybody is interested, of course, email me and I'll give you the link. But we also now have a webpage associated with the series, where you can find the information and how to register.
>> And who is the speaker?
>> I can tell you the next speaker. Who is the next speaker? It is Bruno Dupire — and I'm not talking about the very fancy title I received today. It would be on May 14, and it would be Bruno speaking. Then I can even tell you — you'll be the first to hear it: after that, on May 28th, which is the next talk, we'll actually have a panel on energy markets. So that would be fun, and we have people from both sides — and in between, Bruno is giving a talk.
>> May 14. Yes.
>> Okay. Actually, one more thing — oh my god.
>> Yep. Okay.
>> So I'll put the link in the chat and you can all get it.
>> Thank you, Stefan.
>> Oh, yeah. I was thinking to do the same — but perfect. Thank you. Thank you. Thank you.
>> Yeah. In the chat? Yes.
>> Yeah.
>> Okay. Okay.
Thanks so much, Igor. That was really nice. You did a great job controlling the amount of time in your talk — I guess that was robust adaptive control. We did our best, as nature, to slow you down, but we didn't succeed, so your strategy worked.
>> And so this is the conclusion of the BQE seminar for this spring; we will resume again in the fall. If you're hungry for talks, I encourage you to attend the SIAM talks — they're every other Thursday... well, Thursdays at one, I guess, since they alternate with the Bachelier ones. Is that right?
>> That's correct. At the same time, and they have the same format.
>> Yeah. Yeah.
>> So I suppose next Thursday, May seventh, there's going to be a Bachelier talk — probably on the same website as this one; we will be told about it. Okay. So it looks like we can continue with Thursday seminars, just at one o'clock — and that's terrific. Thanks, everyone.
>> And I think the recording will be saved. Okay. Thanks a lot. Thank you. Thank you.
>> Glad I could be there. Thank you, everyone. Bye.
