so we have been looking at the stability, the asymptotic stability, of integration schemes, that is, numerical solvers for the ordinary differential equation initial value problem. We looked at certain cases: explicit euler, implicit euler, and the trapezoidal rule, and we carried out an analysis of the conditions under which the difference between the true and approximate solutions will vanish, at least asymptotically.
so for example for the trapezoidal rule i considered a simple system dx/dt = ax with a < 0 and a given initial condition. when we use the trapezoidal rule we get a difference equation, and we finally found that the error dynamics is governed by e_{n+1} = [(1 + ah/2)/(1 - ah/2)] e_n plus a term driven by the true solution; the factor (1 + ah/2)/(1 - ah/2) is actually the pade approximation of e^{ah}. and we pointed out that this is nothing but a linear difference equation: if you stack the error and the true solution into a vector z_n and call the matrix b, then we have z_{n+1} = b z_n. we have looked at equations of this type earlier, very similar equations, except that the context was different: there we looked at iterative schemes, and no time was involved. here n represents the time instant; earlier it was e_{k+1} = b e_k, and the criterion was that the spectral radius of b should be strictly < 1.
now in this particular case what are the eigenvalues of b? to find them you have to solve det(lambda I - b) = 0, and you will get 2 eigenvalues. the spectral radius should be strictly < 1, and since the spectral radius is the maximum eigenvalue magnitude, the absolute value of each eigenvalue should also be strictly < 1.
so i need |(1 + ah/2)/(1 - ah/2)| to be strictly < 1. and anyway, since a < 0, we know that e^{ah} is strictly < 1, so this condition is satisfied because a < 0. it has to be satisfied in order that the error between the true and approximate solutions asymptotically goes to 0. so our e_n is nothing but x*_n - x_n, where x*_n is the true solution and x_n is the approximate solution using the trapezoidal rule; for the trapezoidal rule it turned out to be this factor.
we have derived similar conditions for the other cases. for explicit euler we got the condition |1 + ah| strictly < 1, and this tells you how to choose h: you are given a problem, so a is given to you, and you have to choose h so that it satisfies this condition. for implicit euler we got the condition |1/(1 - ah)| strictly < 1.
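To make the explicit euler condition concrete, here is a minimal sketch in Python (the language and the numbers are my own illustration, not from the lecture): for a = -10, |1 + ah| < 1 requires h < 2/|a| = 0.2.

```python
# Scalar test problem dx/dt = a*x with a < 0; the true solution decays to 0.
def explicit_euler(a, h, x0, n_steps):
    """Iterate the explicit Euler recursion x_{n+1} = (1 + a*h) * x_n."""
    x = x0
    for _ in range(n_steps):
        x = (1.0 + a * h) * x
    return x

a = -10.0
# |1 + a*h| < 1 requires h < 2/|a| = 0.2 here.
stable = explicit_euler(a, h=0.15, x0=1.0, n_steps=200)    # factor -0.5: decays
unstable = explicit_euler(a, h=0.30, x0=1.0, n_steps=200)  # factor -2.0: blows up
print(abs(stable), abs(unstable))
```

The same code with h just above 0.2 immediately shows the error growing instead of decaying.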
now the question is how do i take this analysis to a higher level, that is, to the multivariate case? we are still talking about a scalar case, and i want to graduate to the multivariate case. again, for the multivariate case, the situation where i know the true solution is when you have linear ordinary differential equations with a given initial condition; just now we looked at those kinds of multivariate equations. so i want to extend this analysis to that case. now we will have some more complications there, because you have this A matrix, and in general it is a full matrix; in the scalar case a was a scalar and life was simple. how do you deal with the case where A is a matrix? are these ideas clear, first of all, for the scalar case? likewise, if you go on deriving these things, say for the second-order runge kutta method, what will you get? i will just write down what you get: for the second-order runge kutta method you will get 1 + ah + a^2 h^2/2! in place of this factor, and correspondingly the matrix equation for the second-order runge kutta method.
and the condition will reduce to |1 + ah + a^2 h^2/2!| strictly < 1, and so on. can you guess now what it will be for the third-order runge kutta method? you can see the pattern: the first-order runge kutta method is explicit euler, and for second order we have looked at heun's rule, and so on. the third order will give you |1 + ah + a^2 h^2/2! + a^3 h^3/3!| strictly < 1. only if those conditions are satisfied does the approximation error asymptotically go to 0; otherwise it will not happen. so one can derive the conditions for the scalar case. is this clear?
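The pattern above can be checked numerically. This is a small sketch (the value z = -2.2 is my own choice for illustration) evaluating the stability factor |R(z)|, z = ah, for the first three Runge-Kutta orders:

```python
# Stability polynomial R(z), z = a*h: the error decays iff |R(z)| < 1.
# An order-p explicit RK method keeps the Taylor terms of e^z up to z^p / p!.
def stability_factor(z, order):
    terms = [1.0, z, z**2 / 2.0, z**3 / 6.0]
    return abs(sum(terms[: order + 1]))

# z = -2.2 lies outside the explicit Euler and second-order stable regions,
# but inside the slightly larger third-order region on the real axis.
for order in (1, 2, 3):
    print(order, stability_factor(-2.2, order))
```

Running it shows |R(-2.2)| > 1 for orders 1 and 2 but < 1 for order 3, so the stable step-size range depends on the method, not just on a.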
okay, let us graduate to the vector case. in the vector case there are one or two different ways we can introduce this. one way is the following: take the case where, at t = 0, x = x0 is my initial condition. then the true solution is x(t) = e^{At} x0, where A is an n x n matrix and x belongs to R^n, an n x 1 vector; you are given an initial condition, which is a vector, and the true solution at any time t is given by e^{At} x0. using the same trick that we did earlier for the scalar case, it is not difficult to show that, in the new notation, the true solution evolves according to x*_{n+1} = e^{Ah} x*_n. what about the explicit euler method, or the implicit euler method? explicit euler is x_{n+1} = x_n + h f_n, which is nothing but x_n + hA x_n, that is, (I + hA) x_n. now we are not dealing with scalars; we are dealing with vectors and matrices, and pre-multiplication versus post-multiplication matters: you cannot take it lightly whether x_n should come before or after; you have to be very careful when you write it. what would be the case for implicit euler? can you write it down, and try the trapezoidal rule as well?
for implicit euler what we are going to do is something like this: x_{n+1} = x_n + h f_{n+1}, which gives me x_n + hA x_{n+1}. so if i rearrange, taking the x_{n+1} term to the left-hand side, i get (I - hA) x_{n+1} = x_n, and therefore x_{n+1} = (I - hA)^{-1} x_n. mind you, i cannot write "divided by"; i have to write "pre-multiplied by the inverse", and that is the only correct way. similarly, for the trapezoidal rule, following the same steps i will get x_{n+1} = (I - (h/2)A)^{-1} (I + (h/2)A) x_n. now the question is: how do i do the analysis of the evolution of the error? i want to do an analysis which is very similar to the previous case, so i want to analyze e_n = x*_n - x_n and see how this error behaves.
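As a numerical sketch of the three one-step matrices just written down (using NumPy/SciPy; the test matrix is my own choice, not from the lecture), one can compare each scheme's update matrix with the exact propagator e^{Ah}:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])   # symmetric, eigenvalues -1 and -3
h = 0.1
I = np.eye(2)

M_explicit = I + h * A                                        # explicit Euler
M_implicit = np.linalg.inv(I - h * A)                         # implicit Euler
M_trap = np.linalg.inv(I - (h / 2) * A) @ (I + (h / 2) * A)   # trapezoidal
M_true = expm(A * h)                                          # exact propagator

for name, M in [("explicit", M_explicit), ("implicit", M_implicit),
                ("trapezoidal", M_trap)]:
    print(name, np.linalg.norm(M - M_true))
```

The trapezoidal (Padé) matrix matches e^{Ah} most closely, consistent with its higher order of accuracy.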
so let us see whether we can get some insights into this. we will visit the trapezoidal rule and implicit euler a little later; let us begin with the simplest one, the explicit euler method: x_{n+1} = (I + hA) x_n. now, if i subtract this from the true-solution recursion and rearrange, i will get e_{n+1} = (I + hA) e_n + [e^{Ah} - (I + hA)] x*_n, an equation which looks very similar to the scalar case. for the time being, well, how do you define e^{Ah}? the matrix exponential is defined as e^{Ah} = I + hA + h^2 A^2/2! + and so on. so if h is very small, which is anyway the operating condition for euler integration, then you can think that the first 2 terms will suffice to approximate e^{Ah}, and the terms afterwards can be neglected. so for the sake of analysis, let us assume that this difference e^{Ah} - (I + hA) is very small, just to get initial insights; we will do the formal analysis a little later, in the sense of combining the 2 difference equations into one matrix equation the way we wrote it before. right now, let us assume that this difference is negligible and that the (I + hA) e_n term dominates; let us look at that part of the equation.
now what i am going to do is use the idea of diagonalization. to do the analysis i am going to make 2 assumptions. the first assumption is that the eigenvectors of A are linearly independent: if A is diagonalizable, then i can write A = psi Lambda psi^{-1}, where psi is the eigenvector matrix, with the eigenvectors as its columns, and Lambda is a diagonal matrix with the eigenvalues lambda_1, lambda_2, ... appearing on the diagonal. the second assumption i am going to make is that Re(lambda_i(A)) < 0, that is, all the eigenvalues of A are in the left half of the complex plane. why do i talk about complex numbers? eigenvalues need not be real: given a matrix which has all real entries, i can have eigenvalues which are complex. so i am talking about systems which have all eigenvalues with negative real part. the advantage of this is that the solution of such systems will decay to 0 as t goes to infinity; you can show this very easily. so i am considering a special class of systems to get insights, and using this special class we are going to get insights into what is happening.
how does this help me? so let us go back and start looking at our error dynamics: e_{n+1} = (I + hA) e_n. i am going to write I as psi psi^{-1}, since psi psi^{-1} is I, so I + hA = psi psi^{-1} + h psi Lambda psi^{-1}, which is nothing but psi (I + h Lambda) psi^{-1}, and e_{n+1} = psi (I + h Lambda) psi^{-1} e_n. is everyone with me on this? psi is the eigenvector matrix, and Lambda is the diagonal matrix with the eigenvalues on the diagonal. what will I + h Lambda be? since Lambda is a diagonal matrix, I + h Lambda is also a diagonal matrix, with 1 + h lambda_1, 1 + h lambda_2, ..., 1 + h lambda_n appearing on the main diagonal. now this error equation is the same as something that we have encountered earlier.
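One can verify numerically that the eigenvalues of I + hA are exactly 1 + h lambda_i (a quick sketch; the matrix is an arbitrary illustration of mine):

```python
import numpy as np

A = np.array([[-3.0, 1.0],
              [0.0, -1.0]])               # upper triangular: eigenvalues -3, -1
h = 0.2
lam = np.linalg.eigvals(A)                # eigenvalues of A
mu = np.linalg.eigvals(np.eye(2) + h * A) # eigenvalues of I + hA
print(np.sort(mu), np.sort(1.0 + h * lam))  # the two sets coincide
```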
when will you say that the error will asymptotically go to 0? what is the condition? the spectral radius of I + hA should be strictly < 1; equivalently, each eigenvalue of this matrix should have magnitude strictly < 1. now the question is: what are the eigenvalues of I + hA? if you look here, what we wrote is nothing but the diagonalization of this matrix. what are its eigenvectors? they are the same as the eigenvectors of A: the columns appearing in psi are the eigenvectors of I + hA, and they happen to be the same as the eigenvectors of A; we have just shown that. and when you do diagonalization, what appears on the diagonal matrix? the eigenvalues. so the 1 + h lambda_i must be the eigenvalues of I + hA. so my condition for stability translates to |1 + h lambda_i| strictly < 1 for i = 1, 2, up to n, where the lambda_i are the eigenvalues of A. for every i, for every eigenvalue, this should hold; this is the condition under which the error will asymptotically go to 0.
we are looking right now at the approximate error dynamics; we have neglected one component. now, the condition that we had earlier is a subset of this condition, because earlier we considered a to be a real negative number; now i am expanding it and saying that we may get complex eigenvalues, and in this case the stability condition translates to |1 + h lambda_i| < 1, where the lambda_i are the eigenvalues of A. so what is typically done is to draw, in the complex plane, the stability envelope, the region of values for which you get stable error dynamics. in this particular case it is a circle of radius 1 centred at -1 in the lambda h plane, passing through -2 and 0 on the real axis. only if the value lambda_i h falls in this region is the condition satisfied, and this is called the stability envelope for the explicit euler method. likewise, one can derive stability envelopes for implicit euler, for the trapezoidal rule, and so on.
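The envelope |1 + lambda h| < 1 is a unit disk centred at -1 in the lambda h plane; a tiny membership check (my own sketch) makes the geometry explicit:

```python
# Explicit Euler is stable iff z = lambda*h lies in the disk |1 + z| < 1,
# which touches 0 and -2 on the real axis.
def in_euler_region(z):
    return abs(1.0 + z) < 1.0

print(in_euler_region(-1.0))          # centre of the disk: True
print(in_euler_region(-2.1))          # beyond -2 on the real axis: False
print(in_euler_region(-1.0 + 0.9j))   # complex eigenvalue inside: True
print(in_euler_region(1.0j))          # purely imaginary: False (outside)
```

The last line shows why explicit euler struggles with purely oscillatory modes: no h > 0 puts a purely imaginary lambda h inside the disk.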
what will be the case for implicit euler? how will you analyze it? in implicit euler you have, approximately, because we are again neglecting one small part, e_{n+1} = (I - hA)^{-1} e_n. how will you write (I - hA)^{-1}? using the same trick: (I - hA)^{-1} = (psi psi^{-1} - h psi Lambda psi^{-1})^{-1}, and i want to take psi out on the left and psi^{-1} out on the right. do you remember the rule for the inverse of a product of 2 matrices? what is (AB)^{-1}? it is B^{-1} A^{-1}. so if we use the rules properly, you will see that this turns out to be psi (I - h Lambda)^{-1} psi^{-1}. and (I - h Lambda)^{-1} is very easy to compute: it is the inverse of a diagonal matrix, which is just 1 over each diagonal element. so in this particular case (I - h Lambda)^{-1} turns out to be a diagonal matrix with off-diagonal elements 0 and diagonal elements 1/(1 - h lambda_1), 1/(1 - h lambda_2), and so on. what is the stability condition? in this particular case the stability condition for implicit euler turns out to be |1/(1 - h lambda_i)| strictly < 1, and then you can show that when the eigenvalues of A have negative real part, the region of stability is nothing but the entire left half plane: you get it for any value of h. that is not the case for explicit euler; for explicit euler there is a very small region.
with explicit euler you have to be very careful when you choose the integration step size; with implicit euler, if you make a slight error, or if you choose a slightly larger integration step size, it will not give you asymptotically wrong results. this is more to get insights, and the insight that we get here is that even for the multivariable case, if you use the explicit euler method, you still have to be very careful.
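A quick sketch contrasting the two schemes on a system with one fast eigenvalue (my own illustrative numbers): h = 0.1 violates the explicit euler bound for lambda = -50, while implicit euler still decays.

```python
import numpy as np

A = np.diag([-50.0, -1.0])   # one fast, one slow eigenvalue
h = 0.1                      # explicit Euler factor for -50 is |1 - 5| = 4
I = np.eye(2)
x_exp = np.array([1.0, 1.0])
x_imp = np.array([1.0, 1.0])

for _ in range(100):
    x_exp = (I + h * A) @ x_exp                 # explicit Euler: diverges
    x_imp = np.linalg.solve(I - h * A, x_imp)   # implicit Euler: decays
print(np.linalg.norm(x_exp), np.linalg.norm(x_imp))
```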
first of all, how do you choose h? it depends upon the eigenvalues of the A matrix. and then, of course, you should choose the most conservative value: you will get conditions on 1 + h lambda_1, 1 + h lambda_2, and so on, so many inequalities for i going from 1 to n, and the most conservative value of h which comes out is what you should choose. because it may happen that you choose h based on lambda_1, and that may be a wrong h for lambda_2. so you have to choose the most conservative one, the smallest h that satisfies all the inequalities; that is very important. you do not need to worry so much when it comes to implicit euler; implicit euler is a relatively much more stable algorithm than explicit euler.
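For real negative eigenvalues the inequality |1 + h lambda_i| < 1 simplifies to h < 2/|lambda_i|, so the binding bound comes from the largest-magnitude eigenvalue; a sketch with made-up eigenvalues:

```python
import numpy as np

lam = np.array([-0.5, -4.0, -100.0])  # hypothetical real eigenvalues of A
h_bounds = 2.0 / np.abs(lam)          # per-eigenvalue bounds h < 2/|lambda|
h_max = h_bounds.min()                # the most conservative one wins
print(h_bounds, h_max)
```

Here lambda = -0.5 alone would allow h up to 4, but the fast mode at -100 forces h below 0.02.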
for the trapezoidal rule, can you guess? if i do a similar manipulation, the condition will be |(1 + h lambda_i/2)/(1 - h lambda_i/2)| < 1; it turns out to be very similar.
well, so i said that the exact analysis would be a little more involved: you have to consider the entire matrix equation. i will just write it directly for implicit euler: [e_{n+1}; x*_{n+1}] = B [e_n; x*_n], where B has (I - hA)^{-1} in the top-left block, e^{Ah} - (I - hA)^{-1} in the top-right block, a null matrix in the bottom-left block, and e^{Ah} in the bottom-right block. well, you can show that if all eigenvalues of A have negative real part, then x*_n will asymptotically go to 0; this is not difficult to show. so ultimately it will boil down to the eigenvalues of this matrix: if you call this matrix B, you have to look at the roots of det(lambda I - B) = 0, which, because of the block-triangular structure, turn out to be nothing but the eigenvalues of the diagonal blocks. if the eigenvalues of A have negative real part, the e^{Ah} block is always stable, so you have to worry about the other block, and that block will change depending on whether you are using implicit euler, explicit euler, runge kutta, whatever. if you are using second-order runge kutta it will be I + hA + h^2 A^2/2!; if you are using third-order runge kutta it will change again, and so on. so how you approximate e^{Ah} will change depending upon the method that you are using, but the way of analysis finally remains the same.
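The block-triangular structure can be checked numerically; this sketch (with an arbitrary stable A of my choosing) builds the combined matrix for implicit euler and confirms that its eigenvalues are just those of the two diagonal blocks:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])
h = 0.5
I = np.eye(2)
M = np.linalg.inv(I - h * A)   # implicit Euler one-step matrix
E = expm(A * h)                # true-solution propagator

# [e_{n+1}; x*_{n+1}] = B [e_n; x*_n], with B block upper triangular.
B = np.block([[M, E - M],
              [np.zeros((2, 2)), E]])
eig_B = np.sort(np.linalg.eigvals(B).real)
eig_blocks = np.sort(np.concatenate([np.linalg.eigvals(M),
                                     np.linalg.eigvals(E)]).real)
print(np.allclose(eig_B, eig_blocks))  # eigenvalues come from the blocks
```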
in the literature you can find these stability envelopes for different methods, and that way you can compare different methods: for which methods is it easier to choose the integration interval, and for which is it difficult. so one can actually compare methods on this basis. now for the last part: this is all fine, the spectral radius should be < 1 and so on, but what about real problems? real problems are never linear. why do we use linearity? because for linear ordinary differential equations we know the true solution.
the true solution is e^{At} x0, and that is not the case for a general nonlinear differential equation. so what do i do? i can do a local analysis using a taylor series approximation: i can extend this idea to the nonlinear case by doing repeated local linearization. again, the stability analysis of these methods is a fairly involved topic, and i cannot do justice to it in a course in which i have to pack many things. the idea is to sensitize you that there exists something called stability analysis and stability envelopes, and if you really get doubts about whether a method is performing well, or whether you are choosing the step size h correctly, go back and look at the stability envelopes; those are available in the literature. and if you want a relative comparison of different methods, one basis could be looking at these stability envelopes.
so if i have a general nonlinear differential equation dx/dt = f(x(t)), where x belongs to R^n and f is an n x 1 function vector, then the way i could do the analysis is this: suppose i am currently at x(t_n) = x_n. locally i can write dx/dt approximately = f(x_n) + (dou f/dou x)|_{x = x_n} (x - x_n). rearranging the right-hand side with the notation that we have used earlier, this is (dou f/dou x)|_{x = x_n} x + [f_n - (dou f/dou x)|_{x = x_n} x_n]; since x_n is known, f_n is known, this matrix is known, and the bracketed term is a constant vector. so your equation becomes similar to a linear differential equation dx/dt = Ax, where my A is this matrix, the local jacobian. i can look at the eigenvalues of the local jacobian and do the analysis.
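As a sketch of local linearization (the nonlinear f below is a hypothetical example of mine, not from the lecture), one can form the jacobian by finite differences and inspect its eigenvalues at the current point:

```python
import numpy as np

def f(x):
    """A hypothetical nonlinear right-hand side dx/dt = f(x)."""
    return np.array([-x[0] + x[0] * x[1],
                     -2.0 * x[1] - x[0] ** 2])

def jacobian(f, x, eps=1e-7):
    """Finite-difference approximation of the Jacobian df/dx at x."""
    n = x.size
    J = np.zeros((n, n))
    fx = f(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - fx) / eps
    return J

xn = np.array([0.1, 0.2])
eig_local = np.linalg.eigvals(jacobian(f, xn))
print(eig_local)  # local eigenvalues guide the step-size choice at this point
```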
but the problem with the eigenvalues of the local jacobian is that the eigenvalues will change as you move in time. so suppose you happen to choose the integration step size based on the jacobian at time 0, and the jacobian eigenvalues change drastically as you advance in time: what is giving you stable results in one region may give you unstable results in another region. that is where the fix of variable step size comes in handy: when you do not know anything, use a variable step size method. but if you want to understand, if you want to get insights, you could look at the eigenvalues of the local jacobian, and if they do not change too much in the region in which you are integrating, you could choose one fixed step size and use it for solving your problem at hand. so the way to extend this analysis to the nonlinear case is through repeated local linearization. each jacobian will have different eigenvalues, but typically the eigenvalues will be a smooth function of the state and will change smoothly. still, they can change, and if they change drastically over a period of time, then choosing the integration step size becomes tough; in such cases the fix that we described right in the beginning, variable step size integration, is the best.
so matlab's most popular solvers, ode45 and ode23, are variable step size solvers. when you give it an interval, internally it will keep subdividing the step, and it will ask you for an accuracy; it keeps finding an h for which you get the desired accuracy, and it proceeds. so i presented all these things because you should know how to analyze these methods.
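In Python's SciPy, the counterpart of such variable-step solvers is solve_ivp (a sketch with an arbitrary fast-decaying test problem of mine):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The solver picks the step sequence itself to meet the requested tolerances.
sol = solve_ivp(lambda t, x: -50.0 * x, (0.0, 1.0), [1.0],
                method="RK45", rtol=1e-6, atol=1e-9)
print(sol.success, sol.t.size)            # accepted steps chosen adaptively
print(abs(sol.y[0, -1] - np.exp(-50.0)))  # error vs. the true solution
```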
you should know how to have a critical look at these methods. practically, when you actually solve problems later on, you might use variable step size, but if you are getting stuck somewhere, you probably should look at the eigenvalues. and when i describe differential algebraic systems in my next class, these eigenvalues will play a very crucial role.
we have something called stiff systems: systems in which some eigenvalues are large and some eigenvalues are small. if some eigenvalues are of order 10^-5 and some are 1000 or 10000, and in a large system of differential equations this can happen, how do you choose the integration step size: with respect to 10^-5 or with respect to 10000? in such cases you get what are called differential algebraic systems: you approximate certain derivatives to be 0 and only solve for certain other derivatives, and so on. we will look at differential algebraic systems later, but you now have the notion of stiff systems. stiffness arises from the eigenvalues of the jacobian: we look at the ratio of the maximum to the minimum magnitude of the real parts of the eigenvalues of the jacobian, and that gives us what is called the stiffness ratio.
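The stiffness ratio can be computed directly from the jacobian's eigenvalues (a minimal sketch with a made-up diagonal jacobian):

```python
import numpy as np

J = np.diag([-1.0e4, -1.0, -1.0e-1])   # hypothetical stiff Jacobian
re = np.abs(np.linalg.eigvals(J).real)
stiffness_ratio = re.max() / re.min()
print(stiffness_ratio)  # 1e5: widely separated time scales -> stiff system
```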
if a system is stiff, you have to use a stiff differential equation solver; matlab will give you a stiff solver called ode15s, where the s stands for stiff. so if you have this eigenvalue disparity, you have to go for stiff solvers, and you should know about this business of stiff solvers. so basically this is touching the tip of the iceberg; there is a lot more to it, but you can look at the references i have given and try to understand more about this.
