We would like to continue with our analysis of the oscillator and related problems. But before I do that, let us do one more problem with a general potential, where I would like to introduce the concept of a stable as well as an unstable fixed point, or critical point, in the same potential. So I am going to look at a potential which has a maximum as well as a minimum. As you know, the maximum is going to be an unstable equilibrium point and the minimum a stable equilibrium point, and the question is what kind of phase portrait you get when both equilibrium points are present in the same potential.
So let us simply take an arbitrarily shaped potential V(x) which has a minimum as well as a maximum, perhaps something of this kind. Say it has a maximum here, a minimum there, and goes off in this fashion. Of course, this point here is an unstable equilibrium point and this point here is a stable equilibrium point, and we would like to know what the phase portrait is going to look like. You do not really need to know the actual equation for this potential in order to draw the phase portrait, at least qualitatively.
But let us see whether you can write a formula down for this V(x). The simplest formula would be as follows: it has a simple minimum here, roughly parabolic in this region, so it is quadratic at that point. So let us say (1/2) a x squared; and then it goes off to plus infinity as x goes to plus infinity, and to minus infinity on this side, for which you need an odd term. The next logical term is therefore a cubic: let us put in + (1/3) b x cubed, where a and b are positive. That would be a simple formula for a potential with this qualitative shape. And the question is what happens to a particle moving in such a potential; what does the phase trajectory look like? Let us draw the phase trajectory right here, in this space itself: here is x, and here is the velocity v.
And we have, as always in these conservative systems, (1/2) m v squared + V(x) = E, the energy of the system. This real number E, as you can see, can take values from minus infinity to plus infinity in this problem, because V(x) does so.
What do the phase trajectories look like? Well, it is evident from what we said yesterday that if the energy is sufficiently low, if this negative value is your total energy, then the system cannot penetrate into this region at all, and must be restricted to the classically accessible region to the left of this point. And as you know, if you come all the way from minus infinity you can go up till here, then you roll back down, and the trajectory would perhaps look something like this. That is about it; that is the full trajectory. As we increase the energy you penetrate further and further to the right, till of course you reach this point here, corresponding to the maximum of the potential. Then you would have a trajectory which does this: it asymptotically reaches that point, or asymptotically flows away from that point. This happens at a critical value of the energy, which I will call E sub s, s for separatrix, because it is going to separate two different kinds of motion. It is also possible that you have oscillatory motion in this region here.
And of course, if you start here, it is going to take you a long time to fall down into this potential well, go up till here, and then crawl back up here. Because as you come back, the restoring force goes to 0; therefore it takes longer and longer to reach this point. This would correspond to a trajectory which does this, in this direction, at intermediate values of the energy. So this is for E less than 0. At an intermediate value of the energy, say between 0 and E sub s, there are two kinds of motion: you could have motion in this region, which would be a trajectory that does this, but you could also have oscillatory motion in this region here, which would correspond to something like this. I made a mistake; I exaggerated this, because if you are here at this point you can never go beyond that point. So it is clear that I need to draw this more carefully: here is that point, and the trajectory comes back.
So notice that for 0 less than E less than E sub s you have both unbounded motion here, on the left-hand side, as well as periodic motion in this well. The moment you reach E sub s you reach the separatrix; you reach the end point here. A little greater energy, a little higher than E sub s, would imply that you can actually come down, cross this barrier, go all the way up till there, and then fall down. Because when you come back here you still have that much energy, and you zip past and fall off here. That would mean the trajectory does this: it comes here, goes all the way round that point, and then goes off to infinity. And this is for E greater than E sub s. This thing here corresponds exactly to E sub s, as do these two trajectories: this trajectory as well as this trajectory correspond to E sub s precisely. So now the separatrix separates unbounded motion from another set of trajectories, of which one part could be unbounded while the other part corresponds to periodic motion.
And it is clear from this figure that this separatrix is really a set of three different trajectories. One of them asymptotically flows in towards this unstable fixed point, the saddle point; the other one flows away from it; and the third one starts off close to it and asymptotically tends back to it. Unlike the previous case, where we had four quite separate asymptotes, here one of them has curved back on itself: it starts off as part of the unstable direction here (and I will explain what I mean by an unstable direction: things flow away from it), but then, when it comes back here, it flows towards the point again. It forms a kind of closed loop here, asymptotically, because these trajectories never actually touch this point, as you can realize. This kind of orbit is called a homoclinic orbit, or a homoclinic cycle, and it is going to play a big role. So this is the qualitative phase portrait for this potential; of course, you can justify it by solving the equations of motion, and so on. But I want you to notice two things in particular. One of them is that wherever the horizontal axis is intersected, the trajectory appears to do so at right angles.
And this is indeed so, because if you look at the equations of motion, you have x dot = v and v dot = minus (1/m) V prime of x; those were the equations of motion. Therefore the slope of the phase trajectory, given by dv/dx, is obtained by just dividing one equation by the other, and it equals minus (1/m) V prime of x divided by v. Whenever the trajectory intersects the x axis, v is 0, and therefore the intersection is at right angles: the slope is infinite, unless V prime of x happens to vanish as well, in which case the slope is indeterminate and you have to take a limit. And that is precisely what happens at these points: notice that these crossings are not at right angles, and the reason is that at this point V prime of x is also 0.
Therefore you have to calculate the limit of the slopes as you tend towards this point, and typically they would not be right angles. Those crossings would not be at right angles; but at all these other points the intersection actually is at right angles, except on the separatrix trajectory. And again, I would call this point a centre, because there are small oscillations about it; and this point I would call a hyperbolic point, or a saddle point, because locally the whole thing looks like a set of hyperbolas around it.
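Incidentally, you can generate this entire phase portrait numerically by plotting level curves of the energy. The sketch below (in Python) assumes the illustrative values m = a = b = 1, which put the centre at x = 0 and the saddle at x = -1 with E sub s = 1/6:

    # Phase portrait of V(x) = x^2/2 + x^3/3, taking m = a = b = 1 (illustrative)
    import numpy as np
    import matplotlib.pyplot as plt

    x, v = np.meshgrid(np.linspace(-2.5, 1.5, 400), np.linspace(-2.0, 2.0, 400))
    E = 0.5 * v**2 + 0.5 * x**2 + x**3 / 3.0     # (1/2) m v^2 + V(x)

    Es = 1.0 / 6.0                               # saddle energy E_s = V(-1)
    plt.contour(x, v, E, levels=[-0.2, 0.05, Es, 0.4])  # E<0, 0<E<Es, E=Es, E>Es
    plt.plot(0, 0, 'o')                          # the centre (stable point)
    plt.plot(-1, 0, 'x')                         # the saddle (unstable point)
    plt.xlabel('x'); plt.ylabel('v')
    plt.show()

The E = E sub s contour traces out the separatrix, including the homoclinic loop discussed above.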
So here is a problem where you have a stable as well as an unstable point. And you can see that you can generalize this; you can have many more of these, and we will look at some problems where you do. It is immediately obvious that in these simple problems a maximum must be followed by a minimum; you cannot have two maxima of a curve without a minimum in between. Therefore stable and unstable equilibrium points must alternate. This is abundantly clear, because it is very difficult to see how you could draw the trajectories if you had two centres next to each other with no other singular point in between; it is not possible. So already this gives us some inkling of what the general situation would look like. Having looked at the case of two equilibrium points, one stable and one unstable, in the same problem, let us now look at some more cases. Let us look at a case where we include dissipation, which we have not done so far at all, and will not for quite a while in the formal development; but we may as well look at it in a very simple example, the damped simple harmonic oscillator. So let us look at what happens if I take just an ordinary simple harmonic oscillator and put damping, or friction, into the problem. What does this analysis do for us?
Well, the damped simple harmonic oscillator looks like this. All of you know that the equation is going to be m x double dot (force equals mass times acceleration) equal to minus m omega naught squared x. Let me call omega naught the natural frequency of the oscillator in the absence of friction, just so that I keep the notation straight; and then I would like to include the effect of friction.
I pretend that this oscillator is moving in a viscous fluid, for instance. There are many ways of modeling friction. Dry friction, for instance, would correspond to what happens to this object if I push on it: it does not move till a certain critical stress is reached, and after that threshold it starts moving. But in a fluid this is not the way friction operates; in a fluid the friction would typically be proportional to the instantaneous velocity, and directed opposite to it.
It is like a person moving in a crowd: the faster you try to move through the crowd, the more you get buffeted in front, and therefore you have a retarding force on you. This is exactly that situation: you have a force proportional to your velocity; the faster you try to go, the greater the retarding force. Therefore it is reasonable to assume, in the simplest instance, that this damping force is of the form minus m times a constant times x dot itself. So let us write the constant of proportionality as gamma and put in an x dot.
I do this because I know that the mass m cancels out of the equation, and I know this term is a length divided by time squared, while this coefficient here is 1 over time squared; so this ensures that gamma has the dimensions of inverse time. That is why I put the m in there. Now, writing it the way we are used to, you know the solution to this equation; it is an ordinary second-order differential equation: x double dot + gamma x dot + omega naught squared x = 0.
Well, this is not quite so trivial to solve as the simple harmonic oscillator, where the solutions were cos or sin of omega naught t. What does the solution look like? It has exponential damping, with a coefficient proportional to gamma; we could write this solution down. But you really need to know how big gamma is relative to omega naught; they both have dimensions of one over time.
And you need to know which is bigger. If gamma exceeds a certain critical value, which in this case is going to be 2 omega naught, then the damping dominates over the oscillations; whereas if omega naught is bigger than half gamma, then the situation is that of an underdamped oscillator.
So we have three situations, depending on whether we have an underdamped, overdamped or critically damped oscillator. We would like to analyze all of them in one shot; we simply want to know the qualitative behavior of the phase trajectories. So let us do that. Let us rewrite this as x dot = v and v dot = minus omega naught squared x minus gamma v; that is a set of two equations. Yes, please?
It does not need to have a mass dependence. I put some constant here multiplying the velocity; I extracted an m and called the rest of it gamma. So this gamma has dimensions of inverse time, the same as omega naught; otherwise the algebra gets messy. It is there purely for dimensional reasons, without any loss of generality.
We are not talking about more complicated questions like: why should the damping be linear in the velocity, and why not the square of the velocity, or the cube of the velocity, and so on? That is entirely possible. In fact, if you ask what the drag force is on an object moving in air, like an airplane, then the drag force is proportional to the velocity only for sufficiently small velocities; as the velocity increases, it becomes highly nonlinear, and it is a much more complicated problem.
So if I plot, for example, the speed here versus the drag force F drag, what does this curve typically look like? Well, it starts off linearly, but pretty soon becomes quadratic, then higher powers, and so on; then it increases very steeply, and then it comes down once you break the sonic barrier. As soon as you hit the speed of sound, the drag force comes down dramatically; but before that it increases extremely rapidly. That is a highly nonlinear situation, and we are not going to look at that; we are just going to look at this little region, where the damping is proportional to the first power of the velocity.
So here is our set of equations, and I prefer to work with this set of equations for several reasons. One of them is that it is a set of first-order differential equations, as opposed to a second-order differential equation, which is always harder to solve. And the advantage, the physical insight you get out of a set of first-order differential equations, is that to specify a solution uniquely you need to specify the initial conditions; that is it.
I just have to tell you what x and v are at t = 0; I do not have to tell you what v dot was, and so on, etcetera. I just have to give you two initial conditions; these are two coupled linear equations, and the problem is straightforward to solve. It looks a little more formidable in this language, and you might think it better to eliminate v, write it in second-order form and then try to solve it; but it is actually easier to solve in this format, as we will see. Before we solve the equations, though, since we are lazy, let us ask in physical terms: what do I expect? What can I expect of this?
So let me do that as part of a general framework. I would like to write the following: let me define a vector, a two-dimensional object, with components x and v. They have different physical dimensions; we had better take care of that and be careful. Let me call this object underscore x; it is not a coordinate, it is a point in the (x, v) phase space.
And then this set of equations is nothing but d/dt of this vector x equal to some 2 by 2 matrix acting on (x, v). Let me call the matrix L; it acts on the vector x once again, and L has entries 0 and 1 in the first row, and minus omega naught squared and minus gamma in the second. It is advantageous, when you have a set of coupled linear equations, differential or otherwise, to write everything down in matrix form; then of course it is very easy to solve such equations.
So this equation looks very, very simple as it stands, and what is the formal solution of this equation? Well, suppose x were just a number, an ordinary quantity, not a matrix, not a column vector; then of course the solution would be e to the power L t. The same goes through when L is a constant matrix: there is no x dependence in it, and no t dependence in it. So this implies that x(t) is in fact equal to e to the L t; but I have to impose initial conditions, I have to tell you what x(0) is. So x(0) and v(0) have to be given, have to be specified; let me call this vector x(0), by definition.
So where does this vector x(0) appear; should it be on the right-hand side or the left-hand side? It should be on the right-hand side, because we would like the 2 by 2 matrix to act on a column vector to give another column vector. So we have to be careful how we write this, because remember, this is a column vector and that is a 2 by 2 matrix; we cannot write it in the other order, as you could for ordinary functions.
That is the formal solution. But now I ask: what is the meaning of this e to the power L t? What is the meaning of the exponential of a matrix? I define this exponential by its power series. And since we are going to play around with these things, we might as well have them properly defined.
So, e to the power L t: here t is just a scalar parameter, running from zero to infinity, just a real number, but L is a 2 by 2 matrix, and the definition of this object is the power series. What is the first term in the power series? The identity matrix; plus L t; plus, as usual, L squared t squared over 2 factorial; plus etcetera, all the way to infinity. So let us write this as a summation from n = 0 to infinity of (L t) to the power n over n factorial. L to the power n is also a 2 by 2 matrix.
And now we are faced with the apparently formidable task of finding all the powers of L, adding them up and summing the series. You do not have to do that; but even before that, I need to know whether the series makes sense. Do you think this converges? After all, it is an infinite series; does it even converge? Well, let us go back (we have our channel two here) and ask.
What is the definition of e to the power x? It is the summation from n = 0 to infinity of x to the n over n factorial. And when does it converge? For all values of x; not just values less than 1 in magnitude, but all finite real values of x. The reason is that although x to the power n increases rapidly when x is large (if x is a million, it increases like a million to the power n), the n factorial in the denominator increases like n to the power n, faster than any such power.
So the denominator dominates, and the series converges for all x. The right way to look at it is to ask when e to the power z converges, where z is a complex number; that is the way to look at power series, with the argument always a complex number. When does that converge? For all z? If z is infinite, will it converge? No, of course not.
So it converges for all finite z: as long as the magnitude of z is finite, everywhere in the complex plane, this series converges, and it converges absolutely. So much so that I can differentiate it term by term, integrate it term by term; I can do all kinds of things to it. But now we are faced with a matrix, and I have to decide whether the series converges or not. For a complex number I just said the magnitude should be finite, and that is fine; but what about a matrix? How do I measure the size of a matrix?
One way to do this would be to look at the eigenvalues: if all its eigenvalues are finite, this would certainly be true. The other way would be to define a size for this matrix, called the norm of the matrix.
I am not going to go into the technicalities; let me just state that as long as all the elements of L are finite, this series converges, no problem at all: you are guaranteed that all the eigenvalues are finite, and so on. There is no difficulty. This is one of the important properties of the exponential series, and it plays a very fundamental role in all of analysis. In this case there is no difficulty with convergence; you can close your eyes and go ahead and do this.
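In fact, for our 2 by 2 matrix L you can sum this series on a computer and check it against a library routine. A minimal sketch, with omega naught = 1 and gamma = 0.5 as purely illustrative values:

    # e^{Lt} summed term by term, compared against scipy's matrix exponential
    import numpy as np
    from scipy.linalg import expm

    w0, gamma, t = 1.0, 0.5, 2.0                 # illustrative values
    L = np.array([[0.0, 1.0], [-w0**2, -gamma]])

    term = np.eye(2)                             # the n = 0 term: the identity
    series = term.copy()
    for n in range(1, 30):                       # accumulate (Lt)^n / n! iteratively
        term = term @ (L * t) / n
        series += term

    print(series)                                # truncated power series
    print(expm(L * t))                           # agrees to machine accuracy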
Of course, the next job is to find this quantity and sum the series; but we are spared this problem. Because what happens, finally, if I take this L, raise it to all powers and do the summation, is that I get an answer which ultimately depends only on the eigenvalues. You could imagine, for instance, that I diagonalize this L by a similarity transformation, if I could do so; we will come to when you can do this.
So I take this matrix L and I apply a transformation on either side, S and S inverse, and it gets rewritten in diagonal form: S L S inverse becomes D, and the elements of the diagonal form are of course the two eigenvalues of L, which I call lambda 1 and lambda 2. If I could do this, then it is immediately clear that S L to the power n S inverse equals D to the power n, which is the diagonal matrix with lambda 1 to the n and lambda 2 to the n on the diagonal.
You immediately get this result, because for L squared, for example, I write S L S inverse S L S inverse, inserting S inverse S between the two factors of L, and so on. So this is immediately obvious; and in fact you can go a little further and discover that S e^(L t) S inverse equals the diagonal matrix with e^(lambda 1 t) and e^(lambda 2 t) on the diagonal. Any function of L, once you diagonalize, becomes just a diagonal matrix with the corresponding function of each eigenvalue in the diagonal entries. Then, to find e^(L t) itself, all you have to do is undo this: I put an S inverse on this side to get rid of that S, and an S on the other side to get rid of that S inverse.
And what is the final outcome? It says that e^(L t) acting on x(0) does not do anything very mysterious: both x(t) and v(t) are just linear combinations of e^(lambda 1 t) and e^(lambda 2 t), some suitable linear combinations.
So, for example, in general x(t) would look like A e^(lambda 1 t) + B e^(lambda 2 t), and v(t) like a similar combination, where the constants A, B, C, D depend on x(0) and v(0) and numerical factors. The time dependence, after this whole rigmarole, is essentially this; and this is exactly what you write down when you take the second-order differential equation for x, x double dot plus the rest, and say: let us assume a trial solution of the form e^(lambda t). You then discover that lambda 1 and lambda 2 are the eigenvalues of this matrix; so it is exactly what we have done.
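Numerically, this diagonalization route is only a few lines; here is a sketch with the same illustrative omega naught = 1 and gamma = 0.5:

    # e^{Lt} x(0) built from the eigenvalues and eigenvectors of L
    import numpy as np
    from scipy.linalg import expm

    w0, gamma, t = 1.0, 0.5, 2.0
    L = np.array([[0.0, 1.0], [-w0**2, -gamma]])

    lam, V = np.linalg.eig(L)                    # columns of V are eigenvectors
    eLt = V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)  # undo the diagonalization

    x0 = np.array([1.0, 0.0])                    # x(0) = 1, v(0) = 0
    print((eLt @ x0).real)                       # x(t), v(t): combinations of e^{lam t}
    print(expm(L * t) @ x0)                      # the same answer, done directly

For underdamped parameters the eigenvalues come out as a complex pair, so the tiny imaginary round-off is discarded with .real; the physical answer is of course real.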
So this justifies that kind of thing: finally you are going to get this behavior, nothing more than that. There is one exception; when would this not be the solution? Well, we will come to that; it is a separate good point. The point is that you may not be able to diagonalize this matrix. But I do not need to do the diagonalization; I just need to find the eigenvalues of the matrix, and that I can do whether the matrix can be diagonalized or not. If the eigenvalue is repeated, if you have just one eigenvalue with lambda 2 equal to lambda 1, then what does the solution look like? Yes: the solution is no longer a linear combination of e^(lambda 1 t) and e^(lambda 2 t), but a linear combination of e^(lambda t) and t e^(lambda t). And if L is a 3 by 3 matrix, or an n by n matrix,
with an eigenvalue repeated r times, then for that eigenvalue the linearly independent fundamental solutions are e^(lambda t), t e^(lambda t), t squared e^(lambda t) over 2 factorial, and so on, up to t^(r minus 1) over (r minus 1) factorial times e^(lambda t). Well, that is the general case; we will look at the case of repeated roots later on, and nothing much happens. But this is what the solution looks like.
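For the record, the repeated-root case can be checked symbolically; a sketch using sympy, with critical damping gamma = 2 omega naught so that the root lambda = minus omega naught is repeated:

    # Critically damped oscillator: repeated eigenvalue gives (C1 + C2 t) e^{lambda t}
    import sympy as sp

    t = sp.symbols('t')
    w0 = sp.symbols('omega_0', positive=True)
    x = sp.Function('x')

    # x'' + 2*w0*x' + w0^2*x = 0, i.e. gamma = 2*omega_0
    ode = sp.Eq(x(t).diff(t, 2) + 2*w0*x(t).diff(t) + w0**2*x(t), 0)
    print(sp.dsolve(ode))   # x(t) = (C1 + C2*t)*exp(-omega_0*t)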
Now, I have a little puzzle here: we started by saying I give you x(0) and v(0), and we end up with solutions which look like this; but there are four arbitrary constants here, A, B, C, D. I give you only two pieces of information; how am I going to determine 4 constants from 2 pieces of information? I use the equations themselves: the differential equations are also valid at t = 0. So I use the fact that x dot = v and v dot = minus omega naught squared x minus gamma v.
If I look at this set of equations at t = 0, then x dot of 0 is of course v(0), while v dot of 0 is given by this second equation; and I plug these into the solutions. So from these equations I form 4 conditions and I can find all 4 constants, provided I use the differential equations themselves. We will look at examples. So there is no conflict here; this is sufficient, you actually find all the constants, and the solution looks like that.
Now, if the solution looks like that, what does the phase trajectory look like? That was the aim, after all; our ambition was simply to draw the phase portrait. So I still have to solve that problem. Here is x, here is v; we have a mess of this kind, and you really have to eliminate t before you can draw the phase trajectory. But in practice we can do this in a much simpler way; what would the phase trajectories look like? I think physically, and I say: look, I know that if this is an underdamped oscillator, then x(t) as a function of t would start at some point and oscillate, with the oscillation amplitude decreasing to 0.
If it is an overdamped oscillator, then (here is t, here is x(t)) it would just damp off in this fashion. And if x(t) oscillates about its central point with decreasing amplitude, so does v(t); it is, after all, the derivative. And if x(t) monotonically decreases to 0, as in the overdamped case, so does v(t).
This immediately suggests that, in general, if I start at some point here, the trajectory would spiral in towards the origin. Unlike the old case, where you had ellipses, you do not have conservation of energy in this problem, and the phase trajectories would simply be some kind of spirals in the underdamped case. In the overdamped case you would have something starting here and essentially going off asymptotically to 0 without really oscillating; in fact, it would not even change sign, it is overdamped. So it really starts off somewhere here and simply falls in, in this fashion.
We need to determine when it is underdamped and when it is overdamped, for which we need to know the eigenvalues lambda 1 and lambda 2; that is trivial to do. So let us do that. Our matrix L was 0, 1, minus omega naught squared, minus gamma; that implies lambda times (lambda + gamma) plus omega naught squared equals 0.
That is the secular equation, and the roots are lambda 1,2 = [minus gamma plus or minus the square root of (gamma squared minus 4 omega naught squared)] / 2. If I am interested in the underdamped case, then this square root becomes an imaginary number; so let me write this as minus gamma over 2, plus or minus i times omega s, where I pull the 2 inside the square root and define a shifted frequency omega sub s: the square root of omega naught squared minus one quarter gamma squared. This is valid provided omega naught is greater than gamma over 2.
So this root corresponds to the underdamped case, omega naught greater than gamma over 2; the overdamped case, where the decay is in fact monotonic and just does this, is omega naught less than gamma over 2; and the critically damped case is omega naught exactly equal to gamma over 2.
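In code, this classification reads off directly from the roots of the secular equation; a minimal sketch:

    # Damping regime from the roots of lambda*(lambda + gamma) + w0^2 = 0
    import numpy as np

    def regime(w0, gamma):
        lam = np.roots([1.0, gamma, w0**2])      # the eigenvalues lambda_1,2
        if gamma < 2 * w0:
            ws = np.sqrt(w0**2 - gamma**2 / 4)   # shifted frequency omega_s
            return "underdamped", lam, ws
        if gamma > 2 * w0:
            return "overdamped", lam, None
        return "critically damped", lam, None    # repeated root -gamma/2

    print(regime(1.0, 0.5))   # complex pair -gamma/2 +/- i*omega_s
    print(regime(1.0, 3.0))   # two negative real roots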
And that is the case we already looked at and talked about, where this quantity vanishes and you have lambda 1 and lambda 2 each equal to minus gamma over 2. I drew that a little carelessly; it would be part of a spiral, going in some fashion depending on the initial conditions. It is not very interesting to me at the moment, because I know I can always go over to the overdamped case by going back to this formula.
The point is that in both cases, the overdamped as well as the underdamped, the system finally reaches a state of equilibrium at the origin. In the underdamped case there is no doubt about it; look at this part here.
e^(lambda 1 t) and e^(lambda 2 t) both go like e^(minus gamma t / 2), multiplied by exponentials e^(plus i omega s t) or e^(minus i omega s t); and that does not matter, because those are essentially cosines or sines of omega s t, which are oscillatory functions, damped by the prefactor. In the other case, one of the roots, minus gamma minus this square root, definitely damps out very fast; and in the other root, minus gamma plus this number, the minus gamma dominates, because gamma is bigger than this square root here, and therefore it still damps.
So both roots give damping; they are not explosive roots. Both damp down to 0, as we expect, since we have friction. So whatever the oscillator does, wherever you start, eventually the oscillations die down and the system goes to the equilibrium point at the origin. Whether it does so by oscillating across this point or monotonically is a matter of detail; it is not important at this stage.
So that is what the phase trajectory looks like. What is the lesson from this? Everything depends on the eigenvalues of this matrix L; that controls everything.
Once the real parts of the eigenvalues are negative (if it is a complex pair of eigenvalues, the real part), you automatically have damping. So this is a new kind of equilibrium point: it is not a centre and it is not a saddle point, but something else, towards which the system tends asymptotically. We will have to classify this; but before I do so, let me settle a small mathematical point here. I mentioned that you start with this L,
and it is finally the eigenvalues of L that make the difference; but to get to this point, to justify it, I said: let us assume L is diagonalizable. You have to understand that not all matrices can be diagonalized by similarity transformations, only some of them can; but you do not need the diagonalization, and independent of that, this is what the solution looks like. Still, as an aside: when can you diagonalize a matrix?
If the matrix is real symmetric, you are guaranteed you can diagonalize it; in fact, you can diagonalize it by an orthogonal transformation. But what is the general case; when can you diagonalize an n by n matrix? This is a problem in linear algebra; it has nothing to do with this course, but it is good to know. Yes, indeed. So here is a simple sufficiency condition; it is not necessary, but it is sufficient.
Take a matrix A and its Hermitian conjugate A dagger; by that I mean take the complex conjugate and transpose, which is called the Hermitian conjugate of the matrix. If A commutes with A dagger, which means that A A dagger equals A dagger A (it does not matter in which order you multiply them; this is what commutation means; such a matrix is called normal), that is sufficient to ensure that you can diagonalize A by a similarity transformation. By that I mean you can find a matrix S such that S A S inverse is a diagonal matrix. This is sufficient, but it is not necessary. We would also like to have a necessary and sufficient condition. Well, you know that every matrix obeys a polynomial equation, its own characteristic equation; this is the Cayley-Hamilton theorem.
So if you write down the determinant of (lambda I minus A) equal to 0, you get an algebraic equation whose solutions form the set of eigenvalues of the matrix A. In general, if A is an n by n matrix, this equation looks like lambda to the power n, plus a 1 lambda to the power n minus 1, plus a 2 lambda to the power n minus 2, and so on, plus a n, equal to 0. That is the secular equation; in general this is what it looks like, and the constants a 1, a 2, a 3 and so on are determined from the elements of the matrix A.
And the Cayley-Hamilton theorem says that the matrix A itself satisfies this equation; in other words, you are guaranteed that A to the n, plus a 1 A to the n minus 1, and so on, plus a n times the identity, equals 0. That is the Cayley-Hamilton theorem; the statement is that every matrix satisfies its own characteristic equation. But of course there is nothing to stop the matrix from satisfying another polynomial equation of lower degree; it could well happen that the matrix satisfies an equation of degree lower than n.
The lowest-degree such polynomial equation, that polynomial set equal to 0, is called the minimal polynomial of the matrix. And a necessary and sufficient condition for A to be diagonalizable by a similarity transformation is that the roots of the minimal polynomial be simple roots. There are many ways of saying this;
one of them has to do with the rank of the matrix, which enters indirectly here; but this is the necessary and sufficient condition, and we will try to use it a little later. But let me give you an example. Suppose you have an n by n matrix all of whose elements are 1: 1, 1, 1, everything is 1. What are the eigenvalues of this matrix? 0 is certainly an eigenvalue, because the determinant of this matrix is obviously 0. And what happens, just playing with this, if you apply it to the column vector (1, 1, ..., 1)? You get n times the same column vector.
So would you not say that n is an eigenvalue? Indeed n is an eigenvalue, and every other eigenvalue is 0. Because if you take this matrix, its determinant is 0; if you take the first minor, that is also 0. You delete one of the columns and rows, and the rest of it also has a 0 determinant, and that keeps going till you hit a single element itself, which is not 0. So all those minors are 0, and therefore this matrix has 0 as an eigenvalue n minus 1 times, and the last eigenvalue is n.
So this matrix has lambda 1 = 0, lambda 2 = 0, up to lambda n minus 1 = 0, and just one eigenvalue equal to n. It is then easy to write down the characteristic equation of this matrix; it is immediately obvious from here what the characteristic equation is. It says A to the power n minus 1, times (A minus n I), equals 0. That is the characteristic equation of this matrix: it is an nth-order polynomial equation, and it guarantees that n minus 1 roots are 0 and one root is n; replace A by lambda and that immediately follows.
But this is not the minimal polynomial of this matrix. What is the minimal polynomial; what is the lowest-order equation that this matrix satisfies? What do you think the lowest-order equation is? It cannot be a linear equation, because A is not a constant times the identity; A is not the identity matrix. So it cannot be alpha A plus beta I equal to 0; it cannot be an equation of that kind.
A quadratic? What happens if you square this matrix? You get n A; A squared is n A. So we know the minimal polynomial is in fact a quadratic in this case. What are the roots of the minimal polynomial?
0 and n. And are they simple roots? Yes; so you are guaranteed this matrix can be diagonalized by a similarity transformation, absolutely guaranteed.
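A quick numerical check of this example, taking n = 4 as an illustrative size:

    # The n x n matrix of all 1s: eigenvalues n, 0, ..., 0, and A^2 = n A
    import numpy as np

    n = 4
    A = np.ones((n, n))
    print(np.round(np.linalg.eigvals(A), 10))      # one eigenvalue n, the rest 0
    print(np.allclose(A @ A, n * A))               # minimal polynomial: lambda^2 = n lambda
    lam, S = np.linalg.eigh(A)                     # real symmetric: orthogonal S
    print(np.allclose(S.T @ A @ S, np.diag(lam)))  # diagonalized, as guaranteed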
But what if the roots of such an equation were 0 and 0? That is a double root, a repeated root. When I say a root is simple, I mean the root is not repeated; its multiplicity is 1.
What about this 2 by 2 matrix, with a single 1 above the diagonal and zeros everywhere else; can you diagonalize this by a similarity transformation, do you think? Well, you can easily see that A dagger is going to have the 1 down here instead, and the two do not commute with each other: you write A A dagger and you get one matrix, you write A dagger A and you get another matrix; they do not commute. So it does not satisfy the sufficiency condition; but of course that condition may not be necessary. The question is: do you think this matrix can be diagonalized by a similarity transformation at all?
What are the eigenvalues of this matrix? 0 and 0; because for any triangular matrix, in which everything below the principal diagonal is 0, the eigenvalues are the diagonal elements themselves, and that is true for either upper or lower triangular matrices. So the eigenvalues of this matrix are 0 and 0, and the characteristic equation must therefore be of the form A squared equal to 0: the square of this matrix is the null matrix. That is also the minimal polynomial, because the matrix itself is neither 0 nor the identity matrix.
So it cannot satisfy a linear equation; the next possibility is that it satisfies a quadratic equation, and that must be the characteristic equation, because it is a 2 by 2 matrix. That equation is A squared equal to 0, and the roots are not simple. So you are guaranteed this matrix cannot be diagonalized by a similarity transformation. But that does not stop you from finding the eigenvalues, here and here. So the best you can do with an arbitrary matrix is not necessarily to diagonalize it, but to put it in what is called the Jordan normal form.
What you can do is to take this matrix, a huge matrix, and put it into blocks of different dimensionalities: something here, something here, something here, and so on, in block-diagonal form. And each block, corresponding to a given eigenvalue of some multiplicity, can be brought to the form with lambda 1 everywhere on its diagonal, 1s just above the diagonal, and zeros everywhere else.
For instance, if the eigenvalue lambda 1 has multiplicity three, it is possible finally to bring the matrix to a form where you have lambda 1 everywhere on the diagonal of that block, with 1s on the superdiagonal, an upper triangular block; and similarly for the others. If it is a simple eigenvalue, then you just have a 1 by 1 block containing lambda 1. This is called the Jordan normal form, and that is the best you can do for an arbitrary matrix; you may not be able to diagonalize it at all.
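Both of these points can be verified symbolically; here is a sketch with sympy for the 2 by 2 example above:

    # A = [[0, 1], [0, 0]]: eigenvalue 0 twice, A^2 = 0, not diagonalizable
    import sympy as sp

    A = sp.Matrix([[0, 1], [0, 0]])
    print(A.eigenvals())     # {0: 2}: one eigenvalue with multiplicity 2
    print(A**2)              # the null matrix, so the minimal polynomial is lambda^2
    P, J = A.jordan_form()
    print(J)                 # a single 2x2 Jordan block: the best one can do

    t = sp.symbols('t')
    print((A * t).exp())     # [[1, t], [0, 1]]: the t e^{lambda t} behavior, lambda = 0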
So much for linear algebra; we will not need all of this, but it is useful to know. All we need here are the eigenvalues. Now that we have this example, let us go immediately to the more general, arbitrary case of a dynamical system, and I would like now to define a dynamical system in the following way. There are many definitions, and this is a little bit of a digression into more general mechanics than what we were really planning to do; but I would like to do it because it puts things in a proper framework.
Now, what have we learnt? We looked at different kinds of equations for very simple systems, particles moving in potentials and so on. We discovered we can write these equations of motion, Newton's equations, in first-order form, as a set of first-order differential equations. And we are interested in the qualitative behavior of these solutions, not the quantitative behavior for specific initial conditions and so on,
but the qualitative behavior of the solutions as a whole. In the simplest examples we found that you have stable equilibrium points and unstable equilibrium points: sometimes equilibrium points around which the trajectories move, other cases where the trajectories tend to these equilibrium points, and yet other cases where they are repelled outwards. So I would like to generalize this, and I start by saying, independent of mechanics: let us assume that all our dynamical systems are defined by a set of variables varying continuously in time, and that time is the parameter, the arena, in which these variables move.
And you have a set of evolution equations; that is it, I do not make any further assumptions. So I start by saying my system is defined by a set of variables x 1, x 2, up to x n; n of them. I do not care whether they are positions or velocities or momenta; I do not even care if they are actual mechanical objects at all; they could be very complicated objects. They could even be populations of species competing with each other; they could be all sorts of variables, temperature, pressure, whatever; I do not care. I have a set of variables which change with time, each of them a function of time. And I prescribe the way these variables change: I posit a set of first-order differential equations; first order, because I assume that these are the only variables needed to specify the state of the system completely.
And once I have that as an article of faith, I start by saying: let us assume that they obey a set of first-order differential equations, so that specifying the initial conditions tells me the future completely. What is the most general set of equations we can write down?
x 1 dot equals something on the right-hand side; it could be some arbitrary function of x 1, x 2, up to x n, and it could in fact change with time, so perhaps I include a t as well, in general. Then x 2 dot equals some other function f; so let me call the first one f 1 and make this one f 2 of (x 1, x 2, ..., x n), possibly t as well; and I go down all the way to x n dot equal to f n of (x 1, x 2, ..., x n, t).
To start with, yes, I could think of more horrible complications. I could think of a situation where this set of variables is itself continuous; I can certainly think of that. I could take x to be, say, the pressure field in this room, which changes from point to point and from time to time. Then of course you have a continuous set of equations; and what would you have then? We will come to that very interesting question in a minute. For the moment I assume it is a discrete set of variables. The other assumption I have made is that they obey a set of first-order differential equations; there could be problems where you intrinsically cannot find such a set of equations, and we will ignore that for the moment. I have also assumed that time is continuous.
If I did not assume that, if I said I monitor the system every year, or every month, or every hour, then time itself would be a discrete variable, and we would have difference equations rather than differential equations. So I have not been that general, but I have been fairly general in this sense. Now, I have said the right-hand side could depend on time itself; this means the system is not autonomous, the rules are changing as a function of time.
But then I would like to say: let us look at the simpler case where the rules do not change with time, and I remove this explicit t dependence here. By the way, even if you have the t dependence, I can subsume it in an autonomous system as follows. I play a trick and say: let me define x n plus 1 to be equal to t; I add a dimension and define it in this fashion. Then of course I have x n plus 1 dot equal to 1, which is a nice function, just 1 on the right-hand side, and all these t's are replaced by x n plus 1.
So what I have done is to take an n-dimensional non-autonomous system and replace it with an (n plus 1)-dimensional autonomous system, getting rid of the explicit t in this fashion. The physics could be very different, of course; we will not get into that for the moment, and I will give examples as we go along. What I would like to point out at the moment is that, because of this possibility, we may as well look at only autonomous systems; so I simply have this.
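As a small illustration of the trick (the forced oscillator below is a hypothetical example, not one we have treated):

    # A non-autonomous system and its autonomous extension via x3 = t
    import numpy as np

    def f_driven(x, t):
        # hypothetical forced oscillator: x1' = x2, x2' = -x1 + cos(t)
        return np.array([x[1], -x[0] + np.cos(t)])

    def f_extended(y):
        # y = (x1, x2, x3), with x3 standing in for t, and x3' = 1
        return np.array([y[1], -y[0] + np.cos(y[2]), 1.0])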
And I play the usual trick: I call this a vector, a column vector with elements x 1 to x n, and f 1 to f n another column vector. Then this set of equations can be written very compactly as x dot equal to f(x), some vector-valued function of x, where this f underscore stands for a column vector with elements f 1, f 2, f 3, etcetera, and each element is a function of all the x's.
This is the n-dimensional autonomous system, and now the task is to try to analyze it; in general a formidable task. But life becomes very simple if you make certain assumptions about f, assumptions which are not always valid, and we would like to go beyond them eventually. What would be the simplest assumption I could make?
Well, if this is linear, if these are all linear functions of their arguments, then the problem becomes x dot equal to some matrix L acting on x, and you can write the solution down by saying it is e^(L t) acting on x(0). So a linear system is very simple, very easy to do. But before I even do that, I would like to ask: do you think there is any hope of solving such an equation in general, explicitly? The answer is no; because what would you do to solve it, without a matrix method available?
If this is nonlinear, then even the matrix method will not work. What you would do naively is whatever we did in the earlier case: if I have two equations, for x dot and v dot, I eliminate v and write a second-order differential equation in x. What would you do here? In principle you would say, let us eliminate x n, x n minus 1, etcetera,
and write everything as a function of x 1. What order of differential equation would that be? An nth-order differential equation. Then you have the problem of solving a very complicated nonlinear equation; and matters are made worse by the fact that there is no guarantee you can even do this. There is no guarantee that in general you can eliminate all these variables. If I start with an nth-order differential equation in one variable, I can always convert it to a set of n coupled first-order differential equations by defining the higher derivatives to be new variables; but I cannot, in general, go backwards.
So the real problem is this one, and since you cannot go backwards, you are really stuck with it; and the analysis of this is incredibly complicated, with all sorts of possibilities. In fact, beyond small values of n we really do not know what is possible; very, very complicated things can happen, extremely intricate things can happen.
And the big discovery, made actually long ago but codified maybe 25 or 30 years ago, is that beyond n equal to 2 you already have unbelievable complications: you have the phenomenon of chaos. Up to n equal to 2 nothing much happens; but once you have n equal to 3, 4, 5, etcetera, things can get really nasty, extremely nasty. So nasty that it is not computable any more at all.
So that is the big discovery, and that is why we now say that chaos in classical dynamics is generic: it is the rule rather than the exception, and all the things you normally study in classical mechanics are the exceptions; real systems are very, very complicated, much more so. But there is method in the madness; we have found lots of ways of handling this difficulty, and that is some of what I would like to communicate to you. Before I do, though, I would like to point out that there is no difficulty whatsoever, once you give me nice functions f 1, f 2, etcetera, in putting this on a computer and solving the equations numerically; you can always do that.
Because you give me an initial point in this n-dimensional phase space; you give me x at some initial instant of time t = 0. After an infinitesimal interval delta t, I use these differential equations to tell me what x is, and then from that point after the next delta t, and so on. Therefore, once you give me the point here, I find out what it is here, what it is here, what it is here; I join all these points, and I have my phase trajectory.
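That stepping procedure is exactly the simplest numerical integrator, the forward Euler scheme. A minimal sketch, reusing the damped oscillator as an illustrative right-hand side:

    # Forward Euler for the autonomous system x_dot = f(x)
    import numpy as np

    def f(x):
        w0, gamma = 1.0, 0.5                     # the damped oscillator again
        return np.array([x[1], -w0**2 * x[0] - gamma * x[1]])

    def trajectory(x0, dt=1e-3, steps=20000):
        xs = [np.asarray(x0, dtype=float)]
        for _ in range(steps):
            xs.append(xs[-1] + dt * f(xs[-1]))   # x(t + dt) ~ x(t) + dt * f(x(t))
        return np.array(xs)                      # joining the points gives the trajectory

    print(trajectory([1.0, 0.0])[-1])            # spirals in towards the origin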
So it really is solvable in that sense. You have to be careful; you may need a very fine time step if the function is very nonlinear; but in principle you can always do this. So solvability is not the issue: you can always solve it for some specific initial condition. But we have a much more ambitious program: we want to find out what the solution is for long times, arbitrarily long times.
We would like to know: if I start here, what happens after a million years? This is of interest to me, because I could be talking about the solar system. It is of deep interest to know what happens at long times, and maybe it is not possible to integrate that far; you may not have enough computing power or time. In fact, what happens is that errors multiply exponentially, and you cannot compute. That is really the difficulty in dynamical systems; in principle, solvability per se is not the issue at all, since you can always solve these equations locally at any point. On the other hand, you cannot write the solution down explicitly.
So that is integrability: you cannot integrate this, and that is the harder problem. I want to emphasize again that solvability does not imply integrability. By integrable I mean that I should be able to write these solutions down as explicit formulas, as functions of time,
so that I can put t equal to 20 million years and get what happens 20 million years from now; it is not clear whether you can do this. And it is not just hypothetical in terms of millions of years, because it is a matter of scale: for the solar system it could be millions or billions of years, but for elementary particles, like particles inside an accelerator, which move at speeds comparable to the speed of light, it could be one second; and that is equivalent to millions of years for the solar system.
So within 1 second things become unpredictable; that is very bad news, and therefore you would like to know how to handle such systems. 'I would like to know what happens at long times': that is going to be the phrase I use; I would like to see whether I can handle this kind of thing at long times.
Let me start at this point tomorrow and show you what this solvability means, and why the system is not integrable in general. After that we will go back to the general set of equations, be a little less ambitious, start with 2 by 2 systems, analyze those completely, and then see what we can say for the n by n cases.
In fact, once one understands that framework, the rest of mechanics, the rest of classical dynamics, consists of special cases, various interesting special cases. So I would rather do that, and then introduce the Lagrangian and the Hamiltonian and so on, special kinds of dynamical systems which are part of this more general framework. It is useful to know this kind of thing,
especially the question of stability. I would especially like to understand this because I have in mind situations where, as engineers, you will face not just mechanical systems or electrical systems: you could have electromechanical or chemical systems; some of your variables could be machine parameters, some could be positions or orientations of rigid objects, and so on; yet others could be concentrations of chemical species. We would like to have a framework where you can handle all these situations in one shot. So let me stop here.
