The following content is
provided under a Creative
Commons license.
Your support will help
MIT OpenCourseWare
continue to offer high quality
educational resources for free.
To make a donation, or to
view additional materials
from hundreds of MIT courses,
visit MIT OpenCourseWare
at ocw.mit.edu.
PROFESSOR STRANG:
Shall we start?
The main job of today is
eigenvalues and eigenvectors.
It's the next section in the book, a very big topic, and there's a lot to say about it.
I do want to begin with a
recap of what I didn't quite
finish last time.
So what we did was solve this
very straightforward equation.
Straightforward except
that it has a point source,
a delta function.
And we solved it, both
the fixed-fixed case
when a straight line
went up and back down
and in the free-fixed
case when it
was a horizontal line and then
down with slope minus one.
And there are different
ways to get to this answer.
But once you have it,
you can look at it
and say, well is it right?
Certainly the boundary
conditions are correct.
Zero slope, went through
zero, that's good.
And then the only
thing you really
have to check is does
the slope drop by one
at the point of the impulse?
Because that's what this
is forcing us to do.
It's saying the slope
should drop by one.
And here the slope
is 1-a going up.
And if I take the derivative,
it's -a going down.
1-a dropped to -a, good.
Here the slope was zero.
Here the slope was
minus one, good.
So those are the right answers.
And this is simple, but
really a great example.
And then, what I
wanted to do was
catch the same thing
for the matrices.
So those matrices, we all
know what K is and what T is.
So I'm really solving K times K inverse equals the identity.
That's the equation I'm solving.
So I'm looking for
K inverse and trying
to get the columns
of the identity.
And you realize the
columns of the identity
are just like delta vectors.
They've got a one
in one spot, they're
a point load just
like this thing.
So can I just say how
I remember K inverse?
I finally, you
know-- again there
are different ways to get to it.
One way is MATLAB, just do it.
But I guess the whole point of these, and of the eigenvalues that are coming too, is this.
That we have here the chance
to see important special cases
that work out.
Normally we don't
find the inverse,
print out the
inverse of a matrix.
It's not nice.
Normally we just let eig
find the eigenvalues.
Because that's an even
worse calculation,
to find eigenvalues, in general.
I'm talking here about our
matrices of all sizes n by n.
Nobody finds the eigenvalues
by hand of n by n matrices.
But these have
terrific eigenvalues
and especially eigenvectors.
So in a way this is typical of math.
That you ask about
general stuff or you
write the equation
with a matrix A.
So that's the
general information.
And then there's the
specific, special guys
with special functions.
And here there'll be sines
and cosines and exponentials.
Other places in
applied math, there
are Bessel functions
and Legendre functions.
Special guys.
So here, these are special.
And how do I complete K inverse?
So this four, three, two, one.
Let me complete T inverse.
You probably know
T inverse already.
So T, this is, four,
three, two, one,
is when the load is way
over at the far left end
and it's just descending.
And now I'm going to-- Let me
show you how I write it in.
Pay attention here
to the diagonal.
So this will be three,
three, two, one.
Do you see that's the solution
that's sort of like this one?
That's the second column of
the inverse so it's solving,
I'm solving, T T
inverse equals I here.
It's the-- The second
column is the guy
with a one in the second place.
So that's where the load
is, in position number two.
So I'm level, three,
three, up to that load.
And then I'm dropping
after the load.
What's the third
column of T inverse?
I started with that
first column and I
knew that the answer
would be symmetric
because T is symmetric,
so that allowed
me to write the first row.
And now we can fill in the rest.
So what do you think, if
the point load is-- Now,
I'm looking at the third column,
third column of the identity,
the load has moved down
to position number three.
So what do I have
there and there?
Two and two.
And what do I have last?
One.
It's dropping to zero.
You could put zero in
green here if you wanted.
Zero is the unseen boundary row at this end.
And finally, what's
happening here?
What do I get from that?
All ones, up to the diagonal.
And then sure enough
it drops to zero.
So this would be a case
where the load is there.
It would be one, one,
one, one and then boom.
No, it wouldn't be.
It'd be more like this.
One, one, one, one
and then down to--
Okay.
That's a pretty clean inverse.
That's a very beautiful matrix.
Don't you admire that matrix?
I mean, if they were
all like that, gee,
this would be a great world.
But of course it's not sparse.
That's why we don't
often use the inverse.
Because we had a
sparse matrix T that
was really fast to compute with.
And here, if you
tell me the inverse,
you've actually slowed me down.
Because you've given me now a
dense matrix, no zeroes even
and multiplying T inverse
times the right side
would be slower than
just doing elimination.
Now this is the kind of
more interesting one.
Because this is the one that
has to go up to the diagonal
and then down.
So let me-- can I fill in the way I think this one goes?
I'm going upwards
to the diagonal
and then I'm coming
down to zero.
Remember that I'm coming
down to zero on this K.
So zero, zero, zero, zero is kind of the row number.
If that's row number zero,
here's one, two, three, four,
the real thing.
And then row five is
getting back to zero again.
So what do you think, finish
the rest of that column.
So you're telling me now
the response to the load
in position two.
So it's going to look like this.
In fact, it's going to
look very like this.
There's the three and then
this is in position two.
And then I'm going to have
something here and something
here and it'll drop to zero.
What do I get?
Four, two.
Six, four, two, zero.
It's dropping to zero.
I'm going to finish
this in but then I'm
going to look back and see
have I really got it right.
How does this go now?
Two, let's see.
Now it's going up from
zero to two to four to six.
That's on the diagonal.
Now it starts down.
It's got to get to zero,
so that'll be a three.
Here is a one going up
to two to three to four.
Is that right?
And then dropped fast to zero.
Is that correct?
Think so, yep.
Except, wait a minute now.
We've got the right
overall picture.
Climbing up, dropping down.
Climbing up, dropping down.
Climbing up, dropping down.
All good.
But we haven't yet got,
we haven't checked yet
that the change in the
slope is supposed to be one.
And it's not.
Here the slope is like three.
It's going up by threes and then it's going down by twos.
So we've gone from going
up at a slope of three
to down to a slope of two.
Up three, down just like this.
But that would be a
change in slope of five.
Therefore there's a 1/5.
So this is going up with
a slope of four and down
with a slope of one.
Four dropping to one when I
divide by the five, that's
what I like.
Here is up by twos,
down by threes, again
it's a change of five
so I need the five.
Up by ones, down by four.
Sudden, that's a
fast drop of four.
Again, the slope changed
by five, dividing by five,
that's got it.
So that's my picture.
You could now create K
inverse for any size.
And more than that, sort
of see into K inverse
what those numbers are.
Because if I wrote the
five by five or six
by six, doing it a
column at a time,
it would look like
a bunch of numbers.
But you see it now.
Do you see the pattern?
Right.
This is one way to
get to those inverses,
and homework problems
are offering other ways.
T, in particular, is
quite easy to invert.
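Here is a minimal MATLAB sketch of both patterns for n equal to four. It is not from the lecture; it just uses the standard toeplitz constructor to build these matrices.

    K = toeplitz([2 -1 0 0]);   % fixed-fixed second-difference matrix
    T = K;  T(1,1) = 1;         % free-fixed: only the corner entry changes
    inv(T)     % [4 3 2 1; 3 3 2 1; 2 2 2 1; 1 1 1 1]: level to the
               % diagonal, then dropping by ones
    5*inv(K)   % [4 3 2 1; 3 6 4 2; 2 4 6 3; 1 2 3 4]: up to the
               % diagonal, down to zero, with the 1/5 out front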
Do I have any other
comment on inverses
before the lecture on
eigenvalues really starts?
Maybe I do have one comment,
one important comment.
It's this, and I won't
develop it in full,
but let's just say it.
What if the load is
not a delta function?
What if I have other loads?
Like the uniform load of
all ones or any other load?
What if the discrete load
here is not a delta vector?
I now know the responses to each
column of the identity, right?
If I put a load in position
one, there's the response.
If I put a load in position
two, there is the response.
Now, what if I have other loads?
Let me take a typical load.
What if the load was, well,
the one we looked at before.
If the load was [1, 1, 1, 1].
So that I had, the bar was
hanging by its own weight,
let's say.
In other words, could
I solve all problems
by knowing these answers?
That's what I'm
trying to get to.
If I know these
special delta loads,
then can I get the
solution for every load?
Yes, no?
What do you think?
Yes, right.
Now with this
matrix it's kind of
easy to see because if you
know the inverse matrix, well
you're obviously in business.
If I had another load, say a load f,
I would just multiply by
K inverse, no problem.
But I want to look
a little deeper.
Because if I had other loads
here than a delta function,
obviously if I had
two delta functions
I could just combine
the two solutions.
That's linearity that
we're using all the time.
If I had ten delta functions
I could combine them.
But then suppose I had
instead of a bunch of spikes,
instead of a bunch
of point loads,
I had a distributed load.
Like all ones,
how could I do it?
Main point is I could.
Right?
If I know these answers,
I know all answers.
If I know the response
to a load at each point,
then-- come back to
the discrete one.
What would be the answer if
the load was [1, 1, 1, 1]?
Suppose I now try to solve
the equation Ku=ones(4,1),
so all ones.
What would be the answer?
How would I get it?
I would just add the columns.
Now why would I do that?
Right.
Because this, the
right-hand side,
the input is the sum of
the four columns, the four
special inputs.
So the output is the sum
of the four outputs, right.
In other words, as you saw,
we must know everything.
And that's the way
we really know it.
By linearity.
If the input is a
combination of these,
the output is the same
combination of those.
Right.
So, for example, in this T case, if I did Tu=ones, I would just add those columns and the output would be [10, 9, 7, 4].
That would be the output
from [1, 1, 1, 1].
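A quick MATLAB check of that superposition argument, with the same T as before; a sketch, not from the lecture:

    T = toeplitz([2 -1 0 0]);  T(1,1) = 1;
    T \ ones(4,1)       % [10; 9; 7; 4], as claimed
    sum(inv(T), 2)      % the same answer: the sum of the four columns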
And now, oh boy.
Actually, let me just introduce a guy's name for these solutions and not show you more today.
You have the idea, of course.
Here we added because
everything was discrete.
So you know what we're
going to do over here.
We'll take integrals, right?
A general load will be an
integral over point loads.
That's the idea.
A fundamental idea.
That some other load, f(x),
is an integral of these guys.
So the solution will be the
same integral of these guys.
Let me not go there except
to tell you the name,
because it's a very famous name.
This solution u with
the delta function
is called the Green's function.
So I've now introduced the idea,
this is the Green's function.
This guy is the Green's function
for the fixed-fixed problem.
And this guy is the
Green's function
for the free-fixed problem.
And the whole point
is, maybe this
is the one point I want you to
sort of see always by analogy.
The Green's function is
just like the inverse.
What is the Green's function?
The Green's function is the
response at x, the u at x,
when the input, when
the impulse is at a.
So it sort of depends
on two things.
It depends on the position a
of the input and it tells you
the response at position x.
And often we would use
the letter G for Green.
So it depends on x and a.
And maybe I'm happy if you
just sort of see in some way
what we did there is just
like what we did here.
And therefore the
Green's function
must be just a differential,
continuous version
of an inverse matrix.
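For reference, in my notation but matching the slopes 1-a up and -a down from the start of the lecture, the fixed-fixed Green's function and the superposition integral are:

\[
G(x,a)=\begin{cases} x(1-a), & 0\le x\le a,\\ a(1-x), & a\le x\le 1,\end{cases}
\qquad
u(x)=\int_0^1 G(x,a)\,f(a)\,da.
\]

And notice G(x,a) = G(a,x), just as K inverse came out symmetric.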
Let's move on to
eigenvalues with that point
sort of made, but not driven
home by many, many examples.
Question, I'll take
a question, shoot.
Why did I increase zero, three, six and then decrease from six?
Well intuitively it's
because this is copying this.
What's wonderful is that
it's a perfect copy.
I mean, intuitively the solution
to our difference equation
should be like the solution
to our differential equation.
That's why if we have
some computational,
some differential equation
that we can't solve,
which would be much more
typical than this one,
that we couldn't solve it
exactly by pencil and paper,
we would replace derivatives
by differences and go over here
and we would hope that they
were like pretty close.
Here they're right,
they're the same.
Oh the other columns?
Absolutely.
These guys?
Zero, two, four, six going up.
Six, three, zero coming back.
So that's a discrete version of one like that.
And then the next
guy and the last guy
would be going up
one, two, three, four
and then sudden drop.
Thanks for all questions.
I mean, this sort of,
by adding these guys in,
the first one actually
went up that way.
You see the Green's functions.
But of course this
has a Green's function
for every a. x and a are running
all the way from zero to one.
Here they're just
discrete positions.
Thanks.
So playing with
these delta functions
and coming up with
this solution,
well, as I say,
different ways to do it.
I worked through one
way in class last time.
It takes practice.
So that's what the
homework's really for.
You can see me come
up with this thing,
then you can, with leisure,
you can follow the steps,
but you've gotta do
it yourself to see.
Eigenvalues and, of
course, eigenvectors.
We have to give
them a fair shot.
Square matrix.
So I'm talking in general about what eigenvectors and eigenvalues are and why we want them.
I'm always trying to say
what's the purpose, you know,
not doing this just for
abstract linear algebra.
We do this, we look
for these things
because they tremendously simplify a problem if we can find them.
So what's an eigenvector?
The eigenvalue is
this number, lambda,
and the eigenvector
is this vector y.
And now, how do I
think about those?
Suppose I take a vector
and I multiply by A.
So the vector is headed
off in some direction.
Here's a vector
v. If I multiply,
and I'm given this
matrix, so I'm
given the matrix,
whatever my matrix is.
Could be one of those
matrices, any other matrix.
If I multiply that by v,
I get some result, Av.
What do I do?
I look at that and I say that
v was not an eigenvector.
Eigenvectors are the
special vectors which
come out in the same direction.
Av comes out parallel to v. So
this was not an eigenvector.
Very few vectors
are eigenvectors,
they're very special.
Most vectors, that'll
be a typical picture.
But there's a few of them
where I've a vector y
and I multiply by A. And
then what's the point?
Ay is in the same direction.
It's on that same line as y.
It could be, it might
be twice as far out.
That would be Ay=2y.
It might go backwards.
This would be a
possibility, Ay=-y.
It could be just halfway.
It could be, not move at all.
That's even a possibility.
Ay=0y.
Count that.
Those y's are eigenvectors.
And the eigenvalue, from this point of view, has come in second, because y was the special vector that kept its direction.
And then lambda is just the
number, the two, the zero,
the minus one, the 1/2
that tells you stretching,
shrinking, reversing, whatever.
That's the number.
But y is the vector.
And notice that
if I knew y and I
knew it was an eigenvector, then
of course if I multiply by A,
I'll learn the eigenvalue.
And if I knew an
eigenvalue, you'll
see how I could find
the eigenvector.
Problem is you have
to find them both.
And they multiply each other.
So we're not talking about
linear equations anymore.
Because one unknown is
multiplying another.
But we'll find a way to discover eigenvectors and eigenvalues.
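A tiny MATLAB check of this picture, borrowing the symmetric matrix that shows up later in the lecture:

    A = [2 -1; -1 2];
    v = [1; 0];  A*v    % [2; -1]: direction changed, not an eigenvector
    y = [1; 1];  A*y    % [1; 1], which is 1*y: same direction, eigenvector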
I said I would try to make
clear what's the purpose.
The purpose is that in this
direction on this y line, line
of multiples of y, A is
just acting like a number.
A is some big n by n,
1,000 by 1,000 matrix.
So a million numbers.
But on this line, if we find
it, if we find an eigenline,
you could say, an eigendirection
in that direction,
all the complications
of A are gone.
It's just acting like a number.
So in particular we could solve
1,000 differential equations
with 1,000 unknown u's with
this 1,000 by 1,000 matrix.
We can find a
solution and this is
where the eigenvector
and eigenvalue
are going to pay off.
You recognize this.
Matrix A is of size 1,000.
And u is a vector
of 1,000 unknowns.
So that's a system
of 1,000 equations.
But if we have found an
eigenvector and its eigenvalue
then the equation will, if
it starts in that direction
it'll stay in that direction
and the matrix will just
be acting like a number.
And we know how to
solve u'=lambda*u.
That one by one scalar
problem we know how to solve.
The solution to that
is e to the lambda*t.
And of course it could
have a constant in it.
Don't forget that these
equations are linear.
If I multiply it, if
I take 2e^(lambda*t),
I have a two here and a two
here and it's just as good.
So I better allow that as well.
A constant times e^(lambda*t) times y.
Notice this is a vector: a number, times a number giving the growth, times the vector y.
So for the differential equation, this number lambda is crucial.
It's telling us whether the
solution grows, whether it
decays, whether it oscillates.
And we're just looking at this one normal mode, as it's called, for eigenvector y.
We certainly have not found
all possible solutions.
If we have an eigenvector,
we found that one.
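One line, using Ay = lambda*y, checks that this normal mode really solves u' = Au:

\[
u(t)=c\,e^{\lambda t}y \quad\Rightarrow\quad u'(t)=\lambda\,c\,e^{\lambda t}y=c\,e^{\lambda t}(Ay)=Au(t).
\]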
And there's other uses
and then, let me think.
Other uses, what?
So let me write again
the fundamental equation,
Ay=lambda*y.
So that was a
differential equation.
Going forward in time.
Now if we go forward in
steps we might multiply by A
at every step.
Tell me an eigenvector
of A squared.
I'm looking for a vector
that doesn't change direction
when I multiply twice by
A. You're going to tell me
it's y. y will work.
If I multiply once by
A I get lambda times y.
When I multiply again by A I
get lambda squared times y.
You see eigenvalues are
great for powers of a matrix,
for differential equations.
The nth power will just take
the eigenvalue to the nth.
The nth power of A will just
have lambda to the nth there.
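A quick numerical check of that, again with the symmetric matrix from later in the lecture:

    A = [2 -1; -1 2];  y = [1; -1];   % Ay = 3y
    A^5 * y                           % [243; -243], which is 3^5 times y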
You know, the pivots
of a matrix are all
messed up when I square it.
I can't see what's
happening with the pivots.
The eigenvalues are a different
way to look at a matrix.
The pivots are critical numbers
for steady-state problems.
The eigenvalues are
the critical numbers
for moving problems,
dynamic problems,
things are oscillating
or growing or decaying.
And by the way, let's just
recognize since this is
the only thing that's
changing in time,
what would be the-- I'll just
go down here, e^(lambda*t).
Let's just look and see.
When would I have decay?
Which you might want
to call stability.
A stable problem.
What would be the condition on lambda for this to decay?
Lambda less than zero.
Now there's one little
bit of bad news.
Lambda could be complex.
Lambda could be 3+4i.
It can be a complex
number, these eigenvalues,
even if A is real.
You'll say, how did that happen?
Let me see.
Well, let me finish
this thought.
Suppose lambda was 3+4i.
So I'm thinking about what would
e to the lambda*t do in that
case?
So this is a small example.
If lambda is 3+4i, I have e^((3+4i)t).
What does that do as time grows?
It's going to grow
and oscillate.
And what decides the growth?
The real part.
So it's really the
decay or growth
is decided by the real part.
The three, e to the 3t,
that would be a growth.
Let me put growth.
And that would be,
of course, unstable.
And that's a problem
when I have a real part
of lambda bigger than zero.
And then if lambda
has a zero real part,
so it's pure oscillation, let
me just take a case like that.
So e^(4it).
So that would be,
oscillating, right?
It's cos(4t) + i*sin(4t),
it's just oscillating.
So in this discussion we've
seen growth and decay.
Tell me the parallels
because I'm always
shooting for the parallels.
What about the growth of A?
What matrices, how
can I recognize
a matrix whose powers grow?
How can I recognize a matrix
whose powers go to zero?
I'm asking about powers here, where over there it was exponentials.
So here would be A to
higher and higher powers
goes to zero, the zero matrix.
In other words, when
I multiply, multiply,
multiply by that matrix I get
smaller and smaller and smaller
matrices, zero in the limit.
What do you think's the
test on the lambda now?
So what are the
eigenvalues of A to the k?
Let's see.
If A had eigenvalues
lambda, A squared
will have eigenvalues
lambda squared,
A cubed will have
eigenvalues lambda cubed,
A to the thousandth will
have eigenvalues lambda
to the thousandth.
And what's the test for
that to be getting small?
Lambda less than one.
So the test for stability will
be-- In the discrete case,
it won't be the
real part of lambda,
it'll be the size of
lambda less than one.
And growth would be the size
of lambda greater than one.
And again, there'd be
this borderline case
when the eigenvalue has
magnitude exactly one.
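A sketch of the two tests side by side, with made-up example matrices:

    A = [-1 2; 0 -3];         % continuous time: u' = Au
    all(real(eig(A)) < 0)     % true, so e^(At) decays: stable
    B = [0.5 0.2; 0 -0.9];    % discrete time: multiply by B at each step
    all(abs(eig(B)) < 1)      % true, so B^k goes to the zero matrix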
So you're seeing here
and also here the idea
that we may have to deal
with complex numbers here.
We don't have to deal
with the whole world
of complex functions
and everything
but it's possible for
complex numbers to come in.
Well while I'm saying that,
why don't I give an example
where it would come in.
This is going to
be a real matrix
with complex eigenvalues.
Complex lambdas.
It'll be an example.
So I guess I'm
looking for a matrix
where y and Ay never come
out in the same direction.
For real y's I know, okay,
here's a good matrix.
Take the matrix that rotates
every vector by 90 degrees.
Or by theta.
But let's say here's a matrix
that rotates every vector
by 90 degrees.
I'm going to raise
this board and hide it
behind there in a minute.
I just wanted to-- just to open
up this thought that we will
have to face complex numbers.
If you know how to multiply two
complex numbers and add them,
you're okay.
This isn't going to
turn into a big deal.
But let's just realize
that-- Suppose that matrix,
if I put in a vector y and
I multiply by that matrix,
it'll turn it
through 90 degrees.
So y couldn't be an eigenvector.
That's the point
I'm trying to make.
No real vector could
be the eigenvector
of a rotation matrix because
every vector gets turned.
So that's an example where you'd
have to go to complex vectors.
And I think if I tried
the vector [1, i],
so I'm letting the square
root of minus one into here,
then I think it would come out.
If I do that multiplication
I get minus i.
And I get one.
And I think that
this is, what is it?
This is probably
minus i times that.
So this is minus
i times the input.
No big deal.
That was like, you
can forget that.
It's just complex
numbers can come in.
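For the record, the rotation example in MATLAB, assuming the usual 90-degree rotation matrix:

    R = [0 -1; 1 0];    % rotates every real vector by 90 degrees
    eig(R)              % i and -i: complex, so no real eigenvector exists
    R * [1; 1i]         % [-1i; 1], which is -1i times [1; 1i]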
Now let me come back to the
main point about eigenvectors.
Things can be complex.
So the main point is
how do we use them?
And how many are there?
Here's the key.
A typical, good
matrix, which includes
every symmetric
matrix, so it includes
all of our examples and
more, if it's of size 1,000,
it will have 1,000
different eigenvectors.
And let me just say for
our symmetric matrices
those eigenvectors
will all be real.
They're great, the eigenvectors
of symmetric matrices.
Oh, let me find them for one
particular symmetric matrix.
Say this guy.
So that's a matrix, two by two.
How many eigenvectors
am I now looking for?
Two.
You could say, how
do I find them?
Maybe with a two by two,
I can even just wing it.
We can come up with a vector
that is an eigenvector.
Actually that's what
we're going to do
here is we're going to
guess the eigenvectors
and then we're going to
show that they really
are eigenvectors and then
we'll know the eigenvalues
and it's fantastic.
So like let's start here
with the two by two case.
Anybody spot an eigenvector?
Is [1, 0] an eigenvector?
Try [1, 0].
What comes out of [1, 0]?
Well that picks the
first column, right?
That's how I see,
multiplying by [1, 0],
that says take one
of the first column.
And is it an eigenvector?
Yes, no?
No.
This vector is not in the
same direction as that one.
No good.
Now can you tell me one that is?
You're going to
guess it. [1, 1].
Try [1, 1].
Do the multiplication
and what do you get?
Right?
If I input this vector
y, what do I get out?
Actually I get y itself.
Right?
The point is it didn't
change direction,
and it didn't even
change length.
So what's the
eigenvalue for that?
So I've got one eigenvalue
now, one eigenvector. [1, 1].
And I've got the eigenvalue.
So here are the
vectors, the y's.
And here are the lambdas.
And I've got one of them
and it's one, right?
Would you like to
guess the other one?
I'm only looking for two because
it's a two by two matrix.
So let me erase here,
hope that you'll
come up with another one. [1, -1] is certainly worth a try.
Let's test it.
If it's an eigenvector,
then it should come out
in the same direction.
What do I get when I do that?
So I do that multiplication and I get three and minus three, so have we got an eigenvector?
Yep.
And what's, so if this was
y, what is this vector?
3y.
So there's the other
eigenvector, is [1, -1],
and the other
eigenvalue is three.
So we did it by
spotting it here.
MATLAB can't do it that way.
It's got to figure it out.
But we're ahead of
MATLAB this time.
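Here is MATLAB's version of what we just spotted. From the numbers in the lecture (trace 4, determinant 3, eigenvectors [1, 1] and [1, -1]), the matrix on the board must be [2 -1; -1 2]:

    [Y, D] = eig([2 -1; -1 2])
    % D = diag([1, 3]), and the columns of Y are unit-length multiples
    % of [1; 1] and [1; -1], possibly with the signs flipped.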
So what do I notice?
What do I notice
about this matrix?
It was symmetric.
And what do I notice
about the eigenvectors?
If I show you those two
vectors, [1, 1] and [1, -1],
what do you see there?
They're orthogonal. [1, 1]
is orthogonal to [1, -1],
perpendicular is the
same as orthogonal.
These are orthogonal,
perpendicular.
I can draw them, of course,
and see that. [1, 1]
will go, if this is
one, it'll go here.
So that's [1, 1].
And [1, -1] will go
there, it'll go down,
this would be the
other one, [1, -1].
So there's y_1.
There's y_2.
And they are perpendicular.
But of course I don't draw
pictures all the time.
What's the test for two
vectors being orthogonal?
The dot product.
The dot product.
The inner product. y
transpose-- y_1 transpose * y_2.
Do you prefer to write it
as y_1 with a dot, y_2?
This is maybe better because
it's matrix notation.
And the point is orthogonal,
the dot product is zero.
So that's good.
Very good, in fact.
So here's a very important fact.
Symmetric matrices have
orthogonal eigenvectors.
What I'm trying to say is
eigenvectors and eigenvalues
are like a new way
to look at a matrix.
A new way to see into it.
And when the matrix is
symmetric, what we see
is perpendicular eigenvectors.
And what comment do you
have about the eigenvalues
of this symmetric matrix?
Remembering what
was on the board
for this anti-symmetric matrix.
What was the point about
that anti-symmetric matrix?
Its eigenvalues were imaginary
actually, an i there.
Here it's the opposite.
What's the property
of the eigenvalues
for a symmetric matrix
that you would just guess?
They're real.
They're real.
Symmetric matrices
are great because they
have real eigenvalues and they
have perpendicular eigenvectors
and actually, probably if a
matrix has real eigenvalues
and perpendicular eigenvectors,
it's going to be symmetric.
So symmetry is a great property
and it shows up in a great way
in this real eigenvalue, real
lambdas, and orthogonal y's.
Shows up perfectly
in the eigenpicture.
Here's a handy little
check on the eigenvalues
to see if we got it right.
Of course we did.
That's the one and three we've got.
But let me just show you two
useful checks if you haven't
seen eigenvalues before.
If I add the eigenvalues,
what do I get?
Four.
And I compare that
with adding down
the diagonal of the matrix.
Two and two, four.
And that check always works.
The sum of the eigenvalues
matches the sum
down the diagonal.
So that's like, if you got all
the eigenvalues but one, that
would tell you the last one.
Because the sum
of the eigenvalues
matches the sum
down the diagonal.
You have no clue where that
comes from but it's true.
And another useful fact.
If I multiply the
eigenvalues what do I get?
Three?
And now, where do you
see a three over here?
The determinant.
4-1=3.
Can I just write those two facts, with no idea of proof?
The sum of the lambdas, I could write "sum".
This is for any matrix: the sum of the lambdas is equal to what's called the trace of the matrix.
The trace of the matrix is the sum down the diagonal.
And the product of the lambdas,
lambda_1 times lambda_2
is the determinant
of the matrix.
Or if I had ten eigenvalues,
I would multiply all ten
and I'd get the determinant.
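Both checks in MATLAB, one line each, for the same matrix:

    A = [2 -1; -1 2];  lam = eig(A);
    [sum(lam)  trace(A)]    % both 4: the sum matches the trace
    [prod(lam) det(A)]      % both 3: the product matches the determinant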
So that's some facts
about eigenvalues.
There's more, of
course, in section 1.5
about how you would
find eigenvalues
and how you would use them.
That's of course the key point,
is how would we use them.
Let me say something more about
that, how to use eigenvalues.
Suppose I have this system of
1,000 differential equations.
Linear, constant coefficients,
starts from some u(0).
How do eigenvalues
and eigenvectors help?
Well, first I have to
find them, that's the job.
So suppose I find 1,000
eigenvalues and eigenvectors.
A times eigenvector number
i is eigenvalue number i
times eigenvector number i.
So y_1 to y_1000 are the eigenvectors.
And each one has
its own eigenvalue,
lambda_1 to lambda_1000.
And now if I did that work,
sort of like, in advance,
now I come to the
differential equation.
How could I use this?
This is now going to be the main method: three steps to use these to get the answer.
Ready for step one.
Step one is break u(0) into eigenvectors.
Split, separate,
write, express u(0)
as a combination
of eigenvectors.
Now step two.
What happens to
each eigenvector?
So this is where the
differential equation
starts from.
This is the initial condition.
1,000 components of u
at the start and it's
separated into 1,000
eigenvector pieces.
Now step two is watch
each piece separately.
So step two will be to multiply, say, c_1 by e^(lambda_1*t), by its growth factor.
This is following
eigenvector number one.
And in general, I would multiply
every one of the c's by e
to those guys.
So what would I have now?
This is one piece of the start.
And that gives me one
piece of the finish.
So the finish is, the answer
is to add up the 1,000 pieces.
And if you're with me, you see
what those 1,000 pieces are.
Here's a piece, some multiple
of the first eigenvector.
Now if we only were
working with that piece,
we follow it in time
by multiplying it
by e to the lambda_1
* t, and what do we
have at a later time?
c_1*e^(lambda_1*t)y_1.
This piece has grown into that.
And other pieces have
grown into other things.
And what about the last piece?
So what is it that
I have to add up?
Tell me what to write here.
c_1000, however much of eigenvector 1,000 was in there, and then finally, never written left-handed before, e to the-- who?
Lambda number 1,000, not 1,000 itself but its eigenvalue, times t, times y_1000.
This is just splitting; this is constantly the method, the way to use eigenvalues and eigenvectors.
Split the problem
into the pieces that
go-- that are eigenvectors.
Watch each piece,
add up the pieces.
That's why eigenvectors
are so important.
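A minimal MATLAB sketch of the three steps for u' = Au with u(0) = u0, assuming A has a full set of eigenvectors (always true when A is symmetric):

    A = [2 -1; -1 2];  u0 = [1; 0];  t = 0.5;
    [Y, D] = eig(A);             % columns of Y are the eigenvectors y_i
    c = Y \ u0;                  % step 1: u0 = c_1*y_1 + ... + c_n*y_n
    c = exp(diag(D)*t) .* c;     % step 2: each piece grows by e^(lambda_i*t)
    u = Y * c                    % step 3: add the pieces back up
    expm(A*t) * u0               % same answer from the matrix exponential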
Yeah?
Yes, right.
Well, now, very good question.
Let's see.
Well, the first
thing we have to know
is that we do find
1,000 eigenvectors.
And so my answer is going to
be for symmetric matrices,
everything always works.
For symmetric matrices,
if size is 1,000,
they have 1,000 eigenvectors,
and next time we'll have a shot at what some of these are for our special matrices.
So this method
always works if I've
got a full family of
independent eigenvectors.
If it's of size 1,000, I need 1,000 of them.
You're right, exactly right.
To see that this was
the questionable step.
If I haven't got
1,000 eigenvectors,
I'm not going to be
able to take that step.
And it happens.
I am sad to report that
some matrices haven't
got enough eigenvectors.
Some matrices, they collapse.
This always happens
in math, somehow.
Two eigenvectors collapse
into one and the matrix
is defective, like it's a loser.
So now you have to-- well, of course, the equation still has a solution.
So there has to be
something there,
but the pure eigenvector
method is not
going to make it on
those special matrices.
I could write down
one but why should we
give space to a loser?
But what happens in that case?
You might remember from
differential equations
when two of these roots,
these are like roots,
these lambdas are
like roots that you
found in solving a
differential equation.
When two of them come together,
that's when the danger is.
When I have a double
eigenvalue, then there's
a high risk that I've
only got one eigenvector.
And I'll just put in this little thing what the other solution is.
So the e^(lambda_1*t) is fine, but if lambda_1 is in there twice, I need something new.
And the new thing turns
out to be t*e^(lambda* t).
I don't know if
anybody remembers.
This was probably hammered home back in differential equations: if you had a repeated root, you didn't get pure e^(lambda*t)'s, you also got a t*e^(lambda*t).
Anyway that's the answer.
That's what happens if we're short of eigenvectors.
It can happen, but it won't for our good matrices.
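Here is one such loser, just so you can see the collapse in MATLAB; my example, not from the lecture:

    A = [5 1; 0 5];    % double eigenvalue lambda = 5
    [Y, D] = eig(A)    % the two columns of Y point in (almost) the same
                       % direction: only one eigenvector survives
    % The missing solution of u' = Au is then t*e^(5t) times that
    % eigenvector, which is where the t*e^(lambda*t) above comes from.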
Okay, so Monday
I've got lots to do.
Special eigenvalues and vectors
and then positive definite.
