GILBERT STRANG: So last time
was orthogonal matrices--
Q. And this time is
symmetric matrices, S.
So we're really talking about
the best matrices of all.
Well, I'll start with any square
matrix and talk about eigenvectors.
But you've heard of
eigenvectors more than once--
more than twice--
more than 10 times, probably.
OK.
So eigenvectors.
And then, let's be sure we
know why they're useful,
and maybe compute one or two.
But then we'll move
to symmetric matrices
and what is special about those.
And then, even more
special and more important
will be positive definite
symmetric matrices--
so that when I say, positive
definite, I mean symmetric.
So start with A. Next comes
S. Then come the special S--
special symmetric
matrices that have
this extra positive
definite property.
OK.
So start with A.
So an eigenvector--
if I multiply A by
x, I get some vector.
And sometimes, if x is
especially chosen well,
Ax comes out in the
same direction as x.
Ax comes out some
number times x.
So there are-- normally,
there would be,
for an n by n matrix--
so let's say A is n by n today.
Normally, if we
live right, there
will be n different
independent vectors--
x eigenvectors-- that have
this special property.
And we can compute them
by hand if n is 2 or 3--
2, mostly.
But the computation of
the x's and the lambdas--
so this is for i
equal 1 up to n,
if I use this sort
of math shorthand--
that I have n of
these almost always.
And my first question is,
what are they good for?
Why does course after course
introduce eigenvectors?
And to me the key property is
seen by looking at A squared.
So let me look at A squared.
So it's another n by n matrix.
And we would ask, suppose
we know these guys?
Suppose we've found
those somehow.
What about A squared?
Is x an eigenvector
of A squared also?
Well, the way to find out is
to multiply A squared by x,
and see what happens.
Do you see what's
going to happen here?
This is A times Ax,
which is A times--
Ax is lambda x--
and now what do I do now?
Because I'm shooting
for the answer yes.
X is an eigenvector
of A squared also.
So what do I do?
That number-- that
lambda is just a number.
I can put it anywhere I like.
So I can put it out front.
And then I have Ax, which is?
AUDIENCE: Lambda x.
GILBERT STRANG: Lambda x.
Thanks.
So I have another lambda x.
So there's lambda squared x.
So I learned the
crucial thing here--
that x is also an
eigenvector of A squared,
and the eigenvalue
is lambda squared.
And of course, I can keep going.
So A to the nth--
x is lambda to the nth x.
We have found the right vectors
for that particular matrix A.
What about A inverse x?
That will be-- if
everything is good--
1 over lambda x.
Well, yeah.
So anytime I write
1 over lambda,
my mind says, you
gotta make some comment
on the special case where
it doesn't work, which is?
AUDIENCE: Lambda
is not equal to 0.
GILBERT STRANG: Yeah.
If lambda is not 0, I'm golden.
If lambda is 0, it
doesn't look good.
And what's happening
if lambda is 0?
AUDIENCE: A inverse [INAUDIBLE].
GILBERT STRANG: A doesn't
even have an inverse.
If lambda was 0--
which it could be--
no rule against it.
If lambda was 0, this would
say, A times the eigenvector
is 0 times the eigenvector.
So that would tell me
that the eigenvector
is in the null space.
It would tell me the
matrix A isn't invertible.
It's taking some vector x to 0.
And so everything clicks.
This works when it should work.
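Here is a small numerical sketch of those two facts, using numpy; the matrix is an arbitrary invertible example made up for illustration, not one from the lecture.

import numpy as np

# Arbitrary invertible example matrix, just to check the claims numerically.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

lams, X = np.linalg.eig(A)          # columns of X are eigenvectors of A
x, lam = X[:, 0], lams[0]

print(np.allclose(A @ A @ x, lam**2 * x))            # A^2 x = lambda^2 x
print(np.allclose(np.linalg.inv(A) @ x, x / lam))    # A^-1 x = (1/lambda) x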
And if we have other functions--
any function of the matrix--
we could define the
exponential of a matrix.
18.03 would do that.
Let's just write it down,
as if we know what it means.
Does it have the
same eigenvector?
Well, sure.
Because e to the At--
the exponential of a matrix--
if I see e to the something--
I think of that
long, infinite series
that gives the exponential.
Those-- all the terms in
that series have powers of A.
So everything is working.
Every term in that series--
x is an eigenvector.
And when I put it
all together, I
learn that the eigenvalue
is e to the lambda t.
That's just a typical
and successful use.
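As a sketch of that exponential fact, assuming scipy is available for the matrix exponential; the matrix and the time t are made up for illustration.

import numpy as np
from scipy.linalg import expm       # matrix exponential

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lams, X = np.linalg.eig(A)
x, lam = X[:, 0], lams[0]
t = 0.7

# e^(At) x should equal e^(lambda t) x
print(np.allclose(expm(A * t) @ x, np.exp(lam * t) * x))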
OK.
So that's eigenvectors
and eigenvalues,
and we'll find some in a minute.
Now, so I'm claiming that
from this first thing--
which was just saying that
certain vectors are special--
now we're beginning to
see why they're useful.
So special is good.
Useful is even better.
So let me take any
vector, say v. And OK,
what do I want to do?
I want to use eigenvectors.
This v is probably
not an eigenvector.
But I'm supposing that
I've got n of them.
You and I are agreed
that there are
some matrices for
which there are not
a full set of eigenvectors.
That's really the main
sort of annoying point
in the whole subject
of linear algebra,
is some matrices don't
have enough eigenvectors.
But almost all do,
and let's go forward
assuming our matrix has enough.
OK.
So if I've got n independent
eigenvectors, that's a basis.
I can write any vector
v as a combination
of those eigenvectors.
Right.
And then I can find out
what A to any power.
So that's the point.
This is going to be the
simple reason why
we like to know
the eigenvectors.
Because if I choose those
as my basis vectors,
v is a combination of them.
Now if I multiply by A, or A
squared, or A to the k power,
then it's linear.
So I can multiply each
one by A to the k.
And what do I get if I multiply
that guy by A to the kth power?
OK.
Well, I'm just going
to use-- or, here
I said n, but let me say k.
Because n-- I'm sorry.
I'm using n for the
size of the matrix.
So I better use k for
the typical case here.
So what do I get?
Just help me through
this and we're happy.
So what happens when I
multiply that by A to the k?
It's an eigenvector,
remember, so when I
multiply by A to the k, I get?
AUDIENCE: C1.
GILBERT STRANG: C1.
That's just a number.
And A to the k times
that eigenvector gives?
AUDIENCE: Lambda 1.
GILBERT STRANG: Lambda 1 to
the k times the eigenvector.
Right?
That's the whole point.
And linearity says keep going.
cn, lambda n to
the kth power, xn.
In other words, I can take--
I can apply any
power of a matrix.
I can apply the
exponential of a matrix.
I can do anything
quickly, because I've
got the eigenvector.
So really, I'm
saying the first use
for eigenvectors-- maybe the
principle use for which they
were invented-- is to be able
to solve difference equations.
So if I call that Vk--
the kth power-- then the
equation I'm solving here
is a one step
difference equation.
This is my difference equation.
And if I wanted to
use exponentials,
the equation I would be solving
would be dv/dt equal Av.
The solution to discrete steps,
or to continuous time evolution,
is trivial if I know
the eigenvectors.
Because here is the
solution to this one.
And the solution to this
one is the same thing, c1 e
to the lambda 1 t, x1.
Is that what you were expecting
for the solution here?
Because if I take
the derivative,
it brings down a lambda.
If I multiply by A, it
brings down a lambda--
so, plus the other guys.
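Here is a short numpy sketch of that use of eigenvectors for the difference equation, with a made-up matrix and starting vector; it expands v0 in the eigenvector basis and compares with multiplying by A k times.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # made-up diagonalizable matrix
v0 = np.array([1.0, -2.0])          # any starting vector
k = 5

lams, X = np.linalg.eig(A)
c = np.linalg.solve(X, v0)          # v0 = c1 x1 + ... + cn xn

vk_eig = X @ (lams**k * c)                       # sum of ci lambda_i^k xi
vk_direct = np.linalg.matrix_power(A, k) @ v0    # A^k v0 directly

print(np.allclose(vk_eig, vk_direct))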
OK.
Not news, but important to
remember what eigenvectors
are for in the first place.
Good.
Yeah.
Let me move ahead.
Oh-- one matrix fact
is about something
called similar matrices.
So I have my
matrix A. Then I have
the idea of what it
means to be similar to A,
so B is similar to A.
What does that mean?
So here's what it
means, first of all.
It means that B can
be found from A, by--
this is the key operation here--
multiplying by a matrix
M, and its inverse--
M inverse AM.
When I see two
matrices, B and A,
that are connected by
that kind of a change,
M could be any
invertible matrix.
Then I would say B was similar
to A. And that change--
that appearance of
M inverse AM-- is pretty natural.
If I change variables
here by M, then I get--
that similar matrix
will show up.
So what's the key fact?
Do you remember the key
fact about similar matrices?
If B and A are
connected like that--
AUDIENCE: They have
the same eigenvalues.
GILBERT STRANG: They have
the same eigenvalues.
So this is just a useful
point to remember.
So I'll-- this is like
one fact in the discussion
of eigenvalues and eigenvectors.
So similar matrices,
same eigenvalues.
Yeah.
So in some way in the
eigenvalue, eigenvector world,
they're in this--
they belong together.
They're connected by this
relation that just turns out
to be the right thing.
Actually, that is-- it gives
us a clue of how eigenvalues
are actually computed.
Well, they're actually
computed by typing eig of A,
with parentheses around
A. That's how they're--
in real life.
But what happens when
you type eig of A?
Well, you could say
the eigenvalue shows up
on the screen.
But something had
to happen in there.
And what happened
was that MATLAB--
or whoever-- took that matrix
A, started using good choices
of M--
better and better.
Took a bunch of steps
with different M's.
Because if I do another M, I
still have a similar matrix,
right?
If I take B and do a
different M2 to B--
so I get something
similar to B, then
that's also similar
to A. I've got
a whole family of
similar things there.
And what does MATLAB do with
all these M's, M1 and M2 and M3
and so on?
It brings the matrix
to a triangular matrix.
It gets the eigenvalues
showing up on the diagonal.
It's just tremendously-- it
was an inspiration when that--
when the good choice
of M appeared.
And let me just say--
because I'm going on
to symmetric matrices--
that for symmetric matrices,
everything is sort of clean.
You not only go to
a triangular matrix,
you go toward a diagonal matrix.
You choose M's that make
the off-diagonal stuff smaller
and smaller and smaller.
And the eigenvalues
are not changing.
So there, shooting up on the
diagonal, are the eigenvalues.
So I guess I should
verify that fact,
that similar matrices
have the same eigenvalues.
Can we-- there can't
be much to show.
There can't be much in the
proof because that's all I know.
And I want to know its
eigenvalues and eigenvectors.
So let me say, suppose M
inverse AM has the eigenvector
y and the eigenvalue lambda.
And I want to show--
do I want to show that y is an
eigenvector also, of A itself?
No.
Eigenvectors are changing.
Do I want to show that lambda
is an eigenvalue of A itself?
Yes.
That's my point.
So can we see that?
Ha.
Can I see that lambda
is an eigenvalue of A?
There's not a lot to do here.
I mean, if I can't do it soon,
I'm never going to do it,
because--
so what am I going to do?
AUDIENCE: Define the
vector x equals My--
GILBERT STRANG: Yeah, I could.
Yeah.
x is-- My is going to be a
key, and I can see My coming.
Just-- when I see M
inverse over there,
what am I going to do
with the darn thing?
AUDIENCE: [INAUDIBLE]
GILBERT STRANG: I'm going
to put it on the other side.
I'm going to multiply
that equation by M.
So I'll have-- that will
put the M over here.
And I'll have A My
equals lambda My, right?
And is that telling me
what I want to know?
Yes.
That's saying that A times My--
the My that you wisely suggested
to give a name x to--
is lambda times My.
Do you see that?
The eigenvalue
lambda didn't change.
The eigenvector did change.
It changed from y to My.
That's the x-- the eigenvector x.
And this is lambda x.
Yeah.
Yeah.
So that's the role of M. It
just gives you a different basis
for eigenvectors.
But it does not
change eigenvalues.
Right.
Yeah.
OK.
So those are similar matrices.
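A small numerical check of that similarity fact, using numpy; A and M here are arbitrary made-up matrices, with M invertible.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
M = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # any invertible M
B = np.linalg.inv(M) @ A @ M        # B is similar to A

print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))    # same eigenvalues

# If B y = lambda y, then A (M y) = lambda (M y): eigenvectors change to My.
lams_B, Y = np.linalg.eig(B)
y, lam = Y[:, 0], lams_B[0]
print(np.allclose(A @ (M @ y), lam * (M @ y)))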
Yeah, some other
good things happen.
A lot of people
don't know-- in fact,
I wasn't very
conscious of the fact
that A times B has the same
eigenvalues as B times A. Well,
I should maybe write that down.
AB has the same eigenvalues--
the same non-zero ones--
you'll see.
I have to-- as BA.
This is any A and B same size.
I'm not talking
similar matrices here.
I'm talking any
two A and B. Yeah.
So that's a good
thing that happens.
Now could we see why?
And then I'm going to be really
pretty happy with the basic facts
about eigenvalues.
So if I want to show
that two things have
the same eigenvalues,
what do you propose?
Show that they are similar.
I already said, if
they are similar.
So is there an m?
Is there an m that will
connect this matrix?
So is there an m that will
multiply this matrix that way?
So that would be similar to AB.
And can I produce BA then?
So I'll just put the
word want up here.
I want-- if I have
that, then I'm
done, because that's saying that
those two matrices, AB and BA,
are similar.
And I know that then they
have the same eigenvalues.
So what should m be?
M should be-- so what is M here?
I want that to be true.
Should M be B?
Yeah.
M equal B. Boy.
Not the most hidden fact here.
Take M equal B.
So then I have B times
A, times BB inverse--
which is the identity.
So I have B times A. Yes.
OK.
So AB and BA are fine.
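A quick numerical sketch of the AB versus BA fact, with random made-up square matrices (square, so the qualifier about zero eigenvalues doesn't bite here).

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

eig_AB = np.sort_complex(np.linalg.eigvals(A @ B))
eig_BA = np.sort_complex(np.linalg.eigvals(B @ A))
print(np.allclose(eig_AB, eig_BA))      # same eigenvalues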
Now, what do you think
about this question?
Are the eigenvalues-- I
now know that AB and BA
have the same eigenvalues.
And the reason I had to be
careful about non-zero is that
if I had zero
eigenvalues, then--
AUDIENCE: [INAUDIBLE]
GILBERT STRANG: Yeah.
I can't count on those inverses.
Right.
Right.
So that's why I put it
in that little qualifier.
But now I want to
ask this question.
If I know the eigenvalues of A--
separately, by
itself, A-- and of B--
now I'm talking about any
two matrices, A and B.
If I have two matrices--
a matrix A and a matrix B--
and I know the eigenvalues of A
and the eigenvalues of B.
What about AB?
A times B. Can I multiply
the eigenvalues of A times
the eigenvalues of B?
Don't do it.
Right.
Yes.
Right.
The eigenvalues of A
times the eigenvalues of B
could be damn near anything.
Right.
They're not connected to the
eigenvalues of AB specially.
And maybe something could
be discovered, but not much.
And similarly, for
A plus B. So yeah.
So let me just write
down this point.
Eigenvalues of A plus
B are generally not
eigenvalues of A plus
eigenvalues of B.
Generally not.
Just-- there is no reason.
And the reason that I get
that no answer is
that the eigenvectors can
be all different.
If the eigenvectors
for A are totally
different from the
eigenvectors for B,
then A plus B will have probably
some other, totally different
eigenvectors, and there's
nothing happening there.
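Here is one small made-up counterexample in numpy: two matrices whose eigenvalues are all 0, while their sum has eigenvalues 1 and minus 1.

import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # eigenvalues 0, 0
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])      # eigenvalues 0, 0

# A + B exchanges the two entries; its eigenvalues are 1 and -1,
# not sums of eigenvalues of A and of B (order of output may vary).
print(np.linalg.eigvals(A + B))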
That's sort of thoughts
about eigenvalues in general.
And I could-- there'd be a
whole section on eigenvectors,
but I'm really interested
in eigenvectors
of symmetric matrices.
So I'm going to move
on to that topic.
So now, having talked
about any matrix A,
I'm going to specialize
to symmetric matrices,
see what's special
about the eigenvalues
there, what's special
about eigenvectors there.
And I think we've
already said it in class.
So let me-- let me
ask you to tell me
about it-- tell me again.
So I'll call that matrix
S now, as a reminder
always that I'm talking here
about symmetric matrices.
So what do I-- what are
the key facts to know?
Eigenvalues are real
numbers, if the matrix is real.
I'm thinking of real
symmetric matrices.
Of course, other
real matrices could
have imaginary eigenvalues.
Other real matrices-- so just--
let's just think for a moment.
Yeah.
Maybe I'll just put it here.
Can I back up, before I keep
going with symmetric matrices?
So you take a matrix like that.
Q, yeah.
That would be a Q. But it's
not specially a Q. Maybe
the most remarkable
thing about that matrix
is that it's anti-symmetric.
So I'll call it A. Right.
If I transpose that
matrix, what do I get?
AUDIENCE: The negative.
GILBERT STRANG: The negative.
So that's like anti-symmetric.
And I claim that an
anti-symmetric matrix
has imaginary eigenvalues.
So that's a 90 degree rotation.
And you might say, what
could be simpler than that?
A 90 degree rotation--
that's not a weird matrix.
But from the point of
view of eigenvectors,
something a little odd
has to happen, right?
Because if I have a
90 degree rotation--
if I take a vector x--
any vector x-- could it
possibly be an eigenvector?
Well, apply A to it.
You'd be off in
this direction, Ax.
And there is no way that
Ax can be a multiple of x.
So there's no real eigenvector
for that anti-symmetric matrix,
or any anti-symmetric matrix.
So you see that when we
say that the eigenvalues
of a symmetric matrix
are real, we're
saying that this
couldn't happen--
that this couldn't happen
if A were symmetric.
And here, it's the very
opposite, it's anti-symmetric.
Well, while that's on the board,
you might say, wait a minute.
How could that have any
eigenvector whatsoever?
So what is an eigenvector
of that matrix A?
How do you find the
eigenvectors of A?
When they're 2 by 2, that's a
calculation we know how to do.
You remember the steps there?
I'm looking for
Ax equal lambda x.
So right now I'm looking
for both lambda and x.
I've got 2.
It's not linear, but I'm going
to bring this over to this side
and write it as A minus
lambda I, x equals 0.
And then I'm going
to look at that
and say, wow, A minus lambda
I must be not invertible,
because it's got this
x in its null space.
So the determinant of
this matrix must be 0.
I couldn't have a null space
unless the determinant is 0.
And then when I look at A
minus lambda I, for this A,
I've got minus lambdas
on the diagonal,
and the 1 and minus 1
off the diagonal.
I'm going to take
the determinant.
And what am I going to
get for the determinant?
Lambda squared--
AUDIENCE: Plus 1.
GILBERT STRANG: Plus 1.
And I set that to 0.
So I'm just following
all the rules,
but it's showing me
that the lambda--
the two lambdas-- there
are two lambdas here--
but they're not real, because
that equation, the roots
are i and minus i.
So those are the eigenvalues.
And they have the nice--
they have all the--
well, they are the eigenvalues.
No doubt about it.
With 2 by 2 there are two quick
checks that tell you, yeah,
you did a calculation right.
If I add up the two
eigenvalues in this--
if I add up the two
eigenvalues for any matrix,
and I'm going to do
it for this one--
I get what answer?
AUDIENCE: The trace?
GILBERT STRANG: I get the
same answer from the adding--
add the lambdas gives
me the same answer
as add the diagonal
of the matrix--
which I'm calling A. So if I
add the diagonal I get 0 and 0.
So it's 0 plus 0.
And this number adding the
diagonal is called the trace.
And we'll see it again
because it's so simple.
Just adding the diagonal
entries gives you
a key bit of information.
When you add down
the diagonal it
tells you the sum of
the eigenvalues-- the sum
of the lambdas.
Doesn't tell you each
lambda separately,
but it tells you the sum.
So it tells you one
fact by doing one thing.
Yeah.
That's pretty handy.
It gives you a quick
check when you compute
this determinant
and solve for lambda--
this is a way
to compute eigenvalues by hand.
You could make a
mistake, because it's
a quadratic formula
for 2 by 2, but you can
check by adding the two roots.
Do you get the same
as the trace 0 plus 0?
Well, there's one other check,
equally quick, for 2 by 2,
so 2 by 2s--
you really get them right.
What's the other check to--
we add the eigenvalues,
we get the trace.
AUDIENCE: [INAUDIBLE]
GILBERT STRANG: We
multiply the eigenvalues.
So we take-- so now
multiply the lambdas.
So then I get i times minus i.
And that should equal--
let's-- don't look yet.
What should it equal?
If I multiply the eigenvalues,
I should get the?
AUDIENCE: Determinant.
GILBERT STRANG:
Determinant, right.
Of A. So that's
two handy checks.
Add the eigenvalues--
for any size--
3 by 3, 4 by 4-- but
it's only two checks.
So for 2 by 2, it's
kind of, you've got it.
3 by 3, 4 by 4--
you could still have
made an error and the two checks
could potentially still work.
Let's just check it out here.
What's i times minus i?
AUDIENCE: 1.
GILBERT STRANG: 1.
Because it's minus i
squared, and that's plus 1.
And the determinant of
that matrix is 0 minus--
is 1.
Yeah.
OK.
So we got 1.
Good.
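Here is that 2 by 2 anti-symmetric example checked in numpy, taking the 90-degree rotation to be 0, minus 1, 1, 0, which is one way to write it: the eigenvalues come out imaginary, and the trace and determinant checks hold.

import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # a 90-degree rotation, anti-symmetric

lams = np.linalg.eigvals(A)
print(lams)                         # approximately i and -i

# The two quick checks: eigenvalues add to the trace, multiply to the determinant.
print(np.isclose(lams.sum(), np.trace(A)))           # trace is 0
print(np.isclose(lams.prod(), np.linalg.det(A)))     # determinant is 1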
Those are really the key
facts about eigenvalues.
But of course they're not
as simple as solving Ax
equal b to find.
But if you follow through on
this idea of similar matrices,
and sort of chop down the
off-diagonal part, then
sure enough, the eigenvalues
have got to show up.
OK.
Symmetric.
Symmetric matrices.
So now we're going
to have symmetric,
and then we'll have the special,
even better than symmetric,
is symmetric positive definite.
OK.
Symmetric-- you told me the main
facts: the eigenvalues are real,
the eigenvectors are orthogonal.
And I guess, actually--
yeah.
So I want to put those into
math symbols instead of words.
So yeah.
I guess-- shall I just jump in?
And the other thing hidden
there-- but very important is--
there's a full set
of eigenvectors,
even if some eigenvalues
happen to be repeated,
like the identity matrix.
It's still got plenty
of eigenvectors.
So that's an added point
that I've not made there.
And I could prove
those two statements,
but why don't I ask you to
accept them and go onward?
What are we going
to do with them?
OK.
Can you just-- let's
have an example.
Let me put an example here.
Suppose S-- now
I'm calling it S--
has 0s on the diagonal,
and 1 and 1 off the diagonal.
So that's symmetric.
What are its eigenvalues?
What are the eigenvalues of
that symmetric matrix, S?
AUDIENCE: Plus and minus 1.
GILBERT STRANG:
Plus and minus 1.
Well, if you propose
two eigenvalues,
I'll write them
down, 1 and minus 1.
And then what will
I do to check them?
AUDIENCE: Trace and determinant.
GILBERT STRANG: Trace
and determinant.
OK.
So are they-- is it true
that the eigenvalues
are 1 and minus 1?
OK.
How do I check the trace?
What is the trace
of that matrix?
0.
And what's the sum
of the eigenvalues--
0.
Good.
What about determinant?
What's the determinant of S?
AUDIENCE: Minus 1.
GILBERT STRANG: Minus 1.
The product of the
eigenvalues-- minus 1.
So we've got it.
OK.
What are the eigenvectors?
What vector can you
multiply by and it
doesn't change direction-- in
fact, doesn't change at all?
I'm looking for the eigenvector
that's a steady state?
AUDIENCE: 0, 1?
GILBERT STRANG: 0, 1?
AUDIENCE: 1, 1.
GILBERT STRANG: I
think it's 1, 1.
Yeah.
So here are the lambdas.
And then the eigenvectors are--
I think 1, 1.
Is that right?
Yeah.
Sure.
S is just a permutation here.
It's just exchanging
the two entries.
So 1 and 1 won't change.
And what's the
other eigenvector?
AUDIENCE: Minus 1?
GILBERT STRANG: 1 and minus 1.
And then, I'm thinking--
remembering about this similar
stuff--
I'm thinking that S is
similar to a matrix that
just shows the eigenvalues.
So S is similar to--
I'm going to put in an M--
well, I'm going to
connect S-- that matrix--
with the eigenvalue matrix,
which has the eigenvalues.
So here is my--
everybody calls that
matrix capital lambda,
because everybody calls the
eigenvalues little lambda.
So the matrix that has them
is called capital lambda.
And I-- my claim is that
these guys are similar--
that this matrix, S, that
you're seeing up there--
I believe there is an M.
So that S--
what did I put in here?
So I'm following this pattern.
I believe that there would
be an M and an M inverse,
so that this would mean that.
And that's nice.
First of all, it would
confirm that the eigenvalues
stay the same, which
was certain to happen.
And then it would also mean that
I had got a diagonal matrix.
And of course, that's
a natural goal--
to get a diagonal matrix.
So we might hope that
the M that gets us there
is like an important matrix.
So do you see what
I'm doing here?
It comes under the heading
of diagonalizing a matrix.
I start with a matrix, S.
I find its eigenvalues.
They go on into lambda.
And I believe I can find an M,
so that I see they're similar.
They have the same eigenvalues,
1 and minus 1, both sides.
So only remaining
question is, what's M?
What's the matrix
that diagonalizes S?
The-- what have we
got left to use?
AUDIENCE: The eigenvectors.
GILBERT STRANG:
The eigenvectors.
The matrix that-- so, can
I put the M over there?
Yeah.
I'll put-- that M
inverse is going
to go over to the other side.
Oh.
It goes here, doesn't it?
I was worried there.
It didn't look good, but yeah.
So this is all going
to be right, if--
this is what I'd like to have--
SM equal M lambda.
SM equal M lambda.
That's diagonalizing a matrix.
That's finding the M
using the eigenvectors.
That produces a
similar matrix lambda,
which has the eigenvalues.
That's the great fact
about diagonalizing.
That's how you use--
that's another way to say,
this is how the
eigenvectors pay off.
You put them into M. You
take the similar matrix
and it's nice and diagonal.
And do you see that
this will happen?
S times-- so M has
the first eigenvector
and the second eigenvector.
And I believe that equals
the first eigenvector
and the second eigenvector--
that's M again, on this side--
times, let me just write
in, 1, 0, 0, minus 1.
I believe this has got to
be confirming that we've
done the thing right--
confirming that the
eigenvectors work here.
Please make sense out
of that last line.
When you see that
last line, what do I
mean to make sense out of it?
I want to see that that's true.
How do I see that--
how do I do this--
so what's the left side
and what's the right side?
So what-- if I
multiply S by a couple
of columns, what's the answer?
AUDIENCE: Sx1 and Sx2.
GILBERT STRANG: Sx1 and Sx2.
That's the beauty of
matrix multiplication.
If I multiply a matrix
by another matrix,
I can do it a column at a time.
There are four great ways
to multiply matrices,
so this is another one--
a column at a time.
So this left hand
side is Sx1, Sx2.
I just do each column.
And what about the
right hand side?
I can do that multiplication.
AUDIENCE: X1 minus x2.
GILBERT STRANG: X1 minus
x2 did somebody say?
Death.
No.
I don't want-- Oh, x1--
sorry.
You said it right.
OK.
When you said x1 minus
x2, I was subtracting.
But you meant that that's--
the first column is x1,
and the second
column is minus x2.
Correct.
Sorry about that.
And did we come out right?
Yes.
Of course, now I compare.
Sx1 is lambda one x1.
Sx2 is lambda two x2.
And I'm golden.
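That last line, for this particular S, checks out numerically; a small numpy sketch, using the eigenvectors 1, 1 and 1, minus 1 as the columns of M.

import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # the symmetric example from the board

M = np.array([[1.0,  1.0],
              [1.0, -1.0]])         # columns are the eigenvectors (1,1) and (1,-1)
Lam = np.diag([1.0, -1.0])          # capital lambda, the eigenvalue matrix

print(np.allclose(S @ M, M @ Lam))                     # S M = M Lambda
print(np.allclose(np.linalg.inv(M) @ S @ M, Lam))      # M^-1 S M = Lambda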
So what was the
point of this board?
What did we learn?
We learned-- well, we kind of
expected that the original S
would be similar to the lambdas,
because the eigenvalues match.
S has eigenvalues lambda.
And this diagonal
matrix certainly
has eigenvalues 1 and minus 1.
A diagonal matrix--
the eigenvalues
are right in front of you.
So they're similar.
S is similar to the lambda.
And there should be an M. And
then somebody suggested, maybe
the M is the eigenvectors.
And that's the right answer.
So finally, let me write
that conclusion here--
which isn't just for
symmetric matrices.
So maybe I should
put it for matrix A.
So if it has lambdas
and eigenvectors,
and the claim is that A
times the eigenvector matrix
is the eigenvector matrix
times the eigenvalues.
And I would shorten that
to AX equals X lambda.
And I could rewrite
that, and then I'll
slow down, as A equals
X lambda X inverse.
Really, this is
bringing it all together
in a simple, small formula.
It's telling us that A
is similar to lambda.
It's telling us the matrix
M, that does the job--
it's a matrix of eigenvectors.
And so it's like a shorthand
way to write the main fact
about eigenvalues
and eigenvectors.
What about A squared?
Can I go back to
the very first--
I see time is close
to the end here.
What about A squared?
What are the eigenvectors
of A squared?
What are the eigenvalues
of A squared?
That's like the whole
point of eigenvalues.
Well, or I could just
square that stupid thing.
X lambda X inverse,
X lambda X inverse.
And what have I got?
X inverse X in the middle is--
AUDIENCE: Identity.
GILBERT STRANG: Identity.
So I have X, lambda
squared, X inverse.
And to me and to you that
says, the eigenvalues
have been squared.
The eigenvectors didn't change.
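A short numpy sketch of A equals X lambda X inverse and the squaring, with a made-up diagonalizable matrix.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # made-up diagonalizable example
lams, X = np.linalg.eig(A)
Lam = np.diag(lams)
Xinv = np.linalg.inv(X)

print(np.allclose(A, X @ Lam @ Xinv))               # A = X Lambda X^-1
print(np.allclose(A @ A, X @ Lam @ Lam @ Xinv))     # A^2 = X Lambda^2 X^-1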
Yeah.
OK.
And now finally,
last breath is, what
if the matrix is symmetric?
Then we have different letters.
That's the only-- that's
the significant change.
The eigenvector matrix is
now an orthogonal matrix.
I'm coming back to the key
fact of what makes symmetric--
how do I read--
how do I see symmetric
helping me in the eigenvector
and eigenvalue world?
Well, it tells me that the
eigenvectors are orthogonal.
So the X is Q.
The eigenvalues are real.
And then there is the X inverse.
But now I'm going to make those
eigenvectors unit vectors.
I'm going to normalize them.
So I'm really allowing--
I have an orthogonal matrix
Q. So I have a different way
to write this, and this
is the end of the--
today's class.
Q lambda.
And what can you tell
me about Q inverse?
AUDIENCE: It's Q transpose.
GILBERT STRANG:
It's Q transpose.
Thanks.
So that was from the last lecture.
So now the orthogonal
lecture is coming up
at the last second of the
symmetric matrices lecture.
And this has the name
spectral theorem,
which I'll just put there.
And the whole point
is that it tells you
what every symmetric
matrix looks like--
orthogonal eigenvectors,
real eigenvalues.
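The spectral theorem can be checked in numpy with eigh, which is built for symmetric matrices; the 3 by 3 matrix here is just a made-up symmetric example.

import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])     # any real symmetric matrix

lams, Q = np.linalg.eigh(S)         # real eigenvalues, orthonormal eigenvector columns

print(np.allclose(Q.T @ Q, np.eye(3)))               # Q is orthogonal: Q^T Q = I
print(np.allclose(S, Q @ np.diag(lams) @ Q.T))       # S = Q Lambda Q^T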
