GILBERT STRANG: Moving
now to the second half
of linear algebra.
It's about eigenvalues
and eigenvectors.
The first half, I
just had a matrix.
I solved equations.
The second half,
you'll see the point
of eigenvalues and
eigenvectors as a new way
to look deeper into the matrix
to see what's important there.
OK, so what are they?
This is a big
equation, S times x.
So S is our matrix.
And I've called it
S because I'm taking
it to be a symmetric matrix.
What's on one side
of the diagonal
is also on the other
side of the diagonal.
So those have the
beautiful properties.
Those are the kings
of linear algebra.
Now, about eigenvectors
x and eigenvalues lambda.
So what does that equation,
Sx equal lambda x, tell me?
That says that I have
a special vector x.
When I multiply it
by S, my matrix,
I stay in the same
direction as the original x.
It might get multiplied by 2.
Lambda could be 2.
It might get multiplied by 0.
Lambda there could even be 0.
It might get multiplied
by minus 2, whatever.
But it's along the same line.
So that's like taking a matrix
and discovering inside it
something that stays on a line.
That means that it's really a
sort of one dimensional problem
if we're looking along
that eigenvector.
And that makes computations
infinitely easier.
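Here is a minimal NumPy sketch of that idea, with a 2 by 2 symmetric S chosen just for illustration: multiplying an eigenvector by S gives back the same vector, scaled by lambda.

import numpy as np

# A small symmetric matrix (any symmetric S would do).
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, X = np.linalg.eigh(S)  # eigenvalues and eigenvectors of a symmetric matrix
x = X[:, 0]                  # one eigenvector of S
print(S @ x)                 # same direction as x...
print(lams[0] * x)           # ...just scaled by its eigenvalue lambda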
The hard part of a matrix
is all the connections
between different
rows and columns.
So eigenvectors
are the guys that
stay in that same direction.
And y is another eigenvector.
It has its own eigenvalue.
It got multiplied by alpha
where Sx multiplied the x
by some other number lambda.
So there's our couple
of eigenvectors.
And the beautiful fact is
that because S is symmetric,
those two eigenvectors
are perpendicular.
They are orthogonal,
as it says up there.
So symmetric matrices
are really the best
because their eigenvectors
are perpendicular.
And we have a bunch of
one dimensional problems.
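A quick NumPy check of that perpendicularity, reusing an illustrative symmetric S:

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, X = np.linalg.eigh(S)  # the columns of X are the eigenvectors
x, y = X[:, 0], X[:, 1]
print(x @ y)                 # dot product is 0: x and y are perpendicular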
And here, I've included a proof.
You want a proof that the
eigenvectors are perpendicular?
So what does perpendicular mean?
It means that x transpose
times y, the dot product is 0.
The angle is 90 degrees.
The cosine is 0.
OK.
How to show that
the dot product is 0?
Yeah, proof.
This is just-- you can
tune out for two minutes
if you hate proofs.
OK, I start with what I know.
What I know is in that box.
Sx is lambda x.
That's one eigenvector.
Sy is alpha y.
That tells me about the
other eigenvector y.
This tells me the
eigenvalues are different.
And that tells me the
matrix is symmetric.
I'm just going to
juggle those four facts.
And I'll end up with x
transpose y equals 0.
That's orthogonality.
OK.
So I'll just do it
quickly, too quickly.
So I take this first
thing, and I transpose
it, turn it into row vectors.
And then when I transpose
it, that transpose
means I flip rows and columns.
But for a symmetric
matrix, no different.
So S transpose is the same as S.
And then I look at this
one, and I multiply that
by x transpose, both
sides by x transpose.
And what I end up
with is recognizing
that lambda times
that dot product
equals alpha times
that dot product.
But lambda is
different from alpha.
So the only way lambda
times that number
could equal alpha
times that number
is that number has to be 0.
And that's the answer.
OK, so that's the
proof that used
exactly every fact we knew.
End of proof.
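For reference, here are the same steps in symbols (a plain-notation sketch of exactly that argument):

Given: Sx = lambda x, Sy = alpha y, lambda != alpha, and S transpose = S.
Transpose the first fact: x^T S^T = lambda x^T, so x^T S = lambda x^T.
Multiply Sy = alpha y on the left by x^T: x^T S y = alpha (x^T y).
But x^T S = lambda x^T also gives: x^T S y = lambda (x^T y).
So lambda (x^T y) = alpha (x^T y), and lambda != alpha forces x^T y = 0.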
Main point to
remember, eigenvectors
are perpendicular when
the matrix is symmetric.
OK.
In that case, now-- you always
want to express these facts
as matrix multiplications.
That says everything
in a few symbols
where I had to use all those
words on the previous slide.
So that's the result
that I'm shooting for,
that a symmetric matrix--
just focus on that box.
A symmetric matrix can be
broken up into its eigenvectors--
those are in Q-- and its
eigenvalues-- those are the
numbers lambda 1 to lambda n
on the diagonal of Lambda.
And then the transpose, so
the eigenvectors are now
rows in Q transpose.
That's just perfect.
Perfect.
Every symmetric matrix
is an orthogonal matrix
times a diagonal matrix
times the transpose
of the orthogonal matrix.
Yeah, that's called
the spectral theorem.
And you could say it's up there
with the most important facts
in linear algebra and
in wider mathematics.
Yeah, so that's the fact that
controls what we do here.
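A minimal NumPy check of the spectral theorem, with an illustrative symmetric S:

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lams, Q = np.linalg.eigh(S)  # orthonormal eigenvectors in the columns of Q
Lam = np.diag(lams)          # eigenvalues on the diagonal of Lambda
print(Q @ Lam @ Q.T)         # rebuilds S: the spectral theorem
print(Q.T @ Q)               # the identity matrix: Q really is orthogonal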
Oh, now I have to say what's the
situation if the matrix is not
symmetric.
Now I am not going to get
perpendicular eigenvectors.
That was a symmetric
thing mostly.
But I'll get eigenvectors.
So I'll get Ax equal lambda x.
The first one won't
be perpendicular
to the second one.
The matrix A, it
has to be square,
or this doesn't make sense.
So eigenvalues and
eigenvectors are the way
to break up a square matrix
and find this diagonal matrix
lambda with the eigenvalues,
lambda 1, lambda 2, to lambda n.
That's the purpose.
And eigenvectors are
perpendicular when
it's a symmetric matrix.
Otherwise, I just have X and its
inverse matrix but no symmetry.
OK.
So that's the quick expression,
another factorization:
eigenvalues in Lambda.
Diagonal, just numbers.
And eigenvectors in
the columns of X.
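Here is a minimal sketch of that factorization for a nonsymmetric matrix, with an illustrative 2 by 2 A:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])         # square but not symmetric
lams, X = np.linalg.eig(A)         # general eigen-decomposition
Lam = np.diag(lams)
print(X @ Lam @ np.linalg.inv(X))  # rebuilds A as X Lambda X inverse
print(X[:, 0] @ X[:, 1])           # not 0: these eigenvectors are not perpendicular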
And now I'm not
going to-- oh, I was
going to say I'm not
going to solve all
the problems of applied math.
But that's what these are for.
Let's just see what's special
here about these eigenvectors.
Suppose I multiply again by A.
I start with Ax equal lambda x.
Now I'm going to
multiply both sides by A.
That'll tell me something
about eigenvalues of A squared.
Because when I multiply by A--
so let me start
with A squared now
times x, which means A times Ax.
A times Ax.
But Ax is lambda x.
So I have A times lambda x.
And I pull out
that number lambda.
And I still have an Ax.
And that's also still lambda x.
You see I'm just talking
around in a little circle
here, just using Ax equal
lambda x a couple of times.
And the result is--
do you see what that
means, that result?
That means that A squared
has the same eigenvector x,
and the eigenvalue
is lambda squared.
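A quick NumPy check of that, reusing the illustrative nonsymmetric A from above:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lams, X = np.linalg.eig(A)
x, lam = X[:, 0], lams[0]  # one eigenvector and its eigenvalue
print(A @ A @ x)           # A squared times x...
print(lam**2 * x)          # ...is lambda squared times x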
And if I had A
cubed, the eigenvalue
would come out lambda cubed.
And if I have A to
the-- yeah, yeah.
So if I had A to the nth power,
n multiplies-- so when
would you have A
to a high power?
That's an interesting matrix.
Take a matrix and
square it, cube it,
take high powers of it.
The eigenvectors don't change.
That's the great thing.
That's the whole
point of eigenvectors.
They don't change.
And the eigenvalues just
get taken to the high power.
So for example, we could
ask the question, when,
if I multiply a matrix by itself
over and over and over again,
when do I approach 0?
Well, if these numbers
are all below 1 in magnitude.
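A minimal sketch of that, with an illustrative A whose eigenvalues are 0.5 and 0.3:

import numpy as np

A = np.array([[0.5, 0.4],
              [0.0, 0.3]])            # eigenvalues 0.5 and 0.3, both below 1
print(np.linalg.matrix_power(A, 50))  # essentially the zero matrix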
So eigenvectors and eigenvalues
give you something
that you just could not see by
those column operations or L
times U. This is looking deeper.
OK.
And OK, and then you'll see--
we have almost already seen it
with least squares-- this
combination A transpose A. So
remember A is a
rectangular matrix, m by n.
I multiply it by its transpose.
When I transpose
it, I have n by m.
And when I multiply them
together, I get n by n.
So A transpose A is, for
theory, a great matrix,
A transpose times
A. It's symmetric.
Yeah, let's just see what we
have about A transpose A.
It's square for sure.
Oh, yeah.
This tells me that
it's symmetric.
And you remember why.
I'm always looking
for symmetric matrices
because they have those
orthogonal eigenvectors.
They're the beautiful
ones for eigenvectors.
And A transpose A,
automatically symmetric.
You're just
multiplying something
by its adjoint, its
transpose, and the result
is that this matrix
is symmetric.
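A quick NumPy check, with a random rectangular A chosen just for illustration:

import numpy as np

A = np.random.randn(5, 3)   # rectangular: m = 5, n = 3
M = A.T @ A                 # n by n, so 3 by 3
print(np.allclose(M, M.T))  # True: A transpose A is automatically symmetric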
And maybe there's even more
about A transpose A. Yes.
What is that?
Here is a final--
I always say certain
matrices are important,
but these are the winners.
They are symmetric matrices.
If I want beautiful
matrices, make them symmetric
and make the
eigenvalues positive.
Or non-negative, which allows 0.
So I can either say
positive definite
when the eigenvalues
are positive,
or I can say non-negative,
which allows 0.
And so I have greater
than or equal to 0.
I just want to say that all
the pieces of linear
algebra come together
in these matrices.
And we're seeing the
eigenvalue part of it.
And here, I've mentioned
something called the energy.
So that's a physical
quantity that
also is greater than or equal to 0.
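A minimal sketch of the energy fact, again with a random rectangular A for illustration:

import numpy as np

A = np.random.randn(5, 3)
M = A.T @ A
x = np.random.randn(3)
print(x @ M @ x)                 # the energy x^T (A^T A) x...
print(np.linalg.norm(A @ x)**2)  # ...equals the squared length of Ax, never negative
print(np.linalg.eigvalsh(M))     # so the eigenvalues all come out >= 0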
So A transpose
A is the matrix
that I'm going to use in
the final part of this video
to achieve the
greatest factorization.
Q Lambda Q transpose
was fantastic.
But for a non-square
matrix, it's not available.
Non-square matrices
don't even have eigenvalues
and eigenvectors.
But data comes in
non-square matrices.
Data is like-- we
have a bunch of diseases
and a bunch of patients,
or a bunch of medicines.
And the number of
medicines is not equal to the
number of patients or diseases.
Those are different numbers.
So the matrices that we see
in data are rectangular.
And eigenvalues don't
make sense for those.
And singular values take
the place of eigenvalues.
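A minimal NumPy sketch, with a random 5 by 3 matrix standing in for a data matrix:

import numpy as np

A = np.random.randn(5, 3)  # rectangular: eigenvalues make no sense here
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
print(sigma)               # the singular values, all >= 0
print(np.allclose(A, U @ np.diag(sigma) @ Vt))  # True: A = U Sigma V transpose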
So singular values-- and my
hope is that linear algebra
courses, 18.06 for sure,
after you explain eigenvalues,
which everybody agrees are
important, will always get
singular values into the course,
because they really have come on
as the big thing to do in data.
So the last part
of this summary video,
for a 2020 vision
of linear algebra,
is to get singular
values in there.
OK, that's coming next.
