>> Let's take a look at this
diagonalization theorem.
So we'll see what it says,
why it says what it says,
and we'll take a look
at a concrete example.
So first, suppose I
have a two-by-two matrix
and suppose I also
have some eigenvectors,
some linearly independent
eigenvectors.
Suppose it's V1 with
eigenvalue lambda 1 and V2
with eigenvalue lambda 2.
That tells me that A
times V1 is lambda 1 V1,
and A times V2 is lambda 2 V2.
I'm going to construct
this matrix P,
whose columns are V1 and V2.
And notice, since
I'm assuming that V1
and V2 are linearly independent,
this matrix P is invertible
by the invertible
matrix theorem.
If I compute what
A times P is, well,
it's A times this matrix
whose columns are the
vectors V1 and V2.
The first column is
going to be A times V1,
and the second column is
going to be A times V2.
But from above, we see
that that's just lambda
1 V1 and lambda 2 V2.
And I can write that
as a matrix product.
The first matrix is going
to be the matrix with
columns V1 and V2,
and the second will be
this diagonal matrix
with the eigenvalues on
the main diagonal.
All right, so I'll call that
P. And then, this one is going
to be a diagonal
matrix D. And so I get
that A times P equals P times
D, where P is this invertible 2
by 2 matrix, and D is a
diagonal 2 by 2 matrix.
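
Written out in symbols, that computation is:

\[
AP = A\begin{bmatrix} v_1 & v_2 \end{bmatrix}
   = \begin{bmatrix} Av_1 & Av_2 \end{bmatrix}
   = \begin{bmatrix} \lambda_1 v_1 & \lambda_2 v_2 \end{bmatrix}
   = \begin{bmatrix} v_1 & v_2 \end{bmatrix}
     \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}
   = PD.
\]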
Now let's see what we can
do with this equation.
If I multiply on the left
on both sides by P inverse,
notice that P inverse times P
is just the identity matrix,
and so on the right side
I'm left with just the
diagonal matrix D. And so I get
that P inverse A P is
a diagonal matrix.
Similarly, I could
redo the computation,
multiplying on the right
by P inverse on both sides.
P times P inverse is the
identity, and so I get
that A equals PDP inverse.
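
In equation form, the two manipulations are:

\[
P^{-1}AP = P^{-1}PD = D
\qquad\text{and}\qquad
A = (AP)P^{-1} = PDP^{-1}.
\]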
And so what this
says is that the fact
that A has two linearly
independent eigenvectors
tells me
that A is similar to
a diagonal matrix.
And this technique
works, in general,
for any N by N matrix A,
whenever we have N linearly
independent eigenvectors.
What we're going to do is this:
if the eigenvectors are V1
up to VN with eigenvalues
lambda 1 up to lambda N,
we're going to be able to write
A as P times D times P inverse,
where D is the diagonal matrix
whose main diagonal entries are
the eigenvalues, and
the invertible matrix P
is gotten by just taking
those linearly independent
eigenvectors and putting them
in as the columns of P.
And this is called the
diagonalization theorem.
It tells us that a
matrix is diagonalizable,
that is, similar to a diagonal
matrix, if and only if
we can find a basis of
eigenvectors,
so N linearly independent
eigenvectors
for an N by N matrix.
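
As an illustration of that recipe, here is a minimal NumPy sketch (my addition, not part of the lecture, and assuming a matrix with a full set of independent eigenvectors): numpy.linalg.eig hands back exactly the ingredients of the theorem, the eigenvalues that go on the diagonal of D and a matrix whose columns are the eigenvectors, which plays the role of P.

```python
import numpy as np

# A hypothetical example matrix; any matrix with n linearly
# independent eigenvectors would work here.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# eig returns the eigenvalues and a matrix whose columns are
# the corresponding eigenvectors.
eigenvalues, P = np.linalg.eig(A)

# D is the diagonal matrix of eigenvalues.
D = np.diag(eigenvalues)

# Check the diagonalization theorem: A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```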
Let's take a look at a concrete
three by three example.
Let's try to diagonalize
this matrix A, which is 2, 0,
minus 2; 1, 3, 2; 0, 0, 3.
And just to make things a little
bit easier, so that we don't have
to factor the characteristic
polynomial,
suppose you're also given the
eigenvalues. So in this case,
we're given that the
eigenvalues are 2 and 3.
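
If you do want to verify the given eigenvalues by machine, here is a small SymPy sketch (my addition, assuming SymPy is available; the lecture simply hands us the eigenvalues):

```python
from sympy import Matrix

A = Matrix([[2, 0, -2],
            [1, 3,  2],
            [0, 0,  3]])

# eigenvals() returns each eigenvalue with its algebraic multiplicity.
print(A.eigenvals())  # -> {2: 1, 3: 2}
```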
Well, if we want to
diagonalize it, this is going
to break up into two pieces.
We need to find a basis for
the lambda equals 2 eigenspace,
and then we also need
to find a basis
for the lambda equals
3 eigenspace.
Now remember, any eigenvector
with eigenvalue 2 is going
to be linearly independent
from any eigenvector
with eigenvalue 3, because
2 is not equal to 3.
Distinct eigenvalues
give you linearly
independent eigenvectors.
And so what this is
going to tell us is
that we can diagonalize A,
that we can find three linearly
independent eigenvectors,
as long as the dimensions
of these eigenspaces
add up to 3, all right?
Then A is going to
be diagonalizable.
So we'll just compute
each of these eigenspaces,
find a basis for each one,
and see if the
dimensions add up to 3 or not.
So we look at A minus
2 times the identity,
and I get this matrix, 0, 0,
minus 2; 1, 1, 2; 0, 0, 1.
So remember, when you're
subtracting off two times the
identity, all you're
going to do is subtract 2
from the main diagonal.
Now we want to compute the
null space of this matrix,
and so I set up the
augmented matrix,
and now I'm just
going to row reduce.
All right, because all I want
to do is find the null space
of that matrix, so
that I can find a basis
for it.
Now here it is in
reduced echelon form.
Let me put boxes around my
pivots so that it's easier
to see what the basic
and free variables are.
And I see that the eigenspace
can be described as, well,
x2 is free, and if x2 is T,
then x1 is minus T and x3 is 0,
for all T in
R. What this tells me is that,
in parametric vector form,
this solution space is just given
by T times the column
vector minus 1, 1, 0.
All right, so the
eigenspace corresponding
to the eigenvalue 2
is one-dimensional,
with basis minus 1, 1, 0.
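
As a sanity check (again a sketch of mine, not from the lecture), SymPy's nullspace method recovers the same basis:

```python
from sympy import Matrix, eye

A = Matrix([[2, 0, -2],
            [1, 3,  2],
            [0, 0,  3]])

# Basis for the lambda = 2 eigenspace, i.e. the null space of A - 2I.
print((A - 2 * eye(3)).nullspace())  # [Matrix([[-1], [1], [0]])]
```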
What about the eigenspace
for three?
I'm going to compute A
minus 3 times the identity.
I get this matrix here.
Again, I want to compute the
null space of that matrix.
So I set up the augmented
matrix and row reduce.
In this case, the row
reduction is easy.
Let me put a box
around the pivot
to highlight the free
and basic variables.
I see that the eigenspace
is described by, well,
now in this case, x2 and
x3 are non-pivot columns,
so those are free.
Let me call the free parameters
S and T, and then x1 is minus 2T.
And so, in parametric form,
this solution set looks
like x equals S times 0, 1,
0 plus T times minus 2, 0, 1.
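
And the same SymPy check works for this eigenspace (my sketch again):

```python
from sympy import Matrix, eye

A = Matrix([[2, 0, -2],
            [1, 3,  2],
            [0, 0,  3]])

# Basis for the lambda = 3 eigenspace, i.e. the null space of A - 3I.
print((A - 3 * eye(3)).nullspace())
# [Matrix([[0], [1], [0]]), Matrix([[-2], [0], [1]])]
```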
So the lambda equals 2
eigenspace has basis minus 1,
1, 0.
The lambda equals 3 eigenspace
has basis 0, 1, 0 and minus 2,
0, 1, which tells me now, by
the diagonalization theorem,
that I can write A
as PDP inverse,
where D is this diagonal
matrix with 2, 3,
3 on the main diagonal.
And for P, the important thing here is
that since I listed the eigenvalue 2
first in the diagonal matrix,
I need to put the eigenvector
corresponding
to eigenvalue 2 first, so
I've got minus 1, 1, 0.
And then next, I'm
going to list the basis
for the lambda equals
3 eigenspace:
0, 1, 0 and minus 2, 0, 1.
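
Putting the pieces together, one last SymPy sketch (my addition) confirms that this P and D really do satisfy A equals PDP inverse:

```python
from sympy import Matrix, diag

A = Matrix([[2, 0, -2],
            [1, 3,  2],
            [0, 0,  3]])

# Eigenvectors as columns: the lambda = 2 eigenvector comes first,
# matching the order of the eigenvalues on the diagonal of D.
P = Matrix([[-1, 0, -2],
            [ 1, 1,  0],
            [ 0, 0,  1]])
D = diag(2, 3, 3)

print(P * D * P.inv() == A)  # True
```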
