In the last video, we talked about how
to deal with non-diagonalizable matrices
by perturbing them and
making them diagonalizable.
Now this involves a bunch of analysis;
you have to take limits,
you have to use L'Hôpital's Rule,
sometimes you don't want to do that.
In this video, we're gonna talk
about a different approach,
which is accept that
they're non-diagonalizable,
and find vectors that behave
in an appropriate way
with respect to those matrices.
Those vectors are called power vectors.
Now the idea of a power vector is
it's a generalization
of the idea of an eigenvector.
See, an eigenvector is something
where Ax is λx.
So that means that (A - λI)x = 0.
A power vector isn't killed by A - λI,
it's killed by a power of A - λI.
So if you have a vector where when
you hit it with A - λI p times, it dies,
but if you hit it p - 1 times,
it doesn't die,
then we call that a
power vector of degree p.
Sometimes these are called generalized
eigenvectors—I don't like that term.
I'm gonna call them power vectors.
And eigenvectors are a special case
of power vectors where p = 1.
(A - λI)^1 x = 0, and the 0th power
of A - λI applied to x is just x.
So these are a generalization
of the idea of an eigenvector.
So let's look at an example.
The most basic example is—
We've looked at 1, 1, 0, 1,
we're gonna look at 2, 1, 0, 2 this time.
You look at that, and <1, 0> is an
eigenvector with eigenvalue 2.
'Cause if you subtract
off twice the identity
and you multiply by <1, 0>, you get 0.
Or if you prefer, if you multiply the
original matrix by <1, 0>,
you get 2<1, 0>.
So this is (A - λI)x = 0,
or you can say Ax = λx.
They're equivalent.
This is an eigenvector.
But this isn't diagonalizable.
If you look at <0, 1>,
that's not an eigenvector.
But <0, 1> is a power vector.
You see, if you multiply A - 2I
times A - 2I times <0, 1>,
you do the first multiplication
and you get <1, 0>.
And then you multiply by this again,
and you get <0, 0>.
So you start with a
power vector of degree 2,
you hit it with A - 2I,
and you get a power vector of degree 1—
in other words, an eigenvector.
Hit it with A - 2I again, you kill it.
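If you want to check that arithmetic numerically, here's a quick NumPy sketch (NumPy isn't part of the video; this is just a sanity check of the example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
N = A - 2 * np.eye(2)          # A - 2I

v = np.array([0.0, 1.0])       # claimed power vector of degree 2
print(N @ v)                   # [1. 0.] -- one hit gives the eigenvector <1, 0>
print(N @ N @ v)               # [0. 0.] -- the second hit kills it
```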
So here's another example.
A little bit more subtle.
Let's look at the matrix 3, 1, -1, 1.
If you find its characteristic polynomial,
it works out to λ^2 - 4λ + 4,
which is (λ - 2)^2.
It has 2 as an eigenvalue,
and it's a double root.
You could also tell that because
the trace is 4, that's 3 + 1,
and the determinant is 4, that's 3·1 - 1·(-1).
So two numbers that add up to 4
and multiply up to 4
have got to be 2 and 2.
Now we can look for our eigenspace.
To find our eigenspace,
we look at A - 2I,
and we row reduce it,
and we find the null space
of the row reduced matrix.
That gives us the equation x_1 = -x_2,
and x_2 can be anything it wants,
so we could say that our eigenvector is
<-1, 1>; I prefer to use <1, -1>.
So that's an eigenvector.
But now let's look at the vector <1, 0>.
I claim that's a power vector of degree 2.
'Cause let's check.
If we apply A - 2I to it twice,
here's A - 2I,
you multiply it out.
If you multiply it once,
you get <1, -1>.
And you multiply it again,
you get <0, 0>.
So that's a power vector.
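Same sanity check for this matrix, again as a NumPy sketch rather than anything from the video:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
N = A - 2 * np.eye(2)           # A - 2I

x = np.array([1.0, 0.0])        # claimed power vector of degree 2
print(N @ x)                    # [ 1. -1.] -- the eigenvector <1, -1>
print(N @ N @ x)                # [0. 0.] -- killed on the second hit
```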
So what's so great about power vectors?
Well these two together form a basis.
These are linearly independent,
and they give you a basis for R2.
And let's see what the matrix
looks like in that basis.
See, if you apply A to the eigenvector,
you get twice the eigenvector.
And you apply A - 2I to the second vector,
you happen to get the first vector.
Which means that A applied to the
second vector gave you the first vector
plus twice the second vector.
And now we can look at that
in the B basis.
Ab_1, the coordinates
in the B basis, are <2, 0>.
Ab_2, the coordinates
in the B basis, are <1, 2>.
And we discover that if we use this basis,
that this example looks just
like our previous example.
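You can verify that change of basis numerically; a minimal NumPy sketch (not from the video), putting the eigenvector and the power vector in as the columns of P:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
P = np.array([[1.0, 1.0],        # columns: eigenvector <1, -1>, power vector <1, 0>
              [-1.0, 0.0]])

J = np.linalg.inv(P) @ A @ P     # A expressed in the B basis
print(J)                         # [[2. 1.]
                                 #  [0. 2.]] -- same as the previous example
```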
It's not diagonalizable; you can't make
it into a diagonal matrix,
but you can make it into
what's the next best thing.
This is called Jordan form.
And we're gonna see more about
Jordan form in a couple of minutes.
So here's what we do in general.
Instead of looking at an eigenspace E_λ,
we're gonna look at the power space—
E_λ with a twiddle on top.
And that's gonna be all of the power
vectors that have that eigenvalue,
no matter what their degree is.
And that's the same
thing as the null space,
but it's not the null space of A - λI,
it's the null space of a power of A - λI.
And it turns out that the algebraic
multiplicity gives a big enough power
to catch all of them.
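For the 2x2 example above you can watch that happen: raising A - 2I to the algebraic multiplicity (here 2) gives the zero matrix, so its null space, the power space, is all of R^2. A NumPy sketch, not from the video:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
m = 2                                            # algebraic multiplicity of λ = 2
N = np.linalg.matrix_power(A - 2 * np.eye(2), m)
print(N)                                         # the zero matrix: every vector
                                                 # in R^2 is a power vector for λ = 2
```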
So this contains the eigenspace,
because eigenvectors
are killed by the first power,
but it can contain more
than the eigenspace.
In general, the degrees
of the power vectors
can never be bigger than
the algebraic multiplicity,
and the dimension of the power space
is always equal to the
algebraic multiplicity.
So, going back: if the geometric multiplicity is
the same as the algebraic multiplicity,
then E_λ has that dimension,
and the power space has that dimension,
so they're the same.
You don't get any extra power vectors.
If the geometric equals algebraic, the
eigenvectors are the only thing in town.
But if the geometric is smaller
than the algebraic,
then the eigenspace is smaller
than the power space,
and you need extra power vectors
to give you this dimensional space.
So the upshot—
here's the big theorem—
is that if you have any n×n matrix,
whether it's diagonalizable or not,
you can always find a basis
consisting of power vectors.
Some of them are eigenvectors,
others are power vectors
of degree 2, 3, 4, whatever.
And if you choose the basis
in exactly the right way,
you can write the matrix to be
block diagonal,
where each block looks just like
what we saw before.
Each block has its eigenvalue on the diagonal,
and ones above the diagonal,
and this is called a Jordan Block.
So here are some examples of
matrices in Jordan canonical form—
some 3x3 matrices.
If it's a diagonal matrix, then it's
in Jordan canonical form,
the blocks just happen to be 1x1 blocks.
And here, likewise, they're 1x1 blocks.
Two of them happen to have
the same eigenvalue,
but if you have
a two-dimensional eigenspace,
that's actually gonna give you two
Jordan blocks for that eigenvalue.
Now here we have a situation
where we have a 1x1 block,
because <1, 0, 0> is an eigenvector,
and we have a 2x2 block.
<0, 1, 0> is an eigenvector,
<0, 0, 1> is not.
It's a power vector,
but not an eigenvector.
Here we still have a 1x1 and a 2x2.
Here we have two eigenvectors
with eigenvalue 2,
and a power vector with eigenvalue 2.
And here you have one
eigenvector with eigenvalue 2,
namely <1, 0, 0>,
a power vector of degree 2,
and a power vector of degree 3.
So, diagonalizable matrices
go in Jordan form,
and all the blocks are 1x1 blocks.
If it's not diagonalizable,
then you have bigger blocks.
And for each block, <1, 0, 0, 0, ...0>
is an eigenvector.
<0, 1, 0, 0, ...0> is a
power vector of degree 2,
<0, 0, 1, 0, ...0> is a power
vector of degree 3, and so on.
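To see those degrees concretely, here's a small sketch that computes the degree of each standard basis vector against a single 3x3 Jordan block (the `degree` helper is my own illustration, not something from the video):

```python
import numpy as np

J = np.array([[2.0, 1.0, 0.0],     # one 3x3 Jordan block with eigenvalue 2
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
N = J - 2 * np.eye(3)              # the nilpotent part, J - λI

def degree(v):
    """Smallest p with (J - λI)^p v = 0."""
    p = 0
    while not np.allclose(v, 0):
        v = N @ v
        p += 1
    return p

print([degree(np.eye(3)[:, k]) for k in range(3)])   # [1, 2, 3]
```

Each hit with J - λI shifts a basis vector down one degree, which is exactly the chain structure the lecture describes.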
