In this video we're going to
consider the question of when a matrix
is diagonalizable. 
Now most matrices are diagonalizable
but there are some exceptions. 
Even if you allow complex eigenvalues
and complex eigenvectors,
there are some matrices that
simply do not admit a basis
of eigenvectors.
And you know, the space 
of all n by n matrices,
well that's n^2 dimensional.
Turns out that the space
of non-diagonalizable n by n matrices
is only (n^2 - 1) dimensional.
So if you pick a random matrix,
it's almost certain to be
diagonalizable, but if you
pick a one parameter family
of random matrices,
somewhere along the way
there's a good chance that
you'll run into a non-diagonalizable one.
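If you want to see that claim in action, here's a quick numerical
sketch of my own (not from the video): sample random matrices with
numpy and check that the matrix of eigenvectors has full rank, i.e.
that a basis of eigenvectors exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
for _ in range(100):
    A = rng.standard_normal((n, n))
    _, V = np.linalg.eig(A)               # columns of V are eigenvectors
    assert np.linalg.matrix_rank(V) == n  # full set of independent eigenvectors
print("all 100 random matrices were diagonalizable")
```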
So let's see how a matrix can fail
to be diagonalizable.
The standard example is the
matrix [1 1; 0 1].
And let's go find its
eigenvalues and eigenvectors.
Now as always we compute λ (lambda)
times the identity minus the matrix:
λI - A = [λ-1  -1; 0  λ-1].
And we take this determinant to get
the characteristic polynomial.
(λ - 1)(λ - 1) - (-1)·0
just gives us (λ - 1)^2,
and that has one and only one root.
So the only eigenvalue is λ = 1.
Now only having one eigenvalue
isn't necessarily a problem.
If we have two eigenvectors with
this eigenvalue, great.
We'd have a basis of eigenvectors.
So let's find out how many eigenvectors
we actually have.
You take A - λI, which here is
A - I = [0 1; 0 0], and row reduce it.
Now in this case, there's not much
to the row reduction.
It already is in reduced row echelon form.
The pivot variable is the second variable.
The free variable is the first.
Our first equation is 0·x_1 + x_2 = 0; in
other words, x_1 can be whatever it wants
and x_2 has to be 0.
And so our eigenvector has to be
a multiple of (1, 0).
That means there is only one
linearly independent eigenvector.
If you take any two eigenvectors,
they have to be multiples of each other
and the matrix isn't diagonalizable.
So only one eigenvector,
and that's not enough to form a
basis for R^2 or C^2. The problem
is that 1 was a double root
of the characteristic polynomial,
but it only gave us a
single eigenvector.
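Here's a quick way to confirm this numerically (a sketch of my own,
using numpy): the geometric multiplicity n - rank(A - I) comes out
to 1, so there is no basis of eigenvectors.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
mg = A.shape[0] - np.linalg.matrix_rank(A - np.eye(2))
print(mg)  # 1 -- only one independent eigenvector
```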
In general, we know that the eigenvalues
are always the roots of the
characteristic polynomial.
This gives us two different ways to decide
what the multiplicity of an eigenvalue is.
You can say is it a regular root 
or a double root or a triple root
or quadruple root of the polynomial?
That's called the algebraic multiplicity.
So if you have a characteristic
polynomial that's
(λ - 1)^2 (λ - 2)^3 (λ - 7),
we say the algebraic multiplicity of
one is two, the algebraic multiplicity
of two is three, the algebraic
multiplicity of seven is one,
that's just a regular root.
And we denote algebraic
multiplicity by m_a.
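As an illustration of my own (not from the video), sympy can read
those algebraic multiplicities straight off the factored polynomial:

```python
import sympy as sp

lam = sp.symbols('lam')
p = (lam - 1)**2 * (lam - 2)**3 * (lam - 7)
print(sp.roots(p, lam))  # {1: 2, 2: 3, 7: 1} -- eigenvalue: m_a
```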
Now, the geometric multiplicity
describes how big
the eigenspace is.
Now for that, you have to actually
figure out what E_{λ} is,
and you do that by taking
A minus λ times the identity
and row reducing it.
So the geometric multiplicity of one
is the dimension of E_1,
and that's going to be n - rank(A - I),
because that's the number
of free variables.
You have rank(A - I) pivots
and n total variables, so the
difference is the number
of free variables.
Likewise, m_g(2) is
n - rank(A - 2I), and m_g(7) is going to be
n - rank(A - 7I).
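That formula is easy to turn into code. Here's a small sketch (mine,
with numpy); the tol parameter is just there to absorb floating-point
noise in the rank computation.

```python
import numpy as np

def geometric_multiplicity(A, lam, tol=1e-10):
    """m_g(lam) = dim E_lam = n - rank(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(geometric_multiplicity(A, 1.0))  # 1, as computed earlier
```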
Now, you might ask how these two
different multiplicities are related.
And the answer is that the geometric
multiplicity can never be
bigger than the algebraic multiplicity,
it's always less than or equal to
the algebraic multiplicity.
On the other hand, it's always
at least one.
If λ is a root of the
characteristic polynomial,
then there exists an eigenvector,
and so the geometric multiplicity
is at least one.
Now, in particular, that means that
if the algebraic multiplicity is one,
then the geometric multiplicity is
at least one and at most one,
so it's exactly one.
There's nothing to check when
the algebraic multiplicity is one.
The only time things get interesting
is when the algebraic multiplicity
is bigger than one. Then you have to
figure out the geometric multiplicity:
is it one, two, three,
or some number up to
the algebraic multiplicity?
So the big theorem, this is the theorem
that tells us when a matrix
is diagonalizable:
a matrix is diagonalizable if and only if
the geometric multiplicities add up to n.
So how do you get a basis
of eigenvectors?
You take a basis of eigenvectors
for each eigenspace and
just concatenate them.
And that's true if and only if for
every eigenvalue, the geometric
multiplicity is equal to
the algebraic multiplicity.
And you only need to check the cases
where the algebraic is bigger than one
because if the algebraic is one,
then the geometric is one.
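Putting the theorem together into a checker, here's a sketch of my
own (using sympy for exact arithmetic, and assuming sympy can solve
for the eigenvalues exactly):

```python
import sympy as sp

def is_diagonalizable(A):
    """True iff the geometric multiplicities add up to n."""
    A = sp.Matrix(A)
    n = A.shape[0]
    mg_total = sum(len((A - lam * sp.eye(n)).nullspace())
                   for lam in A.eigenvals())  # eigenvals: roots of charpoly
    return mg_total == n

print(is_diagonalizable([[1, 1], [0, 1]]))  # False
```

sympy also ships a built-in Matrix.is_diagonalizable() you can use
to cross-check.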
So for example, look at
this matrix here:
[1 0 0; 0 0 1; 0 1 0].
If you work out the
characteristic polynomial,
the determinant of λI minus this matrix,
it winds up being (λ - 1)^2 (λ + 1).
So the eigenvalues are plus and minus
one. The algebraic multiplicity
of -1 is one, and since one is
a double root of (λ - 1)^2,
the algebraic multiplicity of +1 is two.
Don't have to worry about the single
root, we do have to worry about
the double root, so let's figure out
what the dimension of the eigenspace is.
We have to take A - I, and that's
[0 0 0; 0 -1 1; 0 1 -1].
You row reduce and you see that
there's one pivot. If there's only
one pivot there are two
free variables, x_1 and x_3, and
so that means that we have
a two dimensional eigenspace.
So in this case, m_g(1) is also two,
and we win, it's diagonalizable.
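You can confirm this example with sympy (my own check, not from the
video):

```python
import sympy as sp

A = sp.Matrix([[1, 0, 0],
               [0, 0, 1],
               [0, 1, 0]])
print(sp.factor(A.charpoly().as_expr()))  # (lambda - 1)**2*(lambda + 1)
print(len((A - sp.eye(3)).nullspace()))   # 2 = m_g(1) = m_a(1)
```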
Our next example is just like
the first example, except that
over here in this corner,
instead of a zero,
I've put a one.
That doesn't change the characteristic
polynomial, the characteristic
polynomial still winds up being
exactly the same as before.
So the algebraic multiplicity of one
is two, and now if we go about
figuring out the geometric multiplicity,
take A-I and that's this matrix.
And to row reduce it, we'll swap
the first and third row.
Then we'll add the first row to
the second, and I'll swap
the second and third rows, add
the second row to the first
and you see now we've got
two pivots.
There are two pivots, so there's
only one free variable, so
the geometric multiplicity is one.
The geometric multiplicity
did not equal the algebraic, this
was one, this was two,
so A is not diagonalizable.
And remember, when I say the geometric
multiplicity, I mean the geometric
multiplicity of one. There's also
a geometric multiplicity of
the other eigenvalue. Every
eigenvalue has a geometric and
an algebraic multiplicity.
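Here's the same check for the modified matrix. The transcript doesn't
show the entries, so putting the new one in the top-right corner is
an assumption on my part; it's the placement consistent with the row
reduction described above.

```python
import sympy as sp

A = sp.Matrix([[1, 0, 1],   # top-right 1 is assumed, see note above
               [0, 0, 1],
               [0, 1, 0]])
print(sp.factor(A.charpoly().as_expr()))  # (lambda - 1)**2*(lambda + 1), unchanged
print(len((A - sp.eye(3)).nullspace()))   # 1 = m_g(1) < m_a(1) = 2
```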
The last example is just like the second
example, except I've changed
this entry to a two.
Now if you compute
the characteristic polynomial, it's
(λ - 2)(λ^2 - 1),
so the roots are one, negative one,
and two.
They all have algebraic multiplicity 1,
so they all must have geometric
multiplicity 1. So it's diagonalizable.
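And one more check of my own. Assuming the changed entry is the
top-left 1 becoming a 2 (the choice that matches the characteristic
polynomial quoted above):

```python
import sympy as sp

A = sp.Matrix([[2, 0, 1],   # top-left 2 is assumed, see note above
               [0, 0, 1],
               [0, 1, 0]])
print(sp.factor(A.charpoly().as_expr()))  # (lambda - 2)*(lambda - 1)*(lambda + 1)
```

Three distinct roots, three algebraic multiplicities of one, so
diagonalizable with nothing to check.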
By the way, all three examples were
invertible matrices.
People often confuse invertible matrices
with diagonalizable matrices.
They have nothing to do with each other.
You can have invertible matrices that
aren't diagonalizable. You can have
non-invertible matrices that 
are diagonalizable.
Two completely different concepts.
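To make the distinction concrete, here's a small sympy illustration
of my own: an invertible matrix that isn't diagonalizable, and a
non-invertible one that is.

```python
import sympy as sp

# Invertible but not diagonalizable:
J = sp.Matrix([[1, 1], [0, 1]])
print(J.det() != 0, J.is_diagonalizable())  # True False

# Diagonalizable (it's already diagonal) but not invertible:
D = sp.Matrix([[0, 0], [0, 1]])
print(D.det() != 0, D.is_diagonalizable())  # False True
```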
