Okay, so now we're going to look at A x equals lambda x, where the idea is that we're trying to find values lambda for which there is a nonzero vector x such that A times x is equal to lambda x. Now let's think about that for a second. What it really means is that we're looking for a direction x, since a vector defines a direction, such that if you hit the vector with A, the net result is the same as if you had simply scaled that vector, where scaling means you either increase or decrease its magnitude, keeping the vector in the same direction or flipping it to the opposite direction. Alright?
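To make that concrete, here is a minimal numerical sketch; the particular matrix and the use of NumPy's eig routine are illustrative assumptions on my part, not part of the lecture. For each eigenpair, hitting x with A gives the same vector as scaling x by lambda.

```python
import numpy as np

# A small illustrative matrix (chosen arbitrarily for this sketch).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors as columns of X.
lambdas, X = np.linalg.eig(A)

# Hitting each eigenvector with A matches scaling it by its eigenvalue.
for lam, x in zip(lambdas, X.T):
    print(np.allclose(A @ x, lam * x))  # True for every eigenpair
```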
Now, this can be rewritten as the quantity lambda I minus A, times x, is equal to zero. And you know, in a lot of courses you see this written as A minus lambda I rather than lambda I minus A. Obviously, since there's a zero on the right-hand side, we can always multiply both sides by minus one to flip that around. Alright? And the reason why we like to write it as lambda I minus A has to do with the fact that the characteristic polynomial, which we're going to talk about later, then automatically has a leading term of lambda to the m if A is an m by m matrix. Alright?
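As a quick check of that claim about the leading term, here is a sketch using SymPy; the library choice and the example matrix are my own assumptions. Computing the characteristic polynomial of a 2 by 2 matrix gives a monic polynomial whose first term is lambda squared.

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 1],
               [0, 3]])

# SymPy's charpoly uses the det(lambda*I - A) convention, so the
# leading term is lambda**m for an m-by-m matrix (here, lambda**2).
p = A.charpoly(lam)
print(p.as_expr())  # lambda**2 - 5*lambda + 6
```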
Now, this right here allows us to say a few things. Okay? Let's recall that if we have an m by m matrix B that is nonsingular, then that statement is equivalent to saying that B has an inverse. And that's equivalent to saying that B has linearly independent columns, that the null space of B contains only the zero vector, that the dimension of the null space is equal to zero, and that the determinant of B is not equal to zero. And as a matter of fact, there was a whole list of these kinds of equivalent statements.
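A few of those equivalent statements are easy to check numerically. Here is a minimal sketch, where the particular nonsingular matrix is an assumption of mine:

```python
import numpy as np

# An illustrative nonsingular 2x2 matrix B.
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# The equivalent statements all hold at once:
print(np.linalg.matrix_rank(B) == 2)          # linearly independent columns
print(not np.isclose(np.linalg.det(B), 0.0))  # nonzero determinant
print(np.linalg.inv(B))                       # the inverse exists
```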
And it turns out that as we talk about eigenvalue problems, eigenvalues and eigenvectors, what we'll do is strategically pick the equivalent statement that is most appropriate for the situation. Okay? Now, let's think about this.
This problem right here we reformulate as lambda I minus A, times x, equals zero, where x is a nonzero vector, because we're looking for a nonzero vector for which this is true. Now, when we look at this particular matrix, lambda I minus A, what we notice is that it has a vector x in its null space. What that means is that it is not the case that the null space of this matrix contains only the zero vector. And all of a sudden, none of those equivalent statements are true. Okay?
And we probably should start right here: by assumption, there exists a vector x that is in the null space and is not zero. And what that automatically means is that the matrix lambda I minus A is singular. It does not have an inverse. It has linearly dependent columns. The dimension of its null space is greater than zero. And the determinant of this matrix right here is zero.
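Here is a sketch of that chain of conclusions in code, reusing the illustrative matrix from before; again, the matrix and the SciPy null_space helper are assumptions, not from the lecture. Once lambda is an eigenvalue, lambda I minus A is singular, and its null space hands us the eigenvector.

```python
import numpy as np
from scipy.linalg import null_space

# The same illustrative matrix as before; lambda = 3 is one of its eigenvalues.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = 3.0

M = lam * np.eye(2) - A
print(np.isclose(np.linalg.det(M), 0.0))  # True: lambda*I - A is singular
x = null_space(M)                          # nonzero vector in the null space
print(np.allclose(A @ x, lam * x))         # True: A x = lambda x
```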
And that's going to allow us to start characterizing what eigenvalues and eigenvectors there are. And then eventually that allows us to get to where we can come up with robust, practical algorithms for computing the eigenvalues and eigenvectors, which then allows us to get back to how to practically diagonalize a matrix.
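As a preview of that last point, here is a minimal sketch of diagonalization via the eigenpairs; the matrix is my illustrative example, and a robust production algorithm would be more careful than this.

```python
import numpy as np

# The same illustrative matrix; its eigenvectors form the columns of X.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

lambdas, X = np.linalg.eig(A)
Lambda = np.diag(lambdas)

# Diagonalization: A = X Lambda X^{-1}.
print(np.allclose(A, X @ Lambda @ np.linalg.inv(X)))  # True
```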
