Early in the course we linked linear transformations to matrices. Let's have a quick look at that again. Suppose you have a linear transformation L that maps vectors from C^n into C^m. Now that we're going to be talking about the algebraic eigenvalue problem, the matrices are always going to be square, so the size of the input vector must equal the size of the output vector: n = m. If this transformation is a linear transformation, then we know that we can write y = L(x) instead as y = A x, where A is the matrix that captures what our linear transformation does.
Now, in the launch for this week we talked about how, if we just view our problem in the right basis, life is so much more beautiful.
What does that mean? Well, remember that doing a change of basis can be viewed as multiplying by the identity matrix, written as X X^{-1}, where X is now a matrix that is m by m and whose columns are our new basis vectors. Writing y = X (X^{-1} y), the vector X^{-1} y gives the coefficients that you use to take a linear combination of the columns of matrix X. Similarly, we can take our vector x and view it in the same basis: x = X (X^{-1} x). What that means is that X^{-1} x is our new view of x in the new basis; we'll call it x^hat. And X^{-1} y is our new view of our vector y in the basis given by the columns of matrix X; we'll call it y^hat.
If we substitute y = X y^hat and x = X x^hat into y = A x, we get X y^hat = A X x^hat, and if we then bring X to the other side, what we notice is that y^hat = X^{-1} A X x^hat. So what we have really done is taken our linear transformation, viewed it as a matrix, and noticed that if we view that linear transformation in a new basis, we can think of that as taking the matrix that's associated with the linear transformation, hitting it on the left with X^{-1} and hitting it on the right with X. We're going to call that a similarity transformation.
Okay? So X^{-1} A X now really is the matrix that represents the linear transformation when it is viewed in the new basis.
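To make that concrete, here is a small numerical sketch, assuming a randomly chosen A and an invertible X (a random square matrix is invertible with probability one), that checks y^hat = X^{-1} A X x^hat:

import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))      # made-up matrix for the linear transformation
X = rng.standard_normal((m, m))      # made-up basis; assumed invertible
x = rng.standard_normal(m)

y = A @ x
x_hat = np.linalg.solve(X, x)        # x^hat = X^{-1} x
y_hat = np.linalg.solve(X, y)        # y^hat = X^{-1} y
B = np.linalg.solve(X, A @ X)        # B = X^{-1} A X, the similarity transformation

print(np.allclose(y_hat, B @ x_hat))  # True: B plays the role of A in the new basis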
Now let's play with this a little bit and just look at this new matrix X^{-1} A X. What we said was, "Gee, wouldn't it be nice if this were diagonal?" Wouldn't it be nice if there were a diagonal matrix, let's call it capital Lambda, that equaled X^{-1} A X, because then all of a sudden everything falls nicely into place. And isn't it much nicer to multiply with a diagonal matrix than with an arbitrary matrix?
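Here is a quick sketch, with made-up values, of why a diagonal matrix is so much nicer to work with: multiplying Lambda times a vector just scales each entry, roughly m operations instead of m^2.

import numpy as np

lam = np.array([2.0, -1.0, 0.5])     # made-up diagonal entries lambda_0, lambda_1, lambda_2
Lambda = np.diag(lam)
x = np.array([3.0, 4.0, 5.0])

print(Lambda @ x)        # full matrix-vector multiply
print(lam * x)           # same result, obtained by elementwise scaling only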
Okay, well, let's have a look at that. What that then means is that X^{-1} A X = Lambda. We can rewrite that, by multiplying both sides by X on the left, as A X = X Lambda. Now, what do we like to do in this course?
Well, we like to take matrices and bust them up by columns. We already talked about the fact that X was the matrix whose columns were the basis vectors that we were using to express our vector in the new basis. So let's take our X, partition it into its columns x_0, x_1, and so forth, and do this on both sides of A X = X Lambda. But notice two things. We need to do something with Lambda, because we need to be able to multiply the partitioned matrices together, and we also need to be able to eventually set columns on the left equal to columns on the right. That leads me to figure out that I should take my diagonal matrix and view it by its elements: lambda_0, lambda_1, and so forth down the diagonal, with the rest of the entries, of course, equal to 0. The left-hand side, a matrix times a matrix that's partitioned by columns, is just that matrix times the first column, the matrix times the second column, and so forth: A x_0, A x_1, etc. On the right, the first column is x_0 times lambda_0 plus 0 times all the other columns, and therefore the first column on the right ends up being x_0 times lambda_0. But notice that a vector times a scalar is the same as the scalar times the vector. So we get lambda_0 times x_0, lambda_1 times x_1, and so forth. Interesting.
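Here is a sketch of that column-by-column view, with a made-up basis X and made-up diagonal entries, building A = X Lambda X^{-1} so that A X = X Lambda holds by construction and then checking each column:

import numpy as np

X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # columns x_0, x_1, x_2 (the new basis)
lam = np.array([3.0, -2.0, 0.5])         # lambda_0, lambda_1, lambda_2
A = X @ np.diag(lam) @ np.linalg.inv(X)  # constructed so that A X = X Lambda

# Equating column j of A X with column j of X Lambda gives A x_j = lambda_j x_j.
for j in range(3):
    x_j = X[:, j]
    print(np.allclose(A @ x_j, lam[j] * x_j))   # True for each column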
If we now equate columns on the left to columns on the right, what we find is that A times x_i is equal to lambda_i times x_i. And lo and behold, that's something that you recognize from your earlier linear algebra course, because what this says is that lambda_i should be an eigenvalue of matrix A and x_i should be a corresponding eigenvector of that matrix.
So what that means is that this very useful concept of diagonalizing a matrix (which, by the way, you can't do for every matrix, but you can for a lot of them) can be transformed into the computation of eigenvalues and eigenvectors of that matrix.
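As a sketch of that connection, assuming a random example matrix that happens to be diagonalizable (typically the case for a random matrix), numpy.linalg.eig computes the eigenvalues and eigenvectors, and we can check that they give exactly the diagonalizing similarity transformation:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))      # made-up matrix; assumed diagonalizable

eigvals, X = np.linalg.eig(A)        # columns of X are eigenvectors x_i
Lambda = np.diag(eigvals)

print(np.allclose(np.linalg.solve(X, A @ X), Lambda))   # True: X^{-1} A X = Lambda
print(np.allclose(A @ X, X @ Lambda))                    # equivalently, A X = X Lambda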
What we've noticed before is that it's often good to be able to look at the same problem in different ways. It turns out that it helps to initially look at this problem as finding eigenvalues and corresponding eigenvectors. Later we can return and talk about what this means for diagonalization.
