Hello, in this video we're going to be
finding a diagonal representation for
a matrix A,
so this is where all the theory that
we've learned comes together and all the
examples really come together. It's quite
exciting!
I really enjoyed learning this when I was
taking linear algebra.
We're going to use an example that we've
had for quite some time now:
the linear transformation from R^2 to R^2
that reflects vectors in the plane
about the line y = -x.
And you may recall we found that
transformation before using a change of
basis, so we have T(x) = A*x, where A
is this matrix here, (0, -1;
-1, 0). Okay,
so we're going to be using
the idea of these eigenvalues and eigenvectors
now that we know how to find those.
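To make this concrete, here's a quick NumPy sketch (my own illustration, not part of the lecture) of A acting as that reflection on a sample point:

```python
import numpy as np

# The standard matrix of the reflection about the line y = -x.
A = np.array([[0, -1],
              [-1, 0]])

v = np.array([2, 5])
print(A @ v)  # [-5 -2]: the reflection of (2, 5) about y = -x
```

Notice the pattern: reflecting about y = -x sends (x, y) to (-y, -x), which is exactly what multiplying by A does.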
We actually have a really nice theorem that
helps us find that diagonalization, that
diagonal representation.
So we want to create this diagonal matrix D
and an invertible matrix P such that
D is equal to P inverse*A*P; that is the definition of
diagonalization. We're going to use
the eigenvalues first, so you may
recall
the eigenvalues for this matrix
were 1 and -1. We talked about why
that was: under our reflection, a vector
in the direction of that line
just stayed there. That
corresponds to eigenvalue 1.
And then we took another vector that was
perpendicular to it,
and we saw that the reflection sent it
in exactly the opposite
direction, so that was where we got that
-1 eigenvalue. But we also talked about how
to find these
using the definition and grinding it all out.
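If you'd rather let software do the grinding, here's a sketch (my own, not from the video) using NumPy's eigenvalue routine:

```python
import numpy as np

A = np.array([[0, -1],
              [-1, 0]])

# eig returns the eigenvalues (in no guaranteed order) and unit-length
# eigenvectors as the columns of the second array.
vals, vecs = np.linalg.eig(A)
print(vals)  # the eigenvalues 1 and -1, in whatever order NumPy returns them
```

One caveat: NumPy normalizes its eigenvectors to unit length, so the columns it returns are scalar multiples of the (-1, 1) and (1, 1) we found by hand.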
We also found the associated eigenvectors
of this matrix: (-1, 1),
which belonged to eigenvalue 1.
Remember, that was in the
direction of the line.
And then (1, 1), perpendicular to the
line, belonged to
-1. So here's our representation:
by our theory, our diagonal matrix
D has zeros on the off diagonal and the
eigenvalues on the main diagonal.
I chose to put 1 up here at the top and
-1 at the bottom; I named those lambda 1 and lambda 2.
And the corresponding eigenvectors will
make up this matrix,
P. So for the eigenvector corresponding to
1: since I put that eigenvalue first in D,
I need its eigenvector first in P; the
columns have to match that order.
So I have (-1, 1) for the first column
and (1, 1), corresponding to that -1 eigenvalue,
for the last column. So now I have this
representation, and I didn't really have
to do any work.
Well, of course I had to find the eigenvalues and eigenvectors, but those did the work for
me.
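As a sanity check, here's a small NumPy sketch (my addition, not from the video) confirming that this P and D really satisfy D = P inverse * A * P:

```python
import numpy as np

A = np.array([[0, -1],
              [-1, 0]])

# Columns of P are the eigenvectors, in the same order
# as the eigenvalues on the diagonal of D.
P = np.array([[-1, 1],
              [ 1, 1]])

D = np.linalg.inv(P) @ A @ P
print(D)  # approximately [[1, 0], [0, -1]]
```

Swapping the columns of P would swap the diagonal entries of D, which is why the ordering has to match.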
So if you have a linear transformation
like this
we can use this idea which is really
cool. So our representation would be:
this diagonal matrix D is equal to
P inverse*A*P, where again
A is the matrix we were given,
and P is the matrix given by the eigenvectors.
If we multiply these three things
out, we get a diagonal matrix
with the eigenvalues on the diagonal. How does this all
really tie in with things that we've
learned?  Well, we know now that
a better basis to work with for this
transformation
than the standard basis would be this
vector in the
direction of the line and the one
that's perpendicular to it. And those are
basically the eigenvectors of A. So if you
find the eigenvectors of A,
then you have a great basis to work with.
It works even better than the standard
basis,
which we're all used to working with.
And that's why we sometimes need to change bases.
I am gonna show you this diagram one more time here.
What we know now
is we have these changes of bases and we
sort of maybe wondered why we were doing
that at the time.
But now we sort of have a nice way to
look at this. So with the standard matrix A,
we take a vector in R^2 and apply
the transformation
to that vector by multiplying A times that
vector,
and it's taken across to R^2. However,
we could find an alternate representation
for that by finding this matrix
P, given by the eigenvectors here.
If we multiply P inverse by v, we get
the B-coordinate vector with respect to that
new basis, the one that we like to use.
And we have this new matrix D, which we
called A' earlier; this is the matrix of A
with respect to that other basis, the
basis made up of these eigenvectors,
and it has the eigenvalues on the diagonal.
So if I multiply by D, I go from this
to this; this is the
transformation applied to those vectors
with respect to B. And then of course
I can take it back
using P. So this brings our theory,
as we say, full circle, and might
give you a reason why we want to use
something
other than the standard basis. These
eigenvalues and eigenvectors can
totally help you with that, so you don't
have to make it too hard.
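To close the loop on that diagram, here's a final NumPy sketch (my own, not from the video): going the long way around, into B-coordinates, applying D, and coming back, gives the same answer as multiplying by A directly.

```python
import numpy as np

A = np.array([[0, -1], [-1, 0]])   # standard matrix of the reflection
P = np.array([[-1, 1], [1, 1]])    # eigenvector columns
D = np.diag([1, -1])               # matching eigenvalues on the diagonal

v = np.array([3, 4])

v_B = np.linalg.inv(P) @ v   # coordinates of v relative to the eigenvector basis B
w = P @ (D @ v_B)            # apply the transformation in B-coordinates, then map back

print(np.allclose(w, A @ v))  # True: the long way around matches A times v
```

In B-coordinates the reflection is as simple as it gets: D just keeps the first coordinate and flips the sign of the second.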
I hope this has helped.  Thank you very
much!
