So, when we discussed diagonal matrices, we observed that given a diagonal matrix A, if we consider the corresponding linear transformation L A, then L A dilates each of the vectors in the standard basis, and it dilates each one by the corresponding value on the diagonal. So, in this video, we will explore this phenomenon in much greater depth for an arbitrary linear transformation from a vector space to itself. So, let us begin by considering a linear operator T on a vector space V.
So let T from V to itself be a linear operator on V. Recall that a linear operator on V is just a linear transformation from V to itself. The simplest example is T equal to the identity map of V: then T v is I V of v, which is equal to v for all v in V. The next simplest example is a scalar multiple of the identity. Let T be equal to lambda times I V; then T v is equal to lambda I V acting on v, which is equal to lambda times v.
So, notice that in our first example, where the linear operator is the identity map, every vector in the vector space is dilated by 1, or in other words left fixed. And in the second example, every vector is dilated by lambda.
If we consider an arbitrary linear transformation, this need not be the case; it need not dilate every vector. However, there are some special vectors in the vector space which might get dilated by a given linear transformation, and such vectors have a special name: they are called eigenvectors. So, let me now give the definition of an eigenvector. Let T from V to itself be a linear operator on V, that is, a linear transformation from V to itself. We say that a nonzero vector v in V (notice that we are imposing the condition that the vector be nonzero) is an eigenvector of T if T v is equal to lambda v for some scalar lambda. So, if the vector v is getting dilated by some lambda, then v is said to be an eigenvector, provided v is not the 0 vector. The scalar lambda is called the eigenvalue corresponding to the eigenvector v. So we have defined two objects here: one is the notion of an eigenvector, which is a nonzero vector in the given vector space that is dilated by our linear operator, and the second is the eigenvalue, which is the degree to which it gets dilated; in other words, it is the scalar lambda such that T v is equal to lambda times v.
So let us look at a few examples, starting with a simple one on R 2 itself. Let T be given by T of (x, y) equal to (2x, 3y). Notice that the standard basis vectors are eigenvectors: let v 1 be equal to (1, 0); then T v 1 is T of (1, 0), which is (2, 0), which is equal to 2 times v 1. Similarly, if v 2 is equal to (0, 1), then T v 2 is (0, 3), which is 3 times (0, 1), which is equal to 3 v 2. So the standard basis vectors here are examples of eigenvectors. If you observe carefully, this linear map has plenty of eigenvectors: in fact, any vector of the type (a, 0) is an eigenvector of T with eigenvalue 2. Also notice that not every vector is an eigenvector of T; (1, 1), for example, is not an eigenvector of T. Why is (a, 0) an eigenvector? Because T of (a, 0) is just (2a, 0), which is just 2 times (a, 0), that is all. And why is (1, 1) not an eigenvector? T of (1, 1) is (2 times 1, 3 times 1), which is (2, 3), and this is not a scalar multiple of (1, 1).
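As a quick numerical sanity check, here is a minimal numpy sketch of this example; the helper is_eigenvector is an illustrative name of our own, not a library routine:

```python
import numpy as np

def is_eigenvector(A, v, tol=1e-10):
    """Check that v is nonzero and that A v is a scalar multiple of v."""
    v = np.asarray(v, dtype=float)
    if np.allclose(v, 0):
        return False  # the zero vector is excluded by definition
    # v is an eigenvector iff the matrix with columns v and A v has rank 1
    return np.linalg.matrix_rank(np.column_stack([v, A @ v]), tol=tol) == 1

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # the matrix of T(x, y) = (2x, 3y)

print(is_eigenvector(A, [1, 0]))    # True: eigenvalue 2
print(is_eigenvector(A, [0, 1]))    # True: eigenvalue 3
print(is_eigenvector(A, [5, 0]))    # True: any (a, 0) works
print(is_eigenvector(A, [1, 1]))    # False: (2, 3) is not a multiple of (1, 1)
```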
Let us look at more examples. Example 2: if T is the identity map (this is perhaps the first example we should have considered), then every nonzero vector is an eigenvector with eigenvalue 1. Notice again that in the definition of an eigenvector we have imposed the nonzero condition. Similarly, for the dilation lambda times I V, the eigenvalue will just turn out to be lambda. Another example: every nonzero vector in the null space of our given linear transformation T is an eigenvector. Indeed, if T from V to V is not injective, then the null space of T has nonzero vectors. Let v be in null of T (let me write null of T so that there is no confusion) such that v is not equal to the 0 vector; the 0 here is the 0 vector. And what does T do to our v? Then T v is equal to 0, the 0 vector of V, and that is nothing but the scalar 0 times our vector v. So again, your job is to keep track of which 0 is where: the first 0 is in the vector space V, and the second 0 is a scalar, in the real numbers. Hence v, which is a nonzero vector, is an eigenvector, and the corresponding eigenvalue is 0. We have only demanded that the eigenvector be nonzero; we have not demanded that the eigenvalue cannot be the 0 scalar, we have not demanded that at all. So v is an eigenvector with eigenvalue 0.
Maybe a good exercise to think over would be to show that a linear operator on a finite-dimensional vector space is invertible if and only if all its eigenvalues are nonzero, that is, if and only if 0 is not an eigenvalue.
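Here is a minimal numpy sketch of this connection, under the assumption that we identify the operator with its matrix:

```python
import numpy as np

# A singular (non-injective) matrix: its second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                   # contains 0, so A is not invertible

# The eigenvector for eigenvalue 0 lies in the null space: A v = 0 = 0 * v.
v = eigenvectors[:, np.argmin(np.abs(eigenvalues))]
print(np.allclose(A @ v, 0))         # True
```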
Let us look at more examples; next would be example 4. Consider this linear map: let T be the map from R 2 to R 2 given by a reflection along a line. So let me just draw it for you. Suppose these are our Cartesian coordinates; let this be 4 and this be 3, so this is our point (4, 3), and let us draw the line joining 0 and (4, 3). We look at the reflection along this particular line, so any point here is mapped to the corresponding point on the other side. In particular, the vector (3, minus 4) is perpendicular to this line. And what will T do to (4, 3)? Notice that if v 1 is equal to (4, 3), then T v 1: if you reflect the vector (4, 3) along the line joining 0 to (4, 3), it does not do anything to it, it just fixes it, so T v 1 is equal to v 1. What about the perpendicular vector? Let v 2 be (3, minus 4); if you reflect it, it will go in the other direction, it will just be (minus 3, 4). Then T v 2 is equal to (minus 3, 4), which is equal to minus of (3, minus 4), which is minus v 2.
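Here is a small numpy check of these two computations. The reflection matrix is built from the standard formula R = 2 u u^T minus I for reflection across the line spanned by a unit vector u; this formula is brought in just for the sketch and is not derived in the lecture:

```python
import numpy as np

u = np.array([4.0, 3.0]) / 5.0           # unit vector along the line through (0,0) and (4,3)
R = 2.0 * np.outer(u, u) - np.eye(2)     # reflection across that line

v1 = np.array([4.0, 3.0])
v2 = np.array([3.0, -4.0])

print(R @ v1)    # [ 4.  3.]  -> T v1 =  v1, eigenvalue  1
print(R @ v2)    # [-3.  4.]  -> T v2 = -v2, eigenvalue -1
```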
So, summarizing: let T be the reflection along l, where l is the line joining (0, 0) and (4, 3). Then T has eigenvectors v 1 and v 2 (notice that v 1 and v 2 are both nonzero, as eigenvectors must be): v 1 with eigenvalue 1 and v 2 with eigenvalue minus 1. We will revisit this example later, so keep it at the back of your mind while studying eigenvalues and eigenvectors. So next: we have defined what an eigenvector and an eigenvalue are for a linear transformation. Linear transformations and matrices are very closely related, and we would like to define a corresponding notion for matrices as well, so let us do that next.
So, we do not consider the 0 vector to be an eigenvector; always keep that in mind, because the definition itself incorporates that an eigenvector should be a nonzero vector. The reason is that there is no single eigenvalue which can be associated to the 0 vector: every scalar would turn out to be an eigenvalue, and we do not want that. So let us now look at what is meant by the notion of an eigenvector and an eigenvalue for an n cross n matrix. Let us start with an n cross n matrix.
Let A be an n cross n matrix with real entries. We say that a vector v in R n is an eigenvector of A with eigenvalue lambda if v is an eigenvector of the linear transformation L A with eigenvalue lambda. So, for all practical purposes, we do not distinguish between the matrix A and the linear operator L A.
I should introduce one more example, which is something you have already seen; this is a good place to look at it. Let A be a diagonal matrix, say with diagonal entries a 1 to a n. Then my claim is that each of the standard basis vectors e i is an eigenvector of A. To see this, we should check that e i is an eigenvector of L A. So consider L A e i: this is just A e i, which, as we have already computed, is a i times e i. Hence e i is an eigenvector of L A, and therefore of A, with eigenvalue a i.
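A minimal sketch of this claim with numpy:

```python
import numpy as np

A = np.diag([2.0, 5.0, -1.0])        # a diagonal matrix diag(a_1, ..., a_n)

for i in range(3):
    e_i = np.eye(3)[:, i]            # standard basis vector e_i
    print(np.allclose(A @ e_i, A[i, i] * e_i))   # True: A e_i = a_i e_i
```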
Next, let us give ourselves a definition of what is meant by the eigenspace corresponding to lambda. We have already seen what an eigenvector is and what an eigenvalue is; let us look at what an eigenspace is. So, let T from V to itself be a linear operator on V (I will slowly start using this term more frequently); that means it is a linear transformation from V to itself. Then the eigenspace of a scalar lambda is the set of vectors v in V such that T v is equal to lambda v. Notice that every eigenvector of T with eigenvalue lambda is in the eigenspace of lambda; apart from those, 0 is obviously there as well. So, let us try to see more about the eigenspace of lambda.
So, for a scalar lambda in R, let us see what it means to say that T v is equal to lambda v. T v is equal to lambda v if and only if T v is equal to lambda I V of v, and by the vector addition of linear transformations, this holds if and only if (T minus lambda I V) of v is equal to 0, that is, if and only if v belongs to the null space of T minus lambda I V. So, let me write this in a more refined manner: the eigenspace of lambda is the null space of T minus lambda I V. In particular, the eigenspace of lambda is a subspace of V. Also observe that lambda is an eigenvalue if and only if there exists a nonzero vector v in the null space of T minus lambda I V, and that is the same as saying that T minus lambda I V is not injective. So lambda is an eigenvalue if and only if T minus lambda I V is not injective.
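Here is a minimal sketch of this characterization, assuming scipy is available; null_space returns an orthonormal basis of the null space:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])           # the matrix of T(x, y) = (2x, 3y)

# Eigenspace of lambda = null space of (A - lambda I).
for lam in [2.0, 3.0, 1.0]:
    basis = null_space(A - lam * np.eye(2))
    print(lam, basis.shape[1])       # dimension 1 for lambda = 2 and 3, dimension 0 for lambda = 1
```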
So lambda is an eigenvalue of our given linear transformation T if and only if T minus lambda I V is not injective, or equivalently, if T minus lambda I V is not invertible. That is an alternate characterization we can keep in mind to check whether something is an eigenvalue. This is at times useful; for example, let us consider one of the examples we already looked into.
Let us consider the first example again, revisited. Recall that example 1 was T of (x, y) equal to (2x, 3y), where T is from R 2 to R 2. We know that both 2 and 3 are eigenvalues of T, and that is quite straightforward: consider T minus 2 times I V, and let us see whether its null space is just the 0 vector or contains more. We already know that (T minus 2 I V) of (x, y) is just going to be equal to (0, y). And clearly the subspace y equal to 0, which is a one-dimensional subspace, is contained in the null space of T minus 2 I V. Similarly, x equal to 0 is contained in the null space of T minus 3 times the identity map. So yes, this tells us that 2 and 3 are eigenvalues. It also tells us that T does not have any other eigenvalue. Why is that the case? Consider T minus lambda I V; let me leave it as an exercise for you to check that T minus lambda I V is invertible for all lambda not equal to 2 or 3. Being invertible, it is in particular injective, so the null space of T minus lambda I V contains just the 0 vector, and therefore such a lambda cannot be an eigenvalue.
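A quick numerical illustration with numpy; for small matrices the determinant is a convenient invertibility test:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

for lam in [2.0, 3.0, 1.0, 4.5]:
    d = np.linalg.det(A - lam * np.eye(2))
    print(lam, "eigenvalue" if np.isclose(d, 0.0) else "not an eigenvalue")
```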
Next, let us discuss the relationship between diagonal matrices and eigenvectors. We have already seen that for a diagonal matrix, the standard basis vectors turn out to be eigenvectors; in other words, the matrix of the corresponding linear transformation with respect to that basis is diagonal. So let us make this formal and put it into a proposition. The proposition states that a linear operator on a vector space V has a diagonal matrix precisely when we have a basis consisting of eigenvectors. So let T from V to V be a linear operator on V, where V has finite dimension, say n. If beta equal to v 1, v 2, up to v n is an ordered basis of V consisting of eigenvectors of T, then the matrix of T with respect to the basis beta is a diagonal matrix. The converse is also true; let me write it down. Conversely, if the matrix of T with respect to an ordered basis beta, say v 1 to v n, is a diagonal matrix, then the v i are eigenvectors of T. So, the proposition tells us that if we have a basis consisting of eigenvectors of T, then with respect to this basis the matrix of the linear transformation will be a diagonal matrix; in fact, we will see that the matrix will have the eigenvalues as its diagonal entries. And conversely, if the matrix is diagonal with respect to some basis, then the vectors in that basis are eigenvectors of T.
Let us give a quick proof of this; it is going to be quite short. The first statement says that we have a basis consisting of eigenvectors of T: we are given a basis beta, say v 1 to v n, consisting of eigenvectors of T. Let us see what the matrix of T with respect to beta is; to do that, we have to look at T of v j. So, what is T of v j? T of v j is some lambda j times v j, where lambda j is the eigenvalue of v j; remember that each of the v j's is an eigenvector of T. This implies that the coordinate vector of T v j with respect to beta is the column with lambda j in the jth row and 0 everywhere else. But the coordinate vector of T v j with respect to beta is precisely the jth column of the matrix of T with respect to beta. Putting this together, the matrix of T with respect to beta is the diagonal matrix with entries lambda 1, lambda 2, up to lambda n, where lambda j is the eigenvalue of the eigenvector v j. Let us next prove the converse to this proposition.
The converse tells us the following: let beta equal to v 1 to v n be a basis such that the matrix of T with respect to beta is a diagonal matrix, say the diagonal matrix of lambda 1 to lambda n (you can just go back through the previous argument to set this up). But what does that mean? By the very definition, this implies (let me leave it for you to check) that T v j is equal to lambda j v j for j equal to 1 to n. This tells us that v 1 to v n are eigenvectors with corresponding eigenvalues lambda j, so we have proved the result. So, we have observed that if a linear transformation has a diagonal matrix with respect to some basis, then that basis consists of eigenvectors, and vice versa: if there is a basis consisting of eigenvectors of our given linear transformation, then its matrix with respect to that basis is a diagonal matrix.
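A small numpy sketch of the forward direction: take an operator with a known eigenbasis, express each T v j in that basis (here via a linear solve), and observe that the resulting matrix is diagonal. The matrix A below is a made-up example for the sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # the operator T = L_A; note A itself is not diagonal

# An eigenbasis: A (1,0) = 2 (1,0) and A (1,1) = (3,3) = 3 (1,1).
B = np.column_stack([[1.0, 0.0], [1.0, 1.0]])    # columns are v_1, v_2

# jth column of the matrix of T w.r.t. beta = coordinates of T v_j in beta,
# i.e. the solution x of B x = A v_j; all columns at once:
T_beta = np.linalg.solve(B, A @ B)
print(T_beta)                        # diag(2, 3): the eigenvalues on the diagonal
```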
So, this motivates the definition of diagonalizability. We say that a linear transformation is diagonalizable if we can get hold of a basis with respect to which the matrix of T is a diagonal matrix. So, let us give the definition: we say that a linear transformation T from V to itself is diagonalizable if there exists a basis beta such that the matrix of T with respect to beta is a diagonal matrix. One of the most straightforward examples is the linear transformation corresponding to a diagonal matrix; such transformations have to be diagonalizable. In fact, example 1 is diagonalizable: T from R 2 to R 2 given by T of (x, y) equal to (2x, 3y) is diagonalizable by the very definition. Why? Because our beta will just turn out to be the standard basis.
In fact, let us start with a diagonal matrix: let A equal to the diagonal matrix of a 1 to a n. Then L A is diagonalizable, again with respect to the standard basis of R n: A is an n cross n diagonal matrix, so with respect to the standard basis the matrix of L A is a diagonal matrix, and by definition this makes L A a diagonalizable linear transformation. So let us look at one more example we had considered.
Let us revisit one of the examples which we had promised to revisit, example 4, which is the reflection along the line joining 0 to (4, 3). So, T was the map from R 2 to itself given by reflection along l, which is the infinite line joining the origin and (4, 3); we do not want to consider just the segment, it is a line, and you reflect every vector along this particular line. We had noticed two eigenvectors for this linear map T: recall that the vectors (4, 3) and (3, minus 4) are eigenvectors with eigenvalues 1 and minus 1 respectively. I will leave it as an exercise for you to check that (4, 3) and (3, minus 4) are linearly independent. And what can we say about a set of two linearly independent vectors in R 2? It is necessarily a basis. So, let beta be the set consisting of (4, 3) and (3, minus 4). Going back to our proposition, we have obtained a basis in which every vector is an eigenvector of T. This means that the matrix of T with respect to beta is the matrix with rows (1, 0) and (0, minus 1). This particular form is quite nice, because if you now consider T squared, what is its matrix going to be? With respect to beta, it will just be the product of this matrix with itself, which is the identity matrix, that is, the matrix of the identity map with respect to the basis beta. Hence the linear transformation T, when composed with itself, gives back the identity; T is its own inverse, and that is what we have just proved. So, if we can get hold of a basis consisting of eigenvectors, it is quite useful, as you can see: we can say a lot more than what meets the eye directly.
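A numpy check of both claims, reusing the reflection matrix R = 2 u u^T minus I from the earlier sketch (again an assumed formula, not derived in the lecture):

```python
import numpy as np

u = np.array([4.0, 3.0]) / 5.0
R = 2.0 * np.outer(u, u) - np.eye(2)             # reflection across the line through (0,0) and (4,3)

B = np.column_stack([[4.0, 3.0], [3.0, -4.0]])   # beta = {(4,3), (3,-4)}
print(np.linalg.solve(B, R @ B))                 # [[1, 0], [0, -1]]: the matrix of T w.r.t. beta
print(np.allclose(R @ R, np.eye(2)))             # True: T composed with itself is the identity
```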
So, we have just defined what is meant by diagonalizable for a linear transformation; we would also like to do the same for a matrix. So let A be an arbitrary n cross n matrix. We say that A is diagonalizable if the linear transformation L A is diagonalizable. Notice that A need not be a diagonal matrix to begin with; A could be some arbitrary matrix. And what is L A? L A is the linear transformation corresponding to A: if you look at the matrix of L A with respect to the standard basis, you get back A, but again, A need not be diagonal. However, if we can get hold of some basis of R n with respect to which the matrix of L A is diagonal, then we say that A is diagonalizable. Needless to say, as an example, all diagonal matrices are already diagonalizable: with respect to the standard basis, the matrix of L A is A itself, which is diagonal.
So, let us now look at a necessary and sufficient condition for a matrix to be diagonalizable; let us capture it in the next proposition. Proposition: let A be an n cross n matrix. Then A is diagonalizable if and only if there exist a diagonal matrix D and an invertible matrix Q such that A is equal to Q D Q inverse. Notice that A equal to Q D Q inverse tells us that A is similar to D, and D is a diagonal matrix, so this is a rephrasing: an n cross n matrix is diagonalizable if and only if it is similar to a diagonal matrix. Recall the definition of similar: we say that two matrices A and B are similar if A is equal to Q B Q inverse, where Q is some invertible matrix. So A is diagonalizable if and only if A is similar to a diagonal matrix. This is a good characterization to keep in mind, so let us give a proof of this proposition.
So, let us assume that A is diagonalizable; what does that mean? It means that the matrix of L A is diagonal with respect to some basis. So let beta prime, say v 1 to v n, be a basis of R n (let us keep beta for the standard basis) such that the matrix of L A with respect to beta prime is a diagonal matrix, say the diagonal matrix of a 1 to a n; let us call this D. So with respect to beta prime, L A has the matrix representation D. But L A is just I composed with L A composed with I, where I here is I R n, the identity linear transformation of R n. Now let us look at the matrix of L A with respect to beta, the standard basis: by definition, the matrix of L A with respect to beta is nothing but A. Writing L A as I L A I and using how matrices behave with respect to composition, the matrix of L A from beta to beta equals the matrix of I from beta prime to beta, times the matrix of L A from beta prime to beta prime, times the matrix of I from beta to beta prime. Let Q be the matrix of I from beta prime to beta; this is a change of basis matrix, and then Q inverse is nothing but the matrix of I from beta to beta prime, which is something we have already seen. Therefore A is nothing but Q D Q inverse, because the matrix of L A with respect to beta prime is D; recall that beta prime was exactly that basis with respect to which L A was a diagonal matrix. That is precisely what we had set out to prove: A is equal to Q D Q inverse with D diagonal and Q invertible.
Now, let us look at the converse; we have only shown one direction of the proposition. To prove the converse, let A be equal to Q D Q inverse, where D is a diagonal matrix, say the diagonal matrix of lambda 1 to lambda n, and Q is an invertible matrix. Let beta be the standard basis. What do we know about D e j? By definition, D e j is lambda j times e j, where lambda j is the jth entry along the diagonal. Let us now consider beta prime, where beta prime is given by Q e 1 up to Q e n, and let us notice how A behaves on beta prime. Notice that D Q inverse of Q e j is just D e j, which is equal to lambda j e j. So then Q D Q inverse of Q e j is Q of lambda j e j, and since Q is a linear map, this is lambda j times Q e j. That means Q e j is an eigenvector of Q D Q inverse. So what do we have now? Beta prime is a set consisting of eigenvectors of Q D Q inverse. Now let me put a claim down: beta prime is a basis of R n. If we prove this claim, then we are done, because we would have obtained a basis of R n which consists exclusively of eigenvectors of our given matrix, or given linear transformation, whichever way you want to look at it. But what is beta prime? Beta prime is the image of a basis under an invertible linear transformation, and I leave it as an exercise for you to check that the image of a basis under an invertible linear transformation is again a basis. With this, we have obtained a basis with respect to which the matrix of Q D Q inverse is diagonal; hence Q D Q inverse has a diagonal matrix with respect to beta prime, which is the same as saying that A is diagonalizable.
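The content of the proposition in one short numpy sketch, for a matrix that does admit such a factorization; eig returns the eigenvalues and a matrix whose columns are eigenvectors:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, Q = np.linalg.eig(A)       # columns of Q are eigenvectors of A
D = np.diag(lam)                # corresponding eigenvalues on the diagonal

print(np.allclose(A, Q @ D @ np.linalg.inv(Q)))   # True: A = Q D Q^{-1}
```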
So, if we look at this proposition carefully, it is telling us that a given matrix is diagonalizable if and only if it is similar to a diagonal matrix. And the previous proposition was telling us that a linear transformation is diagonalizable if and only if we can get hold of a basis consisting of eigenvectors. Putting these together, we can explicitly say what our D and our Q are going to be. So, let us write down a proposition explicitly mentioning what D and Q are. Proposition: let A be an n cross n matrix. Suppose v 1 to v n are vectors in R n such that A v j is equal to lambda j v j, and such that the ordered set beta consisting of v 1 to v n is linearly independent. Then A is equal to Q D Q inverse, where Q is the matrix whose columns are the vectors v 1, v 2, up to v n, and D is the diagonal matrix with the corresponding eigenvalues on the diagonal. So we can write down Q and D very explicitly. Let us give a proof of this.
We have already done all the hard work; let us just go back and see what we had noticed. We had noticed that we get an equation of this type: A is Q D Q inverse, where Q is the change of basis matrix of I from beta prime to beta. So let us just redo it. Let us call the ordered set v 1 to v n beta prime; notice that beta prime being linearly independent forces it to be a basis, because it consists of n linearly independent vectors in a vector space of dimension n. Recall from the proof of the previous proposition that the matrix of L A with respect to beta, which is our matrix A, is equal to the matrix of I from beta prime to beta, times the matrix of L A from beta prime to beta prime, times the matrix of I from beta to beta prime. But what is the matrix of L A with respect to beta prime? It is just the diagonal matrix of lambda 1 to lambda n, where the lambda j are such that A v j is equal to lambda j v j. What remains is to check what the matrix of I from beta prime to beta is; this is the change of basis matrix from beta prime to beta. What will the jth column of this matrix be? The jth column is the coordinate vector, in the standard basis, of I of v j, which is just v j, where v j is the jth vector in the ordered basis beta prime. So, let me write it afresh: the jth column of the change of basis matrix from beta prime to beta is the column vector v j, and therefore this change of basis matrix, which is our Q, is nothing but the matrix with columns v 1 to v n.
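An explicit construction matching this proposition, with hand-picked eigenpairs for the same matrix as in the previous sketch:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Hand-checked eigenpairs: A (1,1) = 5 (1,1) and A (1,-2) = 2 (1,-2),
# and the two vectors are linearly independent.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -2.0])

Q = np.column_stack([v1, v2])    # columns are the eigenvectors v_1, v_2
D = np.diag([5.0, 2.0])          # the corresponding eigenvalues

print(np.allclose(A, Q @ D @ np.linalg.inv(Q)))   # True: A = Q D Q^{-1}
```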
So in the next video, let us discuss techniques
for computing the eigenvalues of a given linear
transformation.
