Welcome. We have been discussing the eigenvalues and eigenvectors of a square matrix. As I have said, they are important if we try to understand the fundamentals of direct and iterative solvers: the convergence of iterative solvers, especially the conditions under which the iterative solvers will work, and the modifications that will make these solvers work. And we saw in that discussion that eigenvalues actually arise from the solution of ordinary differential equations or rate equations, expressed as combinations of several variables.
So, we went through an example to see how eigenvalues and eigenvectors can be computed, and made a few observations on them. In the present class we will look into more of the properties of eigenvalues and eigenvectors.
So, we discussed distinct eigenvalues and repeated eigenvalues as the roots of the characteristic polynomial. For repeated roots, the associated eigenvectors are null-space vectors of the same matrix A − λI. So there is one λ which arises more than once, twice, thrice, or more, as a solution of the polynomial equation.
When we write A − λI for this particular λ, it will introduce more than one zero pivot in the equivalent (row-reduced) form of the matrix. So there is more than one dependent column in the matrix, and therefore there will be multiple eigenvectors, because the number of dependent columns gives us the dimension of the null space.
This dimension is the geometric multiplicity. For a real symmetric matrix, the number of times the eigenvalue is repeated is exactly the number of eigenvectors associated with it, which equals the geometric multiplicity, and the null-space vectors are linearly independent.
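As a minimal numerical sketch of this point (a made-up example using NumPy, not from the lecture itself), we can check that a twice-repeated eigenvalue of a real symmetric matrix gives a two-dimensional null space of A − λI:

```python
import numpy as np

# A real symmetric matrix: eigenvalue 1 is repeated twice, eigenvalue 4 once.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

vals, vecs = np.linalg.eigh(A)          # eigh handles real symmetric matrices
print(vals)                             # [1. 1. 4.]

# Geometric multiplicity of lambda = 1: dimension of the null space of A - I,
# i.e. n minus the rank, i.e. the number of zero pivots.
n = A.shape[0]
print(n - np.linalg.matrix_rank(A - 1.0 * np.eye(n)))   # 2
```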
So when one particular eigenvalue is repeating, all the eigenvectors associated with it, at least for a real symmetric matrix, are linearly independent. Now what we will try to explore is the case of distinct eigenvalues, where nothing is repeating: there are λ_1, λ_2, λ_3, and so on, none of them a repeated root of the characteristic polynomial. The claim is that the associated eigenvectors for all the distinct eigenvalues are linearly independent.
The proof is a little convoluted, but let us see. Consider distinct eigenvalues λ_1, λ_2, ..., λ_j of a matrix A, with representative eigenvectors x_1, x_2, ..., x_j; we need to show that they are linearly independent. We have the equation A x_i = λ_i x_i, where λ_i is one particular eigenvalue. Now consider j + 1 eigenvectors and assume that there is at least one nonzero c_i for which c_1 x_1 + c_2 x_2 + ... + c_j x_j + c_{j+1} x_{j+1} = 0.
So there is one nonzero c_i; in particular, take c_{j+1} nonzero, which means that x_{j+1}, the (j+1)-th eigenvector, is linearly dependent on the eigenvectors x_1, ..., x_j. This is an assumption we are making. On top of that, we are also imposing the condition that the first j eigenvectors are linearly independent. So we assume that the first few eigenvectors are linearly independent and that there is one eigenvector which is linearly dependent on them. How can we start? We can take the first eigenvector, which is a single eigenvector, then take the second eigenvector and assume that it is linearly dependent on the first.
And we will see whether these two things, that one eigenvector is dependent on the others while those others are linearly independent, can hold together. As c_{j+1} is nonzero, we can divide the whole relation by c_{j+1} and write

x_{j+1} = b_1 x_1 + b_2 x_2 + ... + b_j x_j, where b_i = −c_i / c_{j+1}. ... (1)

So x_{j+1} is a combination of the first j eigenvectors, because c_{j+1} is nonzero. Now multiply both sides of (1) by A and use the fact that A x_i = λ_i x_i, where λ_i is the i-th eigenvalue. Multiplying by A gives A x_{j+1} = b_1 A x_1 + b_2 A x_2 + ... + b_j A x_j, and since A x_{j+1} = λ_{j+1} x_{j+1}, and similarly for all the other terms, we get the relationship

λ_{j+1} x_{j+1} = b_1 λ_1 x_1 + b_2 λ_2 x_2 + ... + b_j λ_j x_j. ... (2)

Next, multiplying both sides of (1) by λ_{j+1}, we get

λ_{j+1} x_{j+1} = b_1 λ_{j+1} x_1 + b_2 λ_{j+1} x_2 + ... + b_j λ_{j+1} x_j. ... (3)

Subtracting (2) from (3), we get

0 = b_1 (λ_{j+1} − λ_1) x_1 + b_2 (λ_{j+1} − λ_2) x_2 + ... + b_j (λ_{j+1} − λ_j) x_j,

and λ_{j+1} − λ_i ≠ 0 for every i ≠ j + 1.
This is because the eigenvalues are distinct. So in this equation the terms λ_{j+1} − λ_1, λ_{j+1} − λ_2, ..., λ_{j+1} − λ_j are all nonzero. We also know that not all of b_1, b_2, ..., b_j are zero, because c_1, c_2, ..., c_j are not all zero: if they were all zero, equation (1) would make x_{j+1} the zero vector, which an eigenvector cannot be. So we get an equation in which a nontrivial combination of x_1, ..., x_j equals zero: as the eigenvalues are distinct and some of the b's are nonzero, this set of vectors becomes linearly dependent. But our initial condition was that the first few eigenvectors were assumed linearly independent, with one eigenvector linearly dependent on them.
Now we have seen that if one eigenvector is linearly dependent on the previous eigenvectors, the previous set of eigenvectors also becomes linearly dependent. So we can start with the first two vectors and see that they must be linearly independent. Then we can go to the third vector and see that the third vector also has to be linearly independent of the previous two, otherwise the first two vectors would be linearly dependent. So the first clause, that the first j eigenvectors are independent, is violated if I assume the (j+1)-th eigenvector is dependent on them.
If we assume the first j eigenvectors are linearly independent, and then also assume the (j+1)-th eigenvector is linearly dependent on the first j eigenvectors, the two assumptions contradict each other. The dependence assumption violates the first clause; therefore no eigenvector can be dependent on the other eigenvectors. Hence we can start from the first eigenvector, go to the second, and carry on, and say that all the eigenvectors are linearly independent. This is shown for eigenvectors belonging to distinct eigenvalues; for repeated eigenvalues the proof will be different, and for a real symmetric matrix we will see that all the eigenvectors are linearly independent. But for distinct eigenvalues, the eigenvectors are linearly independent.
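Here is a minimal sketch of how one could verify this result numerically (my own illustrative matrix, assuming NumPy): for distinct eigenvalues, the matrix whose columns are the eigenvectors has full rank, which is exactly linear independence.

```python
import numpy as np

# An upper-triangular matrix chosen so the eigenvalues 4, 2, 1 are distinct.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])

vals, vecs = np.linalg.eig(A)           # columns of vecs are the eigenvectors
print(vals)                             # 4, 2, 1 -- all distinct

# Full rank of the eigenvector matrix <=> the eigenvectors are independent.
print(np.linalg.matrix_rank(vecs))      # 3
```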
Another important observation is about the zero vector: the zero vector is not an eigenvector, because the zero vector is always linearly dependent. Only in the sense that a nonzero x satisfies A x = 0 when A is a singular matrix can we think of something like that, with zero appearing as an eigenvalue; but the zero vector itself is not an eigenvector, because it is linearly dependent on any set of vectors.
n independent eigenvectors: now, if we have an n × n square matrix with distinct eigenvalues, there are n distinct eigenvalues and there will be n eigenvectors which are linearly independent of each other. Therefore, these n independent eigenvectors in R^n will form a basis of R^n. The eigenvectors are nonzero vectors obtained from the null space of the singular matrix A − λI, so they can form a basis. When all the eigenvalues are distinct, the eigenvectors form a basis of the column space as well: there are n independent eigenvectors, and these n independent eigenvectors are a basis for the column space also.
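As a short sketch of what having a basis means in practice (again an illustrative example, assuming NumPy), any vector v in R^n can be expanded in the eigenvector basis by solving S c = v for the coordinates c:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, S = np.linalg.eig(A)          # S holds the eigenvectors as columns

v = np.array([1.0, -2.0])
c = np.linalg.solve(S, v)           # coordinates of v in the eigenbasis
print(np.allclose(S @ c, v))        # True: v = c_1 x_1 + c_2 x_2
```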
And now we can see that, as we have got a basis of eigenvectors, we can utilize the eigenvectors to get a diagonal form of the matrix A: we will do some manipulation with the eigenvectors, some transformation of the matrix with these vectors, and we will get a diagonal form out of them.
Eigenvectors can diagonalize a matrix. Let us consider n distinct eigenvalues of the matrix A, which are λ_1, λ_2, ..., λ_n, with respective eigenvectors x_1, x_2, ..., x_n. We will put the eigenvectors as the columns of a matrix S, so that S = [x_1 x_2 ... x_n]; these columns are the eigenvectors, and this is the matrix S. Now if we compute A S, we get A S = A [x_1 x_2 ... x_n].
A multiplied with each eigenvector follows the formula A x = λ x: for each eigenvalue λ_i, A x_i = λ_i x_i. So A x_1 = λ_1 x_1, A x_2 = λ_2 x_2, ..., A x_n = λ_n x_n, and A S = [λ_1 x_1 λ_2 x_2 ... λ_n x_n] is the same column matrix S of eigenvectors, but with each column multiplied by a different eigenvalue. The product matrix can further be factored as [x_1 x_2 ... x_n] times a matrix which is diagonal, with only the eigenvalues on the diagonal. That is, we can write it as S Λ, where capital Λ is a diagonal matrix with the eigenvalues on its diagonal. So what we get is A S = S Λ, where Λ is the diagonal eigenvalue matrix; the diagonals are the eigenvalues only.
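A one-line numerical check of A S = S Λ (an illustrative matrix, assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
vals, S = np.linalg.eig(A)          # eigenvalues and eigenvector matrix S
Lam = np.diag(vals)                 # capital Lambda: eigenvalues on the diagonal

print(np.allclose(A @ S, S @ Lam))  # True: A S = S Lambda
```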
So we get A S = S Λ. Now, S is a matrix with the eigenvectors as columns, and we are considering eigenvectors of distinct eigenvalues. So I assume that all the eigenvalues of A are distinct; therefore the eigenvectors are independent, as we have just seen earlier, and S has independent columns. As the matrix S is an n × n matrix with n independent columns, it must be invertible. So I can multiply the left-hand side and the right-hand side by S^{-1} and write S^{-1} A S = Λ.
So this is how we can diagonalize A: we pre-multiply A by the inverse of the matrix whose columns are the eigenvectors, post-multiply it by that matrix again, and we get Λ, the matrix which has the eigenvalues only on its diagonal. Equivalently, we can write A = S Λ S^{-1}. Now, what we recognize is that any matrix A with all distinct eigenvalues has independent eigenvectors, so that the eigenvector matrix S is invertible. Any such matrix can be expressed in a diagonal form; we can diagonalize the matrix through these matrix operations.
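Continuing the same illustrative example, the diagonalization S^{-1} A S = Λ and the factorization A = S Λ S^{-1} can both be checked directly:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
vals, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)

print(np.round(S_inv @ A @ S, 10))                 # diag(3, 2): the eigenvalues
print(np.allclose(S @ np.diag(vals) @ S_inv, A))   # True: A = S Lambda S^-1
```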
Therefore we can get a diagonal form of any matrix whose eigenvector matrix is invertible, and the eigenvalues will be in the diagonals. So if a matrix can reach diagonal form, that means its eigenvector matrix S is invertible, which happens when the matrix has independent eigenvectors, in particular when it has distinct eigenvalues; and vice versa, when a matrix has independent eigenvectors, for example because it has distinct eigenvalues, we can get a diagonal form of the matrix. Diagonal form means there is a diagonal and all the other terms are 0, so only the pivot terms exist; and all the pivots of S exist because S is invertible.
So if the eigenvalues are distinct, the eigenvector matrix must be an invertible matrix. Now, this is actually an important concept, and we will see its utility later. Suppose a matrix A has eigenvalue matrix Λ and eigenvector matrix S, and we take the matrix A to the power k, that is, we multiply the matrix by itself k times. Then A^k will have as its eigenvalue matrix Λ^k, so λ_1^k, λ_2^k, up to λ_n^k, and the same eigenvectors as A. To see this, A = S Λ S^{-1}, so A^k = (S Λ S^{-1})(S Λ S^{-1}) ... (S Λ S^{-1}), k times.
All the interior S^{-1} S products cancel out: S^{-1} S, S^{-1} S, and so on all cancel, but the first S and the last S^{-1} survive, along with the Λ's. So it will be A^k = S Λ^k S^{-1}. Now, this can be a quick exercise: Λ is a diagonal matrix, so what will Λ^k be? It can be checked very easily. So A^k can be very easily expressed as S Λ^k S^{-1}, and the point to be noted is that Λ^k is also a diagonal matrix, so you can check it quickly.
Because we can write A^k = S Λ^k S^{-1}, where Λ^k is a diagonal matrix, Λ^k will hold the eigenvalues of A^k, and S is the eigenvector matrix of A^k. Just as A = S Λ S^{-1}, where S is the eigenvector matrix and Λ is the diagonal matrix, we similarly get a diagonal form S Λ^k S^{-1}; therefore S is also the eigenvector matrix of A^k. So A^k and A have the same eigenvectors, and the eigenvalues of A^k will be the diagonals of the matrix Λ^k. What the diagonal components of this matrix are, you can easily check; it is very, very simple, but please check it.
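The quick exercise above can also be checked numerically (same illustrative matrix as before): Λ^k is just the diagonal matrix of λ_i^k, and S Λ^k S^{-1} reproduces A^k.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
vals, S = np.linalg.eig(A)
k = 5

Ak = np.linalg.matrix_power(A, k)        # A multiplied by itself k times
print(np.allclose(Ak, S @ np.diag(vals**k) @ np.linalg.inv(S)))   # True

# Eigenvalues of A^k are lambda_i^k (here 3^5 = 243 and 2^5 = 32).
print(np.sort(np.linalg.eigvals(Ak)), np.sort(vals**k))
```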
Now we get a couple of important definitions. What is the trace of a matrix? For a matrix, the trace is the sum of the diagonal terms: trace(A) = a_11 + a_22 + ... + a_nn, and this is equal to the sum of the eigenvalues. Similarly, for a matrix A, the determinant is the product of the eigenvalues: det(A) = λ_1 λ_2 ... λ_n. This can be shown very easily from the fact that the characteristic polynomial has roots λ_1, λ_2, ..., λ_n, so det(A − λI) = (λ_1 − λ)(λ_2 − λ) ... (λ_n − λ).
We will see that if we expand this product over λ_1, λ_2, ..., λ_n and compare the powers of λ on both sides, comparing coefficients of powers like λ^{n−1}, λ^1, and λ^0, then the determinant result will follow. Comparing the coefficients of λ^0, that is, setting λ = 0, gives λ_1 λ_2 ... λ_n, and this product is the determinant of A.
Similarly, if you write out the characteristic polynomial, and I will show you the characteristic polynomial in the next class, and compare the coefficients of λ^{n−1} on both sides, you will get λ_1 + λ_2 + ... + λ_n: if you expand this and write it down, the trace of the matrix A, a_11 + a_22 + ... + a_nn, is equal to λ_1 + λ_2 + ... + λ_n, the sum of the eigenvalues. So if we know the eigenvalues, we can find out what the trace of the matrix is and what the determinant of the matrix is; and conversely, the trace and the determinant at least tell us something about the eigenvalues.
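Both identities are easy to confirm numerically (an illustrative matrix, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
vals = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), vals.sum()))         # trace = sum of eigenvalues
print(np.isclose(np.linalg.det(A), vals.prod()))   # det = product of eigenvalues
```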
This is also interesting because we will later see similarity transformations and so on. Under linear transformations in which the eigenvalues of a matrix do not change, the determinant and trace of the matrix will also not change. So if we rotate the coordinate frame, the matrix will change, but the sum of the diagonal elements will not change, and the volume encompassed by the vectors of the matrix will also not change. Determinant and trace are uniquely determined by the eigenvalues; therefore, under transformations like coordinate rotation they do not change, and we can say that determinant and trace are invariants of the matrix. They do not vary with coordinate rotation.
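A small sketch of this invariance (a hypothetical rotation angle, assuming NumPy): rotating the coordinate frame changes the matrix entries but not its trace or determinant.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

theta = 0.3                                    # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
B = R @ A @ R.T                                # A in the rotated frame (R^T = R^-1)

print(np.isclose(np.trace(A), np.trace(B)))            # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
```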
There are a few more important properties. One is that the eigenvalues of A^{-1} are 1/λ, and these things can be very, very easily proved with just a small exercise; some of these problems may come in exams. You can check them yourself; they are very straightforward, but it will be a good exercise if you check these things yourself. So the eigenvalue of A^{-1} is 1/λ. And if there are two diagonalizable matrices A and B which share the same eigenvectors, this will happen if A B = B A: if A B = B A, then A and B will have the same eigenvectors. So these are a few important properties of eigenvalues and eigenvectors.
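Both properties make a good self-check exercise; a minimal sketch (illustrative matrices, assuming NumPy) might look like this:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
vals, S = np.linalg.eig(A)

# Eigenvalues of A^-1 are 1/lambda.
print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))),
                  np.sort(1.0 / vals)))        # True

# B = 2A + I commutes with A, so A and B share eigenvectors:
# S^-1 B S comes out diagonal as well.
B = 2.0 * A + np.eye(2)
print(np.allclose(A @ B, B @ A))               # True
M = np.linalg.inv(S) @ B @ S
print(np.allclose(M, np.diag(np.diag(M))))     # True: B diagonal in A's eigenbasis
```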
And there are a few more important properties for a class of matrices called real symmetric matrices, and some important properties for another class of matrices, called positive definite matrices. For real-valued matrices which are symmetric, the eigenvalues and eigenvectors have some more important properties, which we will discuss in the next session.
Thank you.
