Hello everyone, welcome to the last lecture of this module. In this lecture we are going to introduce two more methods: the inverse power method and the shifted inverse power method. These are variants of the power method for finding the eigenvalues of a given matrix that are not dominant. In the last lecture we talked about the power method and saw that it can find only the dominant eigenvalue. We also saw that if we use the method of deflation together with the power method, we can compute the eigenvalues other than the dominant one. However, in the method of deflation with the power method, you first find the dominant eigenvalue and eigenvector, then generate a new matrix, and then apply the power method to that matrix, which gives you the next dominant eigenvalue. Then you make a new matrix, apply the power method again, and so on.
So if I have a 10 by 10 matrix and I want to find its 5th eigenvalue in decreasing order, I need to apply the power method 5 times to a 10 by 10 matrix and use the deflation transformation 4 times. Hence it will be very expensive in terms of computational cost. The inverse and shifted inverse power methods give us algorithms for computing the eigenvalues and eigenvectors that are not dominant directly, by running the power iteration process just once.
Basically these methods are based on two principles. The first principle is: if (lambda, v) is an eigenpair of a square matrix A of order n, then (1/lambda, v) is an eigenpair of the matrix A inverse. This can be shown very easily. Since (lambda, v) is an eigenpair of A, I can write Av = lambda v. If A is invertible, I can multiply both sides by A inverse, so A inverse A v = A inverse lambda v; lambda is a scalar, so I can take it out and write v = lambda (A inverse v). Dividing both sides by lambda gives (1/lambda) v = A inverse v. It means an eigenvalue of A inverse is 1/lambda, and the corresponding eigenvector is v.
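This first principle can be checked numerically. The sketch below uses a small sample matrix chosen just for illustration (not one from the lecture) and verifies that A inverse has eigenvalue 1/lambda with the same eigenvector v:

```python
import numpy as np

# Sample matrix chosen only for this illustration; its eigenvalues are 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams, V = np.linalg.eig(A)   # eigenpairs (lambda, v) of A
lam, v = lams[0], V[:, 0]

# If A v = lambda v, then A^{-1} v = (1/lambda) v.
lhs = np.linalg.solve(A, v)  # computes A^{-1} v without forming the inverse
rhs = v / lam
print(np.allclose(lhs, rhs)) # True
```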
The other result shifts the eigenvalue: if (lambda, v) is an eigenpair of a matrix A, then (lambda - alpha, v) is an eigenpair of the matrix A - alpha I for a scalar alpha, where alpha is not equal to lambda.
This again we can show. We have a new matrix B = A - alpha I, so Bv = (A - alpha I)v = Av - alpha v, and since lambda is an eigenvalue of A with eigenvector v, this becomes lambda v - alpha v = (lambda - alpha)v. It means an eigenvalue of B is lambda - alpha, and the eigenvector is v, the same as that of A. With these two results we will start our inverse and shifted inverse power methods.
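This shift principle can also be verified in a few lines; the matrix below is again just a sample for illustration, and the shift alpha is arbitrary:

```python
import numpy as np

# Sample matrix for illustration; the shift alpha is an arbitrary scalar.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
alpha = 1.5

lams, V = np.linalg.eig(A)
B = A - alpha * np.eye(2)

# For every eigenpair (lambda, v) of A, B v = (lambda - alpha) v.
for lam, v in zip(lams, V.T):
    print(np.allclose(B @ v, (lam - alpha) * v))  # True (for each eigenpair)
```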
So suppose that A has distinct eigenvalues lambda_1, lambda_2, ..., lambda_n, and consider the eigenvalue lambda_j that I need to compute. Then a constant alpha can be chosen so that mu_1 = 1/(lambda_j - alpha) is the dominant eigenvalue of (A - alpha I) inverse. Furthermore, if we choose v_0 carefully, then the sequences {v_k} and {c_k} given by y_k = (A - alpha I)^(-1) v_k and v_(k+1) = y_k / c_(k+1), where c_(k+1) is the largest component of y_k in absolute value, converge. This is just the power method applied to (A - alpha I) inverse, and finally we can calculate the jth eigenvalue from mu_1 as lambda_j = 1/mu_1 + alpha.
Now, what should be the choice of alpha? We cannot choose alpha exactly equal to lambda_j, but for mu_1 to be the dominant eigenvalue it should be very large, and for this alpha should be quite close to lambda_j. For example, if I want to find the eigenvalue 4, alpha should be somewhere around 4.2 or 3.8, or 4.3 or 3.7, like that. The proof of this result can be given very easily. Suppose the eigenvalues satisfy |lambda_1| < |lambda_2| < ... < |lambda_n|. Also let alpha be a number such that alpha is not equal to lambda_j but is much closer to lambda_j than to the other eigenvalues. Then I can write |lambda_j - alpha| < |lambda_i - alpha| for the rest of the i, from 1 to j-1 and from j+1 to n.
Then, using the result which I derived on the board, I can say that 1/(lambda_j - alpha) is an eigenvalue of (A - alpha I) inverse, and the corresponding eigenvector remains v, the eigenvector of the original A corresponding to the eigenvalue lambda_j. Moreover, 1/|lambda_i - alpha| < 1/|lambda_j - alpha| for every i not equal to j, and hence mu_1 = 1/(lambda_j - alpha) is the dominant eigenvalue of the matrix (A - alpha I) inverse. So how will it work? Suppose I want to find a particular eigenvalue lambda_j of a given matrix. I will choose an alpha close to this eigenvalue and not close to the rest of the eigenvalues.
Now the question arises: without knowing the eigenvalues, how will we choose alpha? Every time I am saying that alpha should be closer to lambda_j than to any other lambda_i, so how do we do it without knowing the eigenvalues? This comes from the Gershgorin discs: just by looking at the given matrix I can say in which disc each eigenvalue will lie, that is, the range of the eigenvalues, and from there I can get an idea.
So the algorithm is as follows. First of all, choose an initial v_0 which should be non-zero. Then, for k = 0, 1, 2, ..., find y_k = (A - alpha I)^(-1) v_k; take c_(k+1) as the largest component of the vector y_k in absolute value; and then define v_(k+1) = y_k / c_(k+1). So the shifted inverse power method with this fixed shift alpha is nothing but the power method where the matrix A is replaced by the new matrix (A - alpha I) inverse.
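The steps above can be sketched as a small routine. This is a minimal illustration, not production code; it already replaces the explicit inverse with a linear solve (a point the lecture returns to later), and the diagonal test matrix is hypothetical:

```python
import numpy as np

def shifted_inverse_power(A, alpha, v0, tol=1e-10, max_iter=100):
    """Power method applied to (A - alpha*I)^{-1} with a fixed shift alpha."""
    B = A - alpha * np.eye(A.shape[0])
    v = v0.astype(float)
    c = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(B, v)        # y_k = (A - alpha I)^{-1} v_k, via a solve
        c_new = y[np.argmax(np.abs(y))]  # c_{k+1}: largest component in modulus
        v = y / c_new                    # v_{k+1} = y_k / c_{k+1}
        if abs(c_new - c) < tol:
            c = c_new
            break
        c = c_new
    return 1.0 / c + alpha, v            # lambda_j = 1/mu_1 + alpha

# Hypothetical diagonal test matrix with eigenvalues 1, 2 and 4:
A = np.diag([1.0, 2.0, 4.0])
lam, v = shifted_inverse_power(A, 1.9, np.array([1.0, 1.0, 1.0]))
print(lam)   # converges to 2, the eigenvalue closest to the shift 1.9
```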
The convergence of this algorithm is governed by the ratio |lambda_1 - alpha| / |lambda_2 - alpha|, where lambda_1 and lambda_2 are the closest and the second closest eigenvalues to alpha. For example, if your matrix has eigenvalues 5, 8 and 10 and you choose alpha = 4, the ratio will be |5 - 4| / |8 - 4| = 1/4. So the convergence is linear, and the ratio always lies between 0 and 1.
Now we can use this shifted inverse power method with a variable shift also. Variable shift means we can update our alpha as well. In the earlier algorithm we had a fixed alpha, chosen in the beginning and used throughout; here we update alpha to improve the convergence of the method. The algorithm is as follows: take a non-zero vector v_0 and an initial shift, let us say alpha_0. Then compute y_k = (A - alpha_k I)^(-1) v_k, and take c_(k+1) as the component of y_k that is largest in absolute value; for example, if for a 3 by 3 matrix y_k comes out as (1, -2, -4), then c_(k+1) becomes -4. Then set v_(k+1) = y_k / c_(k+1), and at the same time update the shift as alpha_(k+1) = alpha_k + 1/c_(k+1). This method is locally quadratically convergent, that is, it has second order convergence locally.
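A sketch of this variable-shift version, under the same assumptions as before; the shift update alpha_(k+1) = alpha_k + 1/c_(k+1) mirrors the eigenvalue formula lambda = alpha + 1/mu_1:

```python
import numpy as np

def shifted_inverse_power_variable(A, alpha0, v0, tol=1e-12, max_iter=50):
    """Shifted inverse power method that refines the shift each iteration."""
    n = A.shape[0]
    v = v0.astype(float)
    alpha = alpha0
    for _ in range(max_iter):
        y = np.linalg.solve(A - alpha * np.eye(n), v)  # y_k = (A - alpha_k I)^{-1} v_k
        c = y[np.argmax(np.abs(y))]                    # largest component in modulus
        v = y / c                                      # v_{k+1} = y_k / c_{k+1}
        alpha_new = alpha + 1.0 / c                    # alpha_{k+1} = alpha_k + 1/c_{k+1}
        if abs(alpha_new - alpha) < tol:
            return alpha_new, v
        alpha = alpha_new
    return alpha, v

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
lam, v = shifted_inverse_power_variable(A, 0.5, np.array([1.0, 1.0, 1.0]))
print(lam)   # approx 2 - sqrt(2), the eigenvalue nearest the starting shift 0.5
```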
We can also apply the shifted inverse power method with the Rayleigh quotient. Choose an initial vector v_0, not equal to 0, in such a way that it has unit length, then compute alpha_0 as the Rayleigh quotient of this vector, that is, alpha_0 = v_0^T A v_0. Now, for k = 0, 1, 2, ..., compute y_k = (A - alpha_k I)^(-1) v_k, set v_(k+1) = y_k / ||y_k||_2, and update alpha_(k+1) = v_(k+1)^T A v_(k+1). So in each iteration I am updating v, and I am updating alpha by the definition of the Rayleigh quotient. This method is cubically convergent in the case of symmetric matrices.
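A minimal sketch of this Rayleigh quotient variant, using a symmetric sample matrix; which eigenvalue the iteration reaches depends on the starting vector, which here is chosen arbitrarily:

```python
import numpy as np

def rayleigh_quotient_iteration(A, v0, tol=1e-12, max_iter=50):
    """Shifted inverse power method with the Rayleigh quotient as the shift."""
    n = A.shape[0]
    v = v0 / np.linalg.norm(v0)      # unit-length starting vector
    alpha = v @ A @ v                # alpha_0 = v_0^T A v_0
    for _ in range(max_iter):
        y = np.linalg.solve(A - alpha * np.eye(n), v)
        v = y / np.linalg.norm(y)    # v_{k+1} = y_k / ||y_k||_2
        alpha_new = v @ A @ v        # alpha_{k+1} = v_{k+1}^T A v_{k+1}
        if abs(alpha_new - alpha) < tol:
            return alpha_new, v
        alpha = alpha_new
    return alpha, v

# Symmetric sample matrix with eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
lam, v = rayleigh_quotient_iteration(A, np.array([1.0, 0.5, 0.25]))
print(lam)   # one of the eigenvalues of A
```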
Let us take an example of the shifted inverse power method for finding one of the eigenvalues of a 3 by 3 matrix, using the fixed-shift version. The eigenvalues of this matrix are 4, 2 and 1; the dominant eigenvalue is 4. Suppose I take alpha = 4.2. Then my shifted inverse power method will converge to the eigenvalue 4 and the corresponding eigenvector. So for lambda_1 = 4, I define A - alpha I = A - 4.2 I, and then I apply the power method to (A - 4.2 I) inverse with the initial vector (1, 1, 1)^T.
We continue in this way until the sequences c_k and v_k converge. So y_0 is this value, then c_1 comes out as -23.18181818, and then v_1 is this particular vector. After 8 iterations we have mu_1 = -5, which is the dominant eigenvalue of (A - 4.2 I) inverse, and v_k converges to (2/5, 3/5, 1)^T. Hence the eigenvalue is given by 1/mu_1 + alpha = -1/5 + 4.2 = 4, which verifies our claim that for a given alpha the method converges to the closest eigenvalue. If I take alpha = 2.1 and apply the same process, it converges to the eigenvalue 2 with corresponding eigenvector (1/4, 1/2, 1)^T.
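The transcript does not reproduce the entries of the 3 by 3 matrix used in this example, so the sketch below substitutes a hypothetical upper-triangular matrix with the same eigenvalues 4, 2 and 1 to illustrate the behaviour: the fixed shift picks out whichever eigenvalue it is closest to.

```python
import numpy as np

# Hypothetical matrix (NOT the one from the lecture) with eigenvalues 4, 2, 1.
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])

def fixed_shift_sip(A, alpha, v0, iters=50):
    B = A - alpha * np.eye(A.shape[0])
    v = v0.astype(float)
    for _ in range(iters):
        y = np.linalg.solve(B, v)        # power step on (A - alpha I)^{-1}
        c = y[np.argmax(np.abs(y))]
        v = y / c
    return 1.0 / c + alpha, v            # lambda = 1/mu_1 + alpha

v0 = np.array([1.0, 1.0, 1.0])
lam_a, _ = fixed_shift_sip(A, 4.2, v0)   # converges to the eigenvalue 4
lam_b, _ = fixed_shift_sip(A, 2.1, v0)   # converges to the eigenvalue 2
print(lam_a, lam_b)
```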
So far we were talking about the shifted inverse power method, where we need to work with the inverse of a shifted matrix. Let us take the other variant, the plain inverse power method, which we can use for finding the smallest eigenvalue of a given matrix and the corresponding eigenvector. Here we use the result that if (lambda, v) is an eigenpair of a matrix A, then (1/lambda, v) is an eigenpair of A inverse. So if lambda is the smallest eigenvalue in modulus of a given matrix A, then 1/lambda will be the dominant eigenvalue of A inverse. Hence, if we apply the power method to A inverse, we get its dominant eigenvalue, whose reciprocal is the smallest eigenvalue of the matrix A.
So the inverse power method has an advantage over the power method in that it can approximate an eigenvalue other than the dominant one. Consider y_0, a non-zero vector in R^n, which can be expressed as a linear combination of the eigenvectors of A. Applying the power method to A inverse, we get z_(k+1) = A^(-1) y_k and y_(k+1) = z_(k+1) / m_(k+1). In this way we obtain an approximation to the dominant eigenvalue of A inverse in modulus, that is, the smallest eigenvalue of A in modulus. However, we do not actually need to form A inverse to find the smallest eigenvalue of A. If you have a 10 by 10 matrix, finding its inverse is computationally expensive, and I would not prefer it. Suppose I want the smallest eigenvalue of a 10 by 10 matrix A. In the inverse power method the dominant eigenvalue of A inverse is the smallest eigenvalue of A, but here we do not require A inverse at all.
What are we doing? We start with a v_0 and find v_1 = A^(-1) v_0. What I will do here is multiply both sides by A, so A v_1 = v_0. Instead of finding v_1 from the multiplication of a matrix inverse with a column vector, I have a linear system of equations: v_0 is known, A is known, so you can find v_1 directly from here without using A inverse. Then in the next step v_2 = A^(-1) v_1, so you solve the system A v_2 = v_1, and from here you find the next iterate of the inverse power method, that is v_2, and so on. So there is no need to calculate A inverse at any stage; however, we need to solve a linear system of equations at each stage.
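This solve-based iteration can be sketched as follows; it is a minimal illustration, shown here on the tridiagonal matrix from the example that follows:

```python
import numpy as np

def inverse_power(A, v0, tol=1e-10, max_iter=200):
    """Inverse power method: each step solves A y = v_k instead of using A^{-1}."""
    v = v0.astype(float)
    mu = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(A, v)        # solve A y_{k+1} = v_k (no inverse formed)
        mu_new = y[np.argmax(np.abs(y))] # estimate of the dominant eigenvalue of A^{-1}
        v = y / mu_new
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return 1.0 / mu_new, v               # smallest eigenvalue of A in modulus

# The tridiagonal matrix used in the lecture's example:
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
lam, v = inverse_power(A, np.array([1.0, 1.0, 1.0]))
print(lam)   # approx 2 - sqrt(2)
```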
Let us take an example of this. We take the matrix A with rows (2, -1, 0), (-1, 2, -1) and (0, -1, 2). Here we first do it by calculating A inverse, but we can do it without calculating A inverse also. Doing it with A inverse and starting with the initial vector (1, 1, 1)^T, we get our first approximation y_1 = (1.5, 2, 1.5)^T, and if I take out the factor 2 this is 2 times (0.75, 1, 0.75)^T. So the first approximation of the eigenvalue of A inverse comes out as 2 and the eigenvector is this one. However, if I do it without finding A inverse, the system can be solved with less computational effort. My original matrix A is as above, v_0 = (1, 1, 1)^T, and I am finding v_1 = A^(-1) v_0, which comes out as (1.5, 2, 1.5)^T.
Now, to compute A^(-1) v_0 directly I would need the inverse of this matrix, but if instead I solve the system A v_1 = v_0, the augmented matrix becomes rows (2, -1, 0 | 1), (-1, 2, -1 | 1) and (0, -1, 2 | 1). Let us solve it with Gauss elimination. Replace R2 by R2 + (1/2) R1: the second row becomes (0, 3/2, -1 | 3/2), since 2 - 1/2 = 3/2 and 1 + 1/2 = 3/2. The third row (0, -1, 2 | 1) is already fine so far. Then replace R3 by R3 + (2/3) R2: by this elementary row operation the third row becomes (0, 0, 4/3 | 2), since 2 - 2/3 = 4/3 and 1 + (2/3)(3/2) = 1 + 1 = 2. Back substitution then gives the vector v_1 = (1.5, 2, 1.5)^T.
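The back substitution result can be double-checked with a direct linear solve:

```python
import numpy as np

# Quick check of the elimination: solving A v1 = v0 should give v1 = (1.5, 2, 1.5).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
v0 = np.array([1.0, 1.0, 1.0])
v1 = np.linalg.solve(A, v0)
print(np.allclose(v1, [1.5, 2.0, 1.5]))  # True
```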
Continuing this, I calculate y_2 and then v_2, y_3, v_3, y_4, v_4, and after 4 iterations we observe that the iteration is converging to mu = 1.7071 and lambda = 1/mu = 0.5858. Since det(A - 0.5858 I) is approximately 0, lambda = 0.5858 is the required eigenvalue, that is, the smallest eigenvalue, and the corresponding eigenvector is (0.7071, 1, 0.7071)^T. The smallest eigenvalue of A is in fact 2 - sqrt(2) = 0.5858, which agrees with what we computed numerically.
So in this lecture we have learned two variants of the power method, the shifted inverse power method and the inverse power method, for finding the eigenvalues other than the dominant one for a given matrix. This ends module 3 of this course. In this module we have learned various methods for calculating eigenvalues and eigenvectors: we started with the Jacobi method, then we learned the power method, the shifted inverse power method with a fixed shift, with a variable shift, and with the Rayleigh quotient, and finally the classical inverse power method for finding the smallest eigenvalue of a given matrix. In the next lecture we will talk about interpolation. Till then, goodbye, and thank you very much.
