Good morning. As discussed in the previous lecture, today we start the first of a few lessons on the module on the algebraic eigenvalue problem. I will again remind you that in order to follow the lectures in this segment, it is very important that the subject matter of the previous module is thoroughly absorbed in your understanding, and therefore you should by now have completed most of its exercises, because some of the background necessary for the following lectures is actually developed through the exercises in the textbook that I have referred you to. The problems of the book listed in the tutorial plan you must have completed by now; that will help you in following the lectures in the coming module, which covers chapters eight to fourteen. In this lecture we will be studying eigenvalues and eigenvectors. We will introduce the eigenvalue problem, then the generalized eigenvalue problem, which will also expose you to one of the practical problems from which eigenvalue problems emerge. Then we will discuss some basic theoretical results which will be utilized later in sophisticated methods for solving the eigenvalue problem, and towards the end we will briefly discuss a quick and easy method of solving the problem: the power method.
To begin with, I again draw your attention to the mapping A from R^n to itself, that is, from the n-dimensional space to itself, which means the corresponding matrix is n by n, a square matrix. When we multiply a matrix with a vector, the vector gets mapped to another vector, in this case in the same space. In this mapping there are two effects produced on the vector: one is a magnification, whose factor may be less than one, in which case the vector actually gets reduced in size; the other effect, apart from magnification, is a turning, a rotation.
Now, this is the general way in which a vector can get mapped through multiplication with a matrix. But for every matrix, some vectors are special: special in the sense that they undergo only magnification or scaling, and do not rotate, under multiplication with that particular matrix. These vectors are in some sense the own vectors of that matrix, special vectors for that particular matrix, and they are called eigenvectors; the word "eigen" in German means "own", as if these vectors belong to this particular matrix. So if you multiply a matrix A with one such special vector, its own vector, then the result of the mapping is nothing other than a pure scaling: A v = lambda v. In that case we call the vector v an eigenvector, and the scale factor lambda is called the eigenvalue, or the characteristic value. Together, lambda and v, eigenvalue and eigenvector, are quite often referred to as an eigenpair; they form a pair. Determination of all the lambdas and corresponding v's, that is, the eigenvalues and eigenvectors, of a given matrix is called the algebraic eigenvalue problem. Now, how can we find the values lambda and the corresponding vectors v from only this much? The underlying concept is actually very simple.
You can take this lambda v to the other side, though you cannot write it as (a minus lambda) times v, because A is a matrix and lambda is a scalar. What you can do is write this v as identity times v, and then take lambda I and A together: taking A v to the other side, you get (lambda I minus A) v = 0, where lambda I is a matrix and A is also a matrix, so you have a system of linear equations. Note that this system is n equations in n variables, the components of the vector v, and these equations are homogeneous, that is, the right-hand side is zero. You know that for a homogeneous system of equations, the existence of a non-trivial or non-zero solution requires the coefficient matrix to be singular; that is, the coefficient matrix must have a null space, and v will actually be a member of the null space of this matrix lambda I minus A.
For singularity of this matrix, you must set its determinant equal to zero: det(lambda I minus A) = 0. Now you find that we have reached a stage where from a large number of unknowns we have come down to one unknown. In the original equation you had one scalar unknown lambda and one vector unknown v, that is, n plus one unknowns in total. The condition that the coefficient matrix is singular tells you that the determinant of the coefficient matrix is zero, so now you have got a single equation in a single unknown. In addition, you know that the left-hand side is a polynomial in the unknown lambda, a polynomial of degree n. So the question boils down to finding the roots of that polynomial, or, to begin with, finding the solutions of this polynomial equation.
We know that it will have n roots, counting multiplicities. The polynomial det(lambda I minus A) is called the characteristic polynomial of the matrix A, and therefore the corresponding equation is called the characteristic equation; its solutions are the eigenvalues. So the characteristic equation gives you the n roots of this n-th degree polynomial; these are the n eigenvalues, and for each of them you will try to find the corresponding eigenvectors. That is not very difficult, because as you insert those eigenvalues one by one, for every eigenvalue you get a homogeneous system of equations in which the coefficient matrix is completely known; all that you need to do is to find the null space of that known matrix lambda I minus A, which we have studied earlier.
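To make this route concrete, here is a minimal sketch in Python with NumPy; the two-by-two matrix is an illustrative choice of mine, not from the lecture, and the characteristic polynomial is written out directly from the trace and determinant, which is the 2x2 special case.

```python
import numpy as np

# Illustrative 2x2 matrix (my choice, not from the lecture).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic polynomial of a 2x2 matrix:
# det(lambda*I - A) = lambda^2 - trace(A)*lambda + det(A).
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
eigenvalues = np.roots(coeffs)            # the n roots, counting multiplicities

# For each root, an eigenvector spans the null space of (lambda*I - A);
# the last right-singular vector of this singular matrix gives that direction.
for lam in eigenvalues:
    M = lam * np.eye(2) - A
    v = np.linalg.svd(M)[2][-1]           # null-space direction
    print(lam, v, np.allclose(A @ v, lam * v))
```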
We have just been talking about the number of eigenvalues; the total number will certainly be n, but some may be repeated. For example, suppose we have got a three by three matrix for which the eigenvalues turn out to be two, two and four; that is possible. Here the eigenvalue two is said to have an algebraic multiplicity of two, because it appears twice in the characteristic polynomial: the polynomial will be (lambda minus two) squared, two appearing twice, times (lambda minus four), four appearing only once. We also talk of geometric multiplicity, which concerns what happens when we take an eigenvalue lambda, insert it into (lambda I minus A) v = 0, and try to find v.
In this particular example, if the eigenvalues are two, two and four, then as you insert lambda equal to two and try to find the corresponding eigenvector v, you might expect up to two such vectors: one eigenvector belonging to lambda equal to two in the first instance, and a second belonging to lambda equal to two in the second instance. You may succeed in finding two such eigenvectors, or you may not; that depends upon the particular matrix A. That means if the algebraic multiplicity of a particular eigenvalue is more than one, then it may give you one eigenvector, or two, or three, up to the number which is the algebraic multiplicity.
In a larger matrix, suppose an eigenvalue, say lambda equal to two, appears five times in a seven by seven matrix, so that the eigenvalues are two, two, two, two, two, something else, and something further. In that case, corresponding to the eigenvalue two, when you try to find the eigenvectors you may find only one, or two, or three, or up to five; more than five you cannot get. The number of eigenvectors you could find corresponding to that particular eigenvalue is called its geometric multiplicity. Note: one notion is algebraic, the other is geometric.
The algebraic multiplicity comes from the polynomial: how many times the factor (lambda minus that particular eigenvalue) appears in the characteristic polynomial. That count comes from an algebraic source, and that is why it is called the algebraic multiplicity. On the other hand, the corresponding eigenvectors span a subspace of R^n whose dimension equals the number of linearly independent eigenvectors you can find for that eigenvalue; a subspace is a geometric entity, and that is why this number is called the geometric multiplicity of that eigenvalue.
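As a small illustrative check of this distinction (the matrix is my own example, not from the lecture): the geometric multiplicity is the dimension of the null space of lambda I minus A, that is, n minus the rank of that matrix.

```python
import numpy as np

# Illustrative matrix: eigenvalue 2 has algebraic multiplicity 2
# but only one independent eigenvector, i.e. geometric multiplicity 1.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

lam = 2.0
M = lam * np.eye(2) - A

# Geometric multiplicity = dim null(lambda*I - A) = n - rank(lambda*I - A).
geo_mult = 2 - np.linalg.matrix_rank(M)
print(geo_mult)   # prints 1, smaller than the algebraic multiplicity 2
```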
Note that when we talk about finding different eigenvectors, linearly dependent eigenvectors are not considered different. If you find one vector as an eigenvector, then obviously twice that vector will certainly be an eigenvector too, so it is not counted as different from the first. Similarly, if you have already found two eigenvectors corresponding to a particular eigenvalue, then a linear combination of the two will certainly be an eigenvector corresponding to that same eigenvalue; that is not considered anything different.
So when we hunt for eigenvectors, we look for linearly independent eigenvectors. Now, when for a particular eigenvalue the algebraic and geometric multiplicities have a mismatch, that is, the algebraic multiplicity is higher and the geometric multiplicity is lower, we call that matrix defective. In what sense it is defective, what the defect is, and what to do in such a situation, we will discuss in detail in the coming lectures. When the algebraic and geometric multiplicities are the same for every eigenvalue, we can do certain interesting things very easily.
We can diagonalize the matrix: that means we can change the basis for the representation of the mapping in such a way that the resulting matrix representation of the same mapping, the same linear transformation, turns out to be diagonal, which means the directions get completely decoupled. Such matrices are called diagonalizable. To recognize a diagonalizable matrix, the direct, straightforward thing is to check the algebraic and geometric multiplicities of every eigenvalue: if they all match, then the matrix is diagonalizable; if even a single eigenvalue has a mismatch between algebraic and geometric multiplicity, then it is not diagonalizable.
In that case the eigenvectors cannot be decoupled; the space cannot be decoupled in terms of individual eigenvalues in the same way as for diagonalizable matrices. So diagonalizability is, in that way, not merely a property of a matrix as such; it certainly is a property of the matrix, but it is really a property of something much more fundamental underlying the matrix, namely the linear transformation. Diagonalizability is actually a property of the linear transformation, for which the matrix is just one representation.
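As a minimal computational sketch of diagonalization (the symmetric matrix is an illustrative choice of mine): changing to the basis of eigenvectors makes the representation diagonal.

```python
import numpy as np

# Illustrative symmetric matrix; its eigenvector matrix V gives a change
# of basis in which the same mapping is represented diagonally.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eig(A)            # columns of V are eigenvectors
D = np.linalg.inv(V) @ A @ V         # representation of A in the eigenbasis
print(np.round(D, 10))               # diagonal, with the eigenvalues on it
```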
Now, keeping these considerations apart: does this outline tend to suggest that our method of solving the eigenvalue problem is complete?
It may look so, because finding the determinant of a matrix in terms of lambda is something we can think of doing; setting it equal to zero and getting a polynomial equation does not seem dangerous either; solving a polynomial equation is also something with which we are acquainted; and after finding each lambda, putting it back and finding the corresponding eigenvectors is, as a sub-problem, not very difficult either. But does that mean the discussion of the eigenvalue problem is complete here? The answer is no. The reason is that when the degree of the polynomial equation goes very high, solving the polynomial equation is actually not easy at all.
In fact, for solving a polynomial equation, one of the very popular, much-used methods says: try to solve the polynomial equation through the methods of the eigenvalue problem. So for solving an eigenvalue problem, polynomial equation solving as a sub-problem is not a very attractive proposition, because as the degree of the polynomial goes high it becomes very difficult to solve computationally. Therefore people look for other ways of attacking the eigenvalue problem directly, without first making recourse to the polynomial-equation-solving problem, and in that attempt mathematicians have developed a number of interesting tools to handle matrices and express them in canonical forms, and have drawn a lot of advantage from these theoretical developments in several fields of applied mathematics. These interesting developments we will be studying in the coming lectures, including this one.
In order to prepare the ground for that, I will need to develop some basic theoretical results first. Even before that, it will be a good idea to see a practical problem from which the eigenvalue problem appears. There are many such practical problems, in almost all branches of science and engineering, where eigenvalue problems turn up; one such problem is the free vibration of a mechanical system. For example, consider the one-degree-of-freedom mass-spring system, for which the dynamic equation is m x-double-dot plus k x equals zero, where m is the mass and k is the stiffness of the spring. You then write an assumed solution of this equation, because you know what kind of solution it will have: a sinusoidal solution. So you write x = A sin(omega t + alpha), differentiate it twice, and insert it into the equation.
You know that differentiating twice produces a factor of minus omega squared times the same sine term, and from that you very easily work out the natural frequency at which this mass-spring system undergoes natural vibration, as written out below.
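Written out, the scalar derivation is:

```latex
m\ddot{x} + kx = 0,\qquad x(t) = A\sin(\omega t + \alpha)
\;\Rightarrow\; (k - \omega^2 m)\,A\sin(\omega t + \alpha) = 0
\;\Rightarrow\; \omega = \sqrt{k/m}.
```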
Now, when you formulate and solve the same problem for a multi-degree-of-freedom system, you do not get such a nice simple scalar equation, but a matrix-vector equation: free vibration of an n-degree-of-freedom system is governed by M x-double-dot plus K x = 0, where M is the inertia matrix, K is the stiffness matrix, x is the vector representing the coordinates of the system, and x-double-dot is the corresponding acceleration.
In this problem we ask: what are the natural frequencies at which this particular mechanical system can execute natural vibration, and correspondingly, what are the vectors x along which those vibrations take place? For example, in a three-degree-of-freedom system it might happen that (x1, x2, x3) gives a particular direction, a particular vector, along which vibration takes place at one frequency; there is a second direction in which the system may vibrate at a second frequency; and similarly a third direction with a third frequency.
So what these vibration modes are, and what the corresponding frequencies are, becomes the problem to be solved in this free vibration problem. Again, in analogy with the scalar equation, we assume a vibration mode: x is a constant amplitude vector phi times sin(omega t + alpha). We differentiate this twice with respect to time and insert the resulting x-double-dot; since sin(omega t + alpha) after twice differentiation produces a factor of minus omega squared, we get (K phi minus omega squared M phi) times sin(omega t + alpha) equal to zero. Then we use the same argument as in the scalar case: for this to be zero for all time, the bracketed part has to be zero, because the sine factor will not always be zero. Setting it to zero, we get the equation K phi = omega squared M phi, collected below. Now, this resembles the eigenvalue problem we discussed just now: earlier we had a problem of the form A v = lambda v, the kind of problem we have been discussing.
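Collecting the steps of this derivation in one place:

```latex
M\ddot{x} + Kx = 0,\qquad x(t) = \phi\,\sin(\omega t + \alpha)
\;\Rightarrow\; \bigl(K - \omega^2 M\bigr)\,\phi\,\sin(\omega t + \alpha) = 0
\;\Rightarrow\; K\phi = \omega^2 M\phi .
```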
This problem is not exactly the same as that one, because on the right-hand side there is a matrix M sitting next to phi. Omega squared you can identify with lambda, but there is a matrix sitting there; that is why this is not called just an eigenvalue problem but the generalized eigenvalue problem. It is as if in the original eigenvalue problem there was a matrix there too, namely the identity, which indeed we inserted when taking the term to the other side.
In this particular case it is generalized in the sense that in place of the identity matrix there is now a non-trivial matrix. How to solve this problem? If we take the term to the other side, with M in place of I, we get K minus omega squared M as the coefficient matrix, not the straightforward A minus lambda I that we would get in the ordinary eigenvalue problem. How to handle this? One might suggest that if we pre-multiply both sides of the equation with M-inverse, then immediately we get the problem M-inverse K phi = omega squared phi.
Why not solve that problem? M-inverse K we can take as our A: we know M, we know K, we can evaluate M-inverse K, and then it becomes an ordinary eigenvalue problem. Indeed it is possible to do that, but it is not a good idea. Why not? The reason follows from the nature of the matrices that appear here. M is not just some matrix and K is not just some matrix; one is an inertia matrix and the other is a stiffness matrix, and such matrices, when they appear in practical problems, have a certain structure: a stiffness matrix is always symmetric, and an inertia matrix is always symmetric and positive definite. If we evaluate M-inverse K, we may lose the symmetry that was originally there in the problem, and it is not a good idea to take a step in the solution of a problem which actually makes the problem harder. Later we will study in detail how the solution of a symmetric eigenvalue problem is actually much simpler and more straightforward than that of a general non-symmetric matrix; therefore it would be a bad idea to take a step which spoils the symmetry of the problem as originally given.
Rather, we should take a measure which utilizes this particular structure. So we take the symmetric positive definite matrix M and recall that for a symmetric positive definite matrix there exists a decomposition M = L L-transpose, the Cholesky decomposition. We decompose M in this form and then conduct a coordinate transformation: the original coordinates phi are transformed to phi-tilde through phi-tilde = L-transpose phi, a new basis. Let us insert this and see how it looks. We have K phi = omega squared M phi; first of all, in place of M we write L L-transpose.
The moment we do that, we get K phi = omega squared L (L-transpose phi), and L-transpose phi we are defining as phi-tilde. On the other side also we would like to have phi-tilde, because we are applying that coordinate transformation. If phi-tilde is L-transpose phi, then what is phi in terms of phi-tilde? It is found by pre-multiplying with L-transpose-inverse: phi = L-transpose-inverse phi-tilde. We can then get rid of the remaining L by pre-multiplying both sides with L-inverse; as we do that, L-inverse L gives us identity, and we have L-inverse K L-transpose-inverse phi-tilde = omega squared phi-tilde.
Notice that the original generalized eigenvalue problem has been transformed: calling the whole matrix L-inverse K L-transpose-inverse by the name K-tilde, we have the new problem K-tilde phi-tilde = omega squared phi-tilde. So in the new coordinate system, in which phi-tilde is the vector, we have got an ordinary eigenvalue problem in which the matrix K-tilde is actually symmetric: K was originally symmetric, on one side we have multiplied it with L-inverse, and on the other side with the transpose of L-inverse, which preserves the symmetry. You can check that the transpose of L-inverse K (L-inverse)-transpose is itself: as you take the transpose of the whole thing, you get the same thing back. So the symmetry is preserved.
Note that when we write L to the power minus T, it may not be clear whether we are talking about the transpose of L-inverse or the inverse of L-transpose; the notation does not distinguish them. Still, the notation is valid, because in the two cases the result is the same: L with the superscript minus T means either of the two, since they are always equal.
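Here is a minimal sketch of this whole reduction (the K and M values are illustrative choices of mine, not from the lecture): Cholesky-factor M, form the symmetric K-tilde, solve the ordinary symmetric problem, and transform the eigenvectors back.

```python
import numpy as np

# Illustrative stiffness and inertia matrices: K symmetric,
# M symmetric positive definite, as in the free-vibration problem.
K = np.array([[6.0, -2.0],
              [-2.0, 4.0]])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

L = np.linalg.cholesky(M)        # M = L @ L.T
Linv = np.linalg.inv(L)

# K_tilde = L^{-1} K L^{-T} is symmetric, so the generalized problem
# K phi = omega^2 M phi becomes an ordinary symmetric problem.
K_tilde = Linv @ K @ Linv.T
w2, phi_tilde = np.linalg.eigh(K_tilde)   # omega^2 and tilde-eigenvectors
phi = Linv.T @ phi_tilde                  # back-transform: phi = L^{-T} phi_tilde

# Each column of phi should satisfy K phi = omega^2 M phi.
for i in range(len(w2)):
    print(np.allclose(K @ phi[:, i], w2[i] * (M @ phi[:, i])))
```

In practice, library routines such as scipy.linalg.eigh also accept the pair (K, M) directly and perform essentially this reduction internally.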
This is one practical problem from which you get an eigenvalue problem; there are many other situations in all of science and engineering from which eigenvalue problems appear. Now we will start with some basic theoretical results on the eigenvalue problem, over which we will later build methods to solve it. As a byproduct, these theoretical results will also provide us with rich tools to handle matrices in nice, elegant and canonical ways, which is useful in many areas of mathematics, wherever matrices appear.
The first important result to keep in mind is that the eigenvalues of the transpose of a matrix are the same as those of the original matrix. This is very easy to see: the determinant of a transpose is the same as the determinant of the original matrix, and the characteristic polynomial is found just by expanding a determinant, so the two polynomials are obviously the same. Of course, the eigenvectors need not be the same; in general they are different. The next important point to remember is the situation for a diagonal matrix and a block diagonal matrix.
You know what a diagonal matrix is: suppose we have a three by three matrix diag(a1, a2, a3), with all the off-diagonal entries zero. It is very clear that the diagonal entries are actually the eigenvalues of this matrix, and the corresponding eigenvectors are the natural basis members. For example, if you multiply this matrix with [1 0 0]-transpose, you obviously get [a1 0 0]-transpose, which is a1 times [1 0 0]-transpose. That shows you have A v = lambda v with v = [1 0 0]-transpose: a1 is an eigenvalue, and the vector e1, the first natural basis member, is the corresponding eigenvector.
Similarly, a2 and a3 are the other eigenvalues, with the corresponding eigenvectors being the natural basis members e2 and e3. This is obvious. Now suppose this is actually a much larger matrix: the scalar a1 is replaced with a square matrix A1, the scalar a2 with a square matrix A2, and similarly a3 with A3. What you get is then not a diagonal matrix, because the square blocks may have off-diagonal entries; what you call it is a block diagonal matrix.
In such a matrix the block A1 may be filled up quite a bit. When you talk of the eigenvalues of a block diagonal matrix, there is a very interesting situation: the eigenvalues of the large matrix are the eigenvalues of A1, together with the eigenvalues of A2 and the eigenvalues of A3. If A1 is r by r, A2 is s by s, A3 is t by t, and everything outside these blocks is zero, then the r eigenvalues of A1, the s eigenvalues of A2 and the t eigenvalues of A3, separately obtained, can all be put in one list, and these r plus s plus t numbers will be the eigenvalues of the large matrix. The corresponding eigenvectors are also very easy to find: they are just coordinate extensions. For example, suppose the small matrix A2 has an eigenvalue lambda-2 with corresponding eigenvector v2; then above v2 you put as many zeros as required to fit the size of the first block, and below it as many zeros as required to fit the size of the last block, and as you multiply the large matrix with this extended vector, you find that it gives you lambda-2 times that same vector.
That means an eigenvalue of A2 is an eigenvalue of A also, and the corresponding eigenvector of A can be found through a coordinate extension of v2: you put as many extra zeros above and below as needed, and you get the big vector, which is an eigenvector of the large matrix corresponding to that same eigenvalue. So for diagonal and block diagonal matrices the situation is very simple. The matter gets a little complicated when you talk of triangular matrices. A triangular matrix has non-zero entries on one side of the diagonal, but still the diagonal entries are the eigenvalues, because on the other side everything is zero.
When you write the characteristic polynomial, you write the determinant of lambda I minus the matrix: on the diagonal you get lambda minus a11, lambda minus a22, lambda minus a33, with some entries on one side but everything zero on the other. When you expand this determinant from the first column, you get (lambda minus a11) times a cofactor plus all zeros; that cofactor again gives you (lambda minus a22) times a cofactor plus all zeros, and so on.
So for a triangular matrix the characteristic polynomial obviously emerges as a product of these factors: you have the characteristic polynomial already in factorized form, which immediately gives you the diagonal members of the original matrix as the eigenvalues. But the eigenvectors are a different question: for them you have to do a fair amount of calculation; they are not so obviously visible here. So when we handle triangular matrices, we talk directly in terms of the eigenvalues only; the eigenvectors can be found with some further processing.
Now take a block triangular matrix: the scalars are replaced with matrices, with a big block of zeros sitting on one side and blocks of other, possibly non-zero entries elsewhere. Consider a block triangular matrix H with four blocks: block A, which is square; block B, not necessarily square; a zero block, also not necessarily square, of the size of B-transpose; and block C, which has to be square. Then the claim is that the eigenvalues of H are the eigenvalues of A together with the eigenvalues of C. For this matrix, the statement that the eigenvalues of the large matrix are the collection of the eigenvalues of A and the eigenvalues of C can be seen in a way similar to the block diagonal case; however, here the statement is made only for the eigenvalues, not for the eigenvectors.
If the matrix A has an eigenvalue lambda with eigenvector v, then we can apply the complete matrix H to the coordinate extension [v; 0], and we find that the product gives us lambda [v; 0]: the coordinate extension of v turns out to be an eigenvector of the complete matrix H with the corresponding eigenvalue lambda. However, when you try to verify that the same holds for C, you cannot immediately apply H to a coordinate extension [0; w], because the product gets contaminated through the block B: the way the zero block helped in the first case, it will not help here. In this situation, what we do is take mu as an eigenvalue of C and argue that it is also an eigenvalue of C-transpose, so that C-transpose w = mu w for some vector w. Then we apply not H but H-transpose to the appropriate coordinate extension of w, namely [0; w], and we find at the end that we get mu times the vector [0; w]. That means mu turns out to be an eigenvalue of H-transpose, and that in turn means mu is an eigenvalue of H as well.
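A quick numerical check of this block-triangular statement, with small illustrative blocks of my own:

```python
import numpy as np

# Block upper-triangular matrix H = [[A, B], [0, C]] with square A and C.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
C = np.array([[5.0]])
B = np.array([[1.0],
              [4.0]])

H = np.block([[A, B],
              [np.zeros((1, 2)), C]])

# The eigenvalues of H are those of A together with those of C.
print(np.sort(np.linalg.eigvals(H)))                       # [1. 3. 5.]
print(np.sort(np.concatenate([np.linalg.eigvals(A),
                              np.linalg.eigvals(C)])))     # [1. 3. 5.]
```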
Apart from these results, there are a few points to keep in mind which will be very useful in many of the methods. One is that if we add a scalar times the identity to a matrix, then all the eigenvalues get shifted by that scalar value; this is called the shift theorem. It is very easy to verify, so I am not going into it; I am leaving it for you. The other important result that we must keep in mind is applicable only to a symmetric matrix: a symmetric matrix A has mutually orthogonal eigenvectors, a fact that we will verify in the next lecture. For an eigenvalue lambda-j with corresponding normalized eigenvector v-j, we find that if we construct another matrix B from A by subtracting the part lambda-j v-j v-j-transpose, then the resulting matrix B has exactly the same eigenstructure as A, meaning the same eigenvalues with the same corresponding eigenvectors, except that the eigenvalue corresponding to that particular eigenvector v-j is no longer lambda-j
but is reduced to zero. That means the information worth of that one eigenvalue has been removed from A; all the rest of the information remains as it is. This is an important device to which we will come back after studying symmetric matrices in detail in the next lecture. Before that, I will expose you right now to an important quick and easy method for solving the eigenvalue problem: the power method. It helps you when you are not interested in finding all the eigenvalues of a large matrix
but only a few largest-magnitude eigenvalues, or perhaps the largest-magnitude and the lowest-magnitude eigenvalues. It is a very quick and easy method, easy to understand and to implement, but note that it will work only for those matrices which have a full set of n eigenvectors, that is, which are diagonalizable, and for which there is a single eigenvalue of largest magnitude: the largest-magnitude eigenvalue must have magnitude strictly larger than all the rest, so that not two but only one eigenvalue is at the top.
In that case the power method gives you the largest-magnitude eigenvalue very easily. To understand how it operates, consider that if the matrix A possesses a full set of n eigenvectors, then these eigenvectors span the entire space R^n, and that means any vector x you can think of can be expressed as a linear combination x = alpha-1 v1 + alpha-2 v2 + ... + alpha-n vn. We can choose any vector x; it will have some representation as a linear combination of the eigenvectors, with alpha-1, alpha-2, etc. as the corresponding coefficients. Even though we do not yet know those eigenvectors or the coefficients, we know this much: any vector x we pick will have some representation like this, with alpha-1, alpha-2, etc. and v1, v2, etc. currently unknown to us.
Now multiply both sides with the matrix A. On one side, x is a known vector which we have picked, so we can work out A x. On the other side we do not know the numbers in detail, but we know this much: A v1 will be lambda-1 v1, A v2 will be lambda-2 v2, and so on. That means, through a multiplication by A, whatever the representation was, each coefficient picks up an additional factor of lambda-1, lambda-2, lambda-3, etc. If we go on multiplying the resulting vector with A once more, and once more, then after p such multiplications we will have, on one side, A to the power p times x, which is known, being the result of multiplying x by A p times.
On the other side we will have alpha-1 times lambda-1-to-the-p times v1, plus alpha-2 times lambda-2-to-the-p times v2, and so on. If we take lambda-1-to-the-p outside, the remaining terms carry the ratios (lambda-2 over lambda-1)-to-the-p, (lambda-3 over lambda-1)-to-the-p, and so on. Under the assumption that lambda-1 is the largest-magnitude eigenvalue and the next one is somewhat below it, what happens as p goes high, after many, many multiplications, is that lambda-2 over lambda-1, lambda-3 over lambda-1, etc., all being of magnitude less than one, tend towards zero when raised to a sufficiently large power.
That means after many such multiplications we will have a vector which is in the same direction as v1, and once that direction has stabilized, one more application of the same multiplication means an eigenvector is being multiplied by A, which gives lambda-1 times that vector: the vector gives you the direction, and lambda-1 is the scale between two successive iterates. So as p tends to infinity, A-to-the-p x tends to lambda-1-to-the-p alpha-1 v1. After the process has converged, the result A-to-the-p x compared with the result of the previous step gives two vectors in the same direction; that means the ratio between the first components, the ratio between the second components, the ratio between the third components will all be the same, and that common ratio is lambda-1. At convergence all n ratios will be the same; in fact, that is a test that convergence has taken place.
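Here is a minimal power-iteration sketch (the matrix, starting vector and tolerances are illustrative choices of mine). Instead of comparing component ratios by hand, it uses the equivalent Rayleigh-quotient estimate, and it normalizes each iterate so that lambda-1-to-the-p does not overflow.

```python
import numpy as np

def power_method(A, p_max=1000, tol=1e-10):
    """Minimal power-iteration sketch: estimates the largest-magnitude
    eigenvalue and its eigenvector.  Assumes A is diagonalizable with a
    single dominant eigenvalue."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    lam = 0.0
    for _ in range(p_max):
        y = A @ x
        lam_new = (x @ y) / (x @ x)        # Rayleigh-type eigenvalue estimate
        x = y / np.linalg.norm(y)          # renormalize to avoid overflow
        if abs(lam_new - lam) < tol:       # eigenvalue estimate has stabilized
            break
        lam = lam_new
    return lam, x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # illustrative; dominant eigenvalue is 5
lam, v = power_method(A)
print(lam, np.allclose(A @ v, lam * v))
```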
This way you quickly get the largest-magnitude eigenvalue; note that it may be negative, but that does not matter: you will get the largest-magnitude eigenvalue, and the corresponding vector will be the eigenvector. Now let us make two points. One: if, besides the largest, you also need the least-magnitude eigenvalue, how to do that? For this purpose we can use the shift theorem. After finding the largest-magnitude eigenvalue, we look at its sign: it is a ratio, which carries a sign.
Whether it is positive or negative has thus been found. For example, suppose lambda-1 turns out to be positive, say the largest-magnitude eigenvalue is twenty-three. Then from the original matrix we subtract twenty-three from all the diagonal entries; that is an application of the shift theorem: we subtract twenty-three I from the original matrix. That means all the eigenvalues get shifted leftward by twenty-three: whatever was twenty-three earlier becomes zero, an eigenvalue that was, say, two earlier becomes minus twenty-one, and so on. In that case the algebraically smallest eigenvalue now turns out to have the largest magnitude, so we can apply the same power method once more and find it, and then, shifting things back by twenty-three to the right, we get the appropriate, correct eigenvalue of the matrix A with the corresponding eigenvector.
So this is one way to find the largest- and least-magnitude eigenvalues, which has a lot of practical significance. One more important question may be this: suppose you are not interested in finding all the eigenvalues, but only a top few, the largest-magnitude ones, lambda-1, lambda-2, lambda-3, lambda-4, etc., say six of them, the six top eigenvalues and the corresponding eigenvectors. For example, if the matrix is one hundred by one hundred, we are not interested in all the hundred eigenvalues and their eigenvectors, but only in the top six, or the top few meeting some conditional requirement. What we can do, after finding the largest one, is to use deflation; this works in the case of a symmetric matrix, which is quite often encountered in practical situations. By deflation we subtract the part which is contributed by the eigenvalue lambda-1 and the corresponding eigenvector; the resulting matrix then has lambda-2 as its largest-magnitude eigenvalue, which can be found by the power method, and so on. This is a very straightforward method, which can be applied if you are sure that the matrix does satisfy these requirements; otherwise the process may not operate as expected or as desired. A small sketch of the deflation step follows.
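This is a minimal sketch of the deflation step for a symmetric matrix; the matrix is an illustrative choice of mine, and the dominant pair is obtained here with a library call only to demonstrate the identity, where in actual use it would come from the power method.

```python
import numpy as np

# Illustrative symmetric matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Dominant eigenpair (all eigenvalues here are positive, so it is the last
# one in eigh's ascending ordering).
w, V = np.linalg.eigh(A)             # ascending eigenvalues, orthonormal V
lam1, v1 = w[-1], V[:, -1]

# B has the same eigenpairs as A except that lam1 is replaced by zero,
# so the power method applied to B would converge to lam2.
B = A - lam1 * np.outer(v1, v1)
print(np.round(np.linalg.eigvalsh(A), 4))
print(np.round(np.linalg.eigvalsh(B), 4))
```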
Apart from these things, there are two important concepts which will go a long way in our discussion in the coming lectures. One is the eigenspace: this term is used for a subspace of R^n which is composed of the eigenvectors of a matrix corresponding to the same eigenvalue lambda.
For example, suppose A has an eigenvalue lambda corresponding to which there are k eigenvectors v1, v2, v3, up to vk; then any linear combination of these eigenvectors is also going to be an eigenvector. We can verify that very easily. Suppose corresponding to the eigenvalue lambda there are two eigenvectors v1 and v2, so that A v1 = lambda v1 and A v2 = lambda v2. If we apply A to a linear combination a1 v1 + a2 v2, then, a1 being a scalar that we can take out, we have a1 times A v1, which is a1 lambda v1, plus a2 times A v2, which is a2 lambda v2; taking the scalar lambda outside as common, we get lambda times (a1 v1 + a2 v2). That means the matrix A applied to this vector gives lambda times the same vector: if v1 and v2 are two eigenvectors corresponding to the same eigenvalue lambda (note that this applies only for the same eigenvalue), then any linear combination of them is obviously an eigenvector of that particular matrix A. Now, this combination is not a new linearly independent eigenvector, but it is certainly an eigenvector: it does not come into the counting of eigenvectors, but whenever required it does operate like an eigenvector. That means if these k eigenvectors correspond to the same eigenvalue lambda, then the subspace spanned by these vectors is a subspace in which every vector is an eigenvector, and therefore this particular subspace is called the eigenspace of A corresponding to that eigenvalue.
The other important theoretical point, which will be quite useful in our discussion in coming lectures, is the similarity transformation. This is something we have already seen once earlier, and here we look at some of its important properties. If we decide to represent the vectors of the space R^n in a different, new basis S, then the matrix representation of a linear transformation changes from A to B = S-inverse A S; this we have seen earlier. Now consider the determinant of lambda I minus A, which is the characteristic polynomial of the matrix A.
We already know that the determinant of a matrix and the determinant of its inverse are reciprocals of each other. That means if we multiply the characteristic polynomial by det(S-inverse) and also by det(S), we are actually making no change, because one is the reciprocal of the other. We also know that the determinant of a product of three matrices of the same size, det(P Q R), is det(P) times det(Q) times det(R). What we have got here is det(S-inverse) times det(lambda I minus A) times det(S); that means it is the same as the single determinant det(S-inverse (lambda I minus A) S), with S-inverse inserted on one side and S on the other. Now, when S-inverse and S are applied from the two sides on lambda I, they cancel each other, since the identity remains inside; that is why that term stays lambda I. On A the effect is different: we get S-inverse A S, which is B. So the whole thing is the same as det(lambda I minus B), and what is that? That is the characteristic polynomial of the matrix B. This shows that under a similarity transformation the matrix may have changed, but its characteristic polynomial remains the same as earlier: the characteristic polynomial of A and the characteristic polynomial of B turn out to be equal. If the entire polynomial is the same for A and B, then all the roots are the same; that means the eigenvalues remain unchanged through a similarity transformation, because a similarity transformation arises only as a result of a change of basis.
No geometric entity is being changed, only its representation, and the eigenvalues are a property of the underlying linear transformation, not of the basis; therefore the eigenvalues remain constant through all such similarity transformations. How do the eigenvectors change? Geometrically, even the eigenvectors do not change, but their representation in the new basis changes, just as any other vector changes its representation in the new basis through multiplication by S-inverse, which we have already studied. In the same manner, an eigenvector v of A transforms to S-inverse v in the new basis given by S: if v is an eigenvector of A, the corresponding eigenvector of B will be S-inverse v, because there the new basis S has appeared.
So the basis change of vectors takes place through this relationship, and the same applies to the eigenvectors as well.
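A small numerical check of both claims (A and S are illustrative choices of mine): the similar matrix B has the same spectrum, and eigenvectors map as v to S-inverse v.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # any invertible matrix is a basis change

B = np.linalg.inv(S) @ A @ S
print(np.sort(np.linalg.eigvals(A)))    # same spectrum ...
print(np.sort(np.linalg.eigvals(B)))    # ... for B

lam, V = np.linalg.eig(A)
v = V[:, 0]                             # an eigenvector of A
w = np.linalg.inv(S) @ v                # its representation in the new basis
print(np.allclose(B @ w, lam[0] * w))   # True: w is an eigenvector of B
```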
Now let us quickly summarize the points discussed in this particular lecture. First, we discussed the meaning and context of the algebraic eigenvalue problem. Second, we studied the fundamental relationships and deductions which are vital for its solution. Third, we were exposed to a quick and easy method, the power method, as an inexpensive procedure to determine the extremal-magnitude eigenvalues: only the largest, or the largest and lowest, or the largest few. In all these situations we can use the power method with a little help from the shift theorem or the deflation technique. But while applying the power method you must be careful: it does not apply to arbitrary matrices, but only to certain matrices having particular kinds of eigenstructure. If a matrix falls in that category, the power method will be very handy in many situations; otherwise it may not operate as desired.
In the next lecture we will build upon what we have developed till now and take up a detailed discussion of the theoretical developments on the eigenvalue problem, which will then be used in different categories of methods for solving the algebraic eigenvalue problem.

Thank you.
