Let's translate some of these insights
into what we call the inverse power
method.  And here's the idea. If we have
eigenvalues ordered from largest to
smallest, and this time we say that the
smallest one is strictly smaller than
the next smallest one in magnitude.  Then
if these are the eigenvalues of matrix A,
then the eigenvalues of matrix A inverse
have the property that one divided by
lambda_m-1 is the largest in magnitude
and so forth. Now what does that mean?
Well, if we execute the power method but
we do so with A inverse, then we get the
exact same effect as when we iterated
with matrix A, except that
this time we will home in on the
eigenvector associated with the largest
eigenvalue of matrix A inverse, which
then is the eigenvector associated with
the smallest eigenvalue of matrix A.  Okay?  And then obviously in order to keep this
vector from either getting too long or
too short, we would want to scale it,
again to keep its length, for example, equal to one.  Now in
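The iteration just described can be sketched as follows. This is a minimal sketch (the function and variable names are mine, not from the lecture), and for simplicity it applies A inverse by calling a linear solve each time; the point about factoring up front comes later.

```python
import numpy as np

def inverse_power_method(A, v0=None, num_iters=100):
    """Sketch of the inverse power method: repeatedly solve
    A v_{k+1} = v_k and rescale, so that the iterate homes in on the
    eigenvector associated with the eigenvalue of A that is smallest
    in magnitude."""
    m = A.shape[0]
    v = np.random.rand(m) if v0 is None else np.asarray(v0, dtype=float)
    v /= np.linalg.norm(v)          # scale so the length equals one
    for _ in range(num_iters):
        v = np.linalg.solve(A, v)   # apply A inverse without forming it
        v /= np.linalg.norm(v)      # rescale to unit length
    return v
```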
analyzing that, we get exactly what we
had before, except that lambda is
replaced by lambda inverse.  And then here what I have done is I've written our
vector v^(0) as a linear combination of
the columns of X.  But now I order them
from the eigenvector
associated with smallest eigenvalue to
the eigenvector associated with the
largest eigenvalue. These are then the
coefficients associated with those terms.
And if we've already hit our vector v k
times with matrix A inverse, then
we end up with this expression right
here.  And what we notice is that these
terms here will either grow or shrink
faster than the first term, and therefore
eventually this term is what will
dominate.  Okay, and then we can express
the exact same thing in matrix form as
follows.  In order to analyze this then
what we would do is say, "Instead of
dividing by lambda_0, let's multiply by
lambda_m-1."  In that case, we get a
lambda_m-1 to the kth power here.
This first term here is replaced by 1.
And these terms are all replaced by the
ratios between the smallest eigenvalue
in magnitude and the next smallest eigenvalue in magnitude for this term, and so
forth.  And the same thing then happens
here.  We get lambda_m-1, lambda_m-1,
and then this term right here becomes 1.  And what we notice then is
that again we end up converging on the
vector psi_m-1 times x_m-1.
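In symbols (my transcription of the analysis just described, with the eigenvalues of A ordered so that |lambda_0| >= ... >= |lambda_m-2| > |lambda_m-1|):

```latex
% v^(0) expanded in the eigenvectors of A, smallest eigenvalue first:
v^{(0)} = \psi_{m-1} x_{m-1} + \psi_{m-2} x_{m-2} + \cdots + \psi_0 x_0 .

% After k applications of A^{-1}, scaled by \lambda_{m-1}^k:
\lambda_{m-1}^k A^{-k} v^{(0)}
  = \psi_{m-1} x_{m-1}
  + \psi_{m-2} \left( \frac{\lambda_{m-1}}{\lambda_{m-2}} \right)^{\!k} x_{m-2}
  + \cdots
  + \psi_0 \left( \frac{\lambda_{m-1}}{\lambda_0} \right)^{\!k} x_0 .
```

Since |lambda_m-1 / lambda_j| < 1 for every j < m-1, all terms but the first vanish as k grows, at a rate governed by |lambda_m-1 / lambda_m-2|.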
How fast did we get there?  Well how fast
we get there is now dominated by the
ratio of the magnitude of the smallest
eigenvalue and the second smallest
eigenvalue.  Alright?  So what we have now is the inverse power method, a method for
finding the eigenvector associated with
the eigenvalue that is smallest in
magnitude of matrix A.  And obviously, once we have that eigenvector, we can use the
Rayleigh quotient to compute the
corresponding eigenvalue.  Now there's one
more comment I should make.  And that is we don't really want to compute the
inverse of matrix A. Alright?  If we are working with a
dense matrix then we may want to solve
with matrix A. What would that involve?
Well that would involve LU with partial
pivoting, for example.  And obviously we
wouldn't want to do that every time through the loop.  We would want to do the
LU with partial pivoting up front once
and then use the factors to actually
solve for the vector in the linear
system. Alright?  Alternatively if our matrix
A is sparse, then we may want to employ
any of our sparse solution methods that
we looked at in previous weeks.
