So, let's see now how we can take the
power method and change it into one
where we compute the eigenvector
associated with the eigenvalue that is
second in magnitude, so that would be lambda_1.  So here we
have our power method. Let's make sure
that we now save this as -- let's call it
the PowerMethodLambda1.  And then
let's see. What do we want to do?  Well
every occurrence of lambda_0 we
should really call lambda_1. So let me go
ahead and make all of those changes. And v
we're now going to return v1. And we are
going to pass in a vector associated
with the eigenvalue largest in magnitude.
Let's call it x0, the vector that gets passed in.
And we need to rename this routine
PowerMethodLambda1.  And I really should go and
change the comments here but let's not
bother with that. And then every
occurrence of v should become v1 and
then every occurrence of lambda_0 should become lambda_1.  So let me just go ahead
and do that.  And we'll just cut to where
I'm done with making those changes. Okay?
So I think I've made all of those
changes.  And now let's actually change
the algorithm itself.  The idea now is
that instead of setting v1 equal to the
vector x normalized to have length 1,
what we want to do is take x and
subtract out the component of x in the
direction of x0.  Now, we're going to
assume that x0 is of length 1.  So that
means that we need to do this right here.
And then, once we have computed that, we want to normalize the
vector that we start with to have length
1. Now, to test this, what we need to do is
go to test subspaces and uncomment the
section that calls this PowerMethodLambda1.
And we do that by simply
setting the zero to one. Now we still
need to run the power method because we need a vector associated with lambda_0.
But we don't necessarily want to do
all of the reporting along the way.  So
let's set illustrate equal to zero, so
that we don't get the graphs associated
with running the power method, and so that we
don't get the intermediate results printed
out.  So if I now execute test_SubspaceIteration, it'll very quickly run the
power method.  And then it'll immediately
start reporting on this method that
tries to home in on the next eigenvalue.
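Putting those changes together, the modified routine can be sketched as follows. This is a hedged sketch in Python with NumPy, not the course's actual MATLAB script; the function name, arguments, and the Rayleigh-quotient eigenvalue estimate at the end are my choices for illustration.

```python
import numpy as np

def power_method_lambda1(A, x0, v, num_iters=100):
    """Estimate lambda_1, the eigenvalue second in magnitude,
    and an associated eigenvector v1.

    x0 is assumed to be a unit-length (approximate) eigenvector
    associated with lambda_0, the eigenvalue largest in magnitude.
    """
    # Subtract out the component of the start vector in the
    # direction of x0 (this relies on x0 having length 1) ...
    v1 = v - (x0 @ v) * x0
    # ... and then normalize the result to have length 1.
    v1 = v1 / np.linalg.norm(v1)
    for _ in range(num_iters):
        v1 = A @ v1                   # multiply by A
        v1 = v1 / np.linalg.norm(v1)  # normalize to length 1
    lambda_1 = v1 @ A @ v1            # Rayleigh quotient estimate
    return lambda_1, v1
```

In exact arithmetic, deflating the start vector once is enough: the component in the direction of x0 starts out at zero and stays at zero, so the iteration converges to an eigenvector associated with lambda_1.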
So let's have a look here.  And sure
enough it starts homing in on the
eigenvalue 4.  Now, let's observe this
for a little bit.  It gets close but now
actually it's starting to lose accuracy
and it's starting to move away from the
eigenvalue equal to four.  And actually
it's starting to home in on the
eigenvalue equal to five, the one that's
largest in magnitude.  And after fifty
iterations it completes this process and
if we now look at the graph of what
eigenvalue this method homed in on, we
notice that for a while it does just
fine and starts homing in on the
eigenvalue equal to four.  But then,
apparently a little bit of error starts
sneaking in, in the direction of the
eigenvector associated with lambda_0.
And once that happens, that gets
amplified over and over and over again.
And eventually we again start pointing
in the direction of the eigenvector
associated with lambda_0.  So that's
the problem: noise creeps in, and then this method
starts acting like the original power method, as
opposed to the power method that had
subtracted out the component in the
direction of x0.
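In floating-point arithmetic, the x0 produced by a finite power-method run is only an approximation to the eigenvector associated with lambda_0, so a tiny component in that direction survives the deflation of the start vector and is then amplified by roughly a factor of |lambda_0 / lambda_1| in every iteration. Here is a small experiment illustrating that drift; the diagonal matrix, the eigenvalues 5 and 4, and the iteration counts are illustrative choices, not the course's actual test problem.

```python
import numpy as np

def deflated_rayleigh(A, x0, v, num_iters):
    """Deflate the start vector against x0 once, run the power
    method, and return the Rayleigh-quotient eigenvalue estimate."""
    v = v - (x0 @ v) * x0          # subtract out the x0 component once
    v = v / np.linalg.norm(v)
    for _ in range(num_iters):
        v = A @ v
        v = v / np.linalg.norm(v)
    return v @ A @ v

A = np.diag([5.0, 4.0, 3.0, 2.0])   # lambda_0 = 5, lambda_1 = 4

# Approximate x0 with a plain power-method run: after 100 iterations
# it still carries an error of about (4/5)^100, roughly 2e-10, in the
# direction of the second eigenvector.
x0 = np.ones(4)
for _ in range(100):
    x0 = A @ x0
    x0 = x0 / np.linalg.norm(x0)

v = np.random.default_rng(1).standard_normal(4)
early = deflated_rayleigh(A, x0, v, 50)    # still homing in on 4
late = deflated_rayleigh(A, x0, v, 300)    # has drifted back to 5
```

Early on the estimate sits near lambda_1 = 4, but the leftover lambda_0 component grows by a factor of about 5/4 per iteration, so after a few hundred iterations the method is again pointing in the direction of the eigenvector associated with lambda_0.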
