Now, as mentioned, the problem is that
eventually we may end up with a vector
that's arbitrarily large or arbitrarily
small, depending on whether the magnitude
of lambda_0 is greater than or less than 1.
If the magnitude of lambda_0 is exactly 1,
then everything actually works out just
right.  So how can we fix this
algorithm in order to keep the vector
from growing arbitrarily large? Well,
during every iteration we could simply
divide by lambda_0.  If we did that,
then our kth vector would actually equal
(1 / lambda_0^k) A^k x, where x is our
starting vector.  And, lo and behold,
that converges to (a multiple of) our
vector x_0 that we wanted.
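The slide being pointed at ("this right here") isn't reproduced in this transcript, but the expansion it refers to is presumably the standard one.  Writing the starting vector in the eigenbasis as x = gamma_0 x_0 + ... + gamma_{n-1} x_{n-1} (the coefficients gamma_j and the ordering |lambda_0| > |lambda_1| >= ... >= |lambda_{n-1}| are notation introduced here, assuming gamma_0 is nonzero):

\[
\frac{1}{\lambda_0^k} A^k x
= \gamma_0 x_0
+ \gamma_1 \left(\frac{\lambda_1}{\lambda_0}\right)^{k} x_1
+ \cdots
+ \gamma_{n-1} \left(\frac{\lambda_{n-1}}{\lambda_0}\right)^{k} x_{n-1}
\;\longrightarrow\; \gamma_0 x_0,
\]

since each ratio |lambda_j / lambda_0| is less than 1 for j >= 1, so those terms die off as k grows.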
Now, obviously, we've got a little bit of
a problem here, right?  We're cheating.
In order to do this, we
would have to know what our eigenvalue
is.  However, if we knew what our
eigenvalue was, then we could subtract it
from the diagonal of A, or, equivalently,
form lambda_0 times the identity minus A.
Then we could compute a vector in its
null space, and we would have our
eigenvector.  So that's probably
cheating, right?
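To see what that hypothetical would look like, here is a small sketch.  The 2-by-2 matrix and its dominant eigenvalue are made up for illustration, and using the SVD to extract a null-space vector is a standard trick, not something prescribed in the lecture:

```python
# The "cheating" route: if we somehow knew lambda_0, we could recover the
# eigenvector directly from the null space of (lambda_0 * I - A).
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam0 = 5.0                    # pretend we already know the dominant eigenvalue

M = lam0 * np.eye(2) - A      # lambda_0 * I - A is singular
_, _, Vt = np.linalg.svd(M)   # its null space shows up as the smallest singular vector
x0 = Vt[-1]                   # right singular vector for the (near-)zero singular value

print(np.allclose(A @ x0, lam0 * x0))   # True: x0 is an eigenvector of A
```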
What do we do instead? Well, we recognize
that we're only interested in the
direction.  So what we could do at every
iteration is say, "Okay, let's just make
this a temporary vector, and once we've
computed it, let's scale it so that it
has length one."  Compute its length,
divide by that length, and notice that we
now end up with a vector that eventually
points in the right direction but always
has length 1.
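Here is a minimal sketch of that normalized iteration in NumPy; the matrix, the starting vector, and the iteration count are placeholders invented for illustration, not values from the lecture:

```python
# Normalized power iteration: multiply by A, then rescale to unit length.
import numpy as np

def power_iteration(A, x, num_iters=100):
    x = x / np.linalg.norm(x)        # start from a unit vector
    for _ in range(num_iters):
        v = A @ x                    # temporary vector: one multiplication by A
        x = v / np.linalg.norm(v)    # rescale so it has length 1 again
    return x                         # points roughly along the dominant eigenvector

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(power_iteration(A, np.array([1.0, 0.0])))   # ~ [0.707, 0.707], up to sign
```

Notice that the rescaling plays the role of the division by lambda_0 that we couldn't do: it keeps the vector from growing or shrinking, without requiring us to know the eigenvalue.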
What we now have is a method that's
practical: it gives us a vector that
eventually points, roughly, in the
direction of the eigenvector associated
with the eigenvalue that is largest in
magnitude.  What it doesn't give us yet
is the eigenvalue.  So what we notice is
that it seems to be easier to find a
method for computing the eigenvector than
a method for computing the eigenvalue.
In a homework, you're now going to work
out how to compute the eigenvalue
associated with an eigenvector once you
have it.
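The lecture leaves that computation for the homework, so take this only as a hedged preview: one standard candidate is the Rayleigh quotient, sketched here with the same made-up matrix as before.

```python
# Recovering the eigenvalue from an (approximate) eigenvector via the
# Rayleigh quotient -- one standard approach, anticipating the homework.
import numpy as np

def rayleigh_quotient(A, x):
    # x^T A x / x^T x; for a unit-length x this is just x^T A x
    return (x @ (A @ x)) / (x @ x)

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
x = np.array([1.0, 1.0]) / np.sqrt(2.0)   # dominant eigenvector of this A
print(rayleigh_quotient(A, x))            # ~5.0, the dominant eigenvalue
```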
