So here what I've created is an implementation of subspace iteration. The way I got here was simply to take the last implementation of the power method, copy it over, and make a few changes. In particular, I pass in the matrix A and the matrix V, where the columns of V are going to hold the approximations to the eigenvectors. I also pass back both V and a matrix that gives us some indication of what the eigenvalues are; it will eventually become a diagonal matrix with the eigenvalues on its diagonal. Now, the important things
here are that I now track n eigenvalues, where n is the number of columns of matrix V, so that I can plot them. I initially take my matrix V, which started as a random matrix, and turn it into a matrix with mutually orthonormal columns, keeping only n of those columns. That is where the option 0 comes in when you pass it into the QR factorization routine that MATLAB provides. And then we start
iterating: we compute A times V, and then compute the QR factorization of that product to make the resulting columns of A times V mutually orthonormal again. And then there is the matrix Ak, which in theory will eventually become a diagonal matrix if the columns of V become eigenvectors of matrix A. And then if we want to
illustrate this, what we do is extract the diagonal elements of matrix Ak and place them into a column of the array lambdas, which is the array that tracks these values. And then, if desired, we actually print out these matrices Ak to see how the convergence happens. Now, the stopping criterion is not really ideal.
Here what I do is look at the strictly lower triangular entries of matrix Ak, and if the largest of those in absolute value becomes less than 10^(-14), I decide that I've done enough. And
then I've made the obvious changes to my test driver, so that if I go to the command window and execute the test for subspace iteration, I start seeing this matrix Ak slowly but steadily becoming a diagonal matrix where we can find the eigenvalues on the diagonal. And when it's all done, we end up with a graph of how the convergence to the various eigenvalues happened. We are
tracking three eigenvectors and three eigenvalues here, and they're plotted here. And indeed they converge to the three largest eigenvalues, which is what we expect. And then if we look at a graph where we plot the difference between the final computed eigenvalue and the value at the current iteration, we see that the convergence to the first eigenvalue is faster than the convergence to the second and third eigenvalues.
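To make the orthonormalization step described above concrete, here is a small sketch in Python rather than MATLAB. The `orthonormalize` helper is a hypothetical name, and classical Gram-Schmidt is used here as a stand-in for what MATLAB's economy-size `qr(V, 0)` provides; the example columns are made up:

```python
def orthonormalize(cols):
    """Classical Gram-Schmidt: turn the given columns (each a list of
    floats) into mutually orthonormal columns spanning the same space.
    A sketch standing in for MATLAB's economy-size qr(V, 0)."""
    Q = []
    for v in cols:
        w = list(v)
        for q in Q:
            # subtract the component of w in the direction of q
            c = sum(qi * wi for qi, wi in zip(q, w))
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        Q.append([wi / norm for wi in w])
    return Q

# Two made-up columns in R^3; the result has mutually orthonormal columns.
V = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
Q = orthonormalize(V)
```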
Anyway, that's subspace iteration.
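For completeness, the whole procedure just described can be sketched as follows — again in Python rather than MATLAB, with a Gram-Schmidt helper in place of qr(·, 0) and a made-up 3-by-3 symmetric test matrix. This is an illustrative sketch under those assumptions, not the code from the video:

```python
def subspace_iteration(A, V, tol=1e-14, max_iters=1000):
    """Subspace iteration sketch. A is a symmetric matrix given as a
    list of rows; V is a list of columns holding the current
    approximations to the eigenvectors. Returns (V, Ak), where
    Ak = V^T * A * V tends to a diagonal matrix whose diagonal
    entries approximate the dominant eigenvalues of A."""

    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    def orthonormalize(cols):
        # classical Gram-Schmidt, standing in for MATLAB's qr(V, 0)
        Q = []
        for v in cols:
            w = list(v)
            for q in Q:
                c = sum(qi * wi for qi, wi in zip(q, w))
                w = [wi - c * qi for wi, qi in zip(w, q)]
            n = sum(wi * wi for wi in w) ** 0.5
            Q.append([wi / n for wi in w])
        return Q

    V = orthonormalize(V)  # start from mutually orthonormal columns
    n = len(V)
    Ak = []
    for _ in range(max_iters):
        # A * V, with the resulting columns made orthonormal again
        V = orthonormalize([matvec(A, v) for v in V])
        AV = [matvec(A, v) for v in V]
        # Ak(i, j) = v_i^T * A * v_j
        Ak = [[sum(vi * wi for vi, wi in zip(V[i], AV[j])) for j in range(n)]
              for i in range(n)]
        # stopping criterion: largest strictly lower triangular entry of Ak
        off = max((abs(Ak[i][j]) for i in range(n) for j in range(i)),
                  default=0.0)
        if off < tol:
            break
    return V, Ak

# Made-up symmetric example; its eigenvalues are 2 + sqrt(2), 2, 2 - sqrt(2).
A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 1.0],
     [0.0, 1.0, 2.0]]
# Track two eigenvectors; Ak[0][0] and Ak[1][1] should approximate
# the two largest eigenvalues.
V, Ak = subspace_iteration(A, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```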
 
