Hello internet, Oscar Veliz again, this time
with a video on Power Method for finding eigenvalues.
We'll go over some examples, talk about the
Inverse Power Method, and also the Rayleigh Quotient.
First let's define some notation.
The eigenvalue we'll represent using the letter
lambda, the eigenvector we'll represent using
the letter v, and the pair of them we'll call
the eigenpair.
This is the definition of an eigenvalue: if we multiply A times the eigenvector v, we get lambda times that same vector.
And eigenvalues have all sorts of applications, from solving systems of equations to differential equations to physics and geology, you name it.
Let's look at an example of the eigenvalue.
If we take this matrix, 0, -4, 2, -6, and
we multiply it by the eigenvector 1, 1 what
we're left with is the value -4 times 1, 1.
This means our -4 is the eigenvalue and 1, 1 is the eigenvector.
We can do the same thing with the eigenvector
2, 1 where we're left with the eigenvalue of -2.
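As a quick sanity check, here's a tiny Python snippet (my own sketch, not the code from the video's GitHub) verifying both eigenpairs by computing A times v:

```python
# Verify that A v = lambda v for the two eigenpairs above.
A = [[0, -4], [2, -6]]

def matvec(A, v):
    """Multiply a 2x2 matrix by a 2-element vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

print(matvec(A, [1, 1]))   # [-4, -4] = -4 * [1, 1]
print(matvec(A, [2, 1]))   # [-4, -2] = -2 * [2, 1]
```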
Let's look at a common approach to finding
eigenvalues called the Power Method, which
is given by this function here.
Don't worry we're going to go through some
examples.
Here A is an n by n matrix, and b is an n-element vector.
We're going to take the norm, which here is simply the maximum entry of this matrix-vector product.
And to start with we'll usually use random
numbers for b.
We'll repeat the process until b stops changing, and that's when we've found our eigenvector and our eigenvalue.
Let's try to find the eigenvalues of this
matrix 0, 1, 1, 1 using Power Method starting
from the vector 1, 1.
We multiply the matrix times the vector to
get the result of 1, 2.
Then we normalize this by finding the maximum of this vector, in this case 2.
Then we divide it out from 1, 2 giving us
0.5, 1.
Then we'll restart the process.
Multiply our matrix times 0.5, 1 giving us
the value of 1, 1.5, find the maximum which
is 1.5, and divide it out, leaving us with
the result of 0.666 repeating and 1.
Let's look at a more streamlined version of
this process.
If we take our example matrix and our starting
vector, multiply our matrix times our vector,
then factor out a 2, then multiply our matrix
times our new vector, factor out a 1.5.
We repeat the process and eventually we converge
to this eigenpair of 1.618 as our eigenvalue
and 0.618, 1 as our eigenvector.
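Here's a minimal Python sketch of that whole process; the function names are my own, not from the video's code, and I divide out the signed largest entry so the dominant entry of b becomes 1:

```python
def matvec(A, v):
    """Multiply a 2x2 matrix by a 2-element vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def power_method(A, b, iterations=50):
    """Multiply, factor out the largest entry, repeat."""
    for _ in range(iterations):
        w = matvec(A, b)
        norm = max(w, key=abs)         # the dominant entry (keeps its sign)
        b = [x / norm for x in w]
    return norm, b                     # (eigenvalue, eigenvector)

lam, v = power_method([[0, 1], [1, 1]], [1.0, 1.0])
print(round(lam, 3), [round(x, 3) for x in v])   # 1.618 [0.618, 1.0]
```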
The nice thing about Power Method is that this process of multiplying and factoring, multiplying and factoring, works on almost any matrix, although it can be slow depending on how your eigenvalues are spaced, and it only finds the dominant eigenvalue, meaning the one that is largest in absolute value.
What if you wanted to find a smaller eigenvalue?
This is where you'll want to use the Inverse
Power Method which is simply Power Method
applied to the inverse of matrix A. It looks
something like this.
The inverse of a matrix multiplied by its original will give us the Identity matrix
and here's an easy way to solve for the inverse
of a 2 by 2 matrix; if we apply it to our
matrix 0, 1, 1, 1 that we've been using earlier
we get the matrix -1, 1, 1, 0.
And indeed if we multiply our original matrix
A times the inverse we get the identity.
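Here's that 2 by 2 inverse formula as a quick Python sketch (swap the diagonal, negate the off-diagonal, divide by the determinant; the function name is my own):

```python
def inverse2x2(A):
    """Inverse of a 2x2 matrix via the adjugate-over-determinant formula."""
    (a, b), (c, d) = A
    det = a*d - b*c                     # must be nonzero
    return [[ d/det, -b/det],
            [-c/det,  a/det]]

inv = inverse2x2([[0, 1], [1, 1]])
print(inv)   # [[-1.0, 1.0], [1.0, -0.0]], the -1, 1, 1, 0 from above
```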
So let's apply the Power Method to the Inverse
of matrix A starting from the same vector of 1, 1.
Multiply, factor out 1, multiply, factor out
1, repeat the process and then eventually
this sequence actually starts to loop between
two eigenvectors.
Although we're not quite done.
The Inverse Power Method only finds an eigenvector; we still need to compute its corresponding eigenvalue.
The way to do that is to simply take our original
matrix and multiply it by the eigenvector
that we found.
Then factor out the value to make it so that
our eigenvector will equal itself.
In this case we factor out -0.618; we have
to do the same thing for the other eigenvector
that we found and indeed this has the same
eigenvalue.
Another way you could find the eigenvalue
given the eigenvector is to use the Rayleigh
Quotient, which is this function here.
Why it works is a bit beyond the scope of this video, but as an example let's plug in the eigenvector that we found earlier, and indeed this does give us our eigenvalue of -0.618. In fact, you can actually use this quotient even if your eigenvector is not exact.
Let's take one of our earlier values for b
and if you plug this into our quotient you'll
get the same eigenvalue.
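Here's the Rayleigh Quotient, lambda = (b transpose A b) over (b transpose b), as a small Python sketch plugged with the eigenvector 1, -0.618 found above; the second call shows it still lands on the eigenvalue with an inexact b (helper names are my own):

```python
def matvec(A, v):
    """Multiply a 2x2 matrix by a 2-element vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def rayleigh_quotient(A, b):
    """lambda = (b . A b) / (b . b)"""
    Ab = matvec(A, b)
    return (b[0]*Ab[0] + b[1]*Ab[1]) / (b[0]*b[0] + b[1]*b[1])

A = [[0, 1], [1, 1]]
print(round(rayleigh_quotient(A, [1, -0.618]), 3))   # -0.618
print(round(rayleigh_quotient(A, [1, -0.6]), 3))     # -0.618, even with an inexact b
```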
So far we've been able to find two eigenvalues.
One using Power Method to find the dominant
eigenvalue and the other using the Inverse Power Method.
But what if you wanted to find even more eigenvalues?
The algebraic way of finding the eigenvalues is by using the determinant, given by this expression here: set it equal to zero and solve for lambda.
In this case you would say lambda squared minus lambda minus one is equal to zero, which you could solve by using the quadratic formula or, if you've seen my video on Horner's Method, by using that.
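As a quick check of that algebraic route, here's the quadratic formula applied to lambda squared minus lambda minus one in Python (my own sketch):

```python
import math

# Coefficients of lambda^2 - lambda - 1 = 0, the characteristic
# polynomial of the matrix 0, 1, 1, 1.
a, b, c = 1, -1, -1
disc = math.sqrt(b*b - 4*a*c)
roots = ((-b + disc) / (2*a), (-b - disc) / (2*a))
print(tuple(round(r, 3) for r in roots))   # (1.618, -0.618)
```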
Or use this same idea of subtracting a multiple of the identity from A to come up with the Inverse Power Method with a Shift, which looks like this.
Where mu is a guess or a close approximation
of an eigenvalue.
This expression is usually also just referred to as Inverse Power Method.
And it is also a nice way to compute the eigenvector
given an eigenvalue.
Here's an example of that Inverse Power Method
Shifted in action.
We'll give it a guess, a value for mu of -0.5, and plug this into Inverse Power Method. The matrix we'll be operating on is now -6, 4, 4, -2, which we simply plug into Power Method starting from the vector 1, 1.
After seven iterations we're able to find
our eigenvector and we apply the Rayleigh
Quotient to it to find our eigenvalue.
We actually could have applied the quotient
even earlier at the fourth iteration to come
up with our eigenvalue.
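Here's a Python sketch of that shifted version; note that dividing out the signed largest entry sidesteps the sign flipping we saw earlier, so the iteration settles on a single eigenvector (function names are my own):

```python
def matvec(A, v):
    """Multiply a 2x2 matrix by a 2-element vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def inverse2x2(A):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = A
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

def shifted_inverse_power(A, mu, b, iterations=50):
    """Power Method applied to (A - mu*I)^-1."""
    shifted = [[A[0][0] - mu, A[0][1]],
               [A[1][0], A[1][1] - mu]]
    M = inverse2x2(shifted)            # here (A + 0.5 I)^-1 = [[-6, 4], [4, -2]]
    for _ in range(iterations):
        w = matvec(M, b)
        b = [x / max(w, key=abs) for x in w]   # divide out the dominant entry
    return b

v = shifted_inverse_power([[0, 1], [1, 1]], -0.5, [1.0, 1.0])
print([round(x, 3) for x in v])        # [1.0, -0.618]
```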
This idea of applying the Rayleigh Quotient
early and often is the basis behind the Rayleigh
Quotient Iteration.
You simply take Inverse Power Method but always apply the Rayleigh Quotient, which looks like this. With initial values for mu and b given as close approximations to the eigenvalue and eigenvector, after every step you'll update mu sub k using the Rayleigh Quotient.
This does mean you do need to compute a new
inverse at every step but hopefully it will
converge to the eigenvalue that much faster.
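Here's a Python sketch of Rayleigh Quotient Iteration; I stop once the shifted matrix becomes numerically singular, which means mu has landed on an eigenvalue (the names and the stopping tolerance are my own, not from the video's code):

```python
def matvec(A, v):
    """Multiply a 2x2 matrix by a 2-element vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def rayleigh_iteration(A, mu, b, iterations=20):
    """Shifted inverse iteration, re-shifting with the Rayleigh Quotient."""
    for _ in range(iterations):
        # Build A - mu*I and its determinant.
        (a11, a12), (a21, a22) = A
        s = [[a11 - mu, a12], [a21, a22 - mu]]
        det = s[0][0]*s[1][1] - s[0][1]*s[1][0]
        if abs(det) < 1e-14:          # mu is numerically an eigenvalue; stop
            break
        inv = [[ s[1][1]/det, -s[0][1]/det],
               [-s[1][0]/det,  s[0][0]/det]]
        w = matvec(inv, b)            # one new inverse per step
        b = [x / max(w, key=abs) for x in w]
        Ab = matvec(A, b)             # update the shift with the quotient
        mu = (b[0]*Ab[0] + b[1]*Ab[1]) / (b[0]*b[0] + b[1]*b[1])
    return mu, b

mu, v = rayleigh_iteration([[0, 1], [1, 1]], -0.5, [1.0, 1.0])
print(round(mu, 3))                   # -0.618
```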
Some final thoughts: both Power Method and Inverse Power Method have linear convergence orders, but you can speed things up using the Rayleigh Quotient.
And in fact Rayleigh Quotient Iteration actually has a cubic convergence order on symmetric matrices like our example 0, 1, 1, 1.
There are many other eigenvalue methods out
there, these are not the only three, and MATLAB
even has two of them built in when you call
the eig() function.
As always the code that I used in these examples
will be hosted on GitHub.
Sincerely, thank you for watching.
I hope you found this video on eigenvalues
helpful.
If there are any other topics that you'd like
for me to make videos for please be sure to
suggest them in the comments.
