OK.
Shall we start?
This is the second
lecture on eigenvalues.
So the first lecture reached the key equation, Ax equals lambda x. x is the eigenvector and lambda is the eigenvalue.
Now, how do we use that? Job one is to find the eigenvalues and find the eigenvectors. After we've found them, what do we do with them?
Well, the good way to see that is to diagonalize the matrix. So the matrix is A. And I want to show you, first of all, the basic fact, this formula. That's the key to today's lecture.
This matrix A, I
put its eigenvectors
in the columns of a matrix S.
So S will be the
eigenvector matrix.
And I want to look at this magic combination S inverse A S. So can I show you what happens there? And notice, there's an S inverse. We have to be able to invert this eigenvector matrix S. So for that, we need n independent eigenvectors. That's the case I'm assuming.
OK.
So suppose we have n linearly
independent eigenvectors
of A. Put them in the
columns of this matrix S.
So I'm naturally going to call
that the eigenvector matrix,
because it's got the
eigenvectors in its columns.
And all I want to do is
show you what happens
when you multiply A times S.
So A times S.
So this is A times the matrix
with the first eigenvector
in its first column,
the second eigenvector
in its second column, the n-th
eigenvector in its n-th column.
And how am I going to do this matrix multiplication? Well, certainly I'll do it a column at a time. And what do I get?
A times the first column
gives me the first column
of the answer, but what is it?
That's an eigenvector. A times x1 is equal to lambda times x1, and that lambda we'll call lambda one, of course. So that's the first column: Ax1 is the same as lambda one x1. Ax2 is lambda two x2, and so on, along to the n-th column, where we now have lambda n xn.
Looking good, but
the next step is even
better.
So for the next step, I want to separate out those eigenvalues, those multiplying numbers, from the x's. Then I'll have just what I want. OK. So how am I going to separate them out?
That number lambda one is multiplying the first column. So if I want to factor it out of the first column, I better put x1 here, and that's going to multiply a matrix with lambda one in the first entry of its first column and zeros below. Do you see that that's going to come out right for the first column? Because we remember the original punchline: if I want a number to multiply x1, I can do it by putting x1 in the first column and putting that number there.
What am I going to have here? I'm going to have x1, x2, ..., xn. These are going to be my columns again -- I'm getting S back again. But now what's it multiplied by on the right?
If I want lambda n xn in the
last column, how do I do it?
Well, I'll take the last column, use these coefficients, put the lambda n down there, and it will multiply that n-th column and give me lambda n xn. There you see matrix multiplication just working for us.
So I started with AS. I wrote down what it meant, A times each eigenvector. That gave me lambda times the eigenvector. And then when I peeled off the lambdas, they were on the right-hand side, so I've got S, my matrix, back again. And this diagonal matrix is the eigenvalue matrix, and I call it capital Lambda -- using a capital letter for the matrix, and lambda to remind me that it's eigenvalues that are in there.
So you see that the
eigenvalues are just
sitting down that diagonal?
If I had a column x2 here,
I would want the lambda two
in the two two position,
in the diagonal position,
to multiply that x2 and
give me the lambda two x2.
That's my formula: AS equals S Lambda.
OK.
You see, it's just a calculation.
Now, I mentioned, and I have to mention again, this business about n independent eigenvectors. As it stands, this is all fine -- I could even be repeating the same eigenvector, but I'm not interested in that. I want to be able to invert S, and that's where this comes in. This n independent eigenvectors business comes in to tell me that that matrix is invertible.
So let me, on the next board, write down what I've got: AS equals S Lambda. And now I can multiply on the left by S inverse. I can do that provided S is invertible -- provided my assumption of n independent eigenvectors is satisfied.
And I mentioned at the end of last time, and I'll say again, that there's a small number of matrices that don't have n independent eigenvectors. So I've got to discuss that technical point. But most matrices that we see have n independent eigenvectors, and we can diagonalize.
This is diagonalization.
I could also write it, and I often will, the other way round. If I took this equation at the top and multiplied on the right by S inverse, I would have A left here.
Now S inverse is
coming from the right.
So can you keep those two straight? A multiplies its eigenvectors -- that's how I keep them straight. So A multiplies S, and then this S inverse makes the whole thing diagonal. And this is another way of saying the same thing, putting the S's on the other side of the equation: A is S Lambda S inverse.
So that's the new factorization. That's the replacement for LU from elimination, or QR from Gram-Schmidt. And notice the form: it's a matrix times a diagonal matrix times the inverse of the first one.
That's the combination we'll see throughout this chapter: an S and an S inverse.
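Here's a small numerical sketch of that factorization -- the matrix A below is just an assumed example, not one from the lecture:

```python
import numpy as np

# Assumed example matrix with distinct eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

w, S = np.linalg.eig(A)        # columns of S are the eigenvectors
Lam = np.diag(w)               # capital Lambda: eigenvalues on the diagonal

print(np.linalg.inv(S) @ A @ S)    # ~ Lambda: S^{-1} A S is diagonal
print(S @ Lam @ np.linalg.inv(S))  # ~ A: the factorization A = S Lambda S^{-1}
```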
OK.
Can I just begin to use that? For example, what about A squared? What are the eigenvalues and eigenvectors of A squared? That's a straightforward question with an absolutely clean answer. So let me consider A squared.
I start with Ax equals lambda x, and I'm headed for A squared. So let me multiply both sides by A. That's one way to get A squared on the left. I should write these ifs in here: if Ax equals lambda x, then I multiply by A, so I get A squared x equals lambda Ax. That lambda is a number, so I just put it on the left.
Now tell me how to make that look better. If A has the eigenvalue lambda and eigenvector x, what's up with A squared? In A squared x, for the Ax I'm going to substitute lambda x. So I've got lambda squared x.
So from that simple calculation, my conclusion is that the eigenvalues of A squared are lambda squared. And the eigenvectors -- I always think about both of those.
What can I say about
the eigenvalues?
They're squared.
What can I say about
the eigenvectors?
They're the same.
The same x as for A.
Now let me see that also from this formula. How can I see what A squared looks like from this formula? That was one way to do it; let me do it by just squaring A. A squared is S Lambda S inverse -- that's A -- times S Lambda S inverse -- that's A again -- which is?
This is the beauty of eigenvalues and eigenvectors. That S inverse times S in the middle is the identity, so I've got S Lambda squared S inverse. Do you see what that's telling me? It's telling me the same thing that I just learned here, but in matrix form. It's telling me that the S is the same -- the eigenvectors are the same -- but the eigenvalues are squared.
Because what's Lambda squared? That's still diagonal. It's got lambda one squared, lambda two squared, down to lambda n squared on that diagonal.
Those are the eigenvalues, as
we just learned, of A squared.
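A quick check of that fact, using the same assumed example matrix as before:

```python
import numpy as np

# Eigenvalues of A^2 are the squares; the eigenvectors don't change.
A = np.array([[4.0, 1.0], [2.0, 3.0]])   # assumed example, eigenvalues 5 and 2
w, _ = np.linalg.eig(A)
w2, _ = np.linalg.eig(A @ A)
print(np.sort(w**2), np.sort(w2))        # both ~ [4, 25]
```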
OK.
So somehow those eigenvalues and eigenvectors are really giving you a way to see what's going on inside a matrix. Of course I can continue that to the K-th power, A to the K-th power.
If I have K of these together, do you see how S inverse times S will keep canceling on the inside? I'll have the S outside at the far left, Lambda will be in there K times, and S inverse at the far right.
So what's that telling me? That's telling me that the eigenvalues of A to the K-th power are the K-th powers. The eigenvalues of A cubed are the cubes of the eigenvalues of A. And the eigenvectors are the same.
OK.
In other words, eigenvalues
and eigenvectors
give a great way to understand
the powers of a matrix.
If I take the
square of a matrix,
or the hundredth
power of a matrix,
the pivots are all
over the place.
With LU, if I multiply LU times LU times LU a hundred times, I've got a hundred LUs. I can't do anything with them.
But when I multiply S Lambda S inverse by itself -- when I look at the eigenvector picture a hundred times -- I get ninety-nine of these inner pairs canceling out, and I get A to the hundredth is S Lambda to the hundredth S inverse.
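As a sketch, here's that high power computed through the factorization and compared with direct multiplication (same assumed example matrix):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # assumed example
w, S = np.linalg.eig(A)

# A^100 = S Lambda^100 S^{-1}: only the diagonal entries get powered.
A100 = S @ np.diag(w**100) @ np.linalg.inv(S)
print(np.allclose(A100, np.linalg.matrix_power(A, 100)))   # True
```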
I mean, eigenvalues
tell you about powers
of a matrix in a way that we had
no way to approach previously.
For example, when do the powers of a matrix go to zero? I would call that matrix stable, maybe.
So I could write down a theorem.
I'll write it as a theorem
just to use that word
to emphasize that here I'm
getting this great fact
from this eigenvalue picture.
OK.
A to the K approaches zero as K gets bigger if -- what? How can I tell, for a matrix A, if its powers go to zero? Somewhere inside that matrix is that information. That information is not present in the pivots. It's present in the eigenvalues.
What do I need to know that if I take higher and higher powers of A, this matrix gets smaller and smaller? Well, S and S inverse are not moving. So it's this Lambda to the K that has to get small. And that's easy to understand.
So what is the requirement? The eigenvalues have to be less than one. Now I have to write that with an absolute value, because those eigenvalues could be negative, they could be complex numbers. So I'm taking the absolute value: all eigenvalues below one in absolute value. And in fact we can practically see why.
And let me just say that I'm operating on one assumption here, and I have to keep remembering that that assumption is still present. That assumption was that I had a full set of n independent eigenvectors. If I don't have that, then this approach doesn't work.
So again, a pure eigenvalue and eigenvector approach needs n independent eigenvectors.
If we don't have n
independent eigenvectors,
we can't diagonalize the matrix.
We can't get to a
diagonal matrix.
This diagonalization
is only possible
if S inverse makes sense.
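Here's a minimal sketch of that stability test, with two assumed example matrices:

```python
import numpy as np

# Powers of A go to zero exactly when every eigenvalue has |lambda| < 1.
stable   = np.array([[0.5, 0.4], [0.1, 0.3]])   # eigenvalues ~ 0.62 and 0.18
unstable = np.array([[1.1, 0.0], [0.0, 0.2]])   # eigenvalue 1.1 > 1

for A in (stable, unstable):
    radius = max(abs(np.linalg.eigvals(A)))     # largest |lambda|
    Ak = np.linalg.matrix_power(A, 200)
    print(radius < 1, np.max(np.abs(Ak)))       # tiny exactly when radius < 1
```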
OK.
Can I follow up on that point now? You see what we get and why we want it: we get information about the powers of a matrix immediately from the eigenvalues.
OK.
Now let me follow up on this business of which matrices are diagonalizable. Sorry about that long word.
So here's the main point. A is sure to have n independent eigenvectors -- and, here comes that word, to be diagonalizable -- if all the lambdas are different. We might as well get the nice case out in the open: that means no repeated eigenvalues.
OK.
That's the nice case.
If I do a random matrix in Matlab and compute its eigenvalues -- if I gave the Matlab command eig(rand(10,10)) -- we'd get a random ten by ten matrix and a list of its ten eigenvalues, and they would be different. Distinct is the best word. A random ten by ten matrix will have ten distinct eigenvalues.
And if it does, the eigenvectors
are automatically independent.
So that's a nice fact.
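A Python analogue of that MATLAB experiment, as a sketch:

```python
import numpy as np

# Like eig(rand(10,10)): a random 10 by 10 matrix almost surely has
# ten distinct eigenvalues (some show up as complex conjugate pairs).
A = np.random.rand(10, 10)
w = np.linalg.eigvals(A)
print(w)
print(len(set(np.round(w, 10))) == 10)   # all distinct, up to rounding
```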
I'll refer you to the text for the proof that A is sure to have n independent eigenvectors if all the eigenvalues are different.
It's just if some lambdas are repeated that I have to look more closely. If an eigenvalue is repeated, I have to count, I have to check: say it's repeated three times -- has it got three independent eigenvectors? So here is the repeated possibility.
And let me emphasize the conclusion: if I have repeated eigenvalues, I may or may not have n independent eigenvectors. I might. This isn't a completely negative case.
The identity matrix -- suppose I take the ten by ten identity matrix, the easiest matrix there is. What are its eigenvalues?
If I look for its
eigenvalues, they're all ones.
So that eigenvalue one
is repeated ten times.
But there's no shortage of
eigenvectors for the identity
matrix.
In fact, every vector
is an eigenvector.
So I can take ten
independent vectors.
Well, what happens then? If A is the identity matrix, let's just think that one through in our heads. It's got plenty of eigenvectors. I choose ten independent vectors; they're the columns of S. And what do I get from S inverse A S? I get I again, right?
If A is the identity, then of course that's the correct Lambda: the matrix was already diagonal. So if the matrix is already diagonal, then Lambda is the same as the matrix.
A diagonal matrix has
got its eigenvalues
sitting right there
in front of you.
Now if it's triangular, the eigenvalues are still sitting there on the diagonal, but let's take a case where it's triangular. Suppose A is two one, zero two. There's a case that's going to be trouble.
First of all -- if we start with a matrix, the first thing we do, practically without thinking, is compute the eigenvalues and eigenvectors.
OK.
So what are the eigenvalues?
You can tell me right away what they are. They're two and two, right -- it's a triangular matrix. Shall I do this determinant of A minus lambda I? I'll get two minus lambda, one; zero, two minus lambda. I take that determinant -- I make those into vertical bars to mean determinant. And what's the determinant? It's (two minus lambda) squared. What are the roots? Lambda equals two, twice. So the eigenvalues are lambda equals two and two.
OK, fine.
Now the next step: find the eigenvectors. So I look for eigenvectors, and what do I find? When I subtract two times the identity, A minus two I has zeros on the diagonal. And I'm looking for the null space, because the eigenvectors are the null space of A minus lambda I. That null space is only one dimensional. This is a case where I don't have enough eigenvectors.
My algebraic multiplicity is two. When I count how often the eigenvalue is repeated -- how many times it's a root of the polynomial -- that's the algebraic multiplicity. My polynomial is (two minus lambda) squared. It's a double root. So my algebraic multiplicity is two.
But the geometric multiplicity, which counts eigenvectors -- which means the dimension of the null space of this thing -- is only one. The only eigenvector is (one, zero). That's in the null space; (zero, one) is not. The null space is only one dimensional.
So there's a matrix -- this A, or the original A -- that is not diagonalizable. I can't find two independent eigenvectors. There's only one.
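Numerically, you can watch the shortage of eigenvectors happen for this example:

```python
import numpy as np

# The troublesome triangular matrix from the board.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
w, S = np.linalg.eig(A)
print(w)                         # [2. 2.]: algebraic multiplicity two
print(S)                         # both columns are (numerically) multiples of (1, 0)
print(np.linalg.matrix_rank(S))  # expect 1: S is singular, so no diagonalization
```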
OK.
So that's a case that I'm not really handling.
For example, when I
wrote down up here
that the powers went to zero if
the eigenvalues were below one,
I didn't really handle that
case of repeated eigenvalues,
because my reasoning was
based on this formula.
And this formula is based on
n independent eigenvectors.
OK.
Just to say, then: there are some matrices that we don't cover through diagonalization, but the great majority we do. OK. And we're always OK if we have distinct eigenvalues. That's the typical case.
Because for each
eigenvalue there's
at least one eigenvector.
The algebraic multiplicity here
is one for every eigenvalue
and the geometric
multiplicity is one.
There's one eigenvector.
And they are independent.
OK.
OK.
Now let me come back to the important case, when we're OK -- when we are diagonalizable. Let me solve this equation: u(k+1) equals A uk. I start with a given vector u0, and then at every step I multiply what I have by A. That equation ought to be simple to handle, and I'd like to be able to solve it.
If I start with a vector u0 and I multiply by A a hundred times, what have I got? Well, I could certainly write down a formula for the answer. u1 is A u0. And what's u2 then? I get u2 from u1 by multiplying by A again, so u2 is A squared u0. And my formula is: uk, after k steps, is A to the k times the original u0.
You see what I'm doing? The next section is going to solve systems of differential equations -- I'm going to have derivatives. This section is the nice one: it solves difference equations. I would call that a difference equation, a first-order system, because it only goes up one level, and it's a system because these are vectors and that's a matrix. And the solution is just that.
OK.
But that's a nice formula -- the most compact formula I could ever get. u100 would be A to the hundredth times u0. But how would I actually find u100? How would I discover what u100 is?
Let me show you how. Here's the idea. To really solve it, I would take this initial vector u0 and write it as a combination of eigenvectors: a certain amount c1 of the first eigenvector, plus a certain amount c2 of the second eigenvector, plus a certain amount cn of the last eigenvector.
Now multiply by A. You've got to see the magic of eigenvectors working here. So A u0 is what? A times that combination. I can separate it out into n separate pieces, and that's the whole point: each of those pieces is going its own merry way. Each of those pieces is an eigenvector, and when I multiply by A, what does each piece become?
Let's suppose the eigenvectors are normalized to be unit vectors -- that pins down what each eigenvector is -- and then I need some multiple of each one to produce u0.
OK.
Now when I multiply
by A, what do I get?
I get c1, which is just
a factor, times Ax1,
but Ax1 is lambda one x1.
When I multiply this by
A, I get c2 lambda two x2.
And here I get cn lambda n xn.
And suppose I multiply by A to the hundredth power now. Having multiplied by A once, let's multiply by A to the hundredth. What happens to this first term?
It's got that factor lambda one to the hundredth. That's the key. That's what I mean by going its own merry way: it's pure eigenvector, exactly in a direction where multiplication by A just brings in a scalar factor, lambda one. So multiplying a hundred times brings in lambda one to the hundredth -- and lambda two to the hundredth, lambda n to the hundredth in the other terms.
Actually, what are we seeing here? We're seeing this same capital Lambda to the hundredth as in the diagonalization, and we're seeing the matrix S of eigenvectors. That's what this has got to amount to: S times Lambda to the hundredth power times this vector c that's telling us how much of each eigenvector is in the original thing.
So if I had to really find the hundredth power, I would take u0 and expand it as a combination of eigenvectors -- this is really S, the eigenvector matrix, times c, the coefficient vector. And then, by inserting these hundredth powers of the eigenvalues, I'd have the answer.
So if u100 is A to the hundredth times u0, and u0 is Sc, then you see this formula is just this formula: u100 equals S Lambda to the hundredth times c.
The way I would actually get hold of that -- see what the solution is after a hundred steps -- would be to expand the initial vector into eigenvectors and let each eigenvector go its own way, multiplying by lambda at every step, and therefore by lambda to the hundredth power after a hundred steps.
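A sketch of that recipe for an assumed example system, checked against direct powering:

```python
import numpy as np

# Difference equation u_{k+1} = A u_k, solved by eigenvector expansion.
A = np.array([[0.9, 0.2], [0.1, 0.8]])   # assumed example, eigenvalues 1.0 and 0.7
u0 = np.array([1.0, 0.0])

w, S = np.linalg.eig(A)
c = np.linalg.solve(S, u0)     # expand u0 = S c in eigenvectors

k = 100
uk = S @ (w**k * c)            # each eigenvector goes its own way: u_k = S Lambda^k c
print(np.allclose(uk, np.linalg.matrix_power(A, k) @ u0))   # True
```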
Can I do an example?
So that's the formulas.
Now let me take an example.
I'll use the Fibonacci
sequence as an example.
So, Fibonacci example.
You remember the Fibonacci numbers? I think I start with zero: let zero and one be the first ones. So there's F0 and F1, the first two Fibonacci numbers.
Then what's the rule
for Fibonacci numbers?
Ah, they're the sum.
The next one is the sum
of those, so it's one.
The next one is the sum
of those, so it's two.
The next one is the sum
of those, so it's three.
Well, it looks like one, two, three, four, five so far, but it's not going to continue that way.
The next one is five, right.
Two and three makes five.
The next one is eight.
The next one is thirteen.
And the one hundredth
Fibonacci number is what?
That's my question.
How could I get a formula
for the hundredth number?
And, for example, how could
I answer the question,
how fast are they growing?
How fast are those
Fibonacci numbers growing?
They're certainly growing.
It's not a stable case.
Whatever the eigenvalues
of whatever matrix it is,
they're not smaller than one.
These numbers are growing.
But how fast are they growing? The answer lies in the eigenvalues. So I've got to find the matrix. Let me write down the Fibonacci rule: F(k+2) = F(k+1) + F(k). Now that's not in my matrix form -- I want to write that as u(k+1) = A uk.
But right now what I've got is a single equation, not a system, and it's second-order. It's like having a second-order differential equation with second derivatives; I want first derivatives. Here I want to get first differences. So the way to do it is a small trick: let uk be a vector, F(k+1) and F(k). I'm going to get a two by two first-order system instead of a scalar second-order equation, by a simple trick.
I'm just going to add in the equation F(k+1) = F(k+1). That will be my second equation. Then this is my system, this is my unknown, and what's my one-step equation? u(k+1) is the left side, and what have I got on the right side? I've got some matrix multiplying uk.
Can you see that all right? If you can see it, then you can tell me what the matrix is. I artificially made it into a system; I artificially made the unknown into a vector. And now I'm ready to see what the matrix is.
So do you see the left side? u(k+1) is F(k+2), F(k+1) -- that's just what I want. On the right side, this uk here -- let me for the moment write it as F(k+1), F(k). So what's the matrix? Well, the first row is one one, and the second row is one zero. There's the matrix. Do you see that that gives me the right-hand side? So there's the matrix A, and this is our friend uk.
So that simple trick changed the second-order scalar problem to a first-order system with two unknowns -- with a matrix.
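Here's the trick as a tiny sketch -- iterating the system reproduces the Fibonacci numbers:

```python
import numpy as np

# u_k = (F_{k+1}, F_k), and u_{k+1} = A u_k with the matrix from the board.
A = np.array([[1, 1],
              [1, 0]], dtype=object)   # object dtype keeps exact integers
u = np.array([1, 0], dtype=object)     # u_0 = (F_1, F_0)

for _ in range(10):
    u = A @ u
print(u)   # [89 55] = (F_11, F_10)
```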
And now what do I do? Well, before I even think, I find its eigenvalues and eigenvectors. So what are the eigenvalues and eigenvectors of that matrix? First let me just think for a minute. It's two by two, so this shouldn't be impossible to do. Let's do it.
OK.
So my matrix, again, is one one, one zero. It's symmetric, by the way. So what I will eventually know about symmetric matrices is that the eigenvalues will come out real -- I won't get any complex numbers here -- and the eigenvectors, once I get those, will actually be orthogonal. But at two by two, I'm more interested in what the actual numbers are.
What do I know about the two numbers? Well, do you want me to find the determinant of A minus lambda I? Sure. So it's the determinant of one minus lambda, one; one, minus lambda. OK.
There'll be two eigenvalues. Tell me what I know about them before I go any further. What do they add up to? Lambda one plus lambda two is the same as the trace down the diagonal of the matrix: one plus zero is one. So lambda one plus lambda two should come out to be one.
And lambda one times lambda two should come out to be the determinant, which is minus one. So I'm expecting the eigenvalues to add to one and to multiply to minus one.
But let's just see it happen here. If I multiply this out, that times that will be lambda squared minus lambda minus one. Good: lambda squared minus lambda minus one.
Actually, compare that with the original equation that I started with: F(k+2) - F(k+1) - F(k) = 0. The recursion that the Fibonacci numbers satisfy is showing up directly here for the eigenvalues when we set that to zero. OK.
Let's solve.
Well, I would like to be able to factor that quadratic, but I'm better off using the quadratic formula. Lambda is minus b, which is one, plus or minus the square root of b squared minus four a c -- that's one plus four -- all over two. So that square root is the square root of five.
So the eigenvalues are
lambda one is one half of one
plus square root of five, and
lambda two is one half of one
minus square root of five.
And sure enough, those add up to one and they multiply to give minus one.
OK.
Those are the two eigenvalues. What are those numbers, approximately? The square root of five is more than two but less than three. So lambda one comes out bigger than one, right? It's about one point six one eight. Then what's lambda two?
Is lambda two positive or negative? Negative, right -- it's obviously negative. And they add up to one, so it's minus point six one eight. OK. Those are the two eigenvalues: one eigenvalue bigger than one, one eigenvalue smaller than one in absolute value.
Actually, that's a great situation to be in. Of course, the eigenvalues are different, so there's no doubt whatever: is this matrix diagonalizable, that original matrix A? Sure. We've got two distinct eigenvalues, and we can find the eigenvectors in a moment. They'll be independent, so we're diagonalizable.
And now you can already answer my very first question: how fast are those Fibonacci numbers increasing? They're increasing, right? But they're not doubling at every step. Let's look again at these numbers: five, eight, thirteen -- it's not obvious. The next ones would be twenty-one, thirty-four.
So to get some idea of what F one hundred is -- what's controlling the growth of these Fibonacci numbers? It's the eigenvalues. And which eigenvalue is controlling that growth? The big one. So F100 will be approximately some constant -- c1, I guess -- times lambda one, this one plus square root of five over two, to the hundredth power.
In other words, the Fibonacci numbers are growing by about that factor at every step. Do you see that we've got precise information about the Fibonacci numbers out of the eigenvalues?
OK.
And again, why is that true? Let me go over to this board and show what I'm doing here. The original initial value is some combination of eigenvectors. And then when we start going out the series of Fibonacci numbers -- when we start multiplying by A a hundred times -- it's this lambda one to the hundredth. This term is the one that's taking over. That's big, like one point six to the hundredth power.
The second term is practically nothing, right? Point six -- or minus point six -- to the hundredth power is an extremely small number. And there are only two terms, because we're two by two. This piece is there, but it's disappearing, while this piece is there and it's growing and controlling everything.
So really we're doing problems that are evolving. Instead of Ax = b, which is a static problem, now we're doing dynamics: A, A squared, A cubed -- things are evolving in time. And the eigenvalues are the crucial numbers.
OK.
I guess to complete this, I'd better write down the eigenvectors. So we should complete the whole process by finding the eigenvectors.
OK, well, up in the corner, then, I have to look at A minus lambda I. So A minus lambda I is one minus lambda, one; one, minus lambda.
And now can we spot an eigenvector out of that? For these two lambdas, this matrix is singular. Two by two ought to be easy. If I know that this matrix is singular, then it seems to me the eigenvector has to be (lambda, one), because then the second row times that vector gives me zero.
And the first row times that vector had better also give me zero. Do you see why it does? It's minus lambda squared plus lambda plus one -- the thing that's zero, because these lambdas are special. There's the eigenvector: x1 is (lambda one, one), and x2 is (lambda two, one).
I did that as a little trick
that was available in the two
by two case.
So now, to complete this example entirely, I have to take the initial u0. What was u0? u0, the starting vector, is (F1, F0), and those were one and zero.
So I have to use that vector. I have to look for a multiple of the first eigenvector plus a multiple of the second to produce u0, the (one, zero) vector. That will find c1 and c2, and then I'm done.
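For completeness, here's that last step ground out as a sketch (it reproduces the classical Binet formula, with c1 = -c2 = one over root five):

```python
import numpy as np

# Eigenvalues and eigenvectors (lambda, 1) from the board.
lam1 = (1 + np.sqrt(5)) / 2
lam2 = (1 - np.sqrt(5)) / 2
S = np.array([[lam1, lam2],
              [1.0,  1.0]])

c1, c2 = np.linalg.solve(S, [1.0, 0.0])   # u0 = (F1, F0) = (1, 0) = c1 x1 + c2 x2

k = 100
Fk = c1 * lam1**k + c2 * lam2**k          # second component of u_k is F_k
print(Fk)   # ~3.54e20; matches F_100 = 354224848179261915075 to float precision
```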
So instead of grinding out a formula in the last five seconds, let me repeat the idea, because it's the idea that's central. Let me come back to this board, because the ideas are here.
When things are evolving in time by a first-order system, starting from an original u0, the key is to find the eigenvalues and eigenvectors of A. The eigenvalues will already tell you what's happening: is the solution blowing up, is it going to zero, what's it doing?
And then to find an exact formula, you take your u0, write it as a combination of eigenvectors, and follow each eigenvector separately. That's really what the formula for A to the K is doing. So remember: A to the K is S Lambda to the K S inverse.
OK.
That's difference equations. The homework will give some examples, different from Fibonacci, to follow through.
And next time will be
differential equations.
Thanks.
