all right, we had this example coming from the predator/prey population model: the same matrix we've been using, whose columns are [4 1] and [-2 1]. we found the eigenvectors, we called them v and w, let's say [1 1] and [2 1]. actually we might have used [10 10] and [20 10], but it doesn't matter, we can scale eigenvectors, so I'm just going to use [1 1] and [2 1]. and they were eigenvectors, i.e., Av = 2v, so v is an eigenvector with eigenvalue 2, and Aw = 3w, so w is an eigenvector with eigenvalue 3. and we used the eigenvectors to compute what happened when we wanted to multiply repeatedly by A, that is, A^n times p_0, where p_0 is some initial population vector
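a quick numerical check of the setup above, as a sketch in plain Python (the helper name matvec is mine, not from the video):

```python
# A has columns [4, 1] and [-2, 1], i.e. A = [[4, -2], [1, 1]].
A = [[4, -2],
     [1, 1]]

def matvec(M, x):
    """Multiply a 2x2 matrix M by a 2-vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

v = [1, 1]   # eigenvector with eigenvalue 2
w = [2, 1]   # eigenvector with eigenvalue 3

print(matvec(A, v))  # [2, 2]  = 2v
print(matvec(A, w))  # [6, 3]  = 3w
```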
right, and we sort of said, okay, because A acts nicely on v and w, we could write p_0 in terms of v and w, as a linear combination of v and w. we actually had a p_0 that was v + w, and then we used the fact that multiplying by A just doubles v and just triples w to figure out how doing that repeatedly would affect p_0. so that was the predator-prey video, but I want to talk about something else: what if we wanted to compute just A^n directly? that seems like something we should be able to do, right, as an actual matrix. there we started with an initial vector and then computed what happened when we multiplied by A repeatedly, but A^n is some two-by-two matrix, and what if we actually wanted to compute that? well, here's the idea, and it's going to generalize what we did in that predator-prey video, so let's
generalize that. the idea was that v and w form a basis for R^2, for the plane; they point in different directions, so this is a basis. it's actually called an eigenbasis: if you have a basis made of eigenvectors, it's called an eigenbasis. and that means we can write any vector in the plane as a linear combination of v and w. so given any x, let's write it as (x_1, x_2), we can rewrite it as c_1 v + c_2 w for some choice of c_1 and c_2. just as an example, if I took something like (7, 10), I claim that's the same thing as 13v - 3w, because if you actually write it out you can check that this works. and we can do this for any vector: 7 and 10 were arbitrarily chosen, and I can always write a vector as a linear combination of v and w. so in general we have this equation right here
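the claim that (7, 10) = 13v - 3w is easy to check numerically; here's a minimal plain-Python sketch (variable names are mine):

```python
v = [1, 1]
w = [2, 1]
c1, c2 = 13, -3

x = [c1*v[0] + c2*w[0],
     c1*v[1] + c2*w[1]]
print(x)  # [7, 10]
```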
and before I bring A back in, I want to do something that might seem kind of goofy: I want to rewrite this. we've hopefully started to notice more and more that whenever we have a linear combination of vectors, we can rewrite it as a matrix times a vector, where the matrix is the one whose columns are v and w. in fact it's just the matrix with columns [1 1] and [2 1], because this is v right here and this is w. I want to give this matrix a name, so let's call it B: B is the matrix whose columns are [1 1] and [2 1], and B sort of stands for basis, because this is an eigenbasis. it's sometimes called a change of basis matrix. in fact, if we call the vector of coefficients c, then we have x = Bc, and if you think back to the change of basis video that 3Blue1Brown did, x is in standard coordinates (like (7, 10), which we think of as 7i + 10j), but c is in the v, w coordinates, and B is how we change from the v, w coordinates to the standard coordinates. so it's called a change of basis matrix. and this is going to end up being very useful: like I said, the whole goal is to compute powers of that original matrix A, and I claim this is going to be useful because
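the equation x = Bc can be checked on the earlier example; a plain-Python sketch (matvec is my own helper name):

```python
B = [[1, 2],   # columns are v = [1, 1] and w = [2, 1]
     [1, 1]]

def matvec(M, x):
    """Multiply a 2x2 matrix M by a 2-vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

c = [13, -3]           # coordinates in the v, w basis
print(matvec(B, c))    # [7, 10]  -- back in standard coordinates
```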
let's think about what happens now if we do A times x. I'll come back to the notational stuff in a minute, but let me just do it sort of naively first. if we rewrite x as c_1 v + c_2 w, we can use linearity, and this is the generalization of what we did in the predator-prey video: distributing A over the addition and pulling it through the scalar multiplication, we get c_1 (Av) + c_2 (Aw), which is c_1 (2v) + c_2 (3w). and I want to do something similar to what I did up here, I want to factor: I can rewrite this as the matrix whose columns are 2v and 3w, times (c_1, c_2). okay, that's sort of interesting. B is the matrix whose columns are v and w; this matrix has columns 2v and 3w. but I have a claim
I claim that this is the same thing as the matrix whose columns are v and w, multiplied on the right by the diagonal matrix whose diagonal entries are 2 and 3, with everything off the diagonal 0. we can check that in general: if you take a matrix [[a, b], [c, d]] and multiply it by [[2, 0], [0, 3]], you can check that you get [[2a, 3b], [2c, 3d]]. so you've doubled the first column and tripled the second column, and that does work
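the general column-scaling fact is easy to check on arbitrary entries; a plain-Python sketch (the values 5, 6, 7, 8 are arbitrary placeholders of mine):

```python
def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b, c, d = 5, 6, 7, 8          # arbitrary entries
M = [[a, b], [c, d]]
D = [[2, 0], [0, 3]]
print(matmul(M, D))              # [[2a, 3b], [2c, 3d]] = [[10, 18], [14, 24]]
```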
so let me go back to this argument: now we have Ax = (the matrix whose columns are v and w) times [[2, 0], [0, 3]] times (c_1, c_2). a couple of things. this is just an algebra argument; I'll give a geometric argument, very similar to what's given in the 3Blue1Brown video, in a minute. that first matrix is B, and I'll call the diagonal one D, for diagonal matrix. what about the vector (c_1, c_2)? we still need to know what c_1 and c_2 are: remember, we're trying to multiply A by a general vector (x_1, x_2). but if you go up here and think about the equation x = Bc, I can rewrite it as c = B⁻¹x, and so that means (c_1, c_2) is B⁻¹ times (x_1, x_2). if we put that all together (there's a lot going on here), we get Ax = B times D times B⁻¹x, so we have BDB⁻¹ times x. and that's kind of cool, because since it holds for every x, it actually implies that A = BDB⁻¹, and this is a nice, interesting construction
notice that D is diagonal with the eigenvalues on the diagonal, the 2 and the 3. we've taken A and factored it in an interesting way. actually, let me write it even more explicitly. B was the matrix with columns [1 1] and [2 1]; you can compute the inverse (I wrote it down for myself) and I got the matrix with columns [-1 1] and [2 -1], which you can check. and D is the diagonal matrix with the eigenvalues on the diagonal. so you can check that our original matrix (and you really should write this out, just to check, at least once) is BDB⁻¹. I gave an argument using matrix-vector algebra for why this should work, but it's a good idea for you to multiply those three matrices on the right out and check that you actually get the matrix on the left, okay
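that check can be done numerically; a plain-Python sketch (helper names are mine):

```python
def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B     = [[1, 2], [1, 1]]     # columns: eigenvectors v, w
D     = [[2, 0], [0, 3]]     # eigenvalues on the diagonal
B_inv = [[-1, 2], [1, -1]]   # inverse of B (det B = -1)

print(matmul(matmul(B, D), B_inv))  # [[4, -2], [1, 1]]  -- the original A
```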
so this is a cool thing, and it's called diagonalization, because we've written A in terms of a diagonal matrix. and it's got this nice structural thing where you multiply by B on one side and B⁻¹ on the other side. there's a name for that in mathematics: multiplying by something and its inverse on either side is called conjugation. we've actually done something like this already, when we computed matrices for projections and reflections: we rotated, projected, and rotated back, and that's like doing a B and an inverse of B on either side of another matrix. anyway, I'll make that connection more explicit in another video, but this is called diagonalization and
it's interesting. I should also mention: D has the eigenvalues on the diagonal, and B is the matrix whose columns are the eigenvectors. so it's not even just the eigenvalues that show up; the eigenvectors show up too, and the order matches: the first column of B was v, the eigenvector with eigenvalue 2, which was the first diagonal entry, so that column goes with that number, and the second column goes with the second diagonal entry. so we've taken our matrix and rewritten it fully in terms of just the eigenvectors and eigenvalues, which is really cool, and you get this diagonal matrix in the middle. so this is kind of an exciting thing to do, well, maybe it's only exciting to some of us, but it's interesting that we sort of "factored" A, in quotes, using the eigen-stuff. okay, now I want to see how this is
going to pay off. remember, we wanted to compute A^n, powers of A. well, let's think about A² for a second. we know A = BDB⁻¹, so what's A²? that's (BDB⁻¹)(BDB⁻¹). do you see something interesting there? you should be excited right now: we can drop the parentheses, and the B⁻¹ and B in the middle cancel each other out, so this actually equals BD²B⁻¹. that's interesting. if we do it again, A³ is A² times A, and we know A² is BD²B⁻¹ and A is BDB⁻¹, and it happens again: that's BD³B⁻¹. and you can see this is going to work in general: A^n is BDB⁻¹ times BDB⁻¹, dot dot dot, n times, and all of the middle B⁻¹B pairs cancel, so this ends up equaling BDⁿB⁻¹. and this is actually astonishingly easy to compute. in fact we can just do this in general here: if D is [[2, 0], [0, 3]], then what's Dⁿ? you can check for yourself, it's not that hard to see, it's actually [[2ⁿ, 0], [0, 3ⁿ]]: you're just multiplying the numbers that are in the same place by each other, and that's all that ends up being multiplied. so that
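here's the A^n = B Dⁿ B⁻¹ recipe as a plain-Python sketch (the function name A_power is mine):

```python
def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B     = [[1, 2], [1, 1]]
B_inv = [[-1, 2], [1, -1]]

def A_power(n):
    Dn = [[2**n, 0], [0, 3**n]]          # D^n: just raise the diagonal entries to the n
    return matmul(matmul(B, Dn), B_inv)

print(A_power(1))  # [[4, -2], [1, 1]]   -- A itself
print(A_power(2))  # [[14, -10], [5, -1]] -- matches A times A
```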
means that if we wanted to take the matrix with columns [4 1] and [-2 1] and raise it to the nth power, we can just do this diagonalization procedure: write down B, then Dⁿ, then B⁻¹ (let me just double check what the inverse was: the matrix with columns [-1 1] and [2 -1]), multiply this out, and you get a direct formula. it's not the nicest formula in the world, but it's a direct formula for the power. so let's actually do it. multiplying the first two matrices first, BDⁿ is the matrix with columns [2ⁿ 2ⁿ] and [2·3ⁿ 3ⁿ]. we take that and multiply by B⁻¹, and it starts to look a little ugly, but whatever: the first column has entries -2ⁿ + 2·3ⁿ and -2ⁿ + 3ⁿ, and the second column has 2·2ⁿ - 2·3ⁿ, which I guess is 2ⁿ⁺¹ - 2·3ⁿ, and 2ⁿ⁺¹ - 3ⁿ. and that actually works: if A is the matrix with columns [4 1] and [-2 1], then we have found a formula for A^n
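the closed-form entries can be cross-checked against brute-force repeated multiplication; a plain-Python sketch (helper names are mine):

```python
def matmul(M, N):
    """2x2 matrix product."""
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def A_power_formula(n):
    """The closed-form entries of A^n from the diagonalization."""
    return [[-2**n + 2*3**n, 2**(n+1) - 2*3**n],
            [-2**n + 3**n,   2**(n+1) - 3**n]]

# brute force: multiply A by itself to get A^5
A = [[4, -2], [1, 1]]
P = A
for _ in range(4):
    P = matmul(P, A)

print(P == A_power_formula(5))  # True
```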
just using the eigenvectors and eigenvalues. we already saw how having the eigenvectors and eigenvalues was useful for iterating a discrete dynamical system, but this actually works even at the level of the matrix itself: we can compute powers of the matrix. and it all came down to this formula right here, that we were able to rewrite the matrix in terms of a diagonal matrix, and that made it easy to compute powers. so that's what's called diagonalization, and it sort of falls out naturally from the algebra. maybe one
last thing I'll say, and I want to write this as many times as I can: A = B times D times B⁻¹. well, B, we said all the way back up here that B is a change of basis matrix: it translates from the v, w coordinates to the standard coordinates. so let me go down and write that. B is a way to translate from v, w coordinates to the standard i, j coordinates, the way we normally write vectors. B⁻¹ then goes backwards: it takes standard coordinates and converts to the v, w coordinates. in fact you can see that in the example all the way back here
where was that example I did? yeah, in this example right here, I started with (7, 10) in standard coordinates and converted to the (13, -3), which were the v, w coordinates, and I claim I can get that just by multiplying by B⁻¹. so for example, if we take the inverse, which again was this matrix, and we multiply by (7, 10) (be careful here: (7, 10) is the standard coordinates, so this is B⁻¹ times x in standard coordinates), if we actually multiply this out, we get -7 + 20, which is 13, and 7 - 10, which is -3. that's (13, -3), which was in the v, w coordinates, so you can actually see how the conversion happens
so now think back up here to what this means. remember that when we multiply matrices, we read them right to left when we think of them as transformations. in other words, A is some sort of transformation of the plane, and we've seen these in the 3Blue1Brown videos, which do a really good job of showing this. so if we take the transformation viewpoint, we go right to left, and to see how A works as a transformation: first we switch from standard coordinates to the v, w coordinates, the eigenbasis coordinates. then, in the v, w coordinates, the transformation is really easy, because it just doubles in the direction of v and triples in the direction of w. so in the v, w coordinates the transformation really is this diagonal matrix, because all you're doing, thinking of everything in terms of v and w coordinates, is doubling in the v direction and tripling in the w direction, so the transformation is diagonal (again, the 3Blue1Brown videos have much nicer pictures of this). so the first step is a switch-basis step, the second step actually performs the transformation, and then we switch back to standard coordinates. and that's really,
if you think about what's happening, the first step is given by multiplying by B⁻¹, the second step is given by multiplying the result by D, and then the last step is given by multiplying by B. so in other words, what is A really? reading right to left: you write the B⁻¹, which switches from standard coordinates to v, w coordinates, multiply by D, which is the transformation in the v, w coordinates, and then multiply by B, which switches back. so it gives you a geometric reason why this diagonalization formula works. okay, I will stop there
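the three-step, right-to-left reading of Ax = B D B⁻¹ x can be traced on a sample vector; a plain-Python sketch (helper and variable names are mine):

```python
def matvec(M, x):
    """Multiply a 2x2 matrix M by a 2-vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

A     = [[4, -2], [1, 1]]
B     = [[1, 2], [1, 1]]
D     = [[2, 0], [0, 3]]
B_inv = [[-1, 2], [1, -1]]

x  = [7, 10]
c  = matvec(B_inv, x)   # step 1: switch to v, w coordinates -> [13, -3]
dc = matvec(D, c)       # step 2: double the v part, triple the w part -> [26, -9]
y  = matvec(B, dc)      # step 3: switch back to standard coordinates

print(y, matvec(A, x))  # the two agree: [8, 17] [8, 17]
```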