In this video we are gonna talk about
simultaneous diagonalization of two
operators. So the situation is that
you have two operators on a vector
space v and we are assuming that
both of them are diagonalizable.
So you could find a basis of eigenvectors
of A and that would make handling A
really nice but they may not be
eigenvectors of B or you could find
a basis of eigenvectors of B and that
would be great, but they wouldn't be
eigenvectors of A. And we would like
to have our cake and eat it too.
We would like to ask, does there exist
a basis consisting of eigenvectors
of both A and B? In other words,
if you work in such a basis, call it B,
every basis vector is an eigenvector
of A, say with eigenvalue λ_i.
And it's also an eigenvector of B.
You know maybe a different eigenvalue
but it's the same eigenvector.
And that means that if you work
in the B basis, A winds up being
diagonal and B winds up being
diagonal and that's why we call
it simultaneous diagonalization of
A and B. In this situation we say
A and B are simultaneously diagonalizable.
So for example, let's look at
the matrices 2 1 1 2 and 3 4 4 3.
Now they both have eigenvectors
1 1 and 1 -1. A is our favorite matrix;
we've seen that multiplying it by 1 1
gives you 3 3, and multiplying it by 1 -1
gives you 1 -1. And likewise, 1 1 and 1 -1 are
eigenvectors of B. 1 1 is an
eigenvector with an eigenvalue 7.
And 1 -1 is an eigenvector with
an eigenvalue -1.
If you work in the B basis, you find
that the matrix of A in the B basis
is 3 0 0 1. Matrix of B is 7 0 0 -1.
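As a quick numerical sanity check of this example (the names A, B, and P here are my own, and the code is just a sketch):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, 4.0], [4.0, 3.0]])

# Columns of P are the shared eigenvectors 1 1 and 1 -1.
P = np.array([[1.0, 1.0], [1.0, -1.0]])
P_inv = np.linalg.inv(P)

# In the basis of shared eigenvectors, both matrices become diagonal.
print(P_inv @ A @ P)  # diag(3, 1)
print(P_inv @ B @ P)  # diag(7, -1)
```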
And now we discover something
interesting about products.
You multiply A * B. You get 10 11 11 10.
And if you multiply B * A.
You get the same thing.
And that shouldn't be too surprising,
because their matrices in the B basis
commute: they are both diagonal.
So the matrix of AB, or of BA,
in the B basis is just the product
of those two diagonal matrices, namely 21 0 0 -1.
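You can check the product claims the same way (same hypothetical setup as a moment ago):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, 4.0], [4.0, 3.0]])
P = np.array([[1.0, 1.0], [1.0, -1.0]])  # columns are the shared eigenvectors

print(A @ B)  # [[10, 11], [11, 10]]
print(B @ A)  # the same matrix: A and B commute
# In the eigenbasis, AB is the product of the two diagonal forms:
print(np.linalg.inv(P) @ (A @ B) @ P)  # diag(21, -1)
```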
Now this is a general occurrence.
So here is the rule about
simultaneous diagonalization.
If two matrices are both
diagonalizable, then they are simultaneously
diagonalizable if and only if they
commute.
Now this is a really big theorem.
It won't fit in one video.
We are gonna break it up into pieces.
In this video we will show that
if they are simultaneously diagonalizable,
then they have to commute.
We will also show that if they commute,
and if all the eigenvalues of A have
multiplicity 1, then the eigenvectors of
A are also eigenvectors of B and
we are done. We just take the eigenvectors
of A and we win.
That's actually what happened in 
the example that we did before.
We found the eigenvalues of A: 3 and 1.
The eigenvectors are 1 1, 1 -1.
And boom! They are also eigenvectors
of B.
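"Take the eigenvectors of A and win" is easy to act out in code; here is a sketch (numpy may normalize and order the eigenvectors differently than in the video, so each one is tested against B directly):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, 4.0], [4.0, 3.0]])

# Eigenvalues of A are 3 and 1: distinct, so each has multiplicity 1.
eigvals, eigvecs = np.linalg.eig(A)

for i in range(len(eigvals)):
    v = eigvecs[:, i]
    Bv = B @ v
    j = int(np.argmax(np.abs(v)))  # pick a nonzero entry to read off the ratio
    mu = Bv[j] / v[j]
    # v should be an eigenvector of B as well, with some eigenvalue mu.
    assert np.allclose(Bv, mu * v)
    print(eigvals[i], mu)
```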
And then in the next video, we will
deal with the case where A might have
eigenvalues with higher multiplicity.
That's harder, but we can make it work.
It involves more work, though, and it's best
left for another day.
Let's suppose that things are 
simultaneously diagonalizable.
Then A in the B basis is a diagonal
matrix. B in the B basis is also
a diagonal matrix, with different entries,
but they are both diagonal matrices.
And if you multiply diagonal matrices
in either order, you get the same thing.
But A in the B basis times B in
the B basis is AB in the B basis.
And B in the B basis times A
in the B basis is BA in the B basis.
So AB in the B basis is the same thing
as BA in the B basis.
And that means that AB has to be
BA to begin with.
The only way that two
operators can have the same
matrix in the B basis is if they were
the same operator to begin with.
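The whole argument can be acted out numerically: build two operators that are diagonal in a common basis and watch them commute. (The random matrices here are my own illustration, not from the video.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A random change-of-basis matrix (almost surely invertible)
# and two arbitrary diagonal matrices.
P = rng.standard_normal((4, 4))
D1 = np.diag(rng.standard_normal(4))
D2 = np.diag(rng.standard_normal(4))

# Diagonal matrices commute in either order...
assert np.allclose(D1 @ D2, D2 @ D1)

# ...so the operators they represent commute too.
A = P @ D1 @ np.linalg.inv(P)
B = P @ D2 @ np.linalg.inv(P)
assert np.allclose(A @ B, B @ A)
```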
And that's theorem 1.
If they are simultaneously diagonalizable,
then they commute. So let's prove
the converse in the case that all
the eigenvalues of A have multiplicity 1.
So let's suppose we have an eigenvector
of A. In other words, A times this 
vector gives you λ times the vector.
I claim that if you multiply B times that
vector, you get another eigenvector of A
with the same eigenvalue (assuming Bb isn't
the zero vector; if it is, b is an eigenvector
of B with eigenvalue 0 and we're done anyway).
The way to check that: A(Bb)
is (AB)b, which is (BA)b because AB = BA,
and that is B(Ab). But Ab is just λb, and λ
is a constant, so you pull it out
and get λ(Bb). So A(Bb) is λ(Bb). In other words,
Bb is an eigenvector of A
with eigenvalue λ. But we assumed
that the multiplicity was 1.
So there is only 1 eigenvector
with eigenvalue λ up to scale.
So Bb has to be a multiple of b.
In other words, b is an eigenvector
of big B and I apologize for using
letter b in three different ways.
One's for the basis vector.
One's for the operator and
one's for the basis. But that's
the way it is.
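That chain of equalities, and the conclusion that Bb is a multiple of b, can be spot-checked on the earlier example, with λ = 3 and eigenvector 1 1 (a sketch of mine, not part of the video):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, 4.0], [4.0, 3.0]])

b = np.array([1.0, 1.0])  # eigenvector of A with eigenvalue lam = 3
lam = 3.0
assert np.allclose(A @ b, lam * b)

# A(Bb) = (AB)b = (BA)b = B(Ab) = B(lam b) = lam (Bb)
Bb = B @ b
assert np.allclose(A @ Bb, lam * Bb)

# lam has multiplicity 1, so Bb must be a multiple of b (here, 7 times b).
assert np.allclose(Bb, (Bb[0] / b[0]) * b)
```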
