Ok, last time we discussed this result, and I want to emphasize it: a self-adjoint operator on a finite dimensional inner product space has an eigenvalue. Equivalently, every self-adjoint operator on a finite dimensional inner product space has an eigenvector: saying that an operator has an eigenvalue is the same as saying there exists a vector x ≠ 0 such that, if the operator is T, then T x = λ x. What is important in this result is that it is for a finite dimensional inner product space.
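As a quick numerical illustration (the 2×2 matrix is a made-up example of mine, assuming only numpy):

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # real symmetric, i.e., self-adjoint on R^2
print(np.linalg.eigvals(S))     # [ 1. -1.]: an eigenvalue (two, in fact) exists
```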
For a finite dimensional complex vector space we have already proved that any operator has an eigenvalue. But if it is a real vector space, there are operators which do not have eigenvalues: for example, the rotation matrix has no real eigenvalues unless the rotation is through 0 or 180 degrees. So that result has been proved earlier, and this is what I want to emphasize: for a complex vector space, an operator T having an eigenvalue is a simple application of the fundamental theorem of algebra, which says that the characteristic polynomial has all its roots in the complex numbers; the roots exist, but they may be real or genuinely complex.
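A quick check (the angle π/3 is an arbitrary choice of mine):

```python
import numpy as np

theta = np.pi / 3                          # any angle other than 0 or pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(R))                # e^{±iθ}: genuinely complex, not real
```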
So there is no guarantee that the roots are real; the general result holds only for a complex vector space. This theorem, then, is more a result for a real finite dimensional inner product space than for a complex one; it is more important in the real case, because the complex case has been settled already. There are also one or two comments I need to make. One: suppose you have a complex finite dimensional inner product space and a self-adjoint operator on it.
Now, for a self-adjoint operator you can look at the matrix corresponding to that operator relative to some orthonormal basis, and that matrix is a Hermitian matrix: if T is the operator and A is the matrix of T relative to some orthonormal basis, then A = A*. I am still in a complex finite dimensional inner product space, so the entries of A could all be complex, but this theorem says that the characteristic polynomial has only real coefficients, because it has only real roots. If it has only real roots, the characteristic polynomial can be factorized into linear factors (λ - λ1)(λ - λ2) ⋯ (λ - λn), where each of λ1, λ2, …, λn is real.
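One can see this numerically; here is a sketch with a random Hermitian matrix (size and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                  # Hermitian: A = A*, with genuinely complex entries

coeffs = np.poly(A)                 # characteristic polynomial coefficients
print(np.max(np.abs(coeffs.imag)))  # ~1e-15: the coefficients are (numerically) real
print(np.linalg.eigvalsh(A))        # the roots lambda_1, ..., lambda_n: all real
```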
So A may be a thoroughly complex matrix, but if it is self-adjoint then its characteristic polynomial is real. This is not a trivial observation; it is a consequence of the proof of the previous theorem. And finally, finite dimensionality is important. If the space is not finite dimensional and the operator is self-adjoint, then we need not have eigenvalues, and I will give that example. So I am saying that the result is not true in the case of an infinite dimensional inner product space. Again, the familiar infinite dimensional space for us will be C[0, 1]; this time I will take real-valued functions, they need not be complex-valued.
Take V to be the real-valued continuous functions on [0, 1] with the inner product ⟨f, g⟩ = ∫₀¹ f(t) g(t) dt; this is an inner product space, a real one, so I am not taking a complex conjugate. Let us look at the operator T on V defined by (T f)(t) = t · f(t); this is the multiplication operator, which we have encountered before. T f must be a continuous function, and obviously it is, because it is the product of two continuous functions. So T is well defined as an operator on V; T is linear, which can be verified, and T is also self-adjoint, which is a simple exercise. Now I want to show that this T does not have an eigenvalue. Suppose there exists an f such that T f = λ f. Just look at the definition of T f: it means T f - λ f = 0, which I can write as (t - λ) f(t) = 0 for all t in [0, 1].
If this equation holds for some λ, then it must hold for all t, with λ fixed. For t ≠ λ it forces f(t) = 0; λ is just one number, and that number matters only if λ belongs to [0, 1] at all (if not, f is identically zero immediately). But if a continuous function is zero at all points except possibly one point of [0, 1], what must the value of the function be at that point? Also zero: take either the left limit or the right limit, depending on whether you are to the left of λ or to the right of λ. So it simply follows that f must be identically zero, and f cannot be an eigenvector; here it is an eigenfunction, since it is continuous functions we are seeking, but remember the condition on an eigenvector is x ≠ 0. In T f = λ f, as in T x = λ x with x ≠ 0, the zero function is the only solution, so T does not have an eigenvalue. So T has no eigenvalues.
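As a sanity check on the self-adjointness exercise above, here is a discretized sketch; the assumption is that we approximate the integral inner product by a Riemann sum on a fine grid, and the sample functions are arbitrary choices of mine:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]

def inner(F, G):                    # Riemann-sum approximation of the inner product
    return float(np.dot(F, G) * dt)

f = np.sin(3 * t)                   # two sample continuous functions on [0, 1]
g = np.exp(t)

Tf = t * f                          # (T f)(t) = t * f(t), the multiplication operator
Tg = t * g

print(inner(Tf, g), inner(f, Tg))   # agree up to discretization error: <Tf, g> = <f, Tg>
```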
But we have proved that in a finite dimensional real inner product space, a self-adjoint operator does have eigenvalues. So finite dimensionality is important. The next result is about how, given a subspace invariant under a linear transformation, the orthogonal complement of that subspace behaves. This question comes up for the following reason.
All eigenspaces corresponding to a given eigenvalue are invariant subspaces; we have seen this before. If you are in an inner product space, what more can be said? If W is a subspace invariant under a linear transformation T, then W⊥ will be invariant under T*. This result will prove useful, and we state it only for finite dimensional spaces. So: let T be a linear operator on a finite dimensional inner product space, which I will call V, and let W be a subspace of V invariant under T; for instance, you could take an eigenspace. Then W⊥ is invariant under T*. The proof is really straightforward. All that I want to show is: given T(W) ⊆ W, it follows that T*(W⊥) ⊆ W⊥. That is, W is invariant under T, and W⊥ is invariant under T*.
So let us take y in W⊥ and set x = T* y; this x belongs to the left-hand side, and I must show that it is perpendicular to W, so I will rewrite the claim as x ⊥ W. Take an arbitrary element of W, say u, and consider the inner product of x with u; I must show that this is zero. Here x is taken from the left-hand side: x = T* y with y in W⊥. So look at the inner product of x with u: ⟨x, u⟩ = ⟨T* y, u⟩ = ⟨y, T u⟩, and the proof is essentially through. This T u: u is in W, so T u must be in W, and I can write ⟨y, T u⟩ as ⟨y, u′⟩ where u′ = T u belongs to W. But y has been taken from W⊥, so this is the inner product of a vector in W⊥ with a vector in W, which is zero by definition. So x is perpendicular to every u in W, and x belongs to W⊥.
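In one line, the computation is

$$\langle x, u\rangle = \langle T^{*}y,\, u\rangle = \langle y,\, Tu\rangle = \langle y,\, u'\rangle = 0, \qquad u' = Tu \in W,\; y \in W^{\perp}.$$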
In particular, we will apply this result to the case of a self-adjoint operator: for a self-adjoint operator, if W is invariant under T, then W⊥ is also invariant under T, since T* = T. We will make use of this; that brings us to our next result, which is an important cornerstone.
Let T be a self-adjoint operator on a finite dimensional inner product space V. We have shown that all eigenvalues of T are real. What we want to establish further is this: for a self-adjoint operator on a finite dimensional inner product space, there exists an orthonormal basis for V such that each basis vector is an eigenvector. Remember that we proved the converse of this result already; that is how we started the section.
Is that agreeable? We started with the following assumption: let T be a linear operator on a finite dimensional, real or complex, inner product space, with the property that there exists an orthonormal basis B such that the matrix of T relative to B is a diagonal matrix. Then, in the real case, we have seen that T must be self-adjoint; in the complex case, we have seen that T must be normal, T T* = T* T. In the real case normality is not enough; only self-adjointness works there. So all I am saying is that this is the converse of that result. The question one could ask is: in the complex case, does normality of the transformation T suffice, and in the real case, does self-adjointness of T? I am saying that in the self-adjoint case the answer is yes. Can you see that the matrix of T relative to such a basis must be diagonal? If the theorem holds, there exists a basis for V each of whose vectors is an eigenvector, so the matrix of T relative to that basis is a diagonal matrix; this is the converse of that result.
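In display form: if B = {x1, …, xn} with T xi = λi xi, then the i-th column of the matrix of T relative to B records the coordinates of T xi = λi xi, so

$$[T]_{B} = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_n).$$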
We will take up the complex case a little later, but let me mention presently that the equation analogous to normality in the real case, say A Aᵀ = Aᵀ A, does not necessarily imply that A is diagonalizable. The definition of normality over the complex field is A A* = A* A, and the claim is that if you have a normal complex matrix, then it can be diagonalized; that is the claim I am making now, the converse of the question we started with, and we will see that it is true. Looking only at the real case, remember that normality, with star replaced by transpose, does not suffice. The example is again the rotation operator: a rotation through angle θ satisfies A Aᵀ = Aᵀ A (both equal the identity, in fact), but for θ not equal to 0 or π it does not have real eigenvalues, so there is no question of even asking for eigenvectors.
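A numerical check of both claims (the angle is again an arbitrary choice):

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R @ R.T, R.T @ R))     # True: "normal" in the real sense
print(np.allclose(R @ R.T, np.eye(2)))   # True: in fact R R^T = I
print(np.linalg.eigvals(R))              # the pair e^{±iθ}: no real eigenvalue
```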
So let us prove it. This is the result for both the real and the complex case; I have not mentioned anything about the underlying field. You have a self-adjoint operator; then it is diagonalizable by means of a unitary matrix or an orthogonal matrix, depending on whether the space is complex or real. That is what this theorem says. The proof will make use of the two results we proved earlier: for a self-adjoint operator, all eigenvalues are real, and a self-adjoint operator has an eigenvalue. These two results are important, and of course I will also make use of the invariant-subspace result.
The proof is by induction. Let us take the case when the dimension of V is 1. I know that T has an eigenvalue, and so an eigenvector. What I mean is this: in the complex case this of course makes sense directly; in the real case, let us just remember once again that we have shown, for a self-adjoint operator, that there exists a real eigenvalue, which actually means the corresponding eigenvector can be taken to be real. So T has an eigenvalue and an eigenvector: say T x = λ x, where λ is the eigenvalue and x the eigenvector. In this case set x1 = x/‖x‖; x is not zero, so the norm is not zero. Then just look at the basis B consisting of this vector alone. V is one dimensional, so this is a basis for V, and the vector is an eigenvector by construction; the first step of the induction principle is satisfied.
So let us assume that the result is true for all finite dimensional inner product spaces of dimension less than n; that is, whenever there is a self-adjoint operator on a finite dimensional inner product space of dimension less than the dimension of V, there is an orthonormal basis each of whose vectors is an eigenvector.
Now look at dimension n. The construction above can be done in any case: T has an eigenvalue, a real eigenvalue, and in the real case x is a real eigenvector. What I will do is take W to be the subspace spanned by the vector x1. Since x1 is an eigenvector, obviously T(W) ⊆ W, so W is invariant under T, and by the previous theorem T*(W⊥) ⊆ W⊥. But T* = T, since T is a self-adjoint operator, so T(W⊥) ⊆ W⊥. The dimension of W⊥ is one less than the dimension of V: remember that V = W ⊕ W⊥ for a finite dimensional inner product space, and the dimension of W is one, so dim W⊥ = dim V - 1. So now I will define an operator U on W⊥ using the operator T.
Let U : W⊥ → W⊥; for U to be a linear operator, the domain and codomain must be the same space. Let U be defined as T restricted to W⊥; the restriction of T to W⊥ is my operator U. Remember what you need to verify here: when you look at U as T restricted to W⊥, you are restricting your attention in the domain, the domain is W⊥, but what is the guarantee that the codomain is W⊥? I am claiming U is an operator from W⊥ to W⊥, and that comes from the invariance: T(W⊥) ⊆ W⊥ tells you that T takes an element x of W⊥ into W⊥ again, it won't go into W, so U is well defined as an operator on W⊥. Moreover, T self-adjoint implies U self-adjoint; T = T* implies U = U*. I am going to leave that as an easy exercise; you again have to use the fact that V = W ⊕ W⊥, that is all.
So U is a self-adjoint operator on a finite dimensional inner product space W⊥ whose dimension is less than the dimension of V, so the induction hypothesis applies (this is the strong form of the induction principle that I am using): corresponding to this U there is an orthonormal basis. So I am sure you will agree when I write that there exists an orthonormal basis, which I will call B′ because I have already used B, with elements x2, x3, …, xn; it is an orthonormal basis for W⊥, since it is the space W⊥ that we are concerned with. It has the extra property that U xi = λi xi for 2 ≤ i ≤ n.
The indexing starts at 2 because x1 has been used for the first vector. This is an orthonormal basis, so the vectors are mutually perpendicular and the norm of each is one; and each vector must also be an eigenvector, corresponding to U (not yet to T; U is the operator we are talking about), with U xi = λi xi as i varies from 2 to n. So the natural thing is to ask whether these vectors are also eigenvectors for T. If they are eigenvectors for T, then I am through: there is one eigenvector x1 already, and these are n - 1 more eigenvectors, and the dimensions add up: dimension 1 there, dimension n - 1 here, adding to the dimension of V, so the union will give me an orthonormal basis for V, and the matrix of T with respect to this basis will be a diagonal matrix, each vector of the basis being an eigenvector. And does it follow that each xi is an eigenvector for T as well? It does, essentially by definition: these xi belong to W⊥, and U is T restricted to W⊥, so it follows immediately that T xi = λi xi.
Some of these λi may repeat, but that does not matter to us; what we are interested in is the vectors. Do I have an orthonormal basis? Yes: x1 together with B′ gives an orthonormal basis for V with the desired property; I have repeated this too many times. (The whole inductive construction is sketched in code below.) So the story stops here for the real inner product space, because you must take this theorem along with the rotation operator to conclude that you need self-adjointness in order to conclude that there is an orthonormal basis each of whose vectors is an eigenvector.
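Here is my own illustrative sketch of the inductive construction; the name spectral_basis and the random completion of x1 to a basis are assumptions of the sketch (a generic choice), and I lean on np.linalg.eigh merely to produce the one eigenvector whose existence the theorem guarantees at each step:

```python
import numpy as np

def spectral_basis(A):
    """Orthonormal eigenbasis for a Hermitian matrix A, mirroring the proof:
    take one unit eigenvector x1, pass to W-perp, and recurse."""
    n = A.shape[0]
    if n == 1:                                # base case: dim V = 1
        return np.eye(1, dtype=complex)
    _, v = np.linalg.eigh(A)                  # a self-adjoint operator has an eigenvalue
    x1 = v[:, [0]]                            # one unit eigenvector
    # Complete x1 to an orthonormal basis; columns 2..n then span W-perp.
    Q, _ = np.linalg.qr(np.hstack([x1, np.random.randn(n, n - 1)]))
    W_perp = Q[:, 1:]
    U = W_perp.conj().T @ A @ W_perp          # U = T restricted to W-perp: again Hermitian
    B_prime = spectral_basis(U)               # induction hypothesis on dimension n - 1
    return np.hstack([x1, W_perp @ B_prime])  # x1 together with B' is the basis

A = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])      # a sample Hermitian matrix
P = spectral_basis(A)
print(np.allclose(P.conj().T @ P, np.eye(2)))     # True: the columns are orthonormal
print(np.round(P.conj().T @ A @ P, 10))           # diagonal, with real diagonal entries
```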
For the rotation operator there are no eigenvalues, even though it is normal with regard to a real inner product space: the rotation operator T satisfies T Tᵀ = Tᵀ T = I, but T cannot be diagonalized. It fails in the worst possible way, in the sense that it does not even have real eigenvalues. So T, the rotation operator on R², does not have eigenvalues; for a real space this is the result. And remember that the question of diagonalizability has been specialized here: the original question of diagonalization, for a finite dimensional real vector space, is one where you are interested only in a general basis.
But if it is an inner product space, it is only natural to require something extra from the basis, which is orthonormality. For orthonormality you need A = A*: if you want an orthonormal eigenbasis, then the operator must be self-adjoint, especially if it is a real inner product space. As we always do, let us give the matrix version; the matrix version is a corollary of this result. In the complex case, for self-adjoint the word Hermitian is used.
Let A be a Hermitian matrix of order n; let me also emphasize that it is complex, a complex Hermitian matrix of order n. Then there exists a unitary matrix, which I will call P, such that P⁻¹ A P = D, where D = diag(λ1, λ2, …, λn), with λ1, …, λn being the eigenvalues of A. If A is real symmetric, then there exists an orthogonal matrix, which I will call Q to keep it distinct from P; when I say orthogonal matrix it is a real orthogonal matrix, because if it is complex we call it unitary. So there exists an orthogonal matrix Q such that Q⁻¹ A Q = D, where D is diagonal as before, the diagonal entries of D being the eigenvalues of A.
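At the matrix level, np.linalg.eigh computes exactly such a decomposition; the sample matrices below are my own:

```python
import numpy as np

# Complex Hermitian case: a unitary P with P^{-1} A P = D.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
lam, P = np.linalg.eigh(A)                            # columns of P: orthonormal eigenvectors
print(np.allclose(P.conj().T @ P, np.eye(2)))         # True: P is unitary, P^{-1} = P*
print(np.allclose(P.conj().T @ A @ P, np.diag(lam)))  # True: P^{-1} A P = D
print(lam)                                            # the eigenvalues: real numbers

# Real symmetric case: the same call yields a real orthogonal Q.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
mu, Q = np.linalg.eigh(S)
print(np.allclose(Q.T @ S @ Q, np.diag(mu)))          # True: Q^{-1} S Q = D, Q^{-1} = Q^T
```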
Here I need only emphasize that P⁻¹ = P* because P is unitary, and similarly Q⁻¹ = Qᵀ because Q is orthogonal. What is the proof? This is a corollary of the previous result, so we can appeal to it. You are given a complex Hermitian matrix A, so you can define a linear transformation through it: take V = Cⁿ with the usual inner product and define T on V by T(x) = A x. You have a matrix through which you can define a linear transformation, and this definition means that the matrix of T relative to the standard basis is A.
The matrix of T relative to the standard orthonormal basis is the matrix A. A is complex Hermitian, so A = A*, so T = T*. So I have a self-adjoint operator on a complex inner product space, and I know by the previous theorem that there is an orthonormal basis for Cⁿ satisfying the property that each vector in that orthonormal basis is an eigenvector for T. Eigenvector for T means T x = λ x; but T x = A x, so A x = λ x. Collect all these eigenvalues and arrange them as a diagonal matrix; then we know that this is the same as writing down the matrix of T relative to the new orthonormal basis that we have constructed.
So I will simply say: appeal to the previous theorem, appeal to the previous result, to obtain an orthonormal basis, this time called B, with vectors x1, x2, …, xn for V = Cⁿ. What I know is that each of these vectors is an eigenvector for the operator T, so if I look at the matrix of T relative to this basis, I know it is the diagonal matrix diag(λ1, …, λn). The proof is complete if I tell you what P must be; let me just give one choice for P. Let P be the matrix whose first column is x1, second column x2, and so on up to xn. You have these vectors from the previous theorem (existence, not construction), so collect those vectors; this is something we have done even in the ordinary case, without the inner product structure. This matrix P has the property that its columns are mutually orthogonal and the norm of each column is one. So P is a unitary matrix, that is, P* = P⁻¹.
Finally, the equation P⁻¹ A P = D must be verified, but as before, this is a computation we have seen. Look at A P: by definition A P = A [x1 x2 ⋯ xn], and we know that A can be brought inside to write [A x1 ⋯ A xn]. Each of these is an eigenvector, so the eigenvalues come in now: [λ1 x1 λ2 x2 ⋯ λn xn]. Let me leave the last step as a little exercise for you: verify that this equals P D, which is almost obvious once you write P and then D (see the display below). So A P = P D; you know that P is invertible, so you can pre-multiply by P⁻¹, and you get the equation. The real case is similar: in the real case you know that the eigenvalues are real and the corresponding eigenvectors can be taken to be real, so this will be a basis consisting of real vectors. Those real vectors give you Q, which will then be an orthogonal matrix, and the rest of the proof is as before. So this is just the matrix version of this important theorem.
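The key computation in display form:

$$AP = A\,[\,x_1 \;\; x_2 \;\; \cdots \;\; x_n\,] = [\,\lambda_1 x_1 \;\; \lambda_2 x_2 \;\; \cdots \;\; \lambda_n x_n\,] = P\,\operatorname{diag}(\lambda_1, \dots, \lambda_n) = PD.$$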
The last part is really about normal operators, which I will do in the next class. What it means is that an operator, this time on a complex inner product space, is diagonalizable by means of an orthonormal transformation, that is, by means of a unitary matrix, if and only if it is normal.
So there is a significant difference between the question for a real symmetric matrix and for a complex symmetric matrix. If A is real and A = Aᵀ, then this theorem says A can be diagonalized. Take A to be complex with A = Aᵀ: there is no theorem which can guarantee that A is diagonalizable; the sketch below gives a counterexample. Whereas take A complex with A = A*, the conjugate transpose: then A is diagonalizable. So the question is really about what the operation corresponding to transpose is in the complex case.
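A standard counterexample (a textbook matrix, not from this lecture), checked quickly in numpy:

```python
import numpy as np

A = np.array([[1.0, 1j],
              [1j, -1.0]])      # complex symmetric: A = A^T, but A != A*

print(np.allclose(A, A.T))      # True: symmetric
print(np.allclose(A @ A, 0))    # True: A^2 = 0, so 0 is the only eigenvalue;
                                # since A != 0, A cannot be diagonalized
```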
The operation corresponding to transpose in the complex case is the conjugate transpose. So remember: the statement 'a complex symmetric matrix is diagonalizable' is wrong. A real symmetric matrix is diagonalizable, and a complex Hermitian matrix is diagonalizable. So let me stop here.
