Today we are going to start a new topic, and that is eigenvalue problems. So far we considered real vectors and real matrices; now, even if a matrix is real, its eigenvalues can be complex. So that is why our underlying field is now going to be the field of complex numbers. We will be considering complex matrices, and the vectors also will be complex. Eigenvalues are defined only for square matrices. We will show that if A is an n by n matrix, either real or complex, then its eigenvalues are given by the roots of a polynomial of degree n, and then we use a consequence of the fundamental theorem of algebra.
We know that if a polynomial has degree n, then it has exactly n zeros, or n roots, counted according to their multiplicity; that means, if a zero is repeated twice, it will be counted as two zeros.
Now, when we consider a polynomial of degree bigger than or equal to 5, then we cannot have a formula for finding its roots. For example, if you have a quadratic polynomial, then we can write its two zeros in terms of the coefficients of the polynomial.
If you have a x squared plus b x plus c equal to 0, then the roots can be written in terms of the coefficients a, b, c. This will not be possible when your polynomial is of degree bigger than or equal to 5.
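Since no closed-form formula exists beyond degree 4, roots of higher-degree polynomials are found numerically. Here is a minimal sketch, assuming numpy is available; the degree 5 coefficients are just an illustrative choice.

```python
import numpy as np

# Quadratic: roots of x^2 - 3x + 2 from the closed-form formula
a, b, c = 1.0, -3.0, 2.0
disc = np.sqrt(b**2 - 4*a*c + 0j)           # +0j so complex roots are handled too
quad_roots = ((-b + disc) / (2*a), (-b - disc) / (2*a))
print(quad_roots)                           # (2, 1)

# Degree 5: no such formula exists, so we fall back on a numerical method
# (numpy.roots works via the eigenvalues of the companion matrix).
p = [1.0, 0.0, -2.0, 0.0, 1.0, -1.0]        # x^5 - 2x^3 + x - 1
print(np.roots(p))                          # five (possibly complex) approximate roots
```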
So, that is why, for calculating the eigenvalues, our methods are going to give us only approximations. This was not the case with the solution of a system of linear equations: when we considered the Gauss elimination method or its variants, the error came only because of finite precision, whereas the method itself was exact. In contrast, for eigenvalues our methods will give only an approximation.
So, one tries to find as much information as possible about the eigenvalues by, say, looking at the matrix. There are some special matrices for which we will study what their eigenvalues look like; for example, if the matrix is a real symmetric matrix then its eigenvalues are all going to be real, and we will have similar results for other classes. Then we will have some localization results; that means, we will find a region in the complex plane which is going to contain all our eigenvalues.
We are going to consider the power method for finding the dominant eigenvalue of a matrix, and then there are some variants of this method. I am also going to describe what is known as the QR method for finding eigenvalues.
At present that is the most popular and the best available method for calculating eigenvalues, or rather for calculating approximations to the eigenvalues of our matrix A. Now, it is beyond this course to prove convergence of the QR method, but the description of the QR method can be given easily, and that is what I will do.
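As a quick preview, here is a minimal sketch of the power iteration idea, assuming numpy; the starting vector, tolerance and test matrix are only illustrative choices, and the method itself is developed properly later in the course.

```python
import numpy as np

def power_method(A, num_iters=200, tol=1e-12):
    """Approximate the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n) + 0j   # complex-capable starting vector
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x
        x_new = y / np.linalg.norm(y)                       # normalize to avoid overflow
        lam_new = x_new.conj() @ A @ x_new                  # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            lam, x = lam_new, x_new
            break
        lam, x = lam_new, x_new
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, u = power_method(A)
print(lam, np.linalg.eigvals(A))   # the estimate should match the larger eigenvalue
```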
So, now we are going to start with complex vectors. When we compare real vectors and complex vectors: for real vectors, what we had done was, you can add two vectors, and that is component wise addition; you multiply a vector by a scalar, so you multiply each component of your vector by that number. These things remain the same for complex vectors; it is only that the real numbers are replaced by complex numbers. So, again, addition of two vectors will be component wise, multiplication by a scalar will be the same as before, and then matrix into vector multiplication will be exactly the same as before.
There will be a change in the definition of the inner product, because we have to take into consideration the complex numbers. Then, we had defined the 1 norm and the infinity norm for real vectors; that definition remains exactly the same. For the corresponding induced matrix norms the proofs will have slight modifications, but let me not get into those details; the formulas which you obtain are exactly the same as before. So, now, let us quickly consider complex vectors, then the inner product, the vector norm and the matrix norm. So, let us look at the complex vectors and the corresponding operations.
So, we have z to be a complex vector z 1, z 2, ..., z n; each z i is going to be a complex number, and w is another n by 1 vector. As I said before, z plus w will be component wise addition, so it is z 1 plus w 1, z 2 plus w 2, ..., z n plus w n, and alpha times z means each component gets multiplied by alpha. Then the inner product: when we had real vectors, the inner product x comma y was summation x i y i; now the change is that you consider summation z i w i bar, where w i bar is the complex conjugate.
Now, when you consider the inner product of z with itself, it will be summation, i goes from 1 to n, z i z i bar. So, you have a complex number and you are multiplying it by its complex conjugate, so it will be summation, i goes from 1 to n, mod z i square. Thus, the inner product of z with itself will be bigger than or equal to 0, and it will be equal to 0 if and only if z is the 0 vector.
When you consider the inner product of w with z, it will be summation w i z i bar by our definition, which is the same as the complex conjugate of summation, i goes from 1 to n, z i w i bar; that means, it is z comma w bar.
So, we have conjugate symmetry: the inner product of w with z is the complex conjugate of the inner product of z with w.
This is linearity in the first variable: z plus v comma w will be summation, i goes from 1 to n, (z i plus v i) into w i bar; split the summation into two summations, the first summation will be nothing but the inner product of z with w, and the second summation is the inner product of v with w.
Similarly, if you consider alpha z comma w, this will be summation, i goes from 1 to n, alpha z i w i bar; now alpha is independent of i, so it will come out of the summation sign, and what remains in the summation is the inner product of z with w. So, our inner product will be linear in the first variable.
So, these are the properties of the inner product: the first is positive definiteness, the second is conjugate symmetry, and the third property is linearity in the first variable. When you consider z comma alpha w, then alpha will come out as alpha bar because of the conjugate symmetry.
So, the inner product is conjugate linear in the second variable. This is the difference between the real inner product and the complex inner product: the real inner product was symmetric, whereas this one is conjugate symmetric, and we had linearity in both variables for the real inner product, whereas now the complex inner product is going to be linear in the first variable and conjugate linear in the second variable; otherwise it is exactly similar.
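A small numerical illustration of these properties, assuming numpy. Note that np.vdot conjugates its first argument, so the inner product z comma w, namely summation z i w i bar, is np.vdot(w, z); the helper name ip below is just for this sketch.

```python
import numpy as np

def ip(z, w):
    """Inner product <z, w> = sum_i z_i * conj(w_i), as defined in the lecture."""
    return np.vdot(w, z)          # np.vdot conjugates its first argument

rng = np.random.default_rng(1)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
alpha = 2.0 - 3.0j

print(np.isclose(ip(z, z).imag, 0), ip(z, z).real >= 0)                 # positive definiteness
print(np.isclose(ip(w, z), np.conj(ip(z, w))))                          # conjugate symmetry
print(np.isclose(ip(alpha * z + v, w), alpha * ip(z, w) + ip(v, w)))    # linear in first variable
print(np.isclose(ip(z, alpha * w), np.conj(alpha) * ip(z, w)))          # conjugate linear in second
```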
Now, we had the Cauchy-Schwarz inequality for the real inner product. There is a Cauchy-Schwarz inequality for the complex inner product also, and using this Cauchy-Schwarz inequality one considers the induced norm, that is, the norm induced by the inner product.
One shows that it satisfies the various properties of a norm. So, here, the inner product of z with z is summation, i goes from 1 to n, mod z i square; we define norm z 2 to be the positive square root of z comma z, and the Cauchy-Schwarz inequality is: modulus of z comma w is less than or equal to the 2 norm of z into the 2 norm of w.
I want you to notice that our complex inner product is a map from C n cross C n to C. So, in general our complex inner product is a complex number, but when you consider the inner product of a vector z with itself, then it is going to be a non-negative real number, and that is why you can take its positive square root and obtain a real number. In fact, the number is going to be bigger than or equal to 0, and that is our Euclidean norm.
So, norm z 2 is the positive square root of summation, i goes from 1 to n, mod z i square. Norm z 2 will be bigger than or equal to 0, and it will be equal to 0 if and only if z is the 0 vector; that will follow from positive definiteness of the inner product. Norm alpha z will be equal to mod alpha times norm z; that follows from the definition. And the triangle inequality: norm of z plus w is less than or equal to norm z plus norm w. It is for the triangle inequality that we need the Cauchy-Schwarz inequality. So, this is about the 2 norm.
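A quick check of the 2 norm, the Cauchy-Schwarz inequality and the triangle inequality, again as a small numpy sketch with randomly chosen vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(5) + 1j * rng.standard_normal(5)
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)

norm2 = lambda v: np.sqrt(np.vdot(v, v).real)     # ||v||_2 = sqrt(<v, v>), a real number

print(np.isclose(norm2(z), np.linalg.norm(z)))            # agrees with numpy's 2-norm
print(abs(np.vdot(w, z)) <= norm2(z) * norm2(w) + 1e-12)  # Cauchy-Schwarz: |<z,w>| <= ||z|| ||w||
print(norm2(z + w) <= norm2(z) + norm2(w) + 1e-12)        # triangle inequality
```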
Now, analogously one can define the 1 norm and the infinity norm. So, norm z 1 is going to be summation, i goes from 1 to n, mod z i, and norm z infinity is the maximum of modulus of z i over 1 less than or equal to i less than or equal to n. So, in the definition there is no difference: instead of real numbers we have complex numbers, but you are taking the modulus.
For the 2 norm we are taking summation mod z i square. This modulus is important: for a real inner product space, or if the vector is real, whether I write x i square or mod x i square, the answer is the same, whereas for a complex number it is important that you take mod z i square.
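The corresponding computations, as a small numpy sketch with an illustrative vector:

```python
import numpy as np

z = np.array([3 - 4j, 1 + 1j, -2j])

norm1   = np.sum(np.abs(z))               # ||z||_1   = sum of moduli
norminf = np.max(np.abs(z))               # ||z||_inf = largest modulus
norm2   = np.sqrt(np.sum(np.abs(z)**2))   # note the modulus taken before squaring

print(norm1, norminf, norm2)
print(np.linalg.norm(z, 1), np.linalg.norm(z, np.inf), np.linalg.norm(z))  # same values
```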
Now, we are going to look at the induced matrix norm. If you are given any vector norm, then you define the norm of the matrix to be the maximum of norm A x by norm x over x not equal to 0. Then, for the 1 norm and the infinity norm, that means, if you fix the vector norm to be the 1 norm and look at the corresponding induced matrix norm, we obtained an expression in terms of the elements of the matrix.
A similar thing was possible for norm A infinity, whereas for the 2 norm we had to be satisfied only with an upper bound. Here the expressions for norm A 1 and norm A infinity are going to remain exactly the same.
So, we are looking at the induced matrix norms. We have norm A 1 to be the column sum norm: summation, i goes from 1 to n, modulus of a i j. So, look at the first column, take the moduli and add them up; do it for all the columns; whatever is the maximum, that is norm A 1. For norm A infinity the expression is obtained by interchanging j and i, so the column sum norm becomes the row sum norm; we have norm A infinity to be the maximum over 1 less than or equal to i less than or equal to n of summation, j goes from 1 to n, modulus of a i j.
And then there is the Frobenius norm: it is summation over i, summation over j, mod a i j square, raised to half. Norm A 2 is not easy to compute, but the Frobenius norm is, and it gives an upper bound; note that the inequality should read norm A 2 less than or equal to norm A F.
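These column sum, row sum and Frobenius formulas are easy to compute directly; a minimal sketch, assuming numpy, with an illustrative 2 by 2 matrix:

```python
import numpy as np

A = np.array([[1 + 1j, -2.0], [3.0, 0.5j]])

norm1   = np.max(np.sum(np.abs(A), axis=0))   # column sum norm ||A||_1
norminf = np.max(np.sum(np.abs(A), axis=1))   # row sum norm    ||A||_inf
normF   = np.sqrt(np.sum(np.abs(A)**2))       # Frobenius norm  ||A||_F
norm2   = np.linalg.norm(A, 2)                # ||A||_2 (largest singular value)

print(norm1, norminf, normF, norm2)
print(norm2 <= normF + 1e-12)                 # the upper bound ||A||_2 <= ||A||_F
```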
Then we have the basic relation: norm A is the maximum of norm A z by norm z, so from here we get norm A z less than or equal to norm A into norm z for z belonging to C n. Next we define the conjugate transpose; we define the conjugate transpose for a vector as well as for a matrix.
So, you take the complex conjugate of each entry and then you take the transpose. If you are taking the conjugate transpose of a column vector, then its conjugate transpose will be a row vector; if the matrix is a square matrix of size n, then its conjugate transpose is again going to be a square matrix of size n.
Now, about this conjugate transpose: we know that matrix multiplication is not commutative. So, if the conjugate transpose commutes with the matrix, then it deserves a special name; it is a special class of matrices, and those are known as normal matrices.
So, we are going to define normal matrices, self-adjoint matrices and skew self-adjoint matrices; the eigenvalues of these matrices have some special properties.
So, here is the definition: z is the vector z 1, z 2, ..., z n; z star is z bar transpose, so it becomes the row vector z 1 bar, z 2 bar, ..., z n bar.
Now, the inner product of z with w, by our definition, is summation z i w i bar. In this notation we can write it as w star z: w star is going to be a 1 by n vector and z is an n by 1 vector, so when you multiply a 1 by n vector by an n by 1 vector you get a 1 by 1 matrix, or you get a scalar. So, the inner product of z with w will be the same as w star z.
Next, for a matrix A we define A star to be equal to A bar transpose, the conjugate transpose. If you repeat the operation, A star star is going to give you back the matrix A. Then when you consider (A B) star, this will be A B bar and then transpose; A B bar is the same as A bar into B bar, and when you take the transpose of A bar B bar, the order gets reversed, so you get B bar transpose A bar transpose.
So, this will be equal to B star A star; thus (A B) star is B star A star. And the inner product of A z with w, we have seen, is w star A z; now w star A I can write as (A star w) star, because when you take the conjugate transpose it becomes w star A star star, that means, w star A; and this is nothing but z comma A star w.
So, the important property is: in A z comma w, the A goes to the second variable as A star. And here are the special matrices: A star A equal to A A star, that is the class of normal matrices; A star equal to A, that is the class of self-adjoint matrices; if you consider A star equal to minus A, that is skew self-adjoint; and lastly the unitary matrix, where we have A star A equal to the identity. Now, for square matrices we know that a left inverse is the same as a right inverse, so if A star A is equal to the identity, then automatically A A star is equal to the identity.
Now, if you take two self-adjoint matrices and add them up, then again you are going to get a self-adjoint matrix. This result will not be true for the product of matrices, because when you consider (A B) star you get B star A star; so A star equal to A and B star equal to B does not mean (A B) star is equal to A B, because (A B) star will be equal to B A. So, these are some of the special matrices, and their eigenvalues are going to be something special, or rather, we can say something more about their eigenvalues.
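A small sketch of these classes in numpy; the matrices H, S and U below are just illustrative examples of each class.

```python
import numpy as np

def star(A):
    """Conjugate transpose A* = (A bar) transpose."""
    return A.conj().T

def is_normal(A):        return np.allclose(star(A) @ A, A @ star(A))
def is_self_adjoint(A):  return np.allclose(star(A), A)
def is_skew_adjoint(A):  return np.allclose(star(A), -A)
def is_unitary(A):       return np.allclose(star(A) @ A, np.eye(A.shape[0]))

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])          # self-adjoint
S = np.array([[1j, 2.0], [-2.0, -3j]])                # skew self-adjoint
U = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)        # unitary
print(is_self_adjoint(H), is_skew_adjoint(S), is_unitary(U))
print(is_normal(H), is_normal(S), is_normal(U))       # all of these are also normal
```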
So, now we want to define eigenvalue and eigenvector, and then we want to show that the eigenvalues are roots of a characteristic polynomial. Here is the eigenvalue problem. Our notation is going to be: A will be either a real matrix or a complex matrix, but it has to be a square matrix; one defines eigenvalues and eigenvectors only for square matrices.
The definition is: a complex number lambda is said to be an eigenvalue of A if there exists a non-zero vector u such that A u is equal to lambda u, and in that case u is called an associated eigenvector. This non-zero part is important, because if you take the 0 vector, then when you apply the matrix A to it you get the 0 vector, and then A u will be equal to lambda u for any lambda. So, lambda will be an eigenvalue provided you have a non-zero vector u such that A u is equal to lambda u.
Now, how to find such a lambda? You cannot find it directly, but at least we want some characterization. The characterization we are going to show is that lambda is nothing but a root of the determinant of A minus lambda I: A is the matrix which is given to us, and you look at the matrix A minus lambda times the identity.
Its determinant is something which we can calculate. You will get a polynomial in lambda of degree n, and our eigenvalues are going to be zeros of this polynomial. So, we start with the definition: lambda is an eigenvalue provided we have a non-zero vector u such that A u is equal to lambda u.
So, we have A u equal to lambda u with u not equal to 0. This implies that (A minus lambda I) u is equal to the 0 vector. Now, A minus lambda I is an n by n matrix, so we can consider it as a map from C n to C n: any vector in C n, you apply A minus lambda I to it, you again get an n by 1 vector. So, A minus lambda I is a map from C n to C n, and this map is not one to one, because we have (A minus lambda I) u equal to the 0 vector where u is a non-zero vector, and A minus lambda I applied to the 0 vector is also equal to the 0 vector.
So, we have two vectors, u and the 0 vector, which have the same image, namely the 0 vector. That is why A minus lambda I will not be one to one, and if A minus lambda I is not one to one, it cannot be invertible, because for invertibility what we need is that our map should be one to one and onto. In our case of finite dimensional spaces either one is sufficient: if A minus lambda I is one to one then A minus lambda I will be invertible, or if A minus lambda I is onto, it will be invertible.
So, we are starting with: lambda is an eigenvalue and u is an eigenvector. Then the map A minus lambda I will not be one to one; that means, A minus lambda I will not be invertible, so you have A minus lambda I to be a singular matrix, and if it is singular, its determinant has to be equal to 0.
So, you get the determinant of A minus lambda I equal to 0. Now conversely, suppose lambda is a complex number such that the determinant of A minus lambda I is equal to 0. Look at the homogeneous system (A minus lambda I) z equal to the 0 vector. This homogeneous system is going to have a non-trivial solution, because the coefficient matrix has determinant equal to 0. So, it has a non-trivial solution u such that (A minus lambda I) u is equal to the 0 vector, and that precisely means A u is equal to lambda u with u not equal to the 0 vector.
So, thus the eigenvalues of A are given by determinant of A minus lambda I equal to 0. When you expand this determinant you are going to have (minus 1) raised to n lambda raised to n, plus c n minus 1 lambda raised to n minus 1, plus ... plus c 1 lambda plus c 0, equal to 0.
So, you have a polynomial in lambda of exact degree n, because the coefficient of lambda raised to n is non-zero; it is (minus 1) raised to n.
Now, as a consequence of the fundamental theorem of algebra, it is going to have n roots, if you count them according to their multiplicities. Thus we know that an n by n matrix is going to have at most n eigenvalues, and they are going to be roots of this polynomial. So, the problem of finding eigenvalues gets reduced to finding roots of a polynomial.
So, this determinant of A minus lambda I, this polynomial, we now factorize: it will be (lambda 1 minus lambda) raised to m 1, (lambda 2 minus lambda) raised to m 2, ..., (lambda k minus lambda) raised to m k, where m 1, m 2, ..., m k add up to n.
So, you have the eigenvalues lambda 1, lambda 2, ..., lambda k; these are the distinct eigenvalues, and the power m i is known as the algebraic multiplicity of lambda i.
So, you count lambda 1 m 1 times, lambda 2 m 2 times, and lambda k m k times, and that is how you have exactly n eigenvalues counted according to their algebraic multiplicity.
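A minimal numerical sketch of this reduction, assuming numpy: np.poly gives the coefficients of the characteristic polynomial of a square matrix, and its roots agree with the eigenvalues. The 3 by 3 matrix is an illustrative choice with a repeated eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 0.0, 3.0]])

coeffs = np.poly(A)               # coefficients of det(lambda*I - A), highest power first
print(coeffs)                     # approx [1, -7, 16, -12], i.e. (lambda-2)^2 (lambda-3)
print(np.roots(coeffs))           # roots of the characteristic polynomial
print(np.linalg.eigvals(A))       # same values: 2 (algebraic multiplicity 2) and 3
```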
Now, there is another multiplicity associated with an eigenvalue, and that is known as the geometric multiplicity. The geometric multiplicity is the number of linearly independent eigenvectors associated with a particular eigenvalue.
So, we have A u equal to lambda u with u not equal to the 0 vector. If I consider A of alpha u, this will be alpha times A u; A u is lambda u, so it is alpha times lambda u; now alpha and lambda are scalars, those are complex numbers, so they commute, and then you have lambda times alpha u. So, if u is an eigenvector, alpha u will also be an eigenvector provided alpha is not equal to 0. So, an eigenvector is not unique.
You have infinitely many eigenvectors: as soon as you find one eigenvector, any non-zero multiple of it is also going to be an eigenvector.
Now, one defines what is known as the eigen space. So, see, what you have is: suppose I have an eigenvector, then I take its multiples; so if you are in, say, R 2, you are going to have a straight line, except that you do not want to multiply by 0. The eigen space, by definition, is going to be all the multiples, with 0 added to it. So, all non-zero vectors in your eigen space are going to be eigenvectors associated with the eigenvalue lambda, and so there are infinitely many eigenvectors, but when you consider the number of linearly independent eigenvectors, that is going to be finite; in fact, that number is going to be less than or equal to the algebraic multiplicity.
So, if you have lambda 1 to be an eigenvalue with algebraic multiplicity m 1, in that case you can have at most m 1 linearly independent eigenvectors; the number can be less. We will consider an example where the number of linearly independent eigenvectors is strictly less than the algebraic multiplicity.
Your algebraic multiplicity: you consider the factorization of the characteristic polynomial, and in that you have the (lambda 1 minus lambda) term; whatever its power is, that is the algebraic multiplicity. The geometric multiplicity is the number of linearly independent eigenvectors associated with it.
So, here is the definition of the eigen space: the null space of A minus lambda I is the set of all z such that (A minus lambda I) z is equal to the 0 vector. It is a subspace; it consists of the eigenvectors and the 0 vector. The dimension of this subspace is called the geometric multiplicity of our eigenvalue lambda.
As I said, it is the same as the number of linearly independent eigenvectors associated with the eigenvalue lambda, and the geometric multiplicity will always be less than or equal to the algebraic multiplicity.
So, now let me give you an example of a 2 by 2 matrix, a simple matrix, for which in one case the geometric multiplicity is strictly less than the algebraic multiplicity, and in another case they are equal. If your matrix is an upper triangular matrix, then your eigenvalues are going to be the diagonal entries. So, for upper triangular matrices you do not have to do any computation: just look at the diagonal entries, those are your eigenvalues.
Now, when we considered the Gauss elimination method, we reduced the matrix A to upper triangular form, but those elementary row transformations do not preserve the eigenvalues. You have a matrix A, it has certain eigenvalues; you do elementary row transformations and obtain an upper triangular matrix, but the eigenvalues of the upper triangular matrix which you have obtained will in general be completely different from your original eigenvalues.
The elementary row transformations do not change the solution of the system A x equal to b; that is why they were useful there, whereas here they are not useful. So, now, let us consider an example.
So, here is the upper triangular matrix with rows 1 1 and 0 1. The determinant of A minus lambda I is (1 minus lambda) square, so A has eigenvalue 1 with algebraic multiplicity 2; it is a repeated eigenvalue.
Let us look at its eigenvectors. So, 1 1 0 1 times u 1 u 2 is equal to u 1 u 2, and you get u 1 plus u 2 equal to u 1 and u 2 equal to u 2. The second equation gives us no information; the first equation tells us that u 2 has to be 0. That means the null space of A minus I is the set of all vectors u 1 0 with u 1 belonging to C; that means, we have the multiples of the vector 1 0.
If you want an eigenvector, then it should be a non-zero multiple. So, for this example you have 1 as an eigenvalue with algebraic multiplicity 2 and geometric multiplicity 1; the geometric multiplicity is strictly less than the algebraic multiplicity. Now let me change this example slightly: let me make the second diagonal entry 2.
So, when you look at the matrix 1 1 0 2, its characteristic polynomial will be (1 minus lambda) into (2 minus lambda). So, you have eigenvalues 1 and 2, with algebraic multiplicity equal to 1 in both cases.
When we look for the eigenvector for the eigenvalue 1, you are going to have u 1 plus u 2 equal to u 1 and 2 u 2 equal to u 2. That means u 2 has to be 0, and the eigenvector will be of the form u 1 0 with u 1 not equal to 0. So, 1 is an eigenvalue with geometric multiplicity equal to 1.
Next look at 1 1 0 2 times u 1 u 2 equal to 2 times u 1 u 2. The first equation will be u 1 plus u 2 equal to 2 u 1, and the second equation will be 2 u 2 equal to 2 u 2. Again the second equation does not give us any information; from the first equation you get u 1 equal to u 2. So, any eigenvector associated with 2 will be of the form u 1 u 1 with u 1 not equal to 0, or equivalently, it is going to be a non-zero multiple of the vector 1 1.
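A small numerical check of both 2 by 2 examples, assuming numpy; the geometric multiplicity is read off as the dimension of the null space of A minus lambda I, computed here via the rank.

```python
import numpy as np

def geometric_multiplicity(A, lam, tol=1e-10):
    """Dimension of the null space of A - lam*I, obtained from the rank."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)

A1 = np.array([[1.0, 1.0], [0.0, 1.0]])   # eigenvalue 1 with algebraic multiplicity 2
A2 = np.array([[1.0, 1.0], [0.0, 2.0]])   # eigenvalues 1 and 2, each simple

print(np.linalg.eigvals(A1), geometric_multiplicity(A1, 1.0))   # [1. 1.]  1
print(np.linalg.eigvals(A2), geometric_multiplicity(A2, 1.0),
      geometric_multiplicity(A2, 2.0))                          # [1. 2.]  1  1
```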
So, the eigenvector of 1 will be 1 0 or any non-zero multiple, and the eigenvector of 2 will be the vector 1 1 or any non-zero multiple. So, now, what we are going to do is consider the eigenvalues of our special matrices. If the matrix is self-adjoint, A star equal to A, then we will show that the eigenvalues have to be real; if A star is equal to minus A, then the eigenvalues have to be purely imaginary or 0.
For a normal matrix we do not have any such structure, your eigenvalues can be complex, but still the eigenvalues and eigenvectors of a normal matrix have a special property. In general, if you look at two distinct eigenvalues and the corresponding eigenvectors, then those eigenvectors are linearly independent; for a normal matrix something more is true.
Eigenvectors corresponding to distinct eigenvalues are going to be perpendicular to each other; that means, their inner product is going to be 0. If you consider the eigenvalues of a unitary matrix, that means a matrix which satisfies A star A equal to A A star equal to the identity, then the eigenvalues are going to have modulus equal to 1, so they will lie on the unit circle. Now, what do these eigenvalues tell us?
These are going to be precisely the points where A minus lambda I will not be invertible; at all other complex numbers our matrix A minus lambda I will be invertible. So, when you have an n by n matrix, there are going to be at most n complex numbers for which A minus lambda I will not be invertible; for all other complex numbers A minus lambda I will be invertible.
So, let us show the properties of the eigenvalues of these special matrices; the proofs are simple and straightforward.
Look at A u equal to lambda u, with u not equal to the 0 vector and lambda a complex number. Pre-multiply by u star: you have u star A u equal to u star lambda u, which is the same as lambda times u star u.
Now, u star u will be summation, i goes from 1 to n, u i u i bar, that is, summation, i goes from 1 to n, mod u i square. u is not the 0 vector, so at least one u i will be non-zero, and hence this summation will not be equal to 0. So, I get lambda equal to u star A u divided by u star u, which, in the notation of the inner product, is A u comma u divided by u comma u.
So, we have lambda equal to the inner product of A u with u divided by the inner product of u with u. Let me consider the complex conjugate of lambda: this is going to be the complex conjugate of A u comma u divided by the complex conjugate of u comma u. Now, since the inner product of u with u is real and bigger than 0, the u comma u bar is the same as u comma u, and by conjugate symmetry the numerator becomes the inner product of u with A u.
So, thus lambda is equal to A u comma u divided by u comma u, and lambda bar is u comma A u divided by u comma u. Now, lambda is also equal to the following: the A goes to the second variable as A star, so lambda is u comma A star u upon u comma u. From here I can conclude that A star equal to A implies lambda bar equal to lambda, and that means lambda is going to be real, because lambda is a complex number whose complex conjugate is equal to itself; that means, lambda has to be real.
Similarly, if A star is equal to minus A, then lambda bar is minus lambda. So, if lambda is equal to x plus i y, then lambda bar is x minus i y and minus lambda is minus x minus i y; equating them gives x equal to 0. Hence, in this case, if you have A star equal to minus A, then lambda bar is equal to minus lambda, and this means that lambda is purely imaginary or zero.
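A quick numerical check of these two statements, assuming numpy; the matrices below are just illustrative choices.

```python
import numpy as np

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])    # self-adjoint: H* = H
S = np.array([[1j, 2.0], [-2.0, -3j]])          # skew self-adjoint: S* = -S

print(np.linalg.eigvals(H))   # eigenvalues are real (imaginary parts ~ 0)
print(np.linalg.eigvals(S))   # eigenvalues are purely imaginary (real parts ~ 0)
```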
So, this is for self-adjoint and skew self-adjoint matrices; now for the normal matrix.
Suppose A is normal, so you have A A star equal to A star A. Consider the square of the Euclidean norm of A x; this is nothing but the inner product of A x with itself. The A in the second slot goes over as A star, so it is x comma A star A x. Now use the property that A star A is the same as A A star, so it becomes x comma A A star x; this leading A I can write as (A star) star, so it is x comma (A star) star A star x, which is the same as A star x comma A star x, and this is nothing but the square of the 2 norm of A star x.
So, we get an important relation: if A is normal, then the Euclidean norm of A x is the same as the Euclidean norm of A star x. How does this property help us to say something about the eigenvalues?
So, what we have proved is: if A is normal, then norm A x is the same as norm of A star x. Now suppose lambda is an eigenvalue of A; then we have (A minus lambda I) u equal to 0.
So, the norm of (A minus lambda I) u will be equal to 0. Now, A being normal means that A minus lambda I is also normal, so the norm of (A minus lambda I) star u will also be 0, and that will mean that lambda bar is an eigenvalue of A star.
So, A normal implies norm A x equals norm A star x in the 2 norm. Then A u equal to lambda u with u not equal to the 0 vector gives norm of (A minus lambda I) u equal to 0; this is the same as norm of (A minus lambda I) star u equal to 0, that is, (A star minus lambda bar I) u equal to 0, and thus A star u is equal to lambda bar u.
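A small numerical illustration of this relation, assuming numpy; the circulant matrix below is an illustrative example of a normal matrix that is neither self-adjoint nor skew self-adjoint.

```python
import numpy as np

# A circulant matrix: normal, but not symmetric.
A = np.array([[1.0, 2.0, 3.0],
              [3.0, 1.0, 2.0],
              [2.0, 3.0, 1.0]])
Astar = A.conj().T
print(np.allclose(A @ Astar, Astar @ A))                              # A is normal

x = np.array([1.0 + 1j, -2.0, 0.5j])
print(np.isclose(np.linalg.norm(A @ x), np.linalg.norm(Astar @ x)))   # ||A x|| = ||A* x||

lam, U = np.linalg.eig(A)
u = U[:, 0]
print(np.allclose(Astar @ u, np.conj(lam[0]) * u))   # A* u = (lambda bar) u, same eigenvector
```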
So, for normal matrices, if lambda is an eigenvalue of A, then lambda bar will be an eigenvalue of A star, and the eigenvector is going to be the same. Using this fact, in our next lecture we will show that eigenvectors of a normal matrix associated with distinct eigenvalues are perpendicular; then I am going to state Schur's theorem and the spectral theorem, and then we will go to localization of eigenvalues. So, thank you.
