Good afternoon everyone. In the last class we looked into the definitions and various properties of matrices and determinants; we recapitulated some of the old ideas and definitions from our earlier classes, along with the various properties that matrices and determinants satisfy.
We have also already seen the classification of different matrices, which one is called a symmetric matrix, a skew-symmetric matrix, a diagonal matrix, and so on, and their various properties. Then, at the end, we defined the Eigenvalue problem; Eigenvalues are quite important in many chemical engineering applications.
So, let us start from that point onward. You have already seen the concept of a matrix: a matrix is nothing but an operator, it operates on a vector X and maps it into another vector; so a matrix is like a function, it is an operator. If we are able to write the matrix equation in the particular fashion A X = lambda X, where lambda is a multiplier, a scalar, we call this problem a standard Eigenvalue problem. If you bring lambda X over to the other side, it becomes A X minus lambda X = 0, which we can write as (A minus lambda I) X = 0; therefore this equation is a set of homogeneous algebraic equations.
So, this is an absolutely homogeneous equation; there is no non-homogeneous term present on the other side. If instead the equation has the form (A minus lambda I) X = b, that is a non-homogeneous equation. Now, for equation 1, X = 0 is of course a solution, and this is known as the trivial solution. But we are not looking for the trivial solution; we are looking for a nontrivial solution.
The necessary and sufficient condition for a nontrivial solution is that the determinant of (A minus lambda I) is equal to 0. If A is a square matrix of size n cross n, then the equation det(A minus lambda I) = 0 gives a polynomial in lambda, say P(lambda) = 0, and this is known as the characteristic equation.
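As a quick side illustration (a minimal numerical sketch, not part of the lecture; it assumes numpy and an arbitrary example matrix), the coefficients and roots of this characteristic polynomial can be obtained directly:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # arbitrary example matrix

# np.poly(A) returns the coefficients of the characteristic polynomial
# det(lambda*I - A), ordered from highest to lowest degree.
coeffs = np.poly(A)          # here: [1, -4, 3] -> lambda^2 - 4*lambda + 3
roots = np.roots(coeffs)     # roots of P(lambda) = 0
print(coeffs, roots)         # roots 3 and 1 are the Eigenvalues of A
```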
For a square matrix of size n into n, the degree of this polynomial is n; so, for an n into n matrix, P(lambda) = 0 becomes an nth degree polynomial. The roots of this polynomial may be real, or they may occur as complex conjugate pairs, and these roots are called the Eigenvalues of the system. The solution corresponding to an Eigenvalue is called an Eigenvector.
Suppose lambda i is a set of real valued or complex valued Eigenvalues; lambda i is the ith Eigenvalue, and the solution corresponding to the ith Eigenvalue is known as the ith Eigenvector. So, in the equation A X i = lambda i X i, corresponding to the Eigenvalue lambda i we will be having the corresponding ith Eigenvector X i.
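To make this concrete (a minimal sketch, assuming numpy and an arbitrary example matrix), we can verify the defining relation A X i = lambda i X i for every eigenpair:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # arbitrary example matrix

# numpy returns all Eigenvalues and the corresponding (right) Eigenvectors,
# the Eigenvectors being the columns of V.
eigvals, V = np.linalg.eig(A)

# Check the defining relation A x_i = lambda_i x_i for each eigenpair.
for i, lam in enumerate(eigvals):
    x = V[:, i]
    assert np.allclose(A @ x, lam * x)
print("A x_i = lambda_i x_i holds for all eigenpairs")
```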
Now, if we look into the characteristic equation P(lambda) = 0: if we are talking about a five component system, then you will be getting a five into five square matrix, the degree of this polynomial will be five, and therefore you can expect five roots from this polynomial, that is, five Eigenvalues.
If A is a 10 into 10 matrix, we are going to get a characteristic equation of degree ten, and in that case there will be ten Eigenvalues. If it is a simple system, a 2 into 2 or 3 into 3 system, the Eigenvalues can be evaluated analytically.
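For the 2 into 2 case, the analytical route is just the quadratic formula applied to lambda^2 - tr(A) lambda + det(A) = 0; here is a small sketch (assuming numpy, with an arbitrary example matrix) comparing it against the numerical answer:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # arbitrary 2x2 example

# Characteristic equation of a 2x2 matrix: lambda^2 - tr(A)*lambda + det(A) = 0
tr, det = np.trace(A), np.linalg.det(A)
disc = np.sqrt(tr**2 - 4.0 * det + 0j)   # +0j so complex roots are handled
lam_analytic = np.sort_complex(np.array([(tr + disc) / 2, (tr - disc) / 2]))

lam_numeric = np.sort_complex(np.linalg.eigvals(A).astype(complex))
assert np.allclose(lam_analytic, lam_numeric)
print(lam_analytic)
```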
For higher order systems, we have to take recourse to numerical techniques; a numerical solution is required. One can take recourse to the Newton-Raphson algorithm to find the first root, and then use some numerical method, Horner's method may be one of them, to evaluate all the roots of this equation, that is, all the Eigenvalues.
So, by using such numerical techniques, one can evaluate all the roots of the characteristic equation P(lambda) = 0, and for each corresponding lambda i one has to evaluate the Eigenvector; Givens' method, for example, is there to evaluate the corresponding Eigenvectors.
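In everyday practice one simply calls a library eigensolver; a minimal sketch follows (assuming numpy). Note that np.linalg.eig uses LAPACK's QR-based routines internally rather than the Newton-Raphson/Horner/Givens route described above, so this is a substitute technique, not the lecture's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))   # a 10 x 10 example matrix

# For a higher-order system, let the numerical eigensolver do the work;
# eig returns all ten Eigenvalues and the corresponding Eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # real values and complex conjugate pairs, ten in total
```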
So, what is the physical significance of Eigenvalues and Eigenvectors? The Eigenvectors basically form a characteristic coordinate system.
For example, in a three-dimensional system any vector can be represented along the x axis, the y axis and the z axis, that is, by the 3 unit vectors. Similarly, for an n-dimensional space the Eigenvectors will represent mutually independent directions (mutually orthogonal, for a symmetric matrix, as we will see). So, any vector can be broken down, can be resolved, into these Eigenvector directions, and the contribution of the original vector in the direction of each Eigenvector, the amplitude along that direction, is associated with the corresponding Eigenvalue.
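To make the resolution concrete (a minimal sketch, assuming numpy and a symmetric example matrix so that the Eigenvectors are orthonormal; the coefficients c are the amplitudes along the Eigenvector directions):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric example matrix

eigvals, V = np.linalg.eigh(A)    # columns of V: orthonormal Eigenvectors

v = np.array([1.0, 2.0, 3.0])     # an arbitrary vector to resolve

# Resolve v along the Eigenvector directions: v = sum_i c_i * V[:, i].
# For an orthonormal basis the amplitudes are just the projections V^T v.
c = V.T @ v
assert np.allclose(V @ c, v)
print(c)                          # contribution along each direction
```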
So, that is the physical significance of Eigenvectors and Eigenvalues. Now, there is something called algebraic simplicity. If all the Eigenvalues are distinct, that is, they are not repeated, no lambda i occurs twice, then the system is called an algebraically simple system.
So, what is an algebraically simple system? One whose characteristic equation has all distinct Eigenvalues. Next, we will be stating and proving some of the theorems which we will be using quite often in our course. The first theorem that we are going to state and prove is this: the left Eigenvector of a matrix A is the same as the right Eigenvector of the corresponding transpose matrix A transpose, and vice versa.
Now, to prove this, let us consider a vector Y belonging to n-dimensional real space; of course, the transpose vector Y transpose also has n components.
Once we define this vector Y and Y transpose, we consider the Eigenvalue problem Y transpose A = eta Y transpose; call this equation number 1. In this particular equation, Y transpose is the left Eigenvector of the matrix A and eta is the corresponding Eigenvalue. The size of Y transpose is 1 cross n and the size of A is n cross n; since the number of columns of Y transpose matches the number of rows of A, they are conformable and the matrix multiplication is allowed.
Now, let us take the transpose of equation 1: (Y transpose A) transpose = (eta Y transpose) transpose. Remember, we have already shown in the earlier class that (AB) transpose is nothing but B transpose A transpose. Therefore the left side becomes A transpose times (Y transpose) transpose, and eta being a scalar, it simply stays as a multiplier of (Y transpose) transpose on the right.
Now, the transpose of a transpose is the original vector itself, so this is basically A transpose Y = eta Y. If you look into this equation, it is again an Eigenvalue problem; the form of the equation is again an Eigenvalue problem. Here the corresponding matrix is A transpose and the Eigenvector is Y. So, Y is the right Eigenvector of A transpose, because Y occurs on the right; therefore, that completes the proof.
So, we have proved that the left Eigenvector of a matrix is nothing but the right Eigenvector of the transpose matrix; that is our first theorem.
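A minimal numerical check of this theorem (assuming numpy; the matrix is an arbitrary example): the Eigenvectors of A transpose, used as rows, behave as left Eigenvectors of A.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # arbitrary example matrix

# Right Eigenvectors of A^T ...
etas, Y = np.linalg.eig(A.T)

# ... act as left Eigenvectors of A: y^T A = eta y^T.
for i, eta in enumerate(etas):
    y = Y[:, i]
    assert np.allclose(y @ A, eta * y)
print("left Eigenvectors of A = right Eigenvectors of A^T")
```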
Then we go to the next theorem. This theorem says that the Eigenvalues of a real matrix are identical to those of its transpose: the Eigenvalues of a real matrix A are the same as those of A transpose. So, we have to prove that the Eigenvalues of A and the Eigenvalues of A transpose are identical.
So, consider the following pair of Eigenvalue problems. The first one is A X = lambda X and the second one is A transpose Y = eta Y. In the first case, lambda are the Eigenvalues of A and X are the corresponding Eigenvectors; in the second problem, eta are the Eigenvalues of A transpose and Y are the corresponding Eigenvectors.
Now, we have already seen from the properties of determinants that the determinant of a matrix is equal to the determinant of its transpose: for any matrix B, det(B) = det(B transpose). So, we utilize this property and write det(A minus lambda I) = det((A minus lambda I) transpose).
Just consider the matrix (A minus lambda I) as B; the determinant of (A minus lambda I) is identical to the determinant of (A minus lambda I) transpose. Now open up this transpose operator: it becomes the determinant of A transpose minus lambda times I transpose, lambda being a scalar; but I is an identity matrix, so I transpose remains the same.
So, we can write det(A minus lambda I) = det(A transpose minus lambda I). Now, since det(A minus lambda I) = 0, it follows that det(A transpose minus lambda I) must also be equal to 0.
Now, look into equations two and three. Whenever we write A X = lambda X, that simply means det(A minus lambda I) = 0, and whenever we write A transpose Y = eta Y, this implies det(A transpose minus eta I) = 0. Comparing these two results, det(A transpose minus lambda I) = 0 and det(A transpose minus eta I) = 0, lambda and eta are the roots of the same characteristic equation; that simply tells us that lambda = eta.
So, if you remember, what is lambda? Lambda are the Eigenvalues of the matrix A, and eta are the Eigenvalues of the matrix A transpose; since they are identical, we can say that the Eigenvalues of matrix A are identical with the Eigenvalues of matrix A transpose. That completes the proof of the second theorem.
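A quick numerical check of the second theorem (a minimal sketch, assuming numpy; the matrix is a random example): the spectra of A and A transpose coincide.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))     # arbitrary real example matrix

# The spectra of A and A^T coincide (compare as sorted complex values).
lam = np.sort_complex(np.linalg.eigvals(A))
eta = np.sort_complex(np.linalg.eigvals(A.T))
assert np.allclose(lam, eta)
print("Eigenvalues of A and A^T are identical")
```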
So, we move over to the next theorem; all these theorems are quite useful and helpful for solving chemical engineering problems, as we will see later on.
The next theorem goes like this: if the Eigenvalues are simple, meaning there is no repetition of the Eigenvalues, they are distinct, then the Eigenvectors form an independent set of vectors, a basis set. That is, for simple, distinct Eigenvalues the Eigenvectors form an independent set of basis vectors; this is what I was telling you at the beginning of this class about the physical significance of the Eigenvectors and Eigenvalues.
The Eigenvectors represent mutually independent directions, so any vector in the space can be represented as a linear combination of the Eigenvectors; that is the physical significance of Eigenvectors, and the contributions in each direction of the Eigenvectors are associated with the corresponding Eigenvalues.
Let us prove that the Eigenvectors are independent of each other, so that they can serve as independent axes, that is, independent directions.
The proof goes like this. Suppose the system has n Eigenvectors, and suppose, contrary to the claim, that only r of them are independent; that means the remaining n minus r Eigenvectors are dependent. We write X 1, X 2, up to X r for the independent Eigenvectors.
Now, in this notation X j corresponds to lambda j; that means, for the Eigenvalue lambda j, the corresponding Eigenvector is X j. We have said the total number of Eigenvectors is n and, out of these n, r of them are independent.
So, the Eigenvectors X j with j lying between r + 1 and n form the dependent set. Therefore, any Eigenvector in this set can be expressed as a linear combination of the independent Eigenvectors X 1, X 2, X 3, up to X r.
We have already proved earlier that if there are r independent vectors present in a space, any other vector in their span can be represented as a linear combination of those independent vectors. So, X j can be written as a linear combination of the independent Eigenvectors, X j = summation of C i X i, with the index i running from 1 to r, because there are r independent Eigenvectors in the system; call this equation number 1. Since X j is a genuine dependent vector, not all of the coefficients C i can be equal to 0.
Why do we write this equation? Because it represents X j, a dependent vector, as a linear combination, summation of C i X i, of the independent vectors X i.
Let us go to the next step. Take equation 1, X j = summation of C i X i for i = 1 to r, and operate on it with the matrix A. We have already seen that a matrix is like a function, it is an operator, just as differentiation is an operator and we can differentiate both sides of an equation; so, operating on both sides of equation 1 by A, we get A X j = summation of C i A X i for i = 1 to r.
Now open up this summation: A X j = C 1 A X 1 + C 2 A X 2 + C 3 A X 3 + ... + C r A X r, each C i, being a scalar multiplier, having been pulled in front of A. And we know that A X i is nothing but lambda i X i; that is the standard Eigenvalue problem, for the Eigenvalue lambda i the corresponding Eigenvector is X i.
So, replacing each A X i by lambda i X i, and also writing A X j as lambda j X j on the left, we get lambda j X j = C 1 lambda 1 X 1 + C 2 lambda 2 X 2 + C 3 lambda 3 X 3 + ... + C r lambda r X r.
Next, we multiply equation number 1 by lambda j and see what we get: lambda j X j = C 1 lambda j X 1 + C 2 lambda j X 2 + C 3 lambda j X 3 + ... + C r lambda j X r. Then we subtract this equation from the previous one and see what we get.
If we do the subtraction, the left hand side is 0, and on the right hand side we get C 1 (lambda 1 minus lambda j) X 1 + C 2 (lambda 2 minus lambda j) X 2 + C 3 (lambda 3 minus lambda j) X 3 + ... + C r (lambda r minus lambda j) X r.
Let us call this equation (a) and write it down once again for convenience: C 1 (lambda 1 minus lambda j) X 1 + C 2 (lambda 2 minus lambda j) X 2 + ... + C r (lambda r minus lambda j) X r = 0.
Now, this problem is algebraically simple, so lambda i is not equal to lambda j for any i: lambda 1 minus lambda j is not equal to 0, lambda 2 minus lambda j is not equal to 0, and similarly lambda r minus lambda j is not equal to 0. Moreover, X 1 through X r are independent, so a linear combination of them can vanish only if every coefficient vanishes. Therefore, in order to satisfy equation (a), we must have C 1 = 0, C 2 = 0, C 3 = 0, up to C r = 0; all of them must individually be equal to 0.
So, each C i is equal to 0. But remember our earlier assumption: X 1 to X r form an independent set of vectors, we took X j, any Eigenvector from the dependent set with j lying between r + 1 and n, and we expressed it as X j = summation of C i X i with not all C i equal to 0. What we have just proved goes contrary to that assumption: each C i must individually be equal to 0.
So, we have proved that every C i must be equal to 0; this contradicts our assumption, so this is a proof by contradiction. What is the implication? The interpretation is that the remaining Eigenvectors X j, the supposed dependent set with j between r + 1 and n, are not a dependent set of vectors; they are also an independent set of vectors.
Therefore, all the Eigenvectors are independent, and they constitute the members of a basis set; they form a basis set, and this completes the proof. So, we have seen that all the Eigenvectors of a matrix A with simple Eigenvalues are members of a basis set, and any other vector in the space can be represented in terms of these basis vectors.
So, the Eigenvectors of such a matrix are always independent; each of them is independent, and they are members of the basis set. Any other vector in the space can be represented as a linear combination of these Eigenvectors. Therefore, as we said a few minutes back regarding the physical interpretation, the Eigenvectors are nothing but directions which are mutually independent of each other, and any other vector in the space can be represented as a linear combination of these independent vectors.
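A minimal numerical check of this theorem (assuming numpy; the matrix is an arbitrary example with distinct Eigenvalues): the matrix of Eigenvectors has full rank, so the Eigenvectors are independent.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])     # example with distinct Eigenvalues 2, 3, 5

eigvals, V = np.linalg.eig(A)

# Distinct (simple) Eigenvalues => the Eigenvectors are independent,
# i.e. the matrix whose columns are the Eigenvectors has full rank.
assert len(set(np.round(eigvals, 10))) == len(eigvals)   # all distinct
assert np.linalg.matrix_rank(V) == A.shape[0]
print("Eigenvectors form an independent (basis) set")
```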
Next, we go to the fourth theorem, theorem number 4. This says: if the Eigenvectors X i and X j correspond to two distinct Eigenvalues lambda i and lambda j of a real symmetric matrix, then X i and X j are orthogonal to each other. A symmetric matrix means A = A transpose. So, what is the proposition in this theorem? It is proposed that if we have a real symmetric matrix, that means A = A transpose, and two Eigenvectors X i and X j correspond to two distinct Eigenvalues lambda i and lambda j, then X i and X j form an orthogonal set; that means they are orthogonal to each other.
So, we prove this theorem, and the proof goes like this. Let lambda i and lambda j be two distinct Eigenvalues with corresponding Eigenvectors X i and X j, and let us write down the associated Eigenvalue problems linked with the Eigenvalues lambda i and lambda j.
The associated Eigenvalue problems are A X i = lambda i X i, equation number 1, and A X j = lambda j X j, equation number 2. Then what we do is take the inner product of equation 1 with X j.
Taking the inner product of equation 1 with X j, we get the inner product of X j with A X i equal to the inner product of X j with lambda i X i; this is equation number 3. (Recall from the last class the identity we will need shortly, that the inner product of X and Y can be written as X transpose Y.)
Then we take the inner product of equation 2 with X i on the right, and let us see what we get. From these two we have: the inner product of X j with A X i equals the inner product of X j with lambda i X i; and, from the second equation, the inner product of A X j with X i equals the inner product of lambda j X j with X i.
Now, we subtract these two equations. We invoke the property of the inner product that the inner product of X with lambda Y is nothing but lambda times the inner product of X and Y, lambda being a scalar; so we take lambda i out of the first right hand side and lambda j out of the second. We have also proved earlier that the inner product of X j and X i is identical to the inner product of X i and X j, so the two right hand sides involve the same inner product, which can be taken as common. The subtraction therefore gives: inner product of X j with A X i, minus inner product of A X j with X i, equals (lambda i minus lambda j) times the inner product of X i and X j.
Then we utilize the formula derived earlier, probably in the last lecture, that the inner product of X and Y is equal to X transpose Y. Using that, we get X j transpose A X i minus (A X j) transpose X i equal to (lambda i minus lambda j) times the inner product of X i and X j.
Now open up this transpose: (A X j) transpose is X j transpose A transpose, by the formula that (AB) transpose is B transpose A transpose, so X j transpose comes in front. So we have X j transpose A X i minus X j transpose A transpose X i equal to (lambda i minus lambda j) times the inner product of X i and X j.
Now, if you look at the left hand side, it contains identical quantities, one with a negative sign, so these two vanish; we will clarify why in a moment. What we get from here is that 0 = (lambda i minus lambda j) times the inner product of X i and X j.
Now, the Eigenvalues being simple in nature, that means distinct, lambda i is not equal to lambda j; they are not repeated roots, they are distinct roots. So, since lambda i minus lambda j is not equal to 0, the only option left to satisfy this equation is that the inner product of X i and X j must be equal to 0. That simply means the Eigenvectors X i and X j are orthogonal.
One more clarification about why we set the left hand side equal to 0: if you look into this equation, the left hand side is X j transpose A X i minus X j transpose A transpose X i, and since A is a symmetric matrix, A = A transpose, these two quantities become identical and subtract to zero. So the whole left hand side becomes 0 and you get this equation; since lambda i and lambda j are distinct, their difference is not equal to 0, and so, in order to satisfy the equation, the inner product of X i and X j must be 0; they are orthogonal to each other.
So, if we have a three-dimensional system with three distinct Eigenvalues lambda 1, lambda 2 and lambda 3, corresponding to these three distinct Eigenvalues we will be having three distinct Eigenvectors X 1, X 2, X 3. This simply means that for a symmetric square matrix (Eigenvalue problems are defined on square matrices) the inner product of X 1 and X 2 is equal to 0, and so are the inner product of X 1 and X 3 and the inner product of X 2 and X 3.
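A minimal numerical check (assuming numpy; the matrix is an arbitrary real symmetric example): the pairwise inner products of the Eigenvectors vanish.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])     # real symmetric example matrix

# eigh is the symmetric eigensolver; columns of V are the Eigenvectors.
eigvals, V = np.linalg.eigh(A)

# Pairwise inner products vanish: <X1,X2> = <X1,X3> = <X2,X3> = 0,
# i.e. V^T V is the identity (eigh also normalizes each column).
assert np.allclose(V.T @ V, np.eye(3))
print("Eigenvectors of the symmetric matrix are mutually orthogonal")
```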
This simply says that the Eigenvectors are orthogonal to each other if we have a symmetric matrix. The converse is not automatic: if the matrix is not symmetric, we cannot say the Eigenvectors are orthogonal to each other; that may not be the case. But in that case, the Eigenvectors of the matrix A and the Eigenvectors of the matrix A transpose together form an orthogonal set.
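Although that proof is deferred to the next class, a quick numerical illustration is possible (a sketch assuming numpy; the matrix is an arbitrary non-symmetric example) of this orthogonality between the Eigenvectors of A and those of A transpose:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # non-symmetric example matrix

lam, V = np.linalg.eig(A)           # Eigenvectors of A (columns of V)
eta, W = np.linalg.eig(A.T)         # Eigenvectors of A^T (columns of W)

# An Eigenvector of A and an Eigenvector of A^T belonging to *different*
# Eigenvalues are mutually orthogonal.
for i in range(2):
    for j in range(2):
        if not np.isclose(lam[i], eta[j]):
            assert np.isclose(W[:, j] @ V[:, i], 0.0)
print("Eigenvectors of A and A^T form an orthogonal set")
```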
That we will prove in the next class, and after that proof we will be able to take up chemical engineering problems and solve them by this Eigenvalue-Eigenvector method.
We stop this class here and will take up from this point onwards in the next class. Thank you very much.
