Good morning everyone. So, we are looking into matrices, determinants, eigenvalue problems and their applications to various chemical engineering problems. In the last class we looked into the standard eigenvalue problem and the various properties that the eigenvalues and eigenvectors of a square matrix satisfy, and we derived several theorems for them. The last theorem, as I remember, proved that for a symmetric matrix the eigenvectors are orthogonal to each other if the eigenvalues are simple; that means, all the eigenvalues are distinct.
Now, in this class we will look into one more theorem that will be quite useful as we go along through the various classes of this course. This is the last theorem in the series of theorems that an eigenvalue problem should satisfy, and after this we will be looking into some of the typical major applications of eigenvalue problems in chemical engineering.
Now, this theorem states that the eigenvectors of a matrix and those of its transpose are mutually orthogonal; they form a bi-orthogonal set. We have already seen earlier that the eigenvalues of a matrix and its transpose are identical, but their eigenvectors may not be identical. So, in this theorem we will be proving that the eigenvectors of A and A transpose are mutually orthogonal to each other and form a bi-orthogonal set.
So, the last theorem that we are going to prove, which deals with properties of the eigenvectors, is that the eigenvectors of a matrix A and its transpose A transpose form a bi-orthogonal set. So, we formulate the corresponding eigenvalue problems. The first problem is A X i equal to lambda i X i, that is number one. The next one is A transpose Y j equal to lambda j Y j, number two. The first is the eigenvalue problem for the matrix A and the second is the eigenvalue problem for the matrix A transpose, where the X i are the eigenvectors of matrix A and the Y j are the eigenvectors of matrix A transpose.
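Written out compactly, as a restatement of the two problems just stated in LaTeX notation:

```latex
A X_i = \lambda_i X_i \qquad (1)
A^{T} Y_j = \lambda_j Y_j \qquad (2)
```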
So, next what do we do? We take the left inner product of equation one with respect to Y j and see what we get. Taking the left inner product of equation one with Y j, what we get is that the inner product of Y j and A X i is nothing but the inner product of Y j and lambda i X i.
So, if you remember the property of the inner product that we have already proved earlier, the inner product of Y j and X i is nothing but the multiplication of Y j transpose and X i. Utilizing this, we simplify the left hand side of this equation: it becomes Y j transpose multiplied by A X i, and this equals the inner product of Y j and lambda i X i. Now, we have already looked into the property of the inner product: lambda i, being an eigenvalue, is a scalar, so this scalar comes out of the inner product sign, and the right hand side becomes lambda i times the inner product of Y j and X i. So, this is equation number three.
So, next we take the inner product of equation number two, the eigenvalue problem of A transpose, with respect to X i. That is, we take the right inner product of equation two with X i. If you do that, what you get is that the inner product of A transpose Y j and X i equals the inner product of lambda j Y j and X i.
Now, using the same rule, the theorem that we have already proved earlier, the left hand side is nothing but the multiplication of the transpose of A transpose Y j and X i. If you write it that way, you get the transpose of A transpose Y j, times X i; and again, lambda j being a scalar, it comes out of the inner product, giving lambda j times the inner product of Y j and X i. Now, we invoke the property of matrix operations that A B transpose is nothing but B transpose A transpose. Invoking this property and utilizing it over here, what we get is Y j transpose times the transpose of A transpose, times X i, equal to lambda j times the inner product of Y j and X i. And the transpose of a transpose is the matrix itself, so the left hand side becomes Y j transpose A X i, and we have Y j transpose A X i equal to lambda j times the inner product of Y j and X i.
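Collecting the two manipulations just described, as a restatement in LaTeX notation with the inner product written as a matrix product:

```latex
\langle Y_j, A X_i \rangle = Y_j^{T} A X_i = \lambda_i \,\langle Y_j, X_i \rangle \qquad (3)
\langle A^{T} Y_j, X_i \rangle = (A^{T} Y_j)^{T} X_i = Y_j^{T} A X_i = \lambda_j \,\langle Y_j, X_i \rangle \qquad (4)
```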
So, if you look back into the earlier derivation: when we took the left inner product of equation number one with respect to Y j, we had the inner product of Y j and A X i equal to lambda i times the inner product of Y j and X i; invoking the property that the inner product of Y j and X i is Y j transpose times X i, the left hand side becomes Y j transpose A X i. So, that is equation number three, and the result just derived is equation number four.
Now, if you compare equation number three and equation number four, you will see that the left hand sides are identical. Therefore, comparing the two, from equations three and four we can say that lambda i times the inner product of Y j and X i equals lambda j times the inner product of Y j and X i. And we have already seen that the inner product of X and Y is identical to the inner product of Y and X, so the inner product appearing on both sides is the same quantity.
So, you will be getting lambda i minus lambda j, times the inner product of Y j and X i, equal to 0. Now, lambda i is not equal to lambda j; therefore, to satisfy this equation, the only option we have is that the inner product of Y j and X i is equal to 0. That means the eigenvectors of A transpose and the eigenvectors of A are orthogonal to each other. So, Y j and X i are orthogonal to each other provided i is not equal to j; this is satisfied whenever i is not equal to j. This set, the eigenvectors of A together with the eigenvectors of A transpose, is said to form a bi-orthogonal set.
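In one line, as a restatement of the conclusion:

```latex
(\lambda_i - \lambda_j)\,\langle Y_j, X_i \rangle = 0
\;\Rightarrow\;
\langle Y_j, X_i \rangle = 0 \quad \text{for } i \neq j \ (\lambda_i \neq \lambda_j).
```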
So, if we talk about a matrix in R three space, it will be a three by three matrix: A has size three by three, and it will have the eigenvectors X 1, X 2 and X 3. Similarly, A transpose will have the eigenvectors Y 1, Y 2 and Y 3. Then by using this theorem one can say that the inner products of X 1 and Y 2, of X 1 and Y 3, of X 2 and Y 1, of X 2 and Y 3, of X 3 and Y 1, and of X 3 and Y 2 are all equal to zero. That means, whenever i is not equal to j the eigenvectors are orthogonal to each other, and together they form the bi-orthogonal set.
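A minimal numerical check of this (my own illustration, not from the lecture, assuming numpy): for a 3 by 3 matrix with distinct real eigenvalues, the eigenvectors of A and of A transpose should satisfy Y_j^T X_i = 0 for i not equal to j.

```python
import numpy as np

# Illustrative matrix with distinct real eigenvalues 2, 3, 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

lamA, X = np.linalg.eig(A)    # columns X[:, i]: eigenvectors X_i of A
lamT, Y = np.linalg.eig(A.T)  # columns Y[:, j]: eigenvectors Y_j of A^T

# eig may list the (identical) eigenvalues of A and A^T in different
# orders, so sort both sets of eigenvectors by eigenvalue before pairing.
X = X[:, np.argsort(lamA)]
Y = Y[:, np.argsort(lamT)]

G = Y.T @ X                   # G[j, i] = Y_j^T X_i
print(np.round(G, 12))        # off-diagonal entries come out as zero
```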
So, that completes the proof that the eigenvectors of A and A transpose form a bi-orthogonal set and are orthogonal to each other. That also completes the various theorems of the standard eigenvalue problem, the rules that the eigenvalues and eigenvectors will obey. Next, we will be looking into several chemical engineering applications of these eigenvalue and eigenvector problems.
So, let us look into the applications in chemical engineering problems. The first application I will be talking about is the solution of a non-homogeneous set of algebraic equations. Whenever we write the mathematical model for a chemical engineering system, typically for a steady state problem you will land up with a set of algebraic equations. In the case of transient problems, you will land up with a set of ordinary differential equations.
Now, both of these sets of equations, either algebraic equations or ordinary differential equations, are typical of chemical engineering processes, depending on whether the state is steady or unsteady. Using these eigenvalue problems, one can elegantly solve such sets of equations. So, we will be using this eigenvalue method as a tool to solve various chemical engineering problems.
Now, we will be looking into the solution of a set of algebraic equations. Algebraic equations typically correspond to steady state problems, a steady state situation. So, we need to solve a set of equations, and in compact matrix notation this set of equations is written in the form A X equal to b, equation one. The term b on the right hand side is the source of non-homogeneity, and therefore these equations are called non-homogeneous algebraic equations. Now, the vector X belongs to n-dimensional real space; we formulate the whole problem in a generalized fashion, so that it reduces to the solution of any finite-dimensional problem, maybe a three-dimensional problem, maybe a ten-dimensional problem. Similarly, the vector b belongs to n-dimensional real space, and the matrix A has size n by n.
Now, in this equation one, both A and b are known, because whenever you are modeling the chemical engineering process, what you are basically doing is writing mathematical expressions to express the process. That is called the modeling; the solution of those expressions is the simulation.
Now, the various elements of the matrix A will be determined from the process, and we will be looking into a specific example of this. Similarly, from the model equations, the various elements of the vector b will be known. So, A and b are known; they are fixed by the chemical engineering process. On the other hand, the solution vector X is not known.
So, therefore, our aim is to obtain this vector X. Now, what is the method that we are going to adopt? We consider the matrix A and the set of vectors X i, which constitute X 1, X 2, X 3 up to X n, the eigenvectors of A. So, the set of vectors X 1 to X n are the eigenvectors of A. And of course, we have already proved that the eigenvectors of a matrix are independent of each other and they form a basis set.
So, as we have already proved, the eigenvectors of a matrix A are always independent; they form an independent set of vectors and therefore a basis set. That means any other vector in the space can be written as a linear combination of these independent eigenvectors. Therefore, the vector b in equation number one can be expressed as a linear combination of the eigenvectors of the matrix A.
So, let us express the vector b as a linear combination of the eigenvectors: b equals the summation over i from 1 to n of beta i X i, equation two. Since the X i form a basis set, an independent set of vectors, the vector b can be expressed as such a linear combination.
So, the coefficients beta i are not known to us; we are going to determine them. Now, let us consider another set of vectors: Y 1, Y 2, Y 3 up to Y n, the eigenvectors of A transpose. Since these are the eigenvectors of the matrix A transpose, they also form a basis set.
Now, next what we do is take the inner product of equation two with respect to any eigenvector of A transpose, let us say with respect to Y 1. So, what we get is that the inner product of Y 1 and b equals the summation: if you open up the summation term-wise, it is the inner product of each term with Y 1. We can put the summation outside, and this is simply the sum over i from 1 to n of the inner product of beta i X i and Y 1. And beta i, being a scalar, is taken out of the inner product sign by the property of the inner product, so each term is simply beta i times the inner product of X i and Y 1.
Now, we open up this summation. If you open it up, it becomes beta 1 times the inner product of X 1 and Y 1, plus beta 2 times the inner product of X 2 and Y 1, plus beta 3 times the inner product of X 3 and Y 1, and so on, and finally beta n times the inner product of X n and Y 1. And we have already seen, in the first theorem we proved in this class, that for the matrix A and A transpose the corresponding eigenvectors form a bi-orthogonal set. Therefore, the inner product of X i and Y j is always equal to 0 for i not equal to j, where the X i are the eigenvectors of A and the Y j are the eigenvectors of A transpose. So all these terms, the inner products of X 2 and Y 1, of X 3 and Y 1, up to X n and Y 1, are equal to 0; only one term survives out of this summation series, and that is beta 1 times the inner product of X 1 and Y 1. So, you are able to calculate the value of beta 1: it is simply the inner product of Y 1 and b divided by the inner product of X 1 and Y 1. You can write it in more compact notation as Y 1 transpose b over X 1 transpose Y 1; it is matrix multiplication.
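In symbols, the step just described, as a restatement in LaTeX notation:

```latex
\langle Y_1, b \rangle = \sum_{i=1}^{n} \beta_i \,\langle X_i, Y_1 \rangle
= \beta_1 \,\langle X_1, Y_1 \rangle
\;\Rightarrow\;
\beta_1 = \frac{Y_1^{T} b}{X_1^{T} Y_1}.
```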
So therefore, this whole thing can be calculated as beta 1, and you can generalize it: putting the index i, the form of beta i is Y i transpose b over X i transpose Y i. And you can always argue that the inner product of X i and Y i is equal to the inner product of Y i and X i, so X i transpose Y i is identical to Y i transpose X i. Therefore, since the numerator is already in the form Y i transpose, we can write it in a more generalized fashion as beta i equal to Y i transpose b divided by Y i transpose X i. Why have we written it in this form? Because the transpose appears on Y i in the numerator, so I would like to keep Y i transpose in the denominator as well.
Why will I be doing that? I will just comment in a minute. Whenever you have a large-dimensional system, for example a 10 by 10 system or maybe a 20 by 20 system, it is a very complicated system; you will be tackling so many equations simultaneously that you cannot calculate all these things by calculator or by hand. You have to take recourse to numerical techniques. And if you put it in the other form, the numerical operations will be more. For the numerical calculations, what you will basically be doing is taking recourse to simple operations in order to compute beta i. To compute beta i, what operations do you have to perform? Once you have a vector, you have to get the transpose of the vector. How is the transpose of a vector done? It is a very simple operation: the elements in the rows are assigned to the columns.
So therefore, by changing the indices one can get the transpose vector; going from Y i to Y i transpose is basically a one-line program. Next is the multiplication of the two vectors: one is 1 by n and the other is n by 1, so you have a matrix multiplication. And this matrix multiplication, if you remember, is basically multiplying the corresponding terms and adding them up; that gives the value of the product.
So, the basic operations are multiplication first and addition second: you multiply the corresponding elements and add them up. Therefore, once you have this operation from Y i to Y i transpose, written as one or two lines of a simple subroutine, you need not compute X i transpose. What is the advantage you are going to get by writing it in this form? The advantage is that once you have a program for getting the elements of Y i transpose, you need not compute the elements of X i transpose; the same Y i transpose appears in both the numerator and the denominator. So, you avoid one more subroutine which would calculate X i transpose.
So, that is not required; that is why we have written this equation keeping the denominator in the form of Y i transpose, so that the same transpose routine can be used: since all the elements of Y i transpose have already been computed, they can be used directly here, and we have reduced one step of computation. So, b is a known vector, the X i are known because they are the eigenvectors of the matrix A, and the Y i are known because they are the eigenvectors of A transpose. So, we are able to compute the values of beta i; the beta i are calculated from this equation.
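A minimal sketch of this computation (my own illustration, assuming numpy; the function name is hypothetical). In numpy a 1-D array works as either a row or a column in a dot product, so the "transpose subroutine" reduces to reusing the same array on both sides of the formula:

```python
import numpy as np

def expansion_coefficients(X, Y, b):
    """beta_i = (Y_i^T b) / (Y_i^T X_i).
    X[:, i]: eigenvectors of A; Y[:, i]: eigenvectors of A^T,
    paired by eigenvalue; b: right hand side vector."""
    n = X.shape[1]
    beta = np.empty(n)
    for i in range(n):
        # multiply corresponding elements and add them up, twice
        beta[i] = (Y[:, i] @ b) / (Y[:, i] @ X[:, i])
    return beta
```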
Now, if you remember, in our original problem our aim is to calculate X in equation number one. Our aim was not to calculate beta i; our aim was to calculate X, the solution vector in equation one.
Now, since this vector also belongs to n-dimensional space, it too can be written as a linear combination of the eigenvectors of the matrix A, because they form a basis set of vectors. So therefore, X can be expressed as a linear combination of the basis set vectors, which are nothing but the eigenvectors of matrix A: X equals the summation over i from 1 to n of alpha i X i. So, this is one more equation, and then what do you do? Since the matrix is an operator, we operate on this equation by A. What we get is that A X is the summation of A alpha i X i; alpha i, being a scalar, passes through the operation, so it is the summation of alpha i A X i. And if you remember what these X i are: they are the eigenvectors of the matrix A, so they satisfy the equation A X i equal to lambda i X i.
So, the A X i always satisfy the corresponding eigenvalue problem of the matrix A, where the lambda i are the eigenvalues and the X i are the eigenvectors. Therefore, we can write this equation as A X equal to the summation of alpha i lambda i X i; this is equation number three. Now, if you remember the original problem, it was the set of non-homogeneous equations A X equal to b. So, we substitute A X by this summation of alpha i lambda i X i; and b, if you remember from equation number two, we have written as a linear combination of the eigenvectors X i, the summation of beta i X i, where the index i runs from 1 to n. Taking everything to one side, the summation of alpha i lambda i minus beta i, multiplied by the vector X i, is equal to 0, where the index i runs from 1 to n.
Now, since the X i form an independent set of vectors, they are members of a basis set and all independent. Therefore, for this equation to be 0, all the coefficients have to be identically equal to 0, and we have already proved this earlier. So, to satisfy equation number four, the corresponding coefficients must be equal to 0: alpha i lambda i minus beta i should be equal to 0.
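In symbols, as a restatement of the argument:

```latex
\sum_{i=1}^{n} (\alpha_i \lambda_i - \beta_i)\, X_i = 0,
\quad X_i \ \text{independent}
\;\Rightarrow\;
\alpha_i \lambda_i - \beta_i = 0, \quad i = 1, \ldots, n.
```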
So, therefore, the condition we get is that alpha i is nothing but beta i divided by lambda i. And what is beta i? We have already shown that it is Y i transpose b over Y i transpose X i, so alpha i equals Y i transpose b divided by lambda i times Y i transpose X i. This is the expression for alpha i. And if you remember what these alphas are there for: the solution vector is formed as the summation of alpha i X i. So, this is the solution vector that we are looking for. The X i are the eigenvectors of A, so these are known; alpha i is obtained from this equation, where b is a known vector and the Y i are the eigenvectors of A transpose, so again known vectors, and hence the numerator can be computed. The lambda i are the corresponding eigenvalues of A, and the eigenvalues of A and A transpose are identical, as we have already proved; so these are also known. The numerator is known, and the X i in the denominator, the eigenvectors of A, are also known. So, alpha i can be computed. Once the alpha i are known, since the X i are known, you can get the solution vector X.
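A minimal end-to-end sketch of the method (my own illustration, assuming numpy; the function name is hypothetical). It assumes distinct, nonzero, real eigenvalues: alpha_i = beta_i / lambda_i requires lambda_i not equal to 0, that is, A nonsingular.

```python
import numpy as np

def eigen_solve(A, b):
    """Solve A X = b by expanding b and X in the eigenvectors of A,
    using the bi-orthogonality of the eigenvectors of A and A^T."""
    lam, X = np.linalg.eig(A)      # eigenpairs of A
    lamT, Y = np.linalg.eig(A.T)   # eigenvectors of A^T
    X = X[:, np.argsort(lam)]      # pair X_i with Y_i by eigenvalue
    Y = Y[:, np.argsort(lamT)]
    lam = np.sort(lam)
    beta = np.array([Y[:, i] @ b / (Y[:, i] @ X[:, i])
                     for i in range(len(b))])
    alpha = beta / lam             # alpha_i = beta_i / lambda_i
    return X @ alpha               # X_solution = sum_i alpha_i X_i

A = np.array([[2.0, 1.0, 0.0],     # illustrative matrix with
              [0.0, 3.0, 1.0],     # distinct eigenvalues 2, 3, 5
              [0.0, 0.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(eigen_solve(A, b))           # agrees with np.linalg.solve(A, b)
```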
So, therefore, this is an elegant way of getting the solution of a set of algebraic equations. Let us look into the different subroutines one has to write to solve this problem numerically. The subroutines are as follows.
First, you have to have a subroutine to compute the eigenvalues of a matrix. To get the eigenvalues, you will form the characteristic equation, and from the characteristic equation, using for example Horner's method, one can find the roots of the polynomial; so basically you need a root-finding subroutine to get the roots or zeros of a polynomial. Next, corresponding to these eigenvalues, you have to have a subroutine for the eigenvectors; there are several methods, Givens' method being one of them, to compute the eigenvectors. These two are the major pieces. Then you have to write a program that obtains the transpose matrix from a matrix, a program that converts A to A transpose; that is again a small program, maybe a couple of lines. Then a program for matrix multiplication.
Next, you might think you need a program for addition, because whenever you multiply, you have to multiply the corresponding elements and add them up; but that is included in the multiplication itself. The matrix multiplication subroutine includes multiplying the corresponding elements and then adding them up, so the addition operation is contained in the matrix multiplication subroutine. Therefore, you require only these four subroutines to compute the solution vector X from a set of algebraic equations, as sketched below.
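As a sketch of how small the last two routines are (my own illustration; in practice numpy's A.T and @ do the same job), the transpose is an index swap and the multiplication is a multiply-and-accumulate loop:

```python
def transpose(A):
    # index swap: row elements are assigned to the columns
    n, m = len(A), len(A[0])
    return [[A[i][j] for i in range(n)] for j in range(m)]

def matvec(A, x):
    # multiply corresponding elements and add them up, row by row
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]
```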
Now, that completes the method of computing the solution of a set of non-homogeneous algebraic equations using the eigenvalue problem, using whatever we have discussed in the last few classes.
Now, if you remember how else you would solve a set of non-homogeneous algebraic equations: by the Gauss-Seidel algorithm or by the Gauss elimination method, one can compute the solution either analytically or numerically. If it is a large-dimensional problem, it is easier to go for numerical techniques. Now, in implementing such algorithms, the major problem or the major barrier one will face is getting the inverse of the matrix.
So, that becomes very problematic if the matrix is not well behaved or well posed: the matrix may be close to a singular matrix, with determinant tending to 0, and then matrix inversion becomes a very big problem, a big numerical challenge for solving such sets of algebraic equations. But if you adopt this method, one can do away with the matrix inversion: just using the algorithms for evaluation of eigenvalues and eigenvectors, together with simple matrix multiplication and the transpose of the matrix, will give you the solution vector. One can avoid the matrix inversion entirely in this particular method, and you get an elegant solution by using the eigenvalue problem for the solution of a set of linear algebraic equations.
Now, I will take up a couple of examples to demonstrate this method. The first example I will be talking about is the solution of the problem A X equal to b. I will be taking a very simple, two-dimensional problem for demonstration purposes; for a higher-dimensional problem, one has to go for the numerical methods.
Now, all these subroutines that I was just talking about for solving this set of equations, the eigenvalues, the eigenvectors, conversion from A to A transpose, multiplication of the matrices: you need not write your own code for them. These subroutines are available in Numerical Recipes, either in Fortran or in C++, or one can use MATLAB to connect all these routines together and get the complete solution numerically. So, you need not write your own code; you can invoke these subroutines from libraries or MATLAB and join them up to get a complete solution.
So, for demonstration purposes, I will be taking up this example: A X equal to b, where the elements of A are minus 2 and 2, 1 and minus 3; the solution vector X is composed of two elements X 1 and X 2, and b is composed of minus 1 and 0. So, basically, the chemical engineering system is described by minus 2 X 1 plus 2 X 2 equal to minus 1, and X 1 minus 3 X 2 equal to 0. The chemical engineering process has two unknowns, and these two unknowns require two equations to solve them uniquely. So, these are the model equations representing the chemical engineering problem; this chemical engineering system is presumably operating under steady state, so we model the system by writing these two equations. These two equations can be written in the compact matrix form A X equal to b in this fashion.
Now, our aim is to find out the solution vector, the elements X 1 and X 2. In this case, since A is a 2 by 2 matrix, there will be only two elements in the solution vector, X 1 and X 2. So, our aim is to obtain X 1 and X 2. The first step is the evaluation of the eigenvalues of matrix A. How to evaluate them? You can evaluate them by setting the determinant of A minus lambda I equal to 0. That is the determinant with entries minus 2 minus lambda, 2, 1, minus 3 minus lambda, set equal to 0. If you open up this determinant, this times this minus this times this, it is 2 plus lambda times 3 plus lambda, with minus times minus giving plus, minus 2, equal to 0. So, it is a quadratic: 6 plus 3 lambda plus 2 lambda plus lambda square minus 2 equal to 0, and you will be getting lambda square plus 5 lambda plus 4 equal to 0. This is a quadratic equation; you can get the roots by factoring it as lambda plus 4 times lambda plus 1 equal to 0. So, the two eigenvalues are obtained as minus 4 and minus 1 in this case.
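A quick numerical check of the hand calculation (my own, assuming numpy): the roots of lambda square plus 5 lambda plus 4 and the eigenvalues of A should agree.

```python
import numpy as np

A = np.array([[-2.0, 2.0], [1.0, -3.0]])
print(np.roots([1.0, 5.0, 4.0]))   # [-4., -1.]
print(np.linalg.eigvals(A))        # [-4., -1.] (possibly reordered)
```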
So, the eigenvalues are lambda 1 equal to minus 4 and lambda 2 equal to minus 1. Now, we get the eigenvectors. Corresponding to lambda 1 you have the eigenvector X 1, so A X 1 is nothing but lambda 1 X 1; that is, A minus lambda 1 I, times X 1, should be equal to 0. Writing the elements of this matrix: minus 2 minus lambda 1, 2, 1, minus 3 minus lambda 1, multiplying the components of the eigenvector, say x 1 and x 2, equal to 0.
Now, if you put lambda 1 equal to minus 4 and write out these elements, the matrix becomes 2, 2, 1, 1, times x 1, x 2, equal to 0. Therefore, we constitute the corresponding two equations: 2 x 1 plus 2 x 2 equal to 0, and x 1 plus x 2 equal to 0. They give basically the identical solution, so we have to just assume one value: if x 1 equals 1, then x 2 equals minus 1. So, 1, minus 1 is an eigenvector corresponding to the eigenvalue lambda 1 equal to minus 4.
So therefore, X 1 is nothing but 1, minus 1; that is the solution of this set of equations. Written down more explicitly: X 1 equal to 1, minus 1 is the eigenvector corresponding to the eigenvalue lambda 1 equal to minus 4.
Then, we compute X 2, the eigenvector corresponding to the next eigenvalue, lambda 2 equal to minus 1. Again we formulate the eigenvalue problem A minus lambda 2 I, times X 2, equal to 0: the matrix is minus 2 minus lambda 2, 2, 1, minus 3 minus lambda 2, times the components, say x 1 and x 2, equal to 0. Putting the value of lambda 2 as minus 1, you will be getting minus x 1 plus 2 x 2 equal to 0, and x 1 minus 2 x 2 equal to 0.
So, again they are identical. If x 2 equals 1, then x 1 equals 2. So therefore, 2, 1 is the eigenvector corresponding to lambda 2 equal to minus 1: for the eigenvalue minus 1, the corresponding eigenvector is 2, 1. So, we have found out the eigenvalues and eigenvectors of A.
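A check of the two eigenvectors found by hand (my own, assuming numpy): A v should equal lambda times v for each pair.

```python
import numpy as np

A = np.array([[-2.0, 2.0], [1.0, -3.0]])
for lam, v in [(-4.0, np.array([1.0, -1.0])),
               (-1.0, np.array([2.0, 1.0]))]:
    print(A @ v, lam * v)   # the two printed vectors should match
```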
The next step is the evaluation of the eigenvectors of A transpose. Now, we have already seen that the eigenvalues are identical for a matrix A and its transpose A transpose, but the eigenvectors are different. Therefore, in the next class we will be looking into how the eigenvectors are evaluated for A transpose, and the rest of the solution follows. We will look closely into the solution step by step and complete the problem in the next class. We stop this class here and will start the next class after a few minutes. Thank you very much.
