In our last lecture, we defined the power method for calculating the eigenvector associated with the dominant eigenvalue. Then, if the matrix is invertible and the eigenvalue of smallest modulus is strictly separated, that is, if |lambda_n| is strictly less than the moduli of the remaining eigenvalues, we can apply the power method to the inverse and obtain an approximation to the eigenvector associated with that smallest eigenvalue.
Thus, we can approximate the eigenvector associated with the eigenvalue largest in modulus or the one smallest in modulus.
Now, what about some intermediate eigenvalue? For that, we need an approximation to, say, the l-th eigenvalue, and then we can extend our power method; the result is known as the inverse power method, which we are going to consider today. After defining the inverse power method, we are going to consider what is known as the QR decomposition of a matrix.
We have considered various decompositions of a matrix: the LU decomposition, then the Cholesky decomposition, and today we are going to consider the decomposition of A into Q times R, where Q is an orthogonal matrix and R is an upper triangular matrix. Once we have this QR decomposition, we can define the QR method for finding the eigenvalues of a matrix. Then we are going to consider the relation between the QR decomposition of a matrix and the Gram-Schmidt orthonormalization process applied to its column vectors.
So, this is roughly the plan of today's lecture.
Let us recall the power method. Our A is an n by n matrix; our assumption is that its eigenvectors u_1, u_2, ..., u_n form a basis for C^n. The eigenvalues lambda_j are arranged in decreasing order of modulus, and the assumption is |lambda_1| > |lambda_2| >= ... >= |lambda_n|.
Not all matrices have a basis of eigenvectors, but we have seen that if the matrix is normal, then there is a basis of eigenvectors; and if our matrix A has n distinct eigenvalues, then too it has a basis of eigenvectors. So, the restriction that the eigenvectors form a basis is not too severe; there is a big class of matrices for which it is satisfied.
Thus our assumptions are that A should have a basis of eigenvectors and a dominant eigenvalue; that means |lambda_1| > |lambda_2| >= |lambda_3| >= ... >= |lambda_n|. After the first inequality, the rest can be non-strict; but the eigenvalue biggest in modulus should be a simple eigenvalue, strictly bigger in modulus than all the remaining eigenvalues.
Then, we start with a non-zero vector z; this vector can be expressed as a linear combination of u_1, u_2, ..., u_n, because these form a basis. We assume that the component of our arbitrarily chosen vector z in the direction of u_1 is not 0; so, alpha_1 is not equal to 0.
With these assumptions, and taking lambda_1 > 0 (that means lambda_1 is real and positive), we form A^k z divided by ||A^k z||. Under these assumptions, A^k z / ||A^k z|| converges to alpha_1 u_1 / ||alpha_1 u_1||, which is a unit eigenvector associated with the eigenvalue lambda_1. It is easy to calculate these iterates: start with an arbitrary vector z not equal to 0 and form the iterates; if your matrix A is a sparse matrix, that means it has a lot of zeroes, then this will not be computationally too expensive, and the iteration converges to a unit eigenvector corresponding to the eigenvalue lambda_1. This is the power method.
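As a small illustration, here is a minimal sketch of the power method in Python with NumPy; the names A, z, max_iter, and tol are placeholders you would supply, and the stopping test is one simple choice among several.

```python
import numpy as np

def power_method(A, z, max_iter=1000, tol=1e-10):
    """Approximate a unit eigenvector for the dominant eigenvalue of A.

    Assumes, as in the lecture, that lambda_1 is simple, real and positive,
    and that z has a non-zero component along u_1.
    """
    z = z / np.linalg.norm(z)          # normalize the starting vector
    for _ in range(max_iter):
        w = A @ z                      # one application of A
        z_new = w / np.linalg.norm(w)  # this is A^k z / ||A^k z||
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z_new
```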
Now, we want to look at an extension of this method which will allow us to find an eigenvector associated with an intermediate eigenvalue. Again, our assumption is that A has n linearly independent eigenvectors, A u_j = lambda_j u_j for j = 1, ..., n, and mu is given as an approximation to lambda_l, some l-th eigenvalue in between; it need not be the largest or the smallest eigenvalue in modulus.
Since mu is an approximation to lambda_l, its distance from lambda_l is smaller than its distance from the remaining eigenvalues: |mu - lambda_l| < |mu - lambda_j| for j not equal to l. Now, the eigenvalues are roots of the characteristic polynomial, and for a polynomial we can find an interval in which a root lies. For example, we had considered the bisection method: you find two real numbers at which the polynomial has opposite signs; that interval contains a root of the polynomial, and you go on subdividing it, getting smaller and smaller intervals. In such a manner, or by some other method, we can obtain an approximation to the eigenvalue lambda_l.
The starting point of our inverse power method is this: you are given mu, an approximation to the l-th eigenvalue, and our aim is to find an approximation to an eigenvector corresponding to this eigenvalue lambda_l.
So, we have A u_j = lambda_j u_j for j = 1, ..., n, mu being an approximation to lambda_l, with |mu - lambda_l| < |mu - lambda_j| for j not equal to l. Now, A u_j = lambda_j u_j gives us (A - mu I) u_j = (lambda_j - mu) u_j. Assume that mu is not equal to any lambda_j; that means it is not an eigenvalue.
Then A - mu I is invertible, and from this relation we obtain (A - mu I)^(-1) u_j = (1 / (lambda_j - mu)) u_j: apply (A - mu I)^(-1) throughout, so that the left hand side becomes u_j = (lambda_j - mu) (A - mu I)^(-1) u_j, and take lambda_j - mu to the other side.
Now, look at this: we have |mu - lambda_l| < |mu - lambda_j| for j not equal to l. If I take reciprocals, 1/|mu - lambda_l| is strictly bigger than 1/|mu - lambda_j| for j not equal to l, which means that 1/(lambda_l - mu) is the dominant eigenvalue of the matrix (A - mu I)^(-1). The idea now is to apply our power method to this (A - mu I)^(-1); in the power method we need a dominant eigenvalue, and our (A - mu I)^(-1) has the dominant eigenvalue 1/(lambda_l - mu): its modulus is strictly bigger than 1/|lambda_j - mu| for every j not equal to l.
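Written out, the step we just carried out is:

```latex
(A-\mu I)u_j=(\lambda_j-\mu)u_j
\;\Longrightarrow\;
(A-\mu I)^{-1}u_j=\frac{1}{\lambda_j-\mu}\,u_j,\qquad j=1,\dots,n,
```

and since |mu - lambda_l| < |mu - lambda_j| for j not equal to l, taking reciprocals shows that 1/(lambda_l - mu) dominates the spectrum of (A - mu I)^(-1).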
Here are our iterates: choose z not equal to the 0 vector, an arbitrary vector; define z_0 = z / ||z||, and then z_k, the k-th iterate, will be (A - mu I)^(-1) z_(k-1) divided by its norm. It is exactly the power method applied to the matrix (A - mu I)^(-1). Then this z_k converges to, say, a vector w; this vector w will be an eigenvector of the matrix (A - mu I)^(-1) with 1/(lambda_l - mu) as the eigenvalue. Now, what we are interested in are the eigenvalues of the matrix A and the associated eigenvectors.
We have obtained an approximation to an eigenvector w of (A - mu I)^(-1). But this w is also going to be an eigenvector of A associated with the eigenvalue lambda_l: (A - mu I)^(-1) w = (1/(lambda_l - mu)) w means that (A - mu I) w = (lambda_l - mu) w; the mu w terms cancel and you are left with A w = lambda_l w. So, w is an eigenvector associated with lambda_l. Thus, the iterates of the inverse power method involve the matrix (A - mu I)^(-1); the limiting vector w is an eigenvector of A associated with the eigenvalue lambda_l.
If lambda_1 is the dominant eigenvalue, the power method gives an eigenvector associated with lambda_1; when you apply the power method to the matrix A^(-1), you obtain an approximation to an eigenvector associated with lambda_n. If mu, an approximation to the eigenvalue lambda_l, is available, then we obtain an approximation to an eigenvector associated with lambda_l, where lambda_l can be an intermediate eigenvalue. You do need the approximation mu to be available.
Here, there can be a problem, or at least we want to see whether there is going to be a problem. What is the problem? We are saying that mu is an approximation to lambda_l; lambda_l is an eigenvalue, so A - lambda_l I is not an invertible matrix. Now, as mu becomes a better and better approximation to the eigenvalue lambda_l, the matrix A - mu I comes closer and closer to a singular matrix and can be ill conditioned. Whether this is going to pose a problem: we will now see that in this particular case it does not matter.
In the inverse power method, we need to find z_k = (A - mu I)^(-1) z_(k-1) / ||(A - mu I)^(-1) z_(k-1)||. Now, we are not going to calculate (A - mu I)^(-1) explicitly. Instead, (A - mu I)^(-1) z_(k-1), which I denote by r_(k-1), will be obtained by solving the system of linear equations (A - mu I) r_(k-1) = z_(k-1). Here z_(k-1) is known, coming from the previous iteration step; so we need to calculate r_(k-1) and then divide it by its norm to get the next iterate.
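Putting the pieces together, here is a minimal sketch of the inverse power method along these lines, assuming NumPy and SciPy are available; exactly as described above, we solve (A - mu I) r = z at each step rather than forming the inverse, and the LU factorization of A - mu I is computed once and reused.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, mu, z, max_iter=500, tol=1e-10):
    """Approximate a unit eigenvector of A for the eigenvalue nearest mu."""
    n = A.shape[0]
    lu, piv = lu_factor(A - mu * np.eye(n))  # factor (A - mu I) once
    z = z / np.linalg.norm(z)
    for _ in range(max_iter):
        r = lu_solve((lu, piv), z)           # solve (A - mu I) r = z_{k-1}
        z_new = r / np.linalg.norm(r)        # normalize to get z_k
        # compare up to sign, since 1/(lambda_l - mu) may be negative
        if min(np.linalg.norm(z_new - z), np.linalg.norm(z_new + z)) < tol:
            break
        z = z_new
    return z_new
```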
What I was saying is: are we going to have some problem with stability? If A is an invertible matrix and lambda and mu are eigenvalues of A, then |lambda| <= ||A||; it follows from our basic inequality. Suppose you have A x = lambda x with x not equal to the 0 vector; then take norms of both sides: ||A x|| = ||lambda x|| = |lambda| ||x||, by the property of norms. Hence |lambda| = ||A x|| / ||x||, and this is less than or equal to ||A||, where ||A|| is the maximum of ||A z|| / ||z|| over z not equal to the 0 vector.
Thus we have |lambda| <= ||A||. If you consider A y = mu y, another eigenvalue mu with associated eigenvector y, and A is invertible, then A^(-1) y = (1/mu) y; hence, by the same inequality, 1/|mu| <= ||A^(-1)||. So the condition number ||A|| ||A^(-1)|| is bigger than or equal to |lambda| / |mu|, where lambda and mu are any eigenvalues of the matrix A. Taking the quotient of the two bounds gives this; and if your eigenvalues are arranged in descending order of modulus, then |lambda_1| / |lambda_n| <= ||A|| ||A^(-1)||.
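Collecting the bounds, the chain of inequalities is:

```latex
|\lambda|\le\|A\|,\qquad \frac{1}{|\mu|}\le\|A^{-1}\|
\quad\Longrightarrow\quad
\frac{|\lambda|}{|\mu|}\le\|A\|\,\|A^{-1}\|,
\qquad\text{in particular}\quad
\frac{|\lambda_1|}{|\lambda_n|}\le\|A\|\,\|A^{-1}\|.
```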
Let us go back to our (A - mu I)^(-1); mu is an approximation to lambda_l. So, ||(A - mu I)^(-1)|| is bigger than or equal to 1/|mu - lambda_l|. If mu is a good approximation to lambda_l, the denominator is small; that means 1/|mu - lambda_l| is big, and then your matrix A - mu I is going to be ill conditioned.
Now, recall the perturbation theory we have considered: if your matrix A is ill conditioned, that means ||A|| ||A^(-1)|| is big, then the solution is sensitive to perturbations in the right hand side. We had looked at A x = b; perturb b slightly and consider the nearby system A(x + delta x) = b + delta b. Then even though ||delta b|| / ||b|| is small, ||delta x|| / ||x|| can be big. In fact, we had proved the inequality ||delta x|| / ||x|| <= ||A|| ||A^(-1)|| ||delta b|| / ||b||.
The factor ||delta b|| / ||b|| can be small, but if you multiply it by a big number, then ||delta x|| / ||x||, which is the relative error in the computed solution, can be big; x is the exact solution, and because of finite precision, instead of b it is going to be b + delta b, so x + delta x will be the computed solution. The relative error in the computed solution, ||delta x|| / ||x||, can be big even though ||delta b|| / ||b|| is small.
Now, this situation is going to occur in our case. We are calculating z_k; the k-th iterate is (A - mu I)^(-1) z_(k-1), the earlier iterate, divided by its norm. So we need to calculate (A - mu I)^(-1) z_(k-1), which we denoted by r_(k-1); that means we need to solve (A - mu I) r_(k-1) = z_(k-1). This A - mu I will be ill conditioned, and hence our solution r_(k-1) will be sensitive to perturbations. In general, at each stage we will be solving (A - mu I) r = q. In practice, instead of this, you are going to solve (A - mu I) r-hat = q-hat, where q-hat is a nearby vector. Even though ||q - q-hat|| / ||q|| is small, because (A - mu I)^(-1) has a big norm, ||r - r-hat|| / ||r|| can be big. This is something we need to worry about, because the whole premise is that an approximation mu to lambda_l should be available.
The better the approximation, the simpler our task should become; we should not face the difficulty that as better and better approximations mu to lambda_l become available, the problem becomes more and more ill conditioned and the relative error in the computed solution becomes bigger and bigger. In this particular application, it does not matter, because we are not interested in the computed solution as such; we are interested in a direction. We are trying to find an eigenvector, and what is important is the eigendirection.
Let me make it more specific. Look at the system (A - mu I) r = q; instead of it, we are going to solve a nearby system. Our A has a basis of eigenvectors u_1, u_2, ..., u_n; hence our q can be written as the linear combination q = c_1 u_1 + c_2 u_2 + ... + c_n u_n. The nearby vector q-hat will also be a linear combination, c_1-hat u_1 + c_2-hat u_2 + ... + c_n-hat u_n. Now, r = (A - mu I)^(-1) q is c_1/(lambda_1 - mu) u_1 + ... + c_l/(lambda_l - mu) u_l + ... + c_n/(lambda_n - mu) u_n; for r-hat you are going to have c_1-hat/(lambda_1 - mu) u_1, and so on.
Since q and q-hat are near, the distances between c_1 and c_1-hat, c_2 and c_2-hat, and so on, are small. When you look at r - r-hat, look at the l-th component: here you have c_l/(lambda_l - mu) u_l, and there you have c_l-hat/(lambda_l - mu) u_l. So c_l - c_l-hat is small, but you are multiplying it by the big number 1/(lambda_l - mu). So our r is approximately equal to c_l/(lambda_l - mu) u_l, because that is the significant part, and r-hat will be approximately c_l-hat/(lambda_l - mu) u_l. Then ||r - r-hat|| is approximately |c_l - c_l-hat| / |lambda_l - mu|.
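Written out, the comparison is:

```latex
r=(A-\mu I)^{-1}q=\sum_{j=1}^{n}\frac{c_j}{\lambda_j-\mu}\,u_j
\approx\frac{c_l}{\lambda_l-\mu}\,u_l,
\qquad
\hat r\approx\frac{\hat c_l}{\lambda_l-\mu}\,u_l,
\qquad
\|r-\hat r\|\approx\frac{|c_l-\hat c_l|}{|\lambda_l-\mu|},
```

so both r/||r|| and r-hat/||r-hat|| point, up to sign, essentially along u_l, even though ||r - r-hat|| itself may be large.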
Now, this can be big, but we are not interested in what r-hat is; we are interested only in the eigenvector, that is, the direction. So even when you calculate r-hat, what you do is normalize it. The exact solution is r, and you consider r / ||r||, because when you apply the inverse power method you calculate (A - mu I)^(-1) z_(k-1) and divide by its norm; (A - mu I)^(-1) z_(k-1) is r_(k-1), and you divide it by its norm. Instead of r_(k-1) you are going to have some r-hat_(k-1); their values may differ, but for both r_(k-1) and r-hat_(k-1) the significant part is in the direction of u_l.
Since we are dividing by the norm, the numerical values do not really matter; for both the exact solution and the computed solution, the component in the direction of u_l dominates. That is why there is no contradiction in our method: a better and better approximation to lambda_l gives faster convergence to an eigenvector corresponding to the eigenvalue lambda_l.
So far we have been talking about approximating an eigenvector: in the power method, an eigenvector associated with lambda_1; applied to A^(-1), an eigenvector associated with lambda_n; and in the inverse power method, an eigenvector associated with the eigenvalue lambda_l.
Having obtained an approximation to an eigenvector, what about the eigenvalue? When we talked about exact eigenvalues and exact eigenvectors, we said that if I know an eigenvector, then finding the eigenvalue is easy: you just have to find the constant of proportionality. If v is an eigenvector of the matrix A, then the two vectors A v and v are proportional, and I find the constant of proportionality between the two. Here, our eigenvector is only approximate. So, what is the best eigenvalue approximation I can choose?
That means: suppose I give you an approximate eigenvector, and I want to know to which eigenvalue it corresponds. The best way to answer this is by considering the Rayleigh quotient, which has a minimization property that we now consider.
You have q, an approximate eigenvector; the question is what should be chosen as an approximate eigenvalue. The answer is: choose the Rayleigh quotient eta = q* A q / q* q.
Now, let me tell you the minimization property of the Rayleigh quotient. We consider eta = q* A q / q* q, q being our approximate eigenvector. If I take A q - eta q and form its inner product with q, this is nothing but q* (A q - eta q) = q* A q - eta q* q (eta is a complex number, so it comes out of the product), and by the definition of eta this is equal to zero.
That means A q - eta q is perpendicular to the vector q. Now, let z be any complex number and look at ||A q - z q||, its 2-norm; let me take its square. Adding and subtracting eta q, this equals ||(A q - eta q) + (eta - z) q||^2; here is one vector and there is another vector, and since eta and z are complex numbers, the second vector is (eta - z) q. Since A q - eta q is perpendicular to q, it is perpendicular to any multiple of q; so these two vectors are perpendicular, and we can use the Pythagoras theorem. So we get ||A q - z q||^2 = ||A q - eta q||^2 + ||(eta - z) q||^2 for any z belonging to C. What does this relation tell us? It tells us that ||A q - z q||_2 is bigger than or equal to ||A q - eta q||_2.
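The whole minimization argument in one display:

```latex
\eta=\frac{q^*Aq}{q^*q}
\;\Longrightarrow\;
q^*(Aq-\eta q)=0
\;\Longrightarrow\;
\|Aq-zq\|_2^2=\|Aq-\eta q\|_2^2+|\eta-z|^2\|q\|_2^2
\;\ge\;\|Aq-\eta q\|_2^2\quad\text{for all }z\in\mathbb{C}.
```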
Our vector q is an approximate eigenvector; we cannot hope to find a complex number lambda such that A q = lambda q. But what we are saying is: look at the Rayleigh quotient eta = q* A q / q* q; then ||A q - eta q||_2 <= ||A q - z q||_2 for any complex number z. The Rayleigh quotient eta minimizes ||(A - z I) q||_2 as z varies over the complex plane, and that is why, given q as an approximate eigenvector, the best you can do is choose the eigenvalue approximation to be the Rayleigh quotient q* A q / q* q.
If your q happens to be an exact eigenvector, say A q = lambda q, then q* A q = q* lambda q = lambda q* q, because lambda is a complex number. So eta has lambda times q* q in the numerator and q* q in the denominator, and you get eta = lambda. So, the Rayleigh quotient associated with an exact eigenvector is nothing but the eigenvalue itself; if q is not an eigenvector, then the Rayleigh quotient minimizes the 2-norm of the vector (A - z I) q.
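As a one-line sketch in Python: np.vdot conjugates its first argument, which matches the q* in the formula.

```python
import numpy as np

def rayleigh_quotient(A, q):
    """eta = q* A q / q* q, the best eigenvalue estimate for the vector q."""
    return np.vdot(q, A @ q) / np.vdot(q, q)
```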
Next, I want to consider the QR decomposition. The reason is that I want to describe the QR method for calculating the eigenvalues of a matrix A, or rather for finding approximations to them. The description of the QR method is easy; you will see that we can describe it quickly. What is difficult is to show its convergence and to explain why it works; unfortunately, those arguments are involved, so we are not going to do them in detail. But I want to tell you what the QR method is, since it is the currently used method, the best method available, for calculating the eigenvalues of a matrix A.
There is a relation between the power method and the QR method. We have used the power method to find an approximation to one eigenvector; instead of one eigenvector, you can consider several eigenvectors together. That gives rise to what is known as simultaneous iteration, and the best implementation of simultaneous iteration is the QR method.
Anyway, let us first look at what the QR decomposition of a matrix A is. For simplicity, let me assume A to be an invertible matrix, and let us also take the matrix to be real; the QR decomposition is available for complex matrices as well, but just for the sake of simplicity let us restrict ourselves to real matrices. So, our assumption is that A is a real n by n matrix, and we want to write it as Q times R, where Q is an orthogonal matrix, meaning Q^T Q = I, and R is an upper triangular matrix.
Here is A, an n by n real invertible matrix; the aim is to write A = Q R, where Q is orthogonal, that means Q^T Q = Q Q^T = I, and R is upper triangular. Let me write q_1, q_2, ..., q_n for the columns of Q: Q is an n by n matrix, q_1 is its first column, q_2 the second column, and q_n the n-th column.
Now, what will Q^T be? Q^T has rows q_1^T, q_2^T, ..., q_n^T; each q_j is an n by 1 vector, and taking its transpose makes it a row vector. So q_1, the first column of Q, gives q_1^T as the first row of Q^T, and so on down to q_n^T. When I form Q^T Q, it is first row into first column, first row into second column, and so on, by the usual matrix multiplication. The (i, j)-th element of Q^T Q is the i-th row of Q^T times the j-th column of Q, that is, q_i^T q_j, which in our notation is the inner product of q_j with q_i. Now, Q^T Q = I, and the identity matrix means 1 along the diagonal and 0 elsewhere. So Q^T Q = I is equivalent to saying that the inner product of q_j with q_i is 1 if i = j and 0 if i is not equal to j, which means that the columns of Q are orthonormal.
We had seen a similar relation for unitary matrices, when we considered Q* Q = I; Q* means conjugate transpose, and Q unitary means the columns of Q are orthonormal. The real orthogonal case is exactly a special case of this. So Q^T Q = I means the columns of Q are orthonormal; an orthogonal matrix also satisfies Q Q^T = I, from which we can deduce that the rows of Q are orthonormal as well. An orthogonal matrix is thus a matrix in which every column has Euclidean norm equal to 1 and is perpendicular to every other column, and the same property holds for the rows.
To summarize: Q^T Q = I means the inner product of q_j with q_i is 1 if i = j and 0 if i is not equal to j, so the columns of Q are orthonormal; and Q Q^T = I means the rows of Q are orthonormal.
Now, we are trying to write A = Q R, where Q is orthogonal and R is upper triangular. Let me write the columns of A as c_1, c_2, ..., c_n. So [c_1 c_2 ... c_n] = [q_1 q_2 ... q_n] R, where R is upper triangular; below the diagonal all its entries are 0.
Now, let me equate columns: the first column gives c_1 = r_11 q_1, and the second column gives c_2 = r_12 q_1 + r_22 q_2, and so on. This is just a property of matrix multiplication: c_1, c_2, ..., c_n are the columns of our original matrix A, and if we write A = Q R, then the first column of A, which is c_1, is r_11 times q_1, and the second column c_2 is r_12 q_1 + r_22 q_2. Now, look at the first relation, c_1 = r_11 q_1.
We have just seen that the columns of Q form an orthonormal set; that means the Euclidean norm of q_1 is equal to 1. We have c_1, the first column of A, equal to r_11 q_1, so ||c_1||_2 = ||r_11 q_1||_2 = |r_11| ||q_1||_2 = |r_11|. For r_11 we therefore have a choice: you can either take r_11 = ||c_1||_2 or r_11 = -||c_1||_2; if you choose the positive sign, you get q_1 = c_1 / ||c_1||_2. The matrix A is given to us, so I know what its column c_1 is; I am trying to write A = Q R where Q is orthogonal and R is upper triangular.
The first column q_1 is nothing but c_1 normalized: the first column of A, which we are denoting by c_1, need not have Euclidean norm equal to 1, so you divide by its norm. Looking at c_1 = r_11 q_1, first of all we have to make a choice for r_11, either positive or negative. For the sake of definiteness, let us choose r_11 > 0; then r_11 = ||c_1||_2 and q_1 = c_1 / ||c_1||_2. That settles the first column: we have determined the first column of Q and the entry r_11 of the upper triangular matrix.
Now, for the second column, you have the relation c_2 = r_12 q_1 + r_22 q_2; we have already determined q_1, so c_2 - r_12 q_1 = r_22 q_2. I need to determine r_12, r_22, and q_2; q_1 is determined, so these are the three things to be found. I make use of the fact that q_1 and q_2 are perpendicular to each other: taking the inner product of c_2 with q_1, the q_2 term drops out, and since the inner product of q_1 with q_1 is 1, we get r_12 = <c_2, q_1>. This determines r_12, because c_2 and q_1 are known. Next, go back to the relation and take norms: ||c_2 - r_12 q_1||_2 = |r_22|, because ||q_2||_2 = 1. Once again choose r_22 > 0; then r_22 is determined, and q_2 = (c_2 - r_12 q_1) / r_22.
In this manner we can determine the matrix Q and the matrix R. So, any invertible matrix can be written as a product Q R.
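A minimal sketch of this column-by-column construction in Python, for a real invertible A, choosing each r_kk > 0 as in the lecture; this is exactly classical Gram-Schmidt applied to the columns (more stable variants exist, which is part of why reflectors come up next time).

```python
import numpy as np

def qr_decompose(A):
    """QR decomposition of a real invertible matrix by classical Gram-Schmidt."""
    n = A.shape[0]
    Q = np.zeros((n, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].astype(float)        # k-th column c_k of A
        for i in range(k):
            R[i, k] = Q[:, i] @ A[:, k]  # r_ik = <c_k, q_i>
            v -= R[i, k] * Q[:, i]       # subtract the component along q_i
        R[k, k] = np.linalg.norm(v)      # choose r_kk > 0
        Q[:, k] = v / R[k, k]            # normalize to get q_k
    return Q, R
```

For example, Q, R = qr_decompose(np.array([[1., 2.], [3., 4.]])) returns Q with orthonormal columns and R upper triangular, with Q @ R recovering A up to rounding.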
In our next lecture we will consider the relation between A = Q R and the Gram-Schmidt orthonormalization process; then I will describe the QR method, and then we will consider an efficient way of finding the QR decomposition of a matrix using what are known as reflectors.
So, this we are going to do in the next lecture.
Thank you.
