We are considering the QR decomposition of an n by n invertible matrix using reflectors. So, we will discuss this QR decomposition, and after that I want to consider approximation of a continuous function by polynomials in the 2-norm. That is known as least squares approximation. We have already considered best approximation in the infinity norm; now this will be best approximation in the 2-norm. So, first we look at the QR decomposition of an invertible matrix.
So, we have A, an invertible matrix of size n, and our aim is to find an orthogonal matrix Q, that is, a matrix which satisfies Q^T Q = I, and an upper triangular matrix R, such that A = QR. This we are going to achieve using reflectors.
So, let me recall the reflectors. If you have two vectors x and y in R^n such that x ≠ y and the Euclidean norms of x and y are the same, then we look at the parallelogram with sides x and y. The diagonals of this parallelogram are given by the vectors x + y and x − y. We consider a unit vector in the direction of x − y, given by u = (x − y)/||x − y||, and a unit vector v along the other diagonal, v = (x + y)/||x + y||. These two unit vectors u and v are perpendicular, so the inner product of u with v is 0. What we want is a reflector which will take the vector x to the vector y; that is, we want an orthogonal matrix Q such that Qx = y. So, we look at the reflection in the line along the direction of v. The reflector Q is given by Q = I − 2uu^T.
When you look at Qu, it is u − 2uu^T u. Since u is a unit vector, u^T u = 1, so we have u − 2u, which equals −u. On the other hand, when I look at Qv, it is v − 2uu^T v, and since u and v are perpendicular to each other, Qv = v. Thus, if you define u and v in this fashion and take Q = I − 2uu^T, it has the property that Qu = −u and Qv = v.
Now, look at the vector x. We write it as x = (x + y)/2 + (x − y)/2. Since Qu = −u and u = (x − y)/||x − y||, we get Q((x − y)/2) = −(x − y)/2, because (x − y)/2 is a scalar multiple of the vector u. Since Qv = v, we get Q((x + y)/2) = (x + y)/2, and thus Qx = (x + y)/2 − (x − y)/2 = y.
We have achieved the following: if x and y are unequal vectors with the same Euclidean norm, then we can find Q such that Qx = y. Now, let us look at the properties of Q. First of all, Q^T = Q. Next, consider Q²; this is (I − 2uu^T)(I − 2uu^T). When you multiply it out, you get I − 2uu^T − 2uu^T + 4u(u^T u)u^T. Since u is a unit vector, u^T u = 1, so the 4uu^T term cancels the −4uu^T term, and Q² = I. Thus Q^T = Q and Q² = I, so Q^T Q = I and Q is an orthogonal matrix. So, our Q is the desired matrix which takes the vector x to the vector y, and it is an orthogonal matrix.
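As a quick numerical illustration, here is a minimal sketch of this construction in Python; using NumPy is my own choice, not something the lecture prescribes.

```python
import numpy as np

def reflector(x, y):
    """Reflector Q = I - 2 u u^T with Qx = y, assuming ||x|| = ||y|| and x != y."""
    u = (x - y) / np.linalg.norm(x - y)   # unit vector along the diagonal x - y
    return np.eye(len(x)) - 2.0 * np.outer(u, u)

x = np.array([1.0, 1.0])
y = np.array([np.sqrt(2.0), 0.0])         # same Euclidean norm as x
Q = reflector(x, y)
print(Q @ x)                              # ~ [1.4142, 0.0], i.e. y
print(Q @ Q.T)                            # ~ identity: Q is orthogonal
```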
Now, look at our n by n matrix A; it is an invertible matrix. What we are going to do first is look at the first column, (a_11, a_21, ..., a_n1)^T. This column is a non-zero column, because A is invertible. We want to reduce A to an upper triangular form; that means we want to introduce zeros in the first column below the diagonal, and that is what we are going to do using reflectors. So, we have the first column, a non-zero vector, and we want to convert it into a vector whose first entry is non-zero and whose other entries are all 0. Since the new vector should have the same Euclidean norm as the original vector, its first entry should be the norm of the first column, or minus the norm of the first column.
So, here take the vector x = (a_11, a_21, ..., a_n1)^T, the first column, and the vector y = (sigma_1, 0, ..., 0)^T, where sigma_1 is nothing but the Euclidean norm of x, that is, sigma_1 = (a_11² + a_21² + ... + a_n1²)^(1/2). Then the Euclidean norm of y equals the norm of x; even if I write −sigma_1 here, this property will still be satisfied. Thus we have two vectors with the same norm, and we know how to construct an orthogonal matrix Q such that Qx = y: take u = (x − y)/||x − y|| and Q_1 = I − 2uu^T; then Q_1 x = y. Thus we can reduce the first column to a column of the form (sigma_1, 0, ..., 0)^T.
Now, let me look at the norm of x − y. The norm of x − y is the square root of the sum of the squares of its entries. So, ||x − y||² = Σ a_i1² − 2 sigma_1 a_11 + sigma_1². Since sigma_1² = Σ a_i1² (sigma_1 is the norm of x), we get ||x − y||² = 2 sigma_1 (sigma_1 − a_11).
So, the only thing we really need to compute is sigma_1; once you have sigma_1, ||x − y||² is given by 2 sigma_1 (sigma_1 − a_11), and Q_1 = I − 2uu^T. Now, we are not going to compute Q_1 explicitly as a matrix. What we want to do is apply Q_1 to A. So, we look at the columns of A, call them C_1, C_2, ..., C_n, and consider the action of Q_1 on each column. We have Q_1 A = Q_1 [C_1, C_2, ..., C_n] = [Q_1 C_1, Q_1 C_2, ..., Q_1 C_n], and Q_1 C_1 is the vector (sigma_1, 0, ..., 0)^T.
Look at Q_1 C_2: since Q_1 = I − 2uu^T, we have Q_1 C_2 = C_2 − 2u(u^T C_2), and u^T C_2 is nothing but the inner product of C_2 with u. So, the second column becomes the original column C_2 minus 2 times this inner product multiplied by the vector u, and similarly for the other columns Q_1 C_3, Q_1 C_4, ..., Q_1 C_n. Then look at Q_1 A: the first column is reduced to (sigma_1, 0, ..., 0)^T, while the second, third, ..., nth columns all get modified. I denote the modified entries of the matrix by a_12 superscript (1), a_22 superscript (1), and so on. So, this is about the first column.
Now, we look at the second column, and in the second column we want to introduce zeros below the diagonal. In the process, we do not want our first column to be disturbed, because we have already achieved the desired form there. So, what we will do is look at the (n − 1) by (n − 1) submatrix, take the first column of that submatrix, and find an orthogonal matrix of size n − 1 which reduces the first column of this smaller matrix to a vector whose first entry is non-zero and whose remaining entries are 0.
So, find an (n − 1) by (n − 1) matrix Q_2 tilde such that Q_2 tilde applied to this vector of size n − 1 gives (sigma_2, 0, ..., 0)^T, where sigma_2 is the norm of that vector. That is Q_2 tilde. Next, we border Q_2 tilde to obtain an n by n matrix Q_2: it has 1 in the (1, 1) position, then a zero row vector of length n − 1, a zero column vector of length n − 1, and then Q_2 tilde as the trailing block.
Then, when you consider Q_2 Q_1 A, because of the structure of Q_2, the first row and first column of the matrix are not changed: you get sigma_1 in the first column with the remaining entries 0, sigma_2 in the second column with the remaining entries below it 0, and so on. Then we go to the third column; we look at a matrix of size n − 2 and continue with the same idea. Like that, we find matrices Q_1, Q_2, ..., Q_{n−1} such that when you premultiply A by them, what you get is an upper triangular matrix R. Each Q_i is a symmetric matrix whose square is the identity, so each Q_i is an orthogonal matrix.
So, Q_i² = I; that means the inverse of Q_i is Q_i itself, and hence from Q_{n−1} ··· Q_2 Q_1 A = R, I get A = Q_1 Q_2 ··· Q_{n−1} R. This product Q_1 Q_2 ··· Q_{n−1} is going to be our matrix Q. Since Q_1, Q_2, ..., Q_{n−1} are orthogonal, their product is also orthogonal. Here each Q_i satisfies Q_i^T = Q_i, but when you take their product it will in general not be a symmetric matrix; what we want, though, is for Q to be orthogonal. When you take the transpose of a product, you reverse the order, so Q^T Q = Q_{n−1} ··· Q_2 Q_1 Q_1 Q_2 ··· Q_{n−1}: first Q_1² is the identity, then Q_2² is the identity, and so on, giving Q^T Q = I.
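Here is a minimal Python sketch of this whole procedure, a rendering of my own assuming NumPy. It applies each reflector column-wise, exactly as described above, rather than forming any Q_k as an explicit matrix on R; the 2 by 2 test matrix is the one from the worked example below.

```python
import numpy as np

def householder_qr(A):
    """QR decomposition of an invertible matrix by reflectors (a sketch).

    At step k a reflector is built from the k-th column of the trailing
    block and applied as C -> C - 2 u (u^T C) to every column."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q, R = np.eye(n), A.copy()
    for k in range(n - 1):
        x = R[k:, k]
        y = np.zeros_like(x)
        y[0] = np.linalg.norm(x)            # sigma_k
        if np.allclose(x, y):               # column already in desired form
            continue
        u = (x - y) / np.linalg.norm(x - y)
        R[k:, :] -= 2.0 * np.outer(u, u @ R[k:, :])   # Q_k applied to R
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ u, u)   # accumulate Q_1 Q_2 ...
    return Q, R

A = np.array([[1.0, 2.0], [1.0, 3.0]])
Q, R = householder_qr(A)
print(np.round(Q, 4))                # ~ (1/sqrt 2) [[1, 1], [1, -1]]
print(np.round(R, 4))                # ~ (1/sqrt 2) [[2, 5], [0, -1]]
print(np.allclose(Q @ R, A))         # True
```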
So, that is how we get the QR decomposition of a matrix A, where A is invertible, Q is an orthogonal matrix, and R is an upper triangular matrix. Such a decomposition is not unique, but for uniqueness we can impose some conditions on the diagonal entries of the matrix R. For example, if we require that all diagonal entries of R are bigger than 0, then the QR decomposition with this additional condition is unique. Now, we are going to look at an example of a 2 by 2 matrix and find its QR decomposition.
So, A is the matrix with first column (1, 1)^T and second column (2, 3)^T. The determinant of this matrix is 1, and hence it is an invertible matrix. The first column I denote by x, so x = (1, 1)^T; its norm is √2. Define y = (√2, 0)^T; thus x and y have the same norm. Next, take u = (x − y)/||x − y|| and Q = I − 2uu^T. We have seen that such a matrix Q satisfies Qx = y. And I said that we are not going to compute Q explicitly, since what we need is its action on the columns; but since this is an illustrative example of size 2, let us calculate what Q looks like.
So, here x = (1, 1)^T and y = (√2, 0)^T. Consider (x − y)(x − y)^T: x − y is the column vector (1 − √2, 1)^T, and its transpose is the row vector (1 − √2, 1). Taking the product, (x − y)(x − y)^T is the matrix with rows ((1 − √2)², 1 − √2) and (1 − √2, 1). Also, ||x − y||² = (1 − √2)² + 1 = 4 − 2√2, and we are going to divide by this norm squared.
Now, Q = I − 2(x − y)(x − y)^T/||x − y||². Here I is the identity matrix, 4 − 2√2 is the norm squared of x − y, and (x − y)(x − y)^T is the matrix we have just computed. One can simplify and see that Q = (1/√2) times the matrix with rows (1, 1) and (1, −1). Notice the columns of Q: they have norm 1 and they are perpendicular to each other. So, this is our Q, and now let us look at R.
So, A is the matrix with rows (1, 2) and (1, 3); the columns of Q are nothing but the column vectors of A orthonormalized. One can check that QA gives us the upper triangular matrix R = (1/√2) times the matrix with rows (2, 5) and (0, −1). Now, since Q^T = Q⁻¹, and here in fact Q⁻¹ = Q, from QA = R we get A = QR. So, our original matrix is written as a product of the orthogonal matrix Q and the upper triangular matrix R.
Now, we were saying that the diagonal entries of R should be positive. Here the (2, 2) entry is negative, but it can be adjusted with the entries of Q. So, if I consider Q hat = (1/√2) times the matrix with rows (1, −1) and (1, 1), and R hat = (1/√2) times the matrix with rows (2, 5) and (0, 1), then Q hat is also an orthogonal matrix, R hat is an upper triangular matrix with positive diagonal, and A = Q hat R hat.
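To cross-check this example numerically, here is a sketch using NumPy's built-in QR routine; the library choice and the sign-fixing trick are mine, not part of the lecture.

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 3.0]])
Q, R = np.linalg.qr(A)           # NumPy's QR (also reflector-based)
print(np.round(Q, 4))            # may differ from ours by column signs
print(np.round(R, 4))
print(np.allclose(Q @ R, A))     # True

# Enforce a positive diagonal of R, as with Q hat and R hat above:
S = np.diag(np.sign(np.diag(R)))           # S^2 = I, so QR = (QS)(SR)
Q_hat, R_hat = Q @ S, S @ R
print(np.round(R_hat, 4))                  # diagonal entries now positive
```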
So, this is the QR decomposition using reflectors, and now we are going to look at the QR method, which we had already defined; I just want to state the method again and its convergence. The QR method consists of writing A as Q_0 R_0: A is our starting matrix, we find its QR decomposition, and then we define A_1 = R_0 Q_0, the factors multiplied in the reverse order. Then find the QR decomposition of this new matrix; once you find its Q and R, you multiply them in reverse order. Matrix multiplication is not commutative, so in general you are going to get a different matrix. Like that, when you reach A_m, find its QR decomposition A_m = Q_m R_m and set A_{m+1} = R_m Q_m. So, you see, in the QR method one needs to calculate a QR decomposition at each stage, and that is why we wanted some efficient way of doing the QR decomposition.
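A minimal sketch of this iteration, assuming NumPy; the symmetric test matrix [[2, 1], [1, 3]] is purely my own illustration.

```python
import numpy as np

def qr_method(A, m=50):
    """QR method: A_{k+1} = R_k Q_k, where A_k = Q_k R_k."""
    A = np.array(A, dtype=float)
    for _ in range(m):
        Q, R = np.linalg.qr(A)    # QR decomposition at each stage
        A = R @ Q                 # multiply in reverse order
    return A

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric, distinct |eigenvalues|
print(np.round(qr_method(A), 6))          # ~ diag(3.618034, 1.381966)
print(np.linalg.eigvals(A))               # compare with the diagonal
```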
So, here is the theorem, a sufficient condition for convergence of the QR method. Let A be a real n by n matrix with eigenvalues lambda_1, lambda_2, ..., lambda_n such that |lambda_1| > |lambda_2| > ... > |lambda_n| > 0. The matrix is real, so its eigenvalues are either real or occur in complex conjugate pairs; but because of this condition that no two eigenvalues have the same modulus, all the eigenvalues are real. Then A_m converges to an upper triangular matrix that contains lambda_i in the i-th diagonal position.
If, in addition, A is symmetric, then the sequence A_m converges to a diagonal matrix. Real symmetric matrices are diagonalizable, and in general, if you have a matrix A, we know that there exists a similarity transformation which converts A to an upper triangular matrix; this is what we try to achieve iteratively in the QR method. If the condition on the eigenvalues is not satisfied, then the iterates in the QR method may not converge.
So, here is an example: look at the 2 by 2 matrix A with rows (0, 1) and (1, 0). Its eigenvalues are −1 and 1; that means the two eigenvalues have the same modulus. A is a symmetric matrix and A² = I, so A itself is an orthogonal matrix, and hence one QR decomposition of it is Q_0 = A and R_0 = I. But in that case, A_1 = R_0 Q_0, which is the same as A. That means all the iterates are equal to the original matrix A, and in this case the A_m do not converge to a diagonal matrix.
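You can watch this failure numerically; a sketch assuming NumPy. The exact signs of the iterates depend on the library's sign conventions in its QR routine, but the off-diagonal entries stay at ±1 throughout, so there is no convergence to a diagonal matrix.

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # eigenvalues +1 and -1
for k in range(4):
    Q, R = np.linalg.qr(A)
    A = R @ Q
    print(k, A.ravel())   # off-diagonal entries remain +-1 at every step
```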
So, this is about the QR method. We did not prove the convergence of the QR method; that is beyond the scope of a first course on elementary numerical analysis. Now, what I want to do is look at least squares approximation of a continuous function by polynomials. But before that, let me just mention that the QR decomposition can be used to find the solution of a system of linear equations.
So, we have a system Ax = b, where A is an invertible matrix, and we have written A = QR, where Q is orthogonal and R is upper triangular. The original system becomes QRx = b, which is equivalent to the two systems Qy = b and Rx = y. First solve Qy = b, which is nothing but y = Q^T b, and then solve Rx = y by back substitution.
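As a sketch of these two steps, assuming NumPy and SciPy's triangular solver; the lecture itself only describes the procedure, and the matrix and right-hand side here are my own.

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1.0, 2.0], [1.0, 3.0]])
b = np.array([5.0, 7.0])

Q, R = np.linalg.qr(A)
y = Q.T @ b                                  # solve Qy = b
x = solve_triangular(R, y, lower=False)      # solve Rx = y, back substitution
print(x, np.allclose(A @ x, b))              # [1. 2.] True
```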
However, the number of computations for the QR decomposition is of the order of 2n³/3 multiplications and 2n³/3 additions, which is twice as expensive as the LU decomposition. So, that is why one does not use the QR decomposition for the solution of systems of linear equations.
Now, let us look at polynomial approximation. We had looked at Bernstein polynomials, and their disadvantages were slow convergence and the fact that the approximation does not reproduce polynomials. That is why what we did was look at the best approximation. In the best approximation, our aim is to find p_n* such that the error in the maximum norm, the infinity norm, is minimized; that means we are trying to do the best as far as the error is concerned. There exists a unique best approximation p_n*, but in order to find it we need an iterative method. That is why the best approximation in the infinity norm was not so convenient. What I want to do now is consider the best approximation, but in the 2-norm instead of the infinity norm.
So, our space is C[a, b], and on it we have the inner product of f and g given by the integral from a to b of f(x) g(x) dx, where f and g are real-valued functions. This inner product induces a norm on C[a, b], namely the integral from a to b of f(x)² dx, the whole thing raised to the power half. That is the 2-norm, and now we are going to look at the best approximation from the space of polynomials in the 2-norm.
So, that is known as least squares approximation. Let P_n be the space of polynomials of degree less than or equal to n. Given a continuous function f, we want to find a polynomial p_n* of degree less than or equal to n such that ||f − p_n*||_2 equals the minimum of ||f − p_n||_2 as p_n varies over P_n. We have to show the existence of such a best approximation p_n* and give a way to find it. For that purpose, we are going to use what are known as Legendre polynomials. So, look at the functions 1, x, x², x³, and so on.
This is a linearly independent set; apply the Gram-Schmidt orthonormalization process to it, and you get the Legendre polynomials q_0, q_1, q_2, ..., with the property that q_i is a polynomial of exact degree i and they are mutually perpendicular: the inner product of q_i with q_j is 1 if i = j and 0 if i ≠ j. The Gram-Schmidt orthonormalization process has the property that the span of 1, x, x², ..., x^n, the first n + 1 functions, is the same as the span of the first n + 1 Legendre polynomials q_0, q_1, ..., q_n.
So, the span of q_0, q_1, ..., q_n is the same as the span of 1, x, ..., x^n, which is nothing but the space of polynomials of degree less than or equal to n. Each q_i is a polynomial of degree i; they form an orthonormal set and hence are linearly independent, so q_0, q_1, ..., q_n is a basis for the space of polynomials of degree less than or equal to n, just as 1, x, x², ..., x^n is. So, here is another basis: when I look at a polynomial p_n of degree less than or equal to n, I can write it uniquely as a linear combination alpha_0 q_0 + alpha_1 q_1 + ... + alpha_n q_n, where alpha_0, alpha_1, ..., alpha_n are scalars.
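Here is a small sketch of this Gram-Schmidt construction, done with NumPy's polynomial class and taking [a, b] = [−1, 1] for concreteness; both choices are mine, the lecture works symbolically on a general [a, b].

```python
import numpy as np
from numpy.polynomial import Polynomial

def inner(p, q, a=-1.0, b=1.0):
    """<p, q> = integral over [a, b] of p(x) q(x) dx, computed exactly
    by evaluating the antiderivative of the product polynomial."""
    r = (p * q).integ()
    return r(b) - r(a)

def legendre_onb(n, a=-1.0, b=1.0):
    """Gram-Schmidt on 1, x, ..., x^n: orthonormal Legendre polynomials."""
    basis = []
    for i in range(n + 1):
        p = Polynomial.basis(i)                       # the monomial x^i
        for q in basis:
            p = p - inner(p, q, a, b) * q             # remove q-components
        basis.append(p / np.sqrt(inner(p, p, a, b)))  # normalize
    return basis

qs = legendre_onb(2)
print(inner(qs[0], qs[1]), inner(qs[1], qs[1]))   # ~0.0 and ~1.0
```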
Now, our claim is that the best approximation in the 2-norm is given by p_n* = the summation over j from 0 to n of the inner product of f with q_j, times q_j, where f is the given continuous function and q_0, q_1, q_2, etcetera are the Legendre polynomials. So, look at this p_n*; we want to show that ||f − p_n*||_2 is less than or equal to ||f − p_n||_2 for every p_n. So, we are showing existence by exhibiting p_n*, and then showing that it is the best approximation. If I take the inner product of p_n* with q_i, then by linearity of the inner product in the first variable, it is the summation over j from 0 to n of the inner product of f with q_j, times the inner product of q_j with q_i.
Now, the inner product of q_j with q_i is 1 only when j = i, and hence the inner product of p_n* with q_i equals the inner product of f with q_i, for i going from 0 up to n. Thus the inner product of f − p_n* with q_i is 0 for i = 0, 1, ..., n. Since q_0, q_1, ..., q_n is a basis for the polynomials of degree less than or equal to n, f − p_n* is perpendicular to every polynomial of degree less than or equal to n, and it is this property we will use to prove the inequality.
So, f − p_n* is perpendicular to each polynomial of degree less than or equal to n, since q_0, q_1, ..., q_n form a basis for the space of polynomials of degree less than or equal to n. Consider ||f − p_n||²; add and subtract p_n*, so f − p_n = (f − p_n*) + (p_n* − p_n). Now f − p_n* is perpendicular to every polynomial of degree less than or equal to n, so in particular it is perpendicular to the polynomial p_n* − p_n. Hence, by the Pythagoras theorem, ||f − p_n||² = ||f − p_n*||² + ||p_n* − p_n||², and thus ||f − p_n*||_2 is less than or equal to ||f − p_n||_2 for every p_n belonging to P_n. So, unlike in the case of best approximation by polynomials in the infinity norm, in the case of best approximation in the 2-norm we can find the best approximation explicitly; in the case of the infinity norm, we needed to go to an iterative process.
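The best approximation p_n* = Σ ⟨f, q_j⟩ q_j can be assembled directly; here is a self-contained sketch. I use NumPy's Legendre class with the known normalization (the integral of P_j² over [−1, 1] is 2/(2j + 1)) instead of re-running Gram-Schmidt, and f(x) = e^x on [−1, 1] is my own illustrative choice, with the inner products against f done by numerical quadrature.

```python
import numpy as np
from numpy.polynomial import Legendre
from scipy.integrate import quad

def best_l2_poly(f, n, a=-1.0, b=1.0):
    """p_n* = sum_j <f, q_j> q_j with orthonormal Legendre q_j on [-1, 1]."""
    qs = [np.sqrt((2 * j + 1) / 2.0) * Legendre.basis(j) for j in range(n + 1)]
    coeffs = [quad(lambda x: f(x) * q(x), a, b)[0] for q in qs]
    p = 0 * qs[0]                       # the zero polynomial
    for c, q in zip(coeffs, qs):
        p = p + c * q
    return p

p_star = best_l2_poly(np.exp, 3)
xs = np.linspace(-1.0, 1.0, 5)
print(np.round(p_star(xs), 4))          # close to exp(x) on [-1, 1]
print(np.round(np.exp(xs), 4))
```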
So, thus we have considered polynomial approximation of a continuous function in various ways: Bernstein polynomial approximation, the best approximation in the infinity norm, approximation by interpolating polynomials, and now the best approximation in the 2-norm. All these approximations have some desirable properties and some not-so-desirable properties. Now, what I am going to do is recall the results we have proved in this course.
So, this is the last lecture of our course, and now I want to briefly recall what all we did. I have already talked about the approximation of continuous functions by polynomials in various ways. We started with the interpolating polynomial; many of the topics in this course were based on it. We proved existence and uniqueness of the interpolating polynomial using Lagrange functions, or Lagrange polynomials, but such a definition is not recursive; that means, if we have found the interpolating polynomial of degree n and then add one more interpolation point, we have to do all the work again. That is why we looked at the divided difference form, or Newton's form, of the polynomial.
So, Newton's form is as follows: if x_0, x_1, ..., x_n are distinct points in an interval [a, b], then there is a unique interpolating polynomial of degree less than or equal to n, and it is given by p_n(x) = f[x_0] + f[x_0, x_1](x − x_0) + ... + f[x_0, ..., x_n](x − x_0)···(x − x_{n−1}). This is known as Newton's form, and to go from p_n to p_{n+1} I just have to add one more term. The error in the interpolating polynomial is given by f(x) − p_n(x) = f[x_0, x_1, ..., x_n, x](x − x_0)(x − x_1)···(x − x_n), that is, the divided difference based on x_0, x_1, ..., x_n and x, multiplied by (x − x_0)···(x − x_n).
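A minimal sketch of Newton's form in Python, a rendering of my own since the course did not fix a language; the quadratic test function is also mine.

```python
import numpy as np

def divided_differences(xs, fs):
    """Coefficients f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n] of Newton's form."""
    xs = np.asarray(xs, dtype=float)
    coef = np.array(fs, dtype=float)
    for k in range(1, len(xs)):
        # coef[i] becomes the divided difference f[x_{i-k}, ..., x_i]
        coef[k:] = (coef[k:] - coef[k-1:-1]) / (xs[k:] - xs[:-k])
    return coef

def newton_eval(coef, xs, t):
    """Evaluate Newton's form at t by nested multiplication."""
    p = coef[-1]
    for c, x in zip(coef[-2::-1], xs[-2::-1]):
        p = p * (t - x) + c
    return p

xs = [0.0, 1.0, 2.0]
fs = [1.0, 3.0, 9.0]                  # f(x) = 2x^2 + 1 at the nodes
coef = divided_differences(xs, fs)
print(newton_eval(coef, xs, 1.5))     # 5.5 = f(1.5): the quadratic is reproduced
```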
This was a very important formula, because when we use polynomial approximation for various problems, we need to know the error involved. Now, the first topic we considered was numerical integration. All continuous functions are Riemann integrable, but when it comes to actually finding the integral, it is not easy; for some functions, yes, but otherwise the definition using Riemann sums is not of much use in numerical analysis when we want to calculate the integral.
So, the integral from a to b of f(x) dx is approximately the integral from a to b of p_n(x) dx. If you write p_n in the Lagrange form, that is, as the summation of f(x_i) l_i(x), where each l_i is a polynomial of degree n, then the integral from a to b of p_n(x) dx is given by the summation of w_i f(x_i), where w_i is the integral from a to b of l_i(x) dx. Thus the integral from a to b of f(x) dx is approximately the summation of w_i f(x_i), i going from 0 to n. Choices of n and of the interpolation points give rise to various numerical quadrature rules. The basic rules which we considered are the midpoint rule, where you take the constant polynomial with the midpoint as the interpolation point; the trapezoidal rule, where you approximate by a linear polynomial with the two end points as interpolation points; and Simpson's rule, where you approximate by a quadratic polynomial with three interpolation points, the two end points and the midpoint. These are all special cases of the Newton-Cotes formulae, where you subdivide your interval into n equal parts and take the n + 1 partition points as your interpolation points.
Now, once we had the basic rules, we considered composite rules: divide your interval [a, b] into smaller subintervals and apply a basic rule on each subinterval. We also considered Gaussian integration, where we start with the integral from a to b of f(x) dx approximately equal to the summation of w_i f(x_i), and treat the weights w_i and the nodes x_i as unknowns, chosen to achieve the maximum degree of exactness. The same idea as in numerical integration works for numerical differentiation: polynomials are infinitely many times differentiable, so consider an interpolating polynomial, and its derivative gives an approximation to the derivative of the function. This we later used in the finite difference method for the solution of differential equations.
Then an important topic was systems of linear equations: Ax = b, with the assumption that A is an n by n invertible matrix. First we considered the Gauss elimination method; Gauss elimination is equivalent to the LU decomposition of the matrix A, where L is a unit lower triangular matrix, that is, its diagonal entries are equal to 1, and U is upper triangular. Next we considered Gauss elimination with partial pivoting, and in that case it is equivalent to the LU decomposition not of the matrix A, but of PA, where P is a permutation matrix, that is, a matrix obtained from the identity matrix by a finite number of row interchanges. If your matrix A is positive definite, then you have what is known as the Cholesky decomposition. We have an LU decomposition of a matrix A under certain conditions.
One such condition, in fact a necessary and sufficient condition, is on the leading principal minors: if they are all non-zero, then you can write A = LU. If A is positive definite, then we can write it as A = GG^T, where G is a lower triangular matrix. This needs half the number of computations compared to the LU decomposition, but it is possible only for positive definite matrices. So, we have the Cholesky decomposition of a positive definite matrix as A = GG^T. We also considered iterative methods for the solution of Ax = b, namely the Jacobi and Gauss-Seidel methods.
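As a sketch of the Jacobi method; the strictly diagonally dominant example here, for which convergence is guaranteed, is my own.

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)
    R = A - np.diag(d)                 # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / d
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
b = np.array([7.0, 17.0])
print(jacobi(A, b))                       # ~ [1., 3.], the exact solution
```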
We looked at vector and matrix norms. For vectors we considered mainly the 1-norm, the 2-norm, and the infinity norm; the 2-norm is the well-known Euclidean norm. Once you fix a vector norm, you define the induced matrix norm as ||A|| = the maximum of ||Ax||/||x|| over x ≠ 0. Corresponding to the 1-norm and the infinity norm of vectors, we have formulas for ||A|| in terms of the entries a_ij; for the 2-norm of A, we had to be satisfied with only an upper bound. We want to solve Ax = b, but because of the finite precision of computers, instead of Ax = b we will be solving a nearby system: instead of A there will be A + delta A, instead of the right-hand side b you will have b + delta b, and instead of x the computed solution will be x + delta x.
One wants to know the error between x, the exact solution, and x hat, the computed solution. The relative error is ||x − x hat||/||x|| in some vector norm, and we showed that this is bounded in terms of the error in the coefficient matrix and the error in the right-hand side; what comes crucially into the picture is the condition number ||A|| ||A⁻¹||. For Ax = b, the iterative methods we looked at were the Jacobi and Gauss-Seidel methods.
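A quick sketch of how the condition number shows up; the nearly singular 2 by 2 matrix is my own example.

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.0001]])   # nearly singular
print(np.linalg.cond(A))                     # ~4e4: ill conditioned

b = np.array([2.0, 2.0001])                  # exact solution x = (1, 1)
db = np.array([0.0, 1e-4])                   # tiny perturbation of b
x = np.linalg.solve(A, b)
x_pert = np.linalg.solve(A, b + db)
print(x, x_pert)   # a ~5e-5 relative change in b moves x by order 1
```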
Then we wanted to look at the solution of nonlinear equations f(x) = 0, which is related to finding a fixed point of a map, g(c) = c. So, we considered the Picard fixed point iteration and, in detail, Newton's method, the secant method, and also the regula falsi method for finding a zero of a function.
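For example, a minimal sketch of Newton's method; the function x² − 2 and the starting point are my own choice.

```python
def newton(f, fprime, x0, iters=8):
    """Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - f(x) / fprime(x)
    return x

# Zero of f(x) = x^2 - 2, starting from x0 = 1: converges to sqrt(2).
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # ~1.41421356
```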
For differential equations, we looked at the initial value problem. Here there were two types of methods: single step methods, such as the Euler and Runge-Kutta methods, which are classical, and multistep methods, such as Adams-Bashforth and Adams-Moulton, which are of relatively recent origin. The important thing is the stability of the methods, which we considered in detail.
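A sketch of the Euler method on y' = −y, y(0) = 1; this test problem, with exact solution e^(−t), is my own.

```python
import numpy as np

def euler(f, t0, y0, h, steps):
    """Euler's method: y_{k+1} = y_k + h f(t_k, y_k)."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

y1 = euler(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)   # approximate y(1)
print(y1, np.exp(-1.0))                               # ~0.36770 vs 0.36788
```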
Then we looked at the boundary value problem, and for the boundary value problem the method we considered was the finite difference method, where the derivatives are replaced by finite differences. So, that finishes our course, and it was a pleasure to give this course. Thank you.
