Last time we considered Gaussian quadrature with two points. We found interpolation
points which are the zeros of the Legendre polynomial of degree 2. When we interpolate
at these two points by a linear function and integrate the linear function, we get a
formula for approximate quadrature, and that formula is exact for cubic polynomials.
So, we first found the points on the interval minus 1 to 1. The Gauss points in the
interval minus 1 to 1 are minus 1 by root 3 and plus 1 by root 3.
Next, we looked at a one-to-one, onto affine map from the interval minus 1 to 1 to a
general interval a b, and then, using this map, we defined the Gauss formula with two
points for the interval a b. We obtained an error formula for this numerical
quadrature, and then we looked at composite Gaussian quadrature with two points. So,
our interval a b was subdivided into small intervals of length b minus a by n. On each
of these intervals, we applied our basic Gauss formula with two points, and then we
obtained a composite Gaussian quadrature whose error is of the order of h raised to 4.
So, it is the same as for the composite Simpson's rule, under the assumption that our
function is four times differentiable.
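To make the recap concrete, here is a minimal Python sketch of this composite
two-point Gauss rule; the function name and the test integrand are illustrative
choices, not from the lecture.

```python
import numpy as np

def gauss2_composite(f, a, b, n):
    """Composite two-point Gauss quadrature on [a, b] with n subintervals.

    On each subinterval the nodes are the affine images of -1/sqrt(3)
    and +1/sqrt(3); the weights on [-1, 1] are both 1, which become h/2.
    """
    h = (b - a) / n
    t = np.array([-1.0, 1.0]) / np.sqrt(3.0)  # Gauss points on [-1, 1]
    total = 0.0
    for k in range(n):
        mid = a + k * h + h / 2
        total += (h / 2) * np.sum(f(mid + (h / 2) * t))
    return total

# Example: integral of exp on [0, 1]; halving h should cut the error
# by roughly a factor of 16, consistent with order h**4.
for n in (1, 2, 4, 8):
    print(n, abs(gauss2_composite(np.exp, 0.0, 1.0, n) - (np.e - 1.0)))
```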
Today, what we are going to do is define a general Gauss formula. So far, we had
considered only two points. Now, we will first define what we mean by n plus 1 Gauss
points. The two points minus 1 by root 3 and plus 1 by root 3 were obtained by looking
at three functions - 1, x, x square - and orthonormalizing them.
We will use the same idea: we will look at 1, x, x square, x cube, and so on,
orthonormalize these functions, and get orthonormal polynomials. The zeros of these
orthonormal polynomials are going to be our Gauss points. They will have a similar
property: if you look at n plus 1 Gauss points, fit a polynomial of degree less than
or equal to n, and integrate, then we are going to get a formula for numerical
quadrature of the type summation w i f of x i, i going from 0 to n.
Now, this formula we would expect to be exact for polynomials of degree less than or
equal to n, but we will see that it is going to be exact for polynomials of degree
less than or equal to 2 n plus 1.
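In symbols, the claim we are heading toward is the following; this just restates the
statement above.

```latex
% Gauss quadrature with the n+1 nodes x_0, ..., x_n (zeros of the
% degree-(n+1) orthonormal polynomial) and weights w_0, ..., w_n:
\[
  \int_a^b f(x)\,dx \;\approx\; \sum_{i=0}^{n} w_i\, f(x_i),
\]
% exact whenever f is a polynomial of degree at most 2n+1:
\[
  \int_a^b p(x)\,dx \;=\; \sum_{i=0}^{n} w_i\, p(x_i)
  \qquad \text{for all polynomials } p \text{ of degree} \le 2n+1.
\]
```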
So, let us first define the Legendre polynomials and then define the Gaussian
quadrature. Our setting is X equal to C[a, b]. We have got our inner product: the
inner product of f and g is going to be integral a to b f of x g of x d x. Look at the
functions f 0 of x is equal to 1, f 1 of x is equal to x, f n of x is equal to x
raised to n, and so on.
Norm of f is going to be the induced norm, denoted by norm f 2: the positive square
root of the inner product of f with itself. Then the Gram-Schmidt orthonormalization
is: g 0 of x is going to be equal to f 0 upon norm of f 0.
Then for n is equal to 1, 2, and so on, our function r n is the function f n minus
summation, j going from 0 to n minus 1, of the inner product of f n with g j
multiplied by g j. So, we have come up to the stage n minus 1; we have calculated g 0,
g 1, up to g n minus 1, and we subtract this term from f n. Now, by the very
definition, if I look at the inner product of r n with g k, where k varies from 0 to
n minus 1, that inner product is going to be 0. Then we normalize: g n is r n divided
by norm of r n. The polynomials which we obtain - g 0, g 1, g 2, and so on - have the
property that the span of f 0, f 1, ..., f n, that means the set of all linear
combinations of f 0, f 1, ..., f n, is the same as the span of g 0, g 1, up to g n.
A linear combination of f 0, f 1, ..., f n is going to be a polynomial a 0 plus a 1 x
plus ... plus a n x raised to n. Now, look at our function r n. In r n, we have got
this function f n, which is f n of x equal to x raised to n, and then we are
subtracting something.
Now, each g j, for j going from 0 to n minus 1, is going to be a polynomial of degree
less than or equal to n minus 1. So, f n is x raised to n, and we are subtracting a
polynomial of degree less than or equal to n minus 1. So, r n is going to be a
polynomial of degree n, and we are dividing by a constant. So, g n is going to be a
polynomial of degree n. These g 0, g 1, g 2, up to g n are known as the Legendre
polynomials.
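To make the construction concrete, here is a small sympy sketch of this Gram-Schmidt
process on the interval minus 1 to 1; the helper name `inner` is my own choice, and
the interval is the classical one.

```python
import sympy as sp

x = sp.Symbol('x')
a, b = -1, 1  # the classical interval; any [a, b] works the same way

def inner(p, q):
    """Inner product <p, q> = integral from a to b of p(x) q(x) dx."""
    return sp.integrate(p * q, (x, a, b))

monomials = [x**n for n in range(4)]  # f_0 = 1, f_1 = x, f_2 = x**2, ...
gs = []
for f in monomials:
    r = f - sum(inner(f, g) * g for g in gs)        # subtract projections
    gs.append(sp.expand(r / sp.sqrt(inner(r, r))))  # normalize

for n, g in enumerate(gs):
    print(f"g_{n}(x) =", g)
# prints sqrt(2)/2, sqrt(6)*x/2, 3*sqrt(10)*x**2/4 - sqrt(10)/4, ...
```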
So now, g n is a polynomial of degree n; it is going to have n roots, but what is
important is that those n roots are going to be distinct. I am not going to prove that
part, but it is a property of the Legendre polynomials.
So, g n has got n roots, those n roots are distinct, and those are known as our Gauss
points. So, we will look at the n plus 1 Gauss points, fit a polynomial of degree less
than or equal to n, integrate it, and then we will get the formula for Gaussian
integration.
The orthonormality property of our Legendre polynomials g j tells us that if you look
at the inner product of g i with g j, it will be equal to 1 if i is equal to j and 0
if i is not equal to j. So, in particular, if you look at g n plus 1, this g n plus 1
will be perpendicular to the functions g 0, g 1, up to g n. It will also be
perpendicular to g n plus 2, but that part we do not need. Now, our g n plus 1 is
perpendicular to g 0, g 1, up to g n.
The span of g 0, g 1, g 2, ..., g n was the space of polynomials of degree less than
or equal to n, and hence our g n plus 1 is going to be perpendicular to any polynomial
of degree less than or equal to n. So, we have: the inner product of g n plus 1 with
g j is equal to 0 for j equal to 0, 1, up to n; the span of g 0, g 1, ..., g n is
equal to the span of 1, x, ..., x raised to n; and hence the inner product of
g n plus 1 with a polynomial a 0 plus a 1 x plus ... plus a n x raised to n is equal
to 0, for any values of the coefficients a 0, a 1, ..., a n, which are real numbers.
Now, g n plus 1 has got n plus 1 distinct zeros.
So, let me denote those zeros by x 0, x 1, ..., x n. g n plus 1 is a polynomial of
degree n plus 1; let us factorize it. Since these are the zeros of g n plus 1, you
will have the factors x minus x 0, x minus x 1, ..., x minus x n. We have got in all
n plus 1 brackets, which means the product is going to contribute the x raised to
n plus 1 term and then lower order terms, because g n plus 1 is a polynomial of degree
n plus 1. So, g n plus 1 is alpha n plus 1 times x minus x 0 into x minus x 1 into ...
into x minus x n.
Here the coefficient alpha n plus 1 is going to be a constant; it cannot be a function
of x, because if it were a function of x, then the product would be a polynomial of
degree bigger than n plus 1, but g n plus 1 is a polynomial of exact degree n plus 1.
And g n plus 1 is perpendicular to a 0 plus a 1 x plus ... plus a n x raised to n for
any values of a 0, a 1, ..., a n, and hence we can conclude that x minus x 0 into
x minus x 1 into ... into x minus x n is going to be perpendicular to x raised to j
for j equal to 0, 1, up to n.
You substitute here once a 0 equal to 1 and the remaining coefficients 0, then a 1
equal to 1 and the remaining coefficients 0, and so on. Alpha n plus 1 is a constant,
so it comes out of the integration sign, and we have got this result. This product
x minus x 0 into x minus x 1 into ... into x minus x n we denote by w of x.
So, this is a crucial property of our Gauss points. g n plus 1 is the Legendre
polynomial of degree n plus 1 obtained by orthonormalizing the functions 1, x,
x square, ..., x raised to n plus 1. This g n plus 1 has got n plus 1 zeros. Those
zeros are distinct, so we denote them by x 0, x 1, ..., x n, and then, if you look at
w of x, which is x minus x 0 into x minus x 1 into ... into x minus x n, its inner
product with x raised to j is going to be 0 for j equal to 0, 1, up to n. Using this
property, we will show that our Gaussian quadrature is going to be exact for
polynomials of degree higher than n.
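One can check this orthogonality numerically. Here is a small numpy sketch on the
interval minus 1 to 1; note that the orthonormal g n plus 1 is a scalar multiple of
the classical Legendre polynomial of degree n plus 1, so they share the same zeros.

```python
import numpy as np
from numpy.polynomial import legendre

n = 3
# Zeros of the degree-(n+1) Legendre polynomial: the n+1 Gauss points.
nodes = legendre.legroots([0] * (n + 1) + [1])

# w(x) = (x - x_0)(x - x_1)...(x - x_n), built from its roots.
w = np.poly1d(nodes, r=True)

# Check <w, x^j> = integral over [-1, 1] of w(x) * x**j dx for j = 0..n.
for j in range(n + 1):
    antideriv = np.polyint(w * np.poly1d([1] + [0] * j))  # times x**j
    print(j, antideriv(1.0) - antideriv(-1.0))  # all ~ 0 up to rounding
```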
So now, let us look at the interpolating polynomial in its Lagrange form. p n of x
will be given by summation f of x i times l i of x, i going from 0 to n. We have now
fixed our interpolation points x 0, x 1, ..., x n; we are fitting a polynomial, we are
integrating, and then we get a formula of the type summation w i into f of x i, i
going from 0 to n. Then we are going to look at the error part. The error part is an
integral from a to b of two functions. One function is the divided difference of f
based on x 0, x 1, ..., x n and x, and it is multiplied by w of x, d x, and then that
integral. If, instead of the divided difference based on the point x, we had a divided
difference based on some fixed points, then I could have taken it out of the
integration sign and used the fact that integral a to b w of x d x is equal to 0,
because we have got integral of w of x times x raised to j equal to 0 for j equal to
0, 1, up to n.
As a particular case, when we have got j equal to 0, that means integral a to b
w of x d x is equal to 0. This property we can use by replacing our divided difference
based on x 0, x 1, ..., x n, x by a divided difference based on, say, x 0 repeated
twice, x 1, x 2, ..., x n, plus one more term, which is obtained by using the
recurrence formula for divided differences.
We have used this method earlier; so, we are going to use it now for this Gaussian
integration.
So, we have f of x minus p n of x, the error, equal to the divided difference of f
based on x 0, x 1, ..., x n, x into w of x, where w of x is the product of x minus x 0
up to x minus x n, and integral a to b of w of x times x raised to j is equal to 0.
Integrate both the sides. So, you have integral a to b f of x d x minus integral a to
b p n of x d x equal to the integral of this error term consisting of two parts - one
the divided difference, the other the function w of x.
Now, look at the divided difference based on x 0, x 1, ..., x n, x. Using the
recurrence relation repeatedly, we can write this to be equal to the divided
difference of f based on x 0 repeated twice, x 1, x 2, ..., x n, plus next the divided
difference based on x 0 repeated twice, x 1 repeated twice, and x 2, x 3, up to x n
appearing only once, multiplied by x minus x 0. In the next term, x 2 also will be
repeated twice and we will have x minus x 0 into x minus x 1, and so one continues.
What one gets at the end is the divided difference based on x 0 repeated twice, x 1
repeated twice, ..., x n repeated twice, and then x, multiplied by x minus x 0 into
x minus x 1 into ... into x minus x n, which is nothing but our w of x; and a property
of w of x is that integral a to b of w of x times x raised to j is equal to 0 for j
equal to 0, 1, up to n. So, now, you integrate this expansion multiplied by w of x;
that integral is our error in the numerical quadrature.
When I do that, the first term is a constant, so it comes out of the integration sign,
and integral of w of x d x is 0. So, there is no contribution from this term. In the
next term, again the divided difference is a constant, not depending on x, and we are
going to have w of x multiplied by x minus x 0, d x; w of x is perpendicular to the
constant function 1 and to the function x. So, there will be no contribution from this
term, and likewise for all the terms except the last one. So, our integral a to b of
the divided difference of f based on x 0, x 1, ..., x n, x times w of x d x becomes
equal to integral a to b of the divided difference of f based on x 0 repeated twice,
x 1 repeated twice, ..., x n repeated twice, x, times w of x square - one w of x from
the expansion and one w of x from the multiplication, so we have got w of x square.
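In symbols, the expansion and the cancellation just described read as follows; this is
only a restatement of the steps above.

```latex
\[
\begin{aligned}
f[x_0,\dots,x_n,x]
&= f[x_0,x_0,x_1,\dots,x_n]
 + f[x_0,x_0,x_1,x_1,x_2,\dots,x_n]\,(x-x_0) + \cdots \\
&\quad + f[x_0,x_0,\dots,x_n,x_n]\,(x-x_0)\cdots(x-x_{n-1})
 + f[x_0,x_0,\dots,x_n,x_n,x]\,w(x).
\end{aligned}
\]
% Multiplying by w(x) and integrating, every term except the last
% vanishes, since w is orthogonal to all polynomials of degree <= n:
\[
\int_a^b f[x_0,\dots,x_n,x]\,w(x)\,dx
  \;=\; \int_a^b f[x_0,x_0,\dots,x_n,x_n,x]\,w(x)^2\,dx.
\]
```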
Now, our error is the integral of two functions. One function is continuous - we
assume f to be sufficiently differentiable - namely the divided difference based on
x 0 repeated twice, x 1 repeated twice, ..., x n repeated twice, which are in total
2 n plus 2 points, and then the point x; the other is w of x square. w of x square
will always be bigger than or equal to 0, and hence the mean value theorem for
integration is applicable. So, using this mean value theorem for integration, we can
take the divided difference term out of the integration as the divided difference of f
based on x 0 repeated twice, x 1 repeated twice, ..., x n repeated twice, and some
point c, multiplied by integral a to b w of x square d x. As I said, we have got x 0
repeated twice, x 1 repeated twice, ..., x n repeated twice - those are 2 n plus 2
points - and this point x; here the point x gets replaced by the point c because we
are taking the divided difference out of the integration.
This c is some fixed point, and the divided difference is going to be equal to the
2 n plus second derivative of f evaluated at some point c, upon 2 n plus 2 factorial,
and then we have integral a to b of x minus x 0 square into ... into x minus x n
square d x. So, we have a formula: integral a to b f of x d x is approximately equal
to summation w i f of x i, i going from 0 to n. It is based on n plus 1 points, and
the error contains the 2 n plus second derivative of our function.
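Collecting the pieces, the error formula just described is:

```latex
\[
\int_a^b f(x)\,dx \;-\; \sum_{i=0}^{n} w_i\, f(x_i)
  \;=\; \frac{f^{(2n+2)}(c)}{(2n+2)!}
        \int_a^b (x-x_0)^2 \cdots (x-x_n)^2\,dx
\]
% for some point c in (a, b).
```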
This means that if our function f is a polynomial of degree less than or equal to
2 n plus 1, then the error is going to be equal to 0. We had considered the case n
equal to 1, with points x 0 and x 1. In that case, there is no error provided f is a
polynomial of degree 2 n plus 1; n equal to 1 means a cubic polynomial.
So, this result generalizes, and we have got a way: you choose your interpolation
points such that you are interpolating the given function at n plus 1 points - so you
are fitting a polynomial of degree n - but the error is 0 for polynomials of degree
less than or equal to 2 n plus 1.
Now, for this integration at Gauss points, the modulus of the error comes out to be
less than or equal to the infinity norm of the 2 n plus second derivative of f, upon
2 n plus 2 factorial, times the integral of x minus x 0 square into ... into x minus
x n square. This integral will definitely give a term b minus a raised to 2 n plus 3:
each of the factors I can dominate by b minus a, so I will have b minus a raised to
2 n plus 2, and then the integral a to b contributes one more factor. That is how you
get b minus a raised to 2 n plus 3 and some constant. Now, one can find a more precise
bound by actually integrating, not dominating by b minus a; that is what we have done
before. But anyway, the error for the Gaussian integration is going to be less than or
equal to this.
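In symbols, the crude bound described above, obtained by dominating each factor by
b minus a, is:

```latex
\[
\left|\int_a^b f(x)\,dx - \sum_{i=0}^{n} w_i\, f(x_i)\right|
  \;\le\; \frac{\|f^{(2n+2)}\|_\infty}{(2n+2)!}\,(b-a)^{2n+3}.
\]
```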
So now, we have defined Gaussian quadrature for the general case of n plus 1 points.
Now the question comes: is this going to converge for all continuous functions? That
means our set of interpolation points is always going to be the Gauss points. We have
already looked at 2 Gauss points; those were our two points in the interval a b. Now,
if you look at 3 Gauss points, they will be something different. So, like that, if you
choose your Gauss points as interpolation points, fit a polynomial, and obtain an
approximate formula for integration, will it converge to integral a to b f of x d x
as n tends to infinity?
Please note that we are not looking at a composite rule; we are increasing the degree
of the polynomial. We already know that, no matter how you choose your nodes, there
always exists a continuous function for which the interpolating polynomial p n does
not converge to f in the maximum norm. Now, the convergence of p n to f is only a
sufficient condition for the convergence of the numerical quadrature formula. It can
happen that, even though the interpolating polynomial does not converge to f, for all
continuous functions our numerical integral still converges to integral a to b
f of x d x. This is what happens for the Gaussian quadrature rule, and it does not
happen for the Newton-Cotes formulae. In both cases we want to look at n plus 1 points
in the interval a b: in the case of the Newton-Cotes formula, we take those points to
be equidistant, and in the case of Gaussian quadrature, we take them to be the Gauss
points, which are the zeros of the Legendre polynomial.
Now, to prove the convergence of the numerical quadrature to integral a to b
f of x d x, two facts are going to be crucial. One is that we are going to show that
our weights in the Gaussian integration are always bigger than 0; this is the first
one. The second one is the Weierstrass theorem, that any continuous function can be
approximated by polynomials in the maximum norm. Using these two results, we are going
to show that the Gaussian quadrature converges to integral a to b f of x d x as n
tends to infinity, where n plus 1 is the number of interpolation points. So, let us
show that the weights in the Gaussian integration are always bigger than 0.
Now, so far, when we were writing a numerical quadrature formula, we were writing
summation w i f of x i, i going from 0 to n. As such, our w i's and x i's depend on n.
Look at equidistant points: in that case, we first had two points, the two end points
a and b; the next case was a, b, and a plus b by 2; but in the case after that, when
we want to consider four points, our points will be a, b, and then two points at a
distance b minus a by 3. So, if we wanted to be specific, we should have written our
x i's and our weights as depending on n.
So far we had been fixing the degree of the polynomial; that is why, in order not to
make the notation cumbersome, we wrote w i and x i with the dependence on n understood
or implicit.
Now, we are going to change n. So, let us be more precise with our notation and write
w i depending on n and x i depending on n. Now, the weights w i are integral a to b
l i of x d x, where l i is the Lagrange polynomial. Here I have still not written the
dependence on n, but afterwards, when we look at convergence, we will write it
explicitly; at present, I am writing w i with the understanding that it depends on n.
How do we obtain the w i's? We look at the interpolating polynomial p n in its
Lagrange form, that is, summation f of x i l i of x, i going from 0 to n. Integrate
it; the f of x i's are constants, so they come out of the integration sign, and
integral a to b l i of x d x is our w i. These Lagrange polynomials have the property
that summation, j going from 0 to n, of l j of x is equal to 1. It was one of our
tutorial problems that the Lagrange polynomials, when you add them up, are equal to 1.
Hence, I write w i as integral a to b of l i of x multiplied by 1 d x, where the 1 I
write as summation, j going from 0 to n, of l j of x.
Let us split this sum into the term with j equal to i and the remaining terms with j
not equal to i. So, w i is integral a to b l i square of x d x plus summation, j going
from 0 to n, j not equal to i, of integral a to b l i of x l j of x d x. What we are
going to show is that this second term is equal to 0. If we can show that, then w i
will be strictly bigger than 0, because it will be integral a to b l i square of x
d x. So, that is the idea, and in order to show that integral a to b l i of x
l j of x d x is equal to 0, we will use the fact that our interpolation points are not
arbitrary points in the interval a b but are some special points.
They have the property that when you look at w of x, which is x minus x 0 into
x minus x 1 into ... into x minus x n, this w of x is perpendicular to the functions
x raised to j for j going from 0, 1, up to n. Using this property, let us show that
integral a to b l i of x l j of x d x is equal to 0 if i is not equal to j. So, we
look at the case i not equal to j and the product l i of x l j of x. The definition of
l i of x is the product, k going from 0 to n, k not equal to i, of x minus x k divided
by x i minus x k. Similarly, l j of x will be the product, say l going from 0 to n, l
not equal to j, of x minus x l divided by x j minus x l. If you do not want to use the
letter l, it can be any other letter; it is just a dummy index. So, this is the
product of l i of x into l j of x.
So, look at the first product. The first product contains all the factors x minus x k
except the one with k equal to i. In the second one, we have got all the factors
x minus x l except the one with l equal to j; since we are assuming that i is not
equal to j, the term x minus x i will be there. So, I take the term x minus x i from
the second product and join it with the first. What I will have is x minus x 0 into
... into x minus x n, now including the term x minus x i, divided by the product, k
going from 0 to n, k not equal to i, of x i minus x k. The factor I am taking from the
second product is x minus x i divided by x j minus x l with l equal to i, so its
denominator is x j minus x i, and its numerator x minus x i is absorbed into the first
product. The remaining part of the second product becomes the product, l going from 0
to n, l not equal to j, l not equal to i, of x minus x l divided by x j minus x l. The
numerator x minus x 0 into ... into x minus x n is going to be our function w of x.
The denominator is a constant. Now, look at the remaining product. It has got n minus
1 brackets, because in total there would be n plus 1 brackets and two brackets are
missing; so, it is going to be a polynomial of degree n minus 1. So, we have got our
l i of x into l j of x to be w of x divided by some constant and multiplied by a
polynomial of degree n minus 1.
We are interested in showing that integral a to b l i of x l j of x d x is equal to 0
for i not equal to j. So, integral a to b l i of x l j of x d x will be equal to
integral a to b of w of x, divided by some constant c, multiplied by q n minus 1 of x,
a polynomial of degree n minus 1; and we use the fact that w is perpendicular to
q n minus 1. So, since x 0, x 1, ..., x n are Gauss points, the integral of w of x
into q n minus 1 of x d x is going to be 0. On the other hand, l i of x is a
polynomial of degree n, so it cannot be identically 0, and hence integral a to b l i
square of x d x is bigger than 0. Hence our w i's are going to be bigger than 0. So,
it is a very important property of Gaussian integration that the weights are always
bigger than 0.
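Both facts, positivity of the weights and summation of the weights equal to b minus a,
can be verified numerically; here is a small sketch using numpy's built-in
Gauss-Legendre routine, with the interval 0 to 3 as an arbitrary choice.

```python
import numpy as np

# leggauss(m) returns the m Gauss points and weights on [-1, 1].
a, b = 0.0, 3.0
for m in (2, 5, 10, 20):
    nodes, weights = np.polynomial.legendre.leggauss(m)
    w_ab = 0.5 * (b - a) * weights  # weights transported to [a, b]
    print(m, weights.min() > 0, w_ab.sum())  # all positive; sum is b - a
```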
Using this property, we are now going to show the convergence of Gaussian integration
when we consider the interpolating polynomial based on these Gauss points. Our proof
is going to be based on, first, the weights being bigger than 0; second, the
Weierstrass approximation theorem; and third, the property that in the Gaussian
integration there is no error provided your function is a polynomial of degree less
than or equal to 2 n plus 1.
So, let the function f be a continuous function. Let us introduce the notation I n f
for summation, i going from 0 to n, of w i n f of x i n; now I am denoting the
dependence on n explicitly. There is no error, that is, integral a to b f of x d x is
the same as I n f, provided f is a polynomial of degree less than or equal to 2 n plus
1. As a special case, if I take f of x equal to 1, then integral a to b 1 d x is going
to be equal to b minus a, and I n for that function will be summation, i going from 0
to n, of w i n; so summation of the w i n's is equal to b minus a.
Now, our claim is that I n f converges to integral a to b f of x d x as n tends to
infinity. So, the first thing is the Weierstrass theorem. What we want to show is that
modulus of integral a to b f of x d x minus I n f is less than epsilon, or a constant
times epsilon, if your n is big enough. So, I fix an epsilon greater than 0, and then,
by the Weierstrass approximation theorem, there exists a polynomial, say q m, of
degree less than or equal to m, such that the infinity norm of f minus q m is less
than epsilon. Then integral a to b q m of x d x is going to be equal to I n q m, where
I n q m is our approximate quadrature, provided your n is bigger than or equal to
m minus 1 by 2: q m is a polynomial of degree less than or equal to m, and with
n plus 1 points the formula is exact for polynomials of degree less than or equal to
2 n plus 1. That is how I get that if n is bigger than or equal to m minus 1 by 2,
then integral a to b q m of x d x is equal to I n q m. This is our first step.
In the next step, we want to show that modulus of integral a to b f of x d x minus
I n f is going to be less than epsilon or a constant times epsilon. We have fixed
epsilon bigger than 0 and found a q m such that the infinity norm of f minus q m is
less than epsilon. If my n is bigger than or equal to m minus 1 by 2, then integral a
to b q m of x d x is the same as I n q m. So, I add and subtract that, and I get two
pieces. The first will be less than or equal to integral a to b modulus of f of x
minus q m of x d x; for the second we have I n, that is, summation, j going from 0 to
n, of w j n q m of x j n minus w j n f of x j n.
Now, since our w j n's are bigger than 0, I do not have to write the modulus of the
weights here; otherwise, we would have to. Modulus of f of x minus q m of x is less
than or equal to the infinity norm of f minus q m, and integral a to b d x is
b minus a. Modulus of q m of x j n minus f of x j n is also less than or equal to the
infinity norm of f minus q m, because the infinity norm means the maximum of modulus
of f of x minus q m of x for x belonging to a b. What is left is summation, j going
from 0 to n, of w j n, and that is equal to b minus a. So, you get the whole thing to
be less than 2 epsilon into b minus a.
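Written out, the estimate reads as follows, valid as soon as n is at least m minus 1
by 2 so that the q m term integrates exactly:

```latex
\[
\Bigl|\int_a^b f(x)\,dx - I_n f\Bigr|
 \le \int_a^b |f(x)-q_m(x)|\,dx
   + \sum_{j=0}^{n} w_j^{(n)}\,\bigl|q_m(x_j^{(n)}) - f(x_j^{(n)})\bigr|
 \le \|f-q_m\|_\infty (b-a) + \|f-q_m\|_\infty \sum_{j=0}^{n} w_j^{(n)}
 < 2\,\varepsilon\,(b-a).
\]
```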
So, for a fixed epsilon, we have found an n such that, for n big enough, I n f is
within 2 epsilon into b minus a of integral a to b f of x d x. This is convergence of
the Gaussian quadrature. So now, what goes wrong with the Newton-Cotes formula? Why
can I not use the same argument? I start with f belonging to C[a, b], and then, in the
case of the Newton-Cotes formula, I am going to have n plus 1 interpolation points,
and the rule is going to be exact for polynomials of degree less than or equal to n.
So, this is one difference: for the Gaussian quadrature we had exactness for
polynomials of degree less than or equal to 2 n plus 1. But that by itself should not
matter, because anyway I want modulus of integral a to b f of x d x minus I n f to be
less than epsilon when n is big enough. Maybe in the Newton-Cotes formula I will have
to choose n bigger than in the Gaussian quadrature, but for convergence that does not
matter. What we want is: given epsilon, for n large enough, modulus of integral a to b
f of x d x minus I n f should be less than epsilon.
So, let us see where our proof breaks down. We have equidistant points, and our
quadrature rule is going to be exact for polynomials of degree less than or equal to
n. Fix our function f and let us find a q m: by the Weierstrass theorem, there is a
q m such that the infinity norm of f minus q m is less than epsilon. Then integral a
to b q m of x d x will be equal to I n q m provided n is bigger than or equal to m. In
the case of Gaussian quadrature, we needed only n bigger than or equal to m minus 1 by
2; here I have it only for n bigger than or equal to m.
I look at modulus of integral a to b f of x d x minus I n f; I add and subtract
integral a to b q m of x d x, and I get two terms. The modulus of integral a to b
f of x d x minus integral a to b q m of x d x is less than or equal to integral a to b
modulus of f of x minus q m of x d x, which can be dominated by the infinity norm of
f minus q m into b minus a; so, this term is less than epsilon into b minus a. Then
look at the term I n f minus I n q m. This will be summation, j going from 0 to n, of
w j n f of x j n minus summation, j going from 0 to n, of w j n q m of x j n. By the
triangle inequality, this is going to be less than or equal to summation, j going from
0 to n, of modulus of w j n times modulus of f of x j n minus q m of x j n. The second
factor is less than epsilon, and you are left with summation, j going from 0 to n, of
modulus of w j n. So, here is the crucial difference. For the Gaussian quadrature,
this modulus of w j n was the same as w j n, so we had summation, j going from 0 to n,
of w j n, and that is equal to b minus a.
In the Newton-Cotes formula also, summation of w j n is going to be equal to
b minus a; that fact still remains. But what comes into the picture in our error
estimate is summation of modulus of w j n, and in the case of the Newton-Cotes
formulae, the weights are going to be of mixed signs; that means they can be both
positive and negative, and in fact summation of modulus of w j n grows without bound
as n increases. That is why there is no convergence if you choose your points to be
equidistant, and hence we went to composite numerical quadrature in the case of
equidistant points or Newton-Cotes formulae: we had the special cases of the
trapezoidal rule, then Simpson's rule, and then one can write down higher degree
rules. For the Gaussian quadrature, on the other hand, we have got convergence, so we
have a choice: instead of considering composite rules, we can increase the degree of
the polynomial and get a numerical quadrature formula from higher degree polynomials.
So, if your function f is sufficiently smooth, then it is worthwhile to apply Gaussian
integration of higher order rather than, say, the composite trapezoidal or composite
Simpson's rule, because the speed of convergence is going to be very high for Gaussian
integration.
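The mixed signs just mentioned can be seen directly; here is a small sketch using
scipy's Newton-Cotes weight routine (the normalization to a unit interval is my
choice).

```python
import numpy as np
from scipy.integrate import newton_cotes

# Newton-Cotes weights on n+1 equidistant points; negative weights
# appear from n = 8 onwards, and sum(|w|) grows while sum(w) stays 1.
for n in (4, 8, 12, 16):
    weights, _ = newton_cotes(n, equal=1)
    w = weights / n  # scale so that sum(w) = 1 on an interval of length 1
    print(n, w.min(), np.abs(w).sum())
```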
So, if your function f is sufficiently smooth, then one should use Gaussian
integration of higher order. Now, let us look at the disadvantages of Gaussian
integration. The two Gauss points in the interval minus 1 to 1 which we obtained were
minus 1 by root 3 and 1 by root 3.
Similarly, the higher order Gauss points are going to be irrational. That seems to be
a stumbling block, and one may prefer the simple Simpson's rule; but when you are
writing a program, it should not be a stopping point, because tables of the Gauss
points and Gauss weights are available for higher degrees or for the general case. So,
initially, while writing the program, it may be a bit more trouble, but afterwards it
pays off.
Now, another drawback is this: suppose I have got 2 Gauss points, minus 1 by root 3
and 1 by root 3 in the interval minus 1 to 1, and I calculate. Now, I find that the
accuracy is not good enough, so I go to 3 Gauss points. The 3 Gauss points are
entirely different, so whatever work we have done for the 2 Gauss points is lost. That
is one of the disadvantages of the Gauss points. But, as I said, if your function is
sufficiently differentiable, then with Gaussian quadrature we are going to get very
fast convergence. In our next lecture, we will consider Romberg integration and then
we will solve some problems. So, thank you.
