Wednesday, August 3, 2016

High Accuracy Technique For Solving Numerical Integrals

ERROR AREA REDUCTION INTEGRAL 



In this material, I want to show a way to compute the definite integral of a function y = f(x), between x = x1 and x = x2, with greater accuracy than either Simpson's rule or plain integration of an approximating polynomial.

Consider a polynomial approximation to the curve y = f(x), say y = h(x), where h(x) is a polynomial that approximates f(x).







Since f(x) is the function we wish to integrate, and so is already known, we need to compute h(x), which is our ?

  Approximating function.

Sure you knew that already!

No doubt there are countless methods for generating the polynomial function h(x), but here we shall use one of the simplest ones available.

First we choose the limits for the approximation, say x = x1 and x = x2, between which we wish to compute the integral.

Next, we choose the highest integer power that we wish to have in the polynomial, say n, which will be the degree of our polynomial, h(x).
So the polynomial will be h(x) = a0 + a1·x + a2·x^2 + a3·x^3 + ... + a(n-1)·x^(n-1) + an·x^n

Now, we can easily calculate the values of a0, a1, a2, a3, ..., an,
since we know the values of the function we wish to approximate.

All we do is take a number of collocation points between x = x1 and x = x2, say m points.

These are the exact x-values where y = f(x) touches its approximating polynomial,
y = h(x).

m is directly related to the degree of our polynomial approximation by m = n + 1.

Now we calculate the x-spacing between consecutive collocation points, say dx.

The x points we are taking are:

x1, x1 + dx, x1 + 2·dx, x1 + 3·dx, ..., x1 + n·dx.

So since x2 = x1 + n·dx,

dx = (x2 - x1)/n.
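As a small illustration (my own sketch, not code from the post; the class and method names are made up), generating these collocation points in Java is just a short loop:

// Sketch: generate the m = n + 1 equally spaced collocation points on [x1, x2]
// for a degree-n polynomial approximation. Names are illustrative only.
public class CollocationPoints {
    static double[] points(double x1, double x2, int n) {
        double dx = (x2 - x1) / n;        // spacing between collocation points
        double[] xs = new double[n + 1];  // m = n + 1 points
        for (int i = 0; i <= n; i++) {
            xs[i] = x1 + i * dx;
        }
        return xs;
    }

    public static void main(String[] args) {
        // The example used below: degree 5 on [1, 3] gives 1.0, 1.4, 1.8, 2.2, 2.6, 3.0
        for (double x : points(1.0, 3.0, 5)) {
            System.out.println(x);
        }
    }
}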

So if we wish to approximate y = sin(x), for example, with a polynomial of degree n = 5,

between x = 1 and x = 3,

then we would have:

dx = (3 - 1)/5 = 0.4.

So x1 = 1, x2 = 1.4, x3 = 1.8, x4 = 2.2, x5 = 2.6, x6 = 3.

Of course this is consistent with the number of collocation points, m, being n + 1 = 6.

So the approximating polynomial gives:

 h(1.0) = sin(1.0) = a0 + 1.0·a1 + 1.0^2·a2 + 1.0^3·a3 + 1.0^4·a4 + 1.0^5·a5
 h(1.4) = sin(1.4) = a0 + 1.4·a1 + 1.4^2·a2 + 1.4^3·a3 + 1.4^4·a4 + 1.4^5·a5
 h(1.8) = sin(1.8) = a0 + 1.8·a1 + 1.8^2·a2 + 1.8^3·a3 + 1.8^4·a4 + 1.8^5·a5
 h(2.2) = sin(2.2) = a0 + 2.2·a1 + 2.2^2·a2 + 2.2^3·a3 + 2.2^4·a4 + 2.2^5·a5
 h(2.6) = sin(2.6) = a0 + 2.6·a1 + 2.6^2·a2 + 2.6^3·a3 + 2.6^4·a4 + 2.6^5·a5
 h(3.0) = sin(3.0) = a0 + 3.0·a1 + 3.0^2·a2 + 3.0^3·a3 + 3.0^4·a4 + 3.0^5·a5

You see where this is going?


                                     Sure I do!


This is a perfect system of linear equations, which can be solved very quickly by a matrix program!
And guess what? I have one for you if you are really interested! And what's more?

 It's absolutely free! 

I have programmed it in the Java and C languages, and once requests are made in the comments section, I will make the code available.
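In the meantime, here is a hedged sketch (my own code, not the program mentioned above) of how the collocation system might be built and solved with plain Gaussian elimination and partial pivoting; fitPolynomial, solve and the other names are mine:

// Sketch: build and solve the collocation (Vandermonde) system for the
// coefficients a0..an of h(x). Not the author's program.
import java.util.function.DoubleUnaryOperator;

public class CollocationSolver {

    // Solve A·a = y by Gaussian elimination with partial pivoting.
    static double[] solve(double[][] A, double[] y) {
        int n = y.length;
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(A[r][col]) > Math.abs(A[pivot][col])) pivot = r;
            double[] tmpRow = A[col]; A[col] = A[pivot]; A[pivot] = tmpRow;
            double tmp = y[col]; y[col] = y[pivot]; y[pivot] = tmp;
            for (int r = col + 1; r < n; r++) {
                double factor = A[r][col] / A[col][col];
                for (int c = col; c < n; c++) A[r][c] -= factor * A[col][c];
                y[r] -= factor * y[col];
            }
        }
        double[] a = new double[n];               // back substitution
        for (int r = n - 1; r >= 0; r--) {
            double sum = y[r];
            for (int c = r + 1; c < n; c++) sum -= A[r][c] * a[c];
            a[r] = sum / A[r][r];
        }
        return a;
    }

    // Coefficients a0..an of the degree-n polynomial matching f at the
    // n + 1 equally spaced collocation points on [x1, x2].
    static double[] fitPolynomial(DoubleUnaryOperator f, double x1, double x2, int n) {
        int m = n + 1;
        double dx = (x2 - x1) / n;
        double[][] A = new double[m][m];
        double[] y = new double[m];
        for (int i = 0; i < m; i++) {
            double x = x1 + i * dx;
            for (int j = 0; j < m; j++) A[i][j] = Math.pow(x, j);  // Vandermonde row
            y[i] = f.applyAsDouble(x);
        }
        return solve(A, y);
    }

    public static void main(String[] args) {
        // Degree-5 fit to sin(x) on [1, 3]; should match (up to rounding)
        // the coefficients quoted below.
        double[] a = fitPolynomial(Math::sin, 1.0, 3.0, 5);
        for (int j = 0; j < a.length; j++) System.out.println("a" + j + " = " + a[j]);
    }
}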

The system's solution is then:

h(x) = 0.05621527890686895 + 0.8070522394224585·x + 0.2659311952212848·x^2 - 0.35406795117551315·x^3 + 0.0696942723181288·x^4 - 0.003354049885331352·x^5


Try to find, say, h(1.7), bearing in mind that 1.7 lies between x = 1 and x = 3, the validity limits of the approximation.

You get h(1.7) = 0.9916802157371398

while its real value is f(1.7) = sin(1.7) = 0.9916648104524686.

Need I say that if we pick a larger degree n, we will get a longer and more accurate approximating function h(x)?

For instance for n = 10, we get

h(1.7) = 0.9916648104538464

which is much closer to:

f(1.7) = sin(1.7) = 0.9916648104524686
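As a side note (again a sketch of mine, with the coefficients copied from the quoted degree-5 fit), evaluating h at any point inside the validity limits is just Horner's rule:

// Sketch: evaluate the fitted polynomial h(x) with Horner's rule.
public class EvaluateH {
    static double horner(double[] a, double x) {
        double result = 0.0;
        for (int j = a.length - 1; j >= 0; j--) {
            result = result * x + a[j];   // fold in one coefficient per step
        }
        return result;
    }

    public static void main(String[] args) {
        // Coefficients a0..a5 of the degree-5 fit quoted above.
        double[] a = {
            0.05621527890686895,  0.8070522394224585,  0.2659311952212848,
           -0.35406795117551315,  0.0696942723181288, -0.003354049885331352
        };
        System.out.println("h(1.7)   = " + horner(a, 1.7));   // approximately 0.99168...
        System.out.println("sin(1.7) = " + Math.sin(1.7));    // approximately 0.99166...
    }
}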


Now the gist of all this is that since h(x) approximates f(x), we can apply simple polynomial integration to integrate h(x), since

h(x) ≅ f(x)


So also:


∫ h(x) dx ≅ ∫ f(x) dx


Now for a bit of theory, just a light dose, nothing too complicated!

Remember that diagram, the one about the naughty curve up there? Well, here it is!

[Figure: the curve y = f(x) and its approximating polynomial y = h(x) between x = x1 and x = x2, with end points labelled A1 and A2; the area between the two curves is the error area Ae.]
The area under the whole curve, i.e. from A1 to A2 along y = f(x), is the definite integral of y = f(x) between x1 and x2.

The area between A1 and A2 along y = h(x) is the definite integral of the approximating function, which is hence an approximate value for the definite integral of y = f(x).

It is evident, then, that the area Ae between y = f(x) and y = h(x) is the error incurred in computing
the definite integral of y = f(x) with the polynomial integral of y = h(x).


The point of discovery!

So can we compute or estimate the error area, Ae?

If we can compute it, then by adding it to Ah we can have a better approximation for the definite integral of y = f(x).

If the exact definite integral is Af, then 

Af ≅ Ah + Ae


Which can also be represented as:

∫ f(x) dx ≅ ∫ h(x) dx + Ae

(the integrals being taken between x = x1 and x = x2).
Before we proceed, a very important principle to note is that any approximating method can be used to generate h(x). Likewise, any ingenious thinker can come up with their own way to compute Ae.

I propose one method for calculating Ae here in this material.

Our technique will involve Simpson's rule.

First, we use Simpson's rule to estimate Af.


Simpson's approximation for ∫ f(x) dx between x1 and x2 is:

S(f(x)) = ((x2 - x1)/6)·[ f(x1) + 4·f((x1 + x2)/2) + f(x2) ]
Next, we use Simpson's rule to estimate Ah. Simpson's approximation for ∫ h(x) dx is:

S(h(x)) = ((x2 - x1)/6)·[ h(x1) + 4·h((x1 + x2)/2) + h(x2) ]
Can you see where this is going? Maybe not exactly this time?

Well, from

Af ≅ Ah + Ae .............(1)

This gives us that:

Ae ≅ Af - Ah


Now, since S(f(x)) is Simpson's area approximation for Af and S(h(x)) is Simpson's area approximation for Ah, let Sf = S(f(x)) and Sh = S(h(x)).
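As a small sketch (my own code, not from the post), both Sf and Sh can be computed by the same three-point Simpson routine, just passed a different function:

// Sketch: the three-point Simpson approximation S over a single interval [a, b],
// matching the formulas above. Works for f(x), h(x), or any other function.
import java.util.function.DoubleUnaryOperator;

public class SimpsonRule {
    static double simpson(DoubleUnaryOperator g, double a, double b) {
        double mid = 0.5 * (a + b);
        return (b - a) / 6.0
                * (g.applyAsDouble(a) + 4.0 * g.applyAsDouble(mid) + g.applyAsDouble(b));
    }

    public static void main(String[] args) {
        // Sf for f(x) = sin(x) on [1, 3]; Sh would use the fitted polynomial instead.
        System.out.println(simpson(Math::sin, 1.0, 3.0));
    }
}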

Then, we see from the graph that:

Sf ≅ Sh + Se .............(2)

Se ≅ Sf - Sh .............(3)



where Se is Simpson's approximation for the error area, Ae.

Now, we assume that since Ae ≅ Se, we can substitute equation (3) in place of Ae in equation (1).

Then we have:

Af ≅ Ah + Sf - Sh .........(4)



And we are nigh done!

For Ah = ∫ h(x) dx, the polynomial integral, which we can compute exactly by integrating h(x) term by term,

and Af = ∫ f(x) dx, the definite integral we are actually after,

and Sf - Sh is equal to the difference between:

S(f(x)) = ((x2 - x1)/6)·[ f(x1) + 4·f((x1 + x2)/2) + f(x2) ]

AND 

S(h(x)) = ((x2 - x1)/6)·[ h(x1) + 4·h((x1 + x2)/2) + h(x2) ]

This gives Sf - Sh = ((x2 - x1)/6)·[ (f(x1) - h(x1)) + 4·( f((x1 + x2)/2) - h((x1 + x2)/2) ) + (f(x2) - h(x2)) ]
Surely you can see the silver lining behind that cloud now?


Yes, but for one last step!

Now, for a larger area comprising many of the above areas (one for each subinterval between consecutive collocation points), we take all the error areas and add them up to give the general formula.

We sum over all the subintervals in the region between x = x1 and x = x2,

where x2 is the upper limit and x1 is the lower limit.

Putting all these into equation (4), subinterval by subinterval, with each subinterval running from a collocation point xi to xi + dx and having midpoint mi = xi + dx/2:

∫ f(x) dx ≅ ∫ h(x) dx + Σ (dx/6)·[ (f(xi) - h(xi)) + 4·(f(mi) - h(mi)) + (f(xi + dx) - h(xi + dx)) ]

and since f and h coincide at every collocation point, the end terms of each bracket vanish.
 And voila!

We have our approximation:

∫ f(x) dx ≅ ∫ h(x) dx + (2·dx/3)·Σ [ f(mi) - h(mi) ]

where the integrals are taken from x1 to x2 and the sum runs over the midpoints mi of all the subintervals.
For a given number of subintervals, this formula is in practice more accurate than either Simpson's rule or the plain polynomial integral, since it combines both to refine the approximation.

As a brief example, we will solve the problem that we started earlier!

f(x) = sin(x) and

h(x) = 0.05621527890686895 + 0.8070522394224585·x + 0.2659311952212848·x^2 - 0.35406795117551315·x^3 + 0.0696942723181288·x^4 - 0.003354049885331352·x^5


Integrating this polynomial term by term, we get the antiderivative:

H(x) = 0.05621527890686895·x + 0.40352611971122926·x^2 + 0.08864373174042826·x^3 - 0.08851698779387829·x^4 + 0.01393885446362576·x^5 - 5.59008314221892E-4·x^6

The definite integral of the polynomial between x = 1 and x = 3 is then easily obtained as:

H(3) - H(1) = 1.53026224468834, while the true value of the integral, ∫ sin(x) dx from 1 to 3, is cos(1) - cos(3) = 1.5302948024685852.
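As a sketch of this step (my own code; the coefficient array is the degree-5 fit quoted earlier), integrating term by term just divides each coefficient aj by j + 1 and shifts it up one power, after which the definite integral is H(3) - H(1):

// Sketch: term-by-term integration of h(x) and its definite integral on [1, 3].
public class PolyIntegral {
    // Antiderivative coefficients: H(x) = sum of (a[j]/(j+1)) * x^(j+1).
    static double[] antiderivative(double[] a) {
        double[] c = new double[a.length + 1];
        c[0] = 0.0;                                       // constant of integration
        for (int j = 0; j < a.length; j++) c[j + 1] = a[j] / (j + 1);
        return c;
    }

    // Evaluate a polynomial given its coefficients, lowest power first.
    static double horner(double[] c, double x) {
        double r = 0.0;
        for (int j = c.length - 1; j >= 0; j--) r = r * x + c[j];
        return r;
    }

    public static void main(String[] args) {
        double[] a = {
            0.05621527890686895,  0.8070522394224585,  0.2659311952212848,
           -0.35406795117551315,  0.0696942723181288, -0.003354049885331352
        };
        double[] H = antiderivative(a);
        System.out.println("H(3) - H(1) = " + (horner(H, 3.0) - horner(H, 1.0)));
    }
}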

Now for the summation part, we need f and h at the midpoint of each subinterval:

 subinterval from x = 1.0 to 1.4:  midpoint (1.0 + 1.4)/2 = 1.2,  so we need f(1.2) and h(1.2)
 subinterval from x = 1.4 to 1.8:  midpoint (1.4 + 1.8)/2 = 1.6,  so we need f(1.6) and h(1.6)
 subinterval from x = 1.8 to 2.2:  midpoint (1.8 + 2.2)/2 = 2.0,  so we need f(2.0) and h(2.0)
 subinterval from x = 2.2 to 2.6:  midpoint (2.2 + 2.6)/2 = 2.4,  so we need f(2.4) and h(2.4)
 subinterval from x = 2.6 to 3.0:  midpoint (2.6 + 3.0)/2 = 2.8,  so we need f(2.8) and h(2.8)


We present the values as triples (x, f(x), h(x)):

(1.2, 0.932039086,  0.931961561369)

(1.6, 0.999573603,  0.999599014673)

(2.0, 0.909297427,  0.909279689992)

(2.4, 0.675463181,  0.675487289066)

(2.8, 0.334988150,  0.334918392048)



All these give our approximation as:

1.5302948297254457

 as compared to the true value of 1.5302948024685852.
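To wrap up, here is a self-contained sketch of the whole error-area-reduction computation for this example. It is my own code, following the simplified midpoint formula derived above rather than the author's unpublished program, so the correction it produces may not match the figure quoted above to all of its digits:

// Sketch: error-area-reduction integral of f(x) = sin(x) on [1, 3] using the
// degree-5 collocation polynomial quoted above. Not the author's program.
import java.util.function.DoubleUnaryOperator;

public class ErrorAreaReduction {

    // Exact integral of a0 + a1*x + ... + an*x^n from x1 to x2 (Horner on the antiderivative).
    static double polyIntegral(double[] a, double x1, double x2) {
        double upper = 0.0, lower = 0.0;
        for (int j = a.length - 1; j >= 0; j--) {
            upper = upper * x2 + a[j] / (j + 1);
            lower = lower * x1 + a[j] / (j + 1);
        }
        return upper * x2 - lower * x1;
    }

    // Evaluate the polynomial itself with Horner's rule.
    static double horner(double[] a, double x) {
        double r = 0.0;
        for (int j = a.length - 1; j >= 0; j--) r = r * x + a[j];
        return r;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator f = Math::sin;
        double x1 = 1.0, x2 = 3.0;
        int n = 5;                            // degree of the collocation polynomial
        double dx = (x2 - x1) / n;            // width of each subinterval

        // Coefficients a0..a5 of the degree-5 collocation polynomial quoted above.
        double[] a = {
            0.05621527890686895,  0.8070522394224585,  0.2659311952212848,
           -0.35406795117551315,  0.0696942723181288, -0.003354049885331352
        };

        // Ah: exact integral of h(x) over [x1, x2].
        double Ah = polyIntegral(a, x1, x2);

        // Correction: per-subinterval Simpson difference Sf - Sh. Since f and h
        // agree at the collocation points, only the midpoint terms survive.
        double correction = 0.0;
        for (int i = 0; i < n; i++) {
            double mid = x1 + (i + 0.5) * dx;
            correction += (2.0 * dx / 3.0) * (f.applyAsDouble(mid) - horner(a, mid));
        }

        System.out.println("Ah             = " + Ah);
        System.out.println("correction     = " + correction);
        System.out.println("Ah + correction= " + (Ah + correction));
        System.out.println("exact integral = " + (Math.cos(x1) - Math.cos(x2)));
    }
}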




Comments would be appreciated!


Thanks.



