
# Numerical integration

In numerical analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral; by extension, the term is sometimes also used to describe numerical algorithms for solving differential equations. The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for numerical integration, especially as applied to one-dimensional integrals, although it is also used for integrals in two or more dimensions (multiple integrals). The basic problem considered by numerical integration is to compute an approximate solution to the definite integral

$$\int_a^b f(x)\,dx.$$

This problem can be restated as an initial value problem for an ordinary differential equation:

$$y'(x) = f(x), \qquad y(a) = 0.$$

Finding y(b) is then equivalent to calculating the integral. Methods developed for ordinary differential equations, such as the Runge-Kutta method, can be applied to the reformulated problem. This article discusses methods developed specifically for the problem formulated as a definite integral.

## Reasons for numerical integration

There are several reasons for carrying out numerical integration. The main one is that it may be impossible to perform the integration analytically: an integral that would demand a great deal of advanced mathematics to attack can often be solved more easily by numerical methods. There even exist integrable functions whose antiderivative cannot be expressed in closed form, which makes numerical integration vital. The analytic solution of an integral yields an exact answer, while a numerical solution gives an approximation. The approximation error, which depends on the method used and on how fine the discretization is, can nevertheless be made so small that the result agrees with the analytical solution in its first decimal places.

## Methods for one-dimensional integrals

Numerical integration methods can generally be described as combining evaluations of the integrand to approximate the integral. An important part of the analysis of any numerical integration method is to study how the approximation error behaves as a function of the number of integrand evaluations. A method that produces a small error for a small number of evaluations is usually considered superior. Reducing the number of integrand evaluations reduces the number of arithmetic operations involved, and therefore the total rounding error. Moreover, each evaluation takes time, and the integrand may be arbitrarily complicated. That said, "brute force" integration can always be done very simply by evaluating the integrand at very small increments.

### Methods based on interpolation functions

There is a large family of methods based on approximating the function to be integrated, f(x), by another function g(x) whose integral is known exactly. The replacement function is chosen so that it takes the same values as the original at a certain number of points. When the end points of the interval belong to this set of points, the new function is called an interpolation of the original function; when they do not, one speaks of extrapolation. Typically these functions are polynomials.

#### Newton-Cotes formulas

Interpolating with polynomials evaluated at equally spaced points gives the Newton-Cotes formulas, of which the rectangle rule, the trapezoidal rule and Simpson's rule are examples. If the k = n + 1 nodes include the end points, the result is a closed Newton-Cotes formula; if the k = n − 1 nodes exclude them, it is an open Newton-Cotes formula.
#### Rectangle rule

The simplest of these methods is to let the interpolating function be a constant (a polynomial of degree zero) passing through the point (a, f(a)). This method is called the rectangle rule:

$$\int_a^b f(x)\,dx \approx (b-a)\,f(a).$$
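As a minimal sketch of the rule (the integrand and interval are arbitrary choices for illustration), the rectangle rule amounts to a single multiplication:

```python
import math

def rectangle(f, a, b):
    """Rectangle rule: approximate the integral of f over [a, b]
    by the area of the rectangle of height f(a)."""
    return (b - a) * f(a)

# Example: integral of e^x over [0, 1]; exact value is e - 1 ~= 1.71828.
approx = rectangle(math.exp, 0.0, 1.0)   # (1 - 0) * e^0 = 1.0
```

With a single integrand evaluation the error is large, as expected from a degree-zero interpolant.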

#### Midpoint rule

Illustration of the midpoint rule.

If in the above method the interpolating function passes through the point ((a + b)/2, f((a + b)/2)), the method is called the midpoint rule:

$$\int_a^b f(x)\,dx \approx (b-a)\,f\!\left(\frac{a+b}{2}\right).$$

#### Trapezoidal rule

Illustration of the trapezoidal rule.

The interpolating function may be an affine function (a polynomial of degree 1, that is, a straight line) passing through the points (a, f(a)) and (b, f(b)). This method is called the trapezoidal rule:

$$\int_a^b f(x)\,dx \approx (b-a)\,\frac{f(a)+f(b)}{2}.$$

#### Simpson's rule

Illustration of Simpson's rule.

The interpolating function may be a polynomial of degree 2 passing through the points (a, f(a)), ((a + b)/2, f((a + b)/2)) and (b, f(b)). This method is called Simpson's rule:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6}\left[f(a) + 4\,f\!\left(\frac{a+b}{2}\right) + f(b)\right].$$
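A short sketch comparing the three rules on a test integral (the smooth example ∫₀¹ eˣ dx = e − 1 is an arbitrary choice) shows the expected ordering of accuracy:

```python
import math

def midpoint(f, a, b):
    """Midpoint rule: one evaluation at the interval center."""
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    """Trapezoidal rule: a line through the two end points."""
    return (b - a) * (f(a) + f(b)) / 2

def simpson(f, a, b):
    """Simpson's rule: a parabola through both ends and the midpoint."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

exact = math.e - 1                       # integral of e^x over [0, 1]
errors = {rule.__name__: abs(rule(math.exp, 0, 1) - exact)
          for rule in (midpoint, trapezoid, simpson)}
# Simpson (degree-2 interpolant) is far more accurate here than
# either the degree-0 or the degree-1 rule.
```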
### Composite rules

For any interpolating rule, a more accurate approximation can be obtained by dividing the interval [a, b] into a number n of subintervals, finding an approximation on each subinterval, and finally adding up all the results. The rules obtained in this way are called composite rules. They are characterized by losing one order of global accuracy with respect to the corresponding simple rules, but they generally give more precise values of the integral, at the cost of a significant increase in the operational cost of the method. For example, the composite trapezoidal rule can be expressed as

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(a) + 2\sum_{k=1}^{n-1} f(a+kh) + f(b)\right],$$

where h = (b − a)/n and the subintervals have the form [a + kh, a + (k + 1)h] with k = 0, 1, …, n − 1.
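The composite trapezoidal rule above can be sketched directly (the integrand and the subinterval counts are arbitrary illustrative choices):

```python
import math

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule over n equal subintervals of width h."""
    h = (b - a) / n
    interior = sum(f(a + k * h) for k in range(1, n))
    return h / 2 * (f(a) + 2 * interior + f(b))

exact = math.e - 1                       # integral of e^x over [0, 1]
err_4 = abs(composite_trapezoid(math.exp, 0, 1, 4) - exact)
err_8 = abs(composite_trapezoid(math.exp, 0, 1, 8) - exact)
# Halving h divides the error by about 4: global accuracy is O(h^2).
```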

### Extrapolation methods

The accuracy of an integration method of Newton-Cotes type is generally a function of the number of evaluation points. The result is usually more accurate as the number of evaluation points increases or, equivalently, as the step width between points decreases. What happens as the step width tends to zero? This can be answered by extrapolating from the results for two or more step widths (Richardson extrapolation). The extrapolation function may be a polynomial or a rational function. Extrapolation methods are described in more detail by Stoer and Bulirsch (Section 3.4). In particular, applying Richardson extrapolation to the composite trapezoidal rule gives the Romberg method.

### Gaussian integration

If the spacing between interpolation points is allowed to vary, one obtains another family of integration formulas, called Gaussian integration formulas. A Gaussian integration rule is usually more accurate than a Newton-Cotes rule requiring the same number of integrand evaluations, provided the integrand is smooth (i.e., it can be differentiated many times).
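As a sketch of this accuracy-per-evaluation claim (the integrand and interval are arbitrary), compare a 3-point Gauss-Legendre rule with Simpson's rule, which also uses three evaluations; the nodes and weights below are the standard 3-point Gauss-Legendre values for [−1, 1]:

```python
import math

def gauss3(f, a, b):
    """3-point Gauss-Legendre rule, mapped from [-1, 1] to [a, b]."""
    nodes = (-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5))
    weights = (5 / 9, 8 / 9, 5 / 9)
    mid, half = (a + b) / 2, (b - a) / 2
    return half * sum(w * f(mid + half * t) for w, t in zip(weights, nodes))

def simpson(f, a, b):
    """Simpson's rule: likewise three integrand evaluations."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

exact = math.e - 1                       # integral of e^x over [0, 1]
gauss_err = abs(gauss3(math.exp, 0, 1) - exact)
simpson_err = abs(simpson(math.exp, 0, 1) - exact)
# For this smooth integrand the Gaussian rule is orders of magnitude
# more accurate for the same number of evaluations.
```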

If f does not have many derivatives defined at every point, or if the derivatives take very large values, Gaussian integration is often inadequate. In that case, an algorithm similar to the following often does better:
```
def integral(f, a, b):
    """This algorithm calculates the definite integral of a function
    on the interval [a, b], adaptively, choosing smaller steps near
    problematic points. h0 is the initial step width."""
    x = a
    h = h0
    acc = 0
    while x < b:
        if x + h > b:
            h = b - x
        if quadrature error over [x, x+h] for f is too large:
            make h smaller
        else:
            acc += quadrature(f, x, x+h)
            x += h
            if quadrature error over [x, x+h] was too small:
                make h larger
    return acc
```

Some details of the algorithm require a careful look. For many cases, estimating the error of the integral over an interval for a function f is not obvious. One popular solution is to use two different integration rules and take their difference as an error estimate of the integral. The other problem is deciding what "too large" or "too small" mean. A possible criterion for "too large" is that the quadrature error should be no greater than t·h, where t, a real number, is the tolerance we allow for the global error. Then again, if h is already tiny, it may not be worthwhile to make it even smaller, even if the quadrature error is apparently large. This type of error analysis is usually called "a posteriori", since the error is computed after having computed the approximation. Heuristics for adaptive integration are discussed in Forsythe et al. (Section 5.4).
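Combining the pseudocode above with the two-rule error estimate just described, a runnable version might look like this (the step-control constants h0, tol, h_min and the halving/doubling factors are arbitrary illustrative choices, not part of the original algorithm):

```python
import math

def adaptive_integral(f, a, b, h0=0.1, tol=1e-6, h_min=1e-12):
    """Adaptive quadrature sketch: the difference between Simpson's rule
    and the trapezoidal rule on each step serves as the (a posteriori)
    error estimate; h shrinks near trouble spots and grows elsewhere."""
    x, h, acc = a, h0, 0.0
    while x < b:
        if x + h > b:
            h = b - x
        trap = h / 2 * (f(x) + f(x + h))
        simp = h / 6 * (f(x) + 4 * f(x + h / 2) + f(x + h))
        err = abs(simp - trap)
        if err > tol * h and h > h_min:
            h /= 2                      # "too large": make h smaller
            continue
        acc += simp                     # accept the more accurate value
        x += h
        if err < tol * h / 100:
            h *= 2                      # "too small": make h larger
    return acc

result = adaptive_integral(math.exp, 0.0, 1.0)   # exact: e - 1
```

The `h > h_min` guard implements the caveat above: once h is tiny, the step is accepted even if the estimated error still looks large.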

### Conservative (a priori) error estimate

Suppose that f has a bounded first derivative on [a, b]. The mean value theorem gives

$$f(x) - f(a) = (x - a)\, f'(\xi_x)$$

for some $\xi_x$ in $(a, x)$ depending on $x$. If we integrate both sides of this equality from $a$ to $b$ and take absolute values, we obtain

$$\left|\int_a^b f(x)\,dx - (b-a)\,f(a)\right| = \left|\int_a^b (x-a)\, f'(\xi_x)\,dx\right|.$$

We can bound the integral on the right-hand side by moving the absolute value inside the integrand and replacing the term in $f'$ by an upper bound:

$$\left|\int_a^b f(x)\,dx - (b-a)\,f(a)\right| \le \frac{(b-a)^2}{2}\, \sup_{a \le x \le b} \left|f'(x)\right|.$$

Hence, if we approximate the integral by the rectangle rule $(b-a)\,f(a)$, the error is no greater than the right-hand side of this inequality.
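A quick numerical check of this bound (with f(x) = eˣ on [0, 1/2] as an arbitrary example):

```python
import math

a, b = 0.0, 0.5
f = math.exp                            # f(x) = e^x, so f'(x) = e^x
df_sup = math.exp(b)                    # sup of |f'| on [a, b], attained at b

exact = math.exp(b) - math.exp(a)       # integral of e^x over [a, b]
rect = (b - a) * f(a)                   # rectangle-rule approximation
bound = (b - a) ** 2 / 2 * df_sup       # conservative a priori bound

actual_error = abs(exact - rect)
# The actual error lies below the bound, as the estimate guarantees.
```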

## Multiple integrals

The integration methods discussed so far are all designed to compute one-dimensional integrals. To compute integrals in several dimensions, one approach is to express the multiple integral as repeated one-dimensional integrals by applying Fubini's theorem.

This approach requires a number of function evaluations that grows exponentially with the number of dimensions. There are two methods to overcome this so-called curse of dimensionality.
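The repeated-integral approach can be sketched as follows (using a composite Simpson rule as the one-dimensional method; the integrand x·y over the unit square is an arbitrary example with exact value 1/4):

```python
def simpson_composite(f, a, b, n=10):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

def double_integral(f, ax, bx, ay, by, n=10):
    """Fubini: integrate over y for each fixed x, then integrate the
    result over x. Cost is n*n evaluations for two dimensions --
    hence the exponential growth with the number of dimensions."""
    inner = lambda x: simpson_composite(lambda y: f(x, y), ay, by, n)
    return simpson_composite(inner, ax, bx, n)

value = double_integral(lambda x, y: x * y, 0, 1, 0, 1)   # exact: 1/4
```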

### Monte Carlo

Main article: Monte Carlo Integration

Monte Carlo methods and quasi-Monte Carlo methods are easy to apply to multidimensional integrals, and may achieve better accuracy for the same number of function evaluations than one-dimensional methods applied via repeated integration. A large class of useful Monte Carlo methods are the so-called Markov chain Monte Carlo algorithms, which include the Metropolis-Hastings algorithm and Gibbs sampling.
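A minimal Monte Carlo sketch (plain Monte Carlo, not quasi- or Markov chain; the integrand and sample count are arbitrary illustrative choices):

```python
import random

def monte_carlo(f, dim, n=200_000, seed=0):
    """Estimate the integral of f over the unit cube [0, 1]^dim as the
    average of f at n uniform random points. The statistical error
    shrinks like O(1/sqrt(n)) regardless of the dimension."""
    rng = random.Random(seed)
    return sum(f([rng.random() for _ in range(dim)]) for _ in range(n)) / n

# Integral of x^2 + y^2 over the unit square; exact value 2/3.
estimate = monte_carlo(lambda p: p[0] ** 2 + p[1] ** 2, dim=2)
```

Unlike the repeated-integral approach, the cost here does not grow exponentially with the dimension, only the variance of the estimate matters.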

## Programs for numerical integration

Numerical integration is one of the most intensively studied problems in numerical analysis. Among the many software implementations are:

- QUADPACK (part of SLATEC) (source code): a collection of algorithms in Fortran for numerical integration based on Gaussian rules.
- GSL: the GNU Scientific Library, a numerical library written in C that provides a wide range of mathematical routines, including Monte Carlo integration.
- ALGLIB: a collection of algorithms in C# / C++ / Delphi / Visual Basic / etc. for numerical integration.

## References

- George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler. *Computer Methods for Mathematical Computations*. Englewood Cliffs, NJ: Prentice-Hall, 1977. (See Chapter 5.)
- William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. *Numerical Recipes in C*. Cambridge, UK: Cambridge University Press, 1988. (See Chapter 4.)
- Josef Stoer and Roland Bulirsch. *Introduction to Numerical Analysis*. New York: Springer-Verlag, 1980. (See Chapter 3.)