
SPARSE GRID METHODS FOR MULTI-DIMENSIONAL INTEGRATION

SRI VENKATA TAPOVAN LOLLA
Abstract. In this paper, we study various methods for multi-dimensional integration (cubature). Specifically, we focus on a technique called the sparse-grid method for cubature and illustrate its performance by applying it to several integration problems arising in physics. We also implement an adaptive sparse-grid scheme which overcomes the drawbacks of regular sparse grids to some extent. The performance of regular and adaptive sparse grids is compared to several competing Monte Carlo algorithms for cubature.

Key words. cubature, sparse grids, Smolyak construction, adaptive sparse grids, multi-dimensional integration
1. Introduction and motivation. Multidimensional integrals arise in many areas of interest, including science, engineering and even finance. Statistical mechanics, the valuation of financial derivatives, the discretization of partial differential equations, and the numerical computation of path integrals are but a few examples where high-dimensional integration is essential [16]. In several stochastic numerical methods, such as spectral methods, the computation of the spectral coefficients involves the estimation of a high-dimensional integral. The dimension of the integrand in these problems can be as large as several hundred [37]. The exact computation of these high-dimensional integrals is usually out of the question, since analytical expressions for the integrals are rarely available in practice. Thus, efficient numerical schemes to approximate these integrals are often sought. This task is complicated by the fact that these integrals are often required to be computed to a high level of accuracy, which can become computationally challenging even for supercomputers [36]. The main reason for this difficulty is the curse of dimensionality: to achieve a prescribed level of accuracy in computing a multi-dimensional integral, the amount of work required (e.g. the number of quadrature points in a quadrature rule) typically grows exponentially with the dimension [25]. Thus, the rates of convergence for moderate-to-large dimensional problems are extremely poor, and this limits the total accuracy that can be obtained by conventional methods. Efficient methods for multi-dimensional integration avoid the curse of dimensionality to some extent by taking advantage of the structure of the function and the level of smoothness it exhibits. It is an implicit assumption that the function to be integrated is expensive to evaluate. Thus, an efficient integration method limits the number of function evaluations to a minimum and focuses on careful selection of the node points at which the function is evaluated.
In this term paper, we explore various methods for multi-dimensional integration. Specifically, we aim to study strategies to evaluate the d-dimensional integral:

$$ I_d f = \int_{\Omega} f(\mathbf{x}) \, d\mathbf{x}, \qquad \Omega = [-1, 1]^d. $$
We will focus on a certain class of numerical methods called sparse grid methods, which approximate the integral to any desired accuracy. We also describe and implement a dimension-adaptive sparse grid method and study the improvement it offers over regular sparse grid methods in terms of the reduction in the number of quadrature points and the integration error.
1.1. Layout. This paper is organized as follows. In section 2, we perform an exhaustive literature review of existing methods for multi-dimensional integration and describe their historical development, tracking related work up to the recent past. Original articles are cited wherever possible, and the advantages, drawbacks and other characteristics of the methods are described in detail. In section 3, we describe four rules for one-dimensional quadrature that we later extend to multi-dimensional quadrature schemes; error bounds and typical performance characteristics are highlighted. In section 4, we describe various methods to extend the results of 1-D quadrature rules to the multi-dimensional case. Here, we present the sparse grid method of Smolyak [12], and we also describe the dimension-adaptive sparse grid method originally proposed in [36]. We present the results of the above algorithms in evaluating a wide variety of integrals in section 5, where we compare our results against those obtained by the open-source cubature package CUBA [44]. We summarize the paper and our findings in section 6.
2. Literature Review. Multi-dimensional integration (or cubature) is a well-researched topic; literature in this area dates back at least to the time of Gauss. It is still a very active area of research, as no numerical scheme has yet been devised that is vastly superior to the presently available ones. Literature on numerical 1-D integration is even more widespread. Several textbooks on numerical integration, such as [24], [25], limit their focus to the 1-D case due to the large number of numerical schemes available. The early works of Newton and Cotes [24] on approximating 1-D integrals by a weighted summation of function values at equally spaced points have led to schemes like Simpson's rule, the trapezoidal rule, Boole's rule, etc. These schemes rely on approximating the integral of a given function by the sum of areas of simple polygons (e.g. trapezoids) obtained by partitioning the interval equally. Such a numerical scheme, which approximates the integral by a weighted sum of function values evaluated at specially chosen points, is called a quadrature. The weights in the Newton-Cotes formulas are computed by integration of a Lagrange interpolant through the corresponding quadrature points. The seminal works of Gauss leading to the various Gauss quadrature rules are based on a similar idea. The key difference between Gauss quadrature and the Newton-Cotes method is that the quadrature points are not uniform, but correspond to the roots of a family of orthogonal polynomials [24], [26]. We will discuss Gauss quadrature formulae in detail in section 3. A more recent quadrature scheme for 1-D integration is the Clenshaw-Curtis quadrature [2]. This is based on an approximate representation of the function in terms of Chebyshev polynomials, which are then integrated exactly.
In the case of multi-dimensional integrals, a quadrature approximation can be performed sequentially in each direction individually to obtain an approximate value for the full integral [23]. This amounts to a tensor product approximation, based on product rules, of the full integral. Such classical quadrature techniques soon run into trouble when the dimension d of the integral becomes large. For a given number of quadrature points N, the achievable accuracy scales as [25]:

$$ \epsilon(N) = O\!\left( N^{-r/d} \right), $$

for functions with bounded derivatives up to order r. This clearly suggests that for moderate dimensions the order of convergence is extremely slow, and a high accuracy for the integral cannot be expected unless the function is special.
Sparse-grid methods are largely based on the algorithm proposed by Smolyak [12] in 1963. They alleviate the curse of dimensionality to some extent for certain classes of functions that have bounded mixed derivatives [16]. In this method, the multivariate quadrature formulas are constructed from combinations of tensor products of univariate difference quadrature formulas. The basic idea behind the method follows from the observation that a given level of accuracy in evaluating a multi-dimensional integral can be obtained using far fewer points than the full tensor product approximation, i.e. the quadrature points are much sparser than in a full tensor product rule [17]. Of all the possible index combinations, the only indices that are accepted are the ones that lie within a unit simplex (this will be described in detail in section 4); hence the name sparse-grid methods. In these methods, the smoothness of the integrand plays a crucial role in the computational complexity. For functions with mixed derivatives bounded up to order r, the error for N quadrature points scales as [36]:

$$ \epsilon(N) = O\!\left( N^{-r} (\log N)^{(d-1)(r+1)} \right). $$
This clearly indicates an improvement over the full tensor product rule. In fact, for infinitely smooth functions (r approaching infinity), convergence can be up to exponential. Smolyak's sparse grid construction has been utilized in several recent works in the areas of wavelets [50], the solution of partial differential equations [27, 51, 52], and data compression [53], to name a few. Further recent works may be found in [16]. Even though sparse grid methods offer a considerable advantage over the full tensor product approximation, their convergence rates deteriorate as d increases, due to the dependence on the log N term.
A recent research thrust has been to develop sparse grid schemes with better convergence rates that do not compromise too much on accuracy. In certain integrands, some dimensions are more important than others. For such functions, fewer quadrature points can be used in the less important dimensions, and more points can be concentrated in the more important ones. This leads to adaptive sparse grid methods. Regular sparse grid methods treat all dimensions with equal importance and hence have nothing to gain when dimensions are of different importance. When the relative importance of the different dimensions is known a priori (e.g. see [34] for the case of an elliptic equation), different weights may be assigned to different dimensions, which leads to a dimension-adaptive sparse grid method, as described in [31, 23].
Several dimension-adaptive schemes (not necessarily sparse-grid) are available in the literature. One of the oldest methods for adaptive cubature was proposed by van Dooren and de Ridder [35]. In their scheme, the d-dimensional hypercube is divided into several smaller hypercubes, and a low-order quadrature rule is used to approximately evaluate the integral over each sub-cube and to obtain an estimate of the integration error in that smaller cube. The cube with the largest integration error is recursively sub-divided into smaller d-cubes. This process stops when the integration error in the smaller sub-cubes is lower than the requested accuracy. Other, older adaptive cubature routines are referenced in [35]. This algorithm was improved by Genz and Malik [33], who introduced an alternate, improved strategy for the adaptive subdivision. The regions where the integrand has large error are identified by computing the fourth divided differences of the integrand, so that any further bisections of the hyper-rectangle in the adaptive scheme can concentrate on this worst dimension. Other improvements to these adaptive subdivision strategies may be found in [39] and references therein. These adaptive subdivision strategies are found to perform best for integrands with a moderate number of dimensions (2 ≤ d ≤ 6), but the computational overhead in identifying and refining the important dimensions is too large for higher-dimensional problems (d > 7).
Other algorithms have been developed which try to quantify the most important dimensions of the integrand. In these algorithms, a high-dimensional function is approximated by sums of lower-dimensional functions. Such ideas are common in statistics for regression problems and density estimation. Examples of these are additive models [29], multivariate adaptive regression splines (MARS) [28], and the analysis of variance (ANOVA) decomposition [30]. Examples and references for other additive models and dimension-reduction techniques can be found in [29].
When the important dimensions are not known a priori, the weights to be assigned to the dimensions are not known either. This is the main drawback of the dimension-adaptive sparse grid scheme proposed in [31]. To address this issue, several authors have proposed generally adaptive sparse grids. Bonk [32] proposed an adaptive sparse grid scheme which allows adaptive refinement both in the sub-domains and in the order; a sparse-grid based extrapolation technique keeps the computational cost from blowing up. However, [32] only deals with the case of linear basis functions. This scheme was extended by Bungartz and Dirnstorfer [37] to general hierarchical basis functions. However, these two approaches do not work for large-dimensional integrals, because they are designed only to tackle local non-smooth behavior of the function, which is detrimental to the performance of any quadrature method. Gerstner and Griebel [36] proposed a dimension-adaptive tensor-product quadrature on sparse grids. We implement this scheme in this paper, and a detailed description of the algorithm is part of section 4. The scheme works by increasing the level of univariate quadrature in a given dimension according to a trade-off between the increased computational effort due to the increased number of function evaluations and the incremental improvement obtained as a result of increasing the accuracy in that dimension. More recently, Jakeman and Roberts [38] combined the adaptive schemes of [36] and [37] to obtain a robust and flexible sparse grid scheme that is capable of hierarchical basis adaptivity. The algorithm greedily selects the subspaces that contribute most to the variability of the integrand. The hierarchical surplus of points within each subspace is used as an error criterion for the h-refinement to concentrate the effort within rapidly varying or discontinuous regions. Thus, it combines the advantages of the schemes in [36] and [37]. Further references for h-adaptivity and p-adaptivity may be found in [38]. Adaptive sparse grid methods are thus reported to perform much better than other recursive subdivision methods for the multi-dimensional integration of high-dimensional functions.
The other important class of numerical schemes at the forefront of high-dimensional integration is the class of randomized methods called Monte Carlo approaches [40]. The Monte Carlo method is probably the best-known representative of this class of algorithms. The basic idea behind Monte Carlo methods is simple: the function is evaluated at a characteristic set of points that are uniformly distributed throughout the hypercube $\Omega = [-1, 1]^d$. These points constitute samples of the uniform distribution over $\Omega$. Given samples $\mathbf{x}_i \in \mathbb{R}^d$, $i = 1, 2, \ldots, N$, where N is the total number of samples, the Monte Carlo approximation to the integral of $f(\mathbf{x})$ over $\Omega$ is simply the average of the function values at the sample points (scaled by the volume $2^d$ of $\Omega$):

$$ I_d f \approx I_{MC} f = \frac{2^d}{N} \sum_{i=1}^{N} f(\mathbf{x}_i). $$
The weak law of large numbers states that as $N \to \infty$, the Monte Carlo estimate $I_{MC} f$ approaches $I_d f$ at a rate $O(N^{-0.5})$. In other words, the error after N Monte Carlo samples scales as:

$$ \epsilon(N) = O(N^{-0.5}). $$

The first obvious observation from the above asymptotic estimate is that the rate of convergence of Monte Carlo methods is independent of the dimension d of the integral. However, this convergence rate is too slow for most purposes. Thus, for large d, a high accuracy is achieved only by using a large number of independent samples N. In fact, a good fraction of the time in Monte Carlo integration is spent in the generation of the samples, because for large d the number of samples needed to adequately represent the function becomes large.
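As a concrete illustration, the following is a minimal sketch of the plain Monte Carlo estimator over $[-1,1]^d$; the function and variable names are ours, chosen for this note, not taken from any particular package:

```python
import numpy as np

def monte_carlo_integral(f, d, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [-1, 1]^d.

    The sample average of f is scaled by 2^d, the volume of the hypercube."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_samples, d))
    return 2.0**d * np.mean([f(xi) for xi in x])

# The error decays like N^(-1/2), independently of the dimension d:
f = lambda x: np.exp(-np.dot(x, x))
for n in (10**3, 10**4, 10**5):
    print(n, monte_carlo_integral(f, d=5, n_samples=n))
```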
This leads us to the so-called quasi-Monte Carlo algorithms [45] and Latin Hypercube Sampling [42], which have received considerable attention over the past few years. Here, the integrand is evaluated not at random points, but at structurally determined points whose discrepancy is smaller than that of random points. These methods are improved sampling strategies which force the sampler to draw samples more uniformly distributed over $\Omega$. In Latin Hypercube Sampling, this is achieved by forcing the sampler to draw realizations within equiprobable bins in the parameter range; the sample size is fixed a priori to define the bins. In the quasi-Monte Carlo method, a low-discrepancy deterministic sequence of points is generated so as to maximize the uniformity of the sample points. The minimum discrepancy is obtained for samples that lie on the vertices of a regular grid, whose number of points scales exponentially with d. To circumvent this, different sequences, such as the one proposed by Sobol [46], are used. More information about these methods may be obtained from [45].
For quasi-Monte Carlo methods, the integration error scales as:

$$ \epsilon(N) = O\!\left( N^{-1} (\log N)^d \right), $$

which is roughly half an order better than the regular Monte Carlo method [45]. It should be noted that these error bounds are asymptotic (and deterministic). Clearly, quasi-Monte Carlo methods have a convergence rate that depends on the dimension d; thus, they run into issues similar to those of quadrature-based methods when approximating high-dimensional integrals. The effectiveness of quasi-Monte Carlo methods in approximating high-dimensional integrals was studied by Sloan and Wozniakowski [41], where more references may be obtained.
An attractive feature of Monte Carlo methods is that their characteristics do not depend on the nature (smoothness) of the integrand. Unlike quadrature-based methods, Monte Carlo approaches do not approximate the integrand by polynomials, and they exhibit the same convergence characteristics for all functions. This can, however, also be perceived as a drawback, as Monte Carlo methods gain no immediate advantage when evaluating the integral of a highly smooth function. Thus, for sufficiently smooth integrands, sparse grids, and specifically adaptive sparse grids, are expected to outperform Monte Carlo methods. Monte Carlo methods can be modified if certain important dimensions of the integrand are known a priori. In this case, a technique called importance sampling may be used to choose the samples from a sampling distribution that is different from the uniform distribution [40]. The idea is that the sampling distribution is designed so as to concentrate more samples in the important dimensions of the integrand. This also leads to variance reduction techniques that accelerate the convergence of the Monte Carlo estimate of the integral. Similarly, for quasi-Monte Carlo methods, prior sorting of the dimensions according to their importance leads to better convergence rates, yielding a reduction of the effective dimension [36]. The reason for this is the better distributional behavior of low-discrepancy sequences in lower dimensions than in higher ones [43]. An open-source package for cubature, containing several Monte Carlo methods for integration, is freely available on the web [44].
We must also mention another set of schemes for cubature, based on various lattice rules [47]. These lattice rules are similar to quadrature rules, the key difference being that lattice rules assign equal weights to each lattice point. We do not describe or study these methods in detail, but only refer the interested reader to [47], where more related works can be found. Another class of cubature schemes which we do not focus on are those based on neural networks [48]. More information about these can be obtained in [48] and references therein.
3. One-dimensional Quadrature. In this section, we describe some quadrature rules for 1-D integration. The rules described here will form the basis of higher-dimensional tensor product quadrature, and also of sparse-grid cubature (both non-adaptive and adaptive). In particular, we describe four 1-D quadrature rules which will then be applied in the sparse-grid extension. In the following, we use the notation of [16] to represent the various quantities. The exact integral of a function f in d dimensions is denoted by $I_d f$, while its quadrature approximation is denoted by $Q_l^d f$. Here, $l \in \mathbb{N}$ denotes the level of the quadrature, which governs the number of quadrature points used in the approximation, $n_l^d$. In the 1-D case, d = 1 and $\Omega = [-1, 1]$. The quadrature approximation to $I_1 f$ is given by:

$$ I_1 f = \int_{\Omega} f(x) \, dx \approx Q_l^1 f := \sum_{i=1}^{n_l^1} w_{li} f(x_{li}), \tag{3.1} $$
where $w_{li}$ are the weights and $x_{li}$ are the quadrature points. Different quadrature rules have different criteria for choosing $w_{li}$ and $x_{li}$. The sum in (3.1) is an $n_l^1$-point quadrature rule for evaluating $I_1 f$ approximately. We shall simply call this the quadrature rule of level l from now on, as $n_l^1$ is uniquely determined by l for a given quadrature rule. Typically, $n_l^1 = O(2^l)$, i.e. the number of quadrature points roughly doubles with increasing l, while $n_1^1 = 1$ for every quadrature rule we shall see. Furthermore, we define the grid of the quadrature points $x_{li}$ as:

$$ \Gamma_l^1 := \{ x_{li} : 1 \le i \le n_l^1 \}. \tag{3.2} $$
For d = 1 this set is simply a collection of scalars $x_{li}$. For d > 1, the quadrature grid is a collection of d-dimensional vectors $\mathbf{x}_{li}$:

$$ \Gamma_l^d := \{ \mathbf{x}_{li} : 1 \le i \le n_l^d \} \subset [-1, 1]^d. \tag{3.3} $$

The quadrature formulas are said to be nested if the corresponding grids are nested, i.e., $\Gamma_l^d \subseteq \Gamma_{l+1}^d$.
Finally, we introduce the notation for the error of a quadrature rule of level l for d-dimensional cubature:

$$ E_l^d f = \left| I_d f - Q_l^d f \right|. \tag{3.4} $$
The error in the 1-D case follows from the above definition. We shall provide various error bounds for $E_l^d f$ in the following sections, assuming certain smoothness conditions on f. Let $C^r$ denote the set of all functions whose mixed derivatives are bounded up to and including order r. Finally, we adopt the following convention for all quadrature rules:

$$ Q_1^1 f = 2 f(0), \qquad Q_1^d f = 2^d f(0), $$

i.e., the quadrature rule for level l = 1 will always be chosen so that the only quadrature point is the origin, with weight 2 (or $2^d$ in d dimensions). We now describe various quadrature rules for the 1-D case.
3.1. Trapezoidal Rule. The trapezoidal rule is one of the quadrature formulas of Newton and Cotes [25]. In this rule, the interval ([-1, 1] in our case) is divided by equally spaced abscissas, which form the quadrature points. The integral $I_1 f$ is then approximated by the sum of the areas of the trapezoids formed by these quadrature points and the corresponding function values at these points:

$$ I_1 f \approx Q_l^1 f = \sum_{i=1}^{n_l^1 - 1} \frac{f(x_i) + f(x_{i+1})}{2} \cdot \frac{2}{n_l^1 - 1} \tag{3.5} $$
$$ = \frac{f(x_1)}{n_l^1 - 1} + \sum_{i=2}^{n_l^1 - 1} \frac{2}{n_l^1 - 1} f(x_i) + \frac{f(x_{n_l^1})}{n_l^1 - 1}, \tag{3.6} $$

valid for l ≥ 2. The number of quadrature points at level l is chosen to be

$$ n_l^1 = 2^{l-1} + 1, \qquad l \ge 2, $$

so that the origin is always included in the quadrature set. Clearly, for the trapezoidal rule, the quadrature points and weights are given by:

$$ x_{li} = -1 + (i - 1) \frac{2}{n_l^1 - 1}, \qquad 1 \le i \le n_l^1, \tag{3.7} $$
$$ w_{li} = \begin{cases} \dfrac{1}{n_l^1 - 1}, & \text{for } i = 1, n_l^1, \\[4pt] \dfrac{2}{n_l^1 - 1}, & \text{for } 1 < i < n_l^1. \end{cases} \tag{3.8} $$
We allow the end points {−1, 1} to be part of the quadrature set for l ≥ 2. Clearly, the trapezoidal rule is a nested quadrature because $\Gamma_l^1 \subseteq \Gamma_{l+1}^1$. It is well known that the trapezoidal rule exhibits a convergence rate given by:

$$ E_l^1 f = O\!\left( 2^{-2l} \right). $$

For functions periodic on [−1, 1] with $f \in C^r$, the convergence rate dramatically improves to $O(2^{-lr})$ [16]. Similar bounds exist for other Newton-Cotes formulas [24]. Newton-Cotes formulas do not converge for a general integrand f; they converge only if f is analytic in a region surrounding the interval of interest [3]. Evaluation of the trapezoidal weights and quadrature points requires no additional effort, because they are known exactly, independent of the integrand f.
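For concreteness, here is a minimal sketch of the leveled trapezoidal rule of (3.7)-(3.8); the function name is ours:

```python
import numpy as np

def trapezoid_rule(level):
    """Nodes and weights of the level-l trapezoidal rule on [-1, 1].

    Level 1 is the single-point convention Q_1 f = 2 f(0); for l >= 2 the
    rule has n = 2^(l-1) + 1 equispaced points, end points included,
    with the weights of (3.8)."""
    if level == 1:
        return np.array([0.0]), np.array([2.0])
    n = 2**(level - 1) + 1
    x = np.linspace(-1.0, 1.0, n)
    w = np.full(n, 2.0 / (n - 1))
    w[0] = w[-1] = 1.0 / (n - 1)
    return x, w

# Nested grids: every node of level l reappears at level l + 1.
x, w = trapezoid_rule(3)
print(x, w.sum())  # the weights always sum to the interval length, 2
```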
3.2. Clenshaw-Curtis Rule. The Clenshaw-Curtis quadrature rule [2] makes use of the increased rate of convergence of the trapezoidal rule for periodic functions. For non-periodic functions f, a change of variables $x = \cos\theta$ makes the integrand 2π-periodic. The term $f(\cos\theta)$ in the integrand is then approximated by its truncated Fourier cosine series, and the resulting expansion is integrated exactly:

$$ I_1 f = \int_0^{\pi} f(\cos\theta) \sin\theta \, d\theta = \int_0^{\pi} \left[ \frac{a_0}{2} + \sum_{m=1}^{\infty} a_m \cos(m\theta) \right] \sin\theta \, d\theta = a_0 + \sum_{k=1}^{\infty} \frac{2 a_{2k}}{1 - 4k^2}. $$

The coefficients of the cosine series expansion are obtained by a trapezoidal rule applied to

$$ a_m = \frac{2}{\pi} \int_0^{\pi} f(\cos\theta) \cos(m\theta) \, d\theta. $$

For details, see [2], [3] and also [26]. In practice, the Clenshaw-Curtis quadrature is evaluated by writing the integral as a weighted sum of function values at the Chebyshev points, which form the quadrature points. As described by Gentleman [9, 10], the Clenshaw-Curtis quadrature weights can be computed by a Fast Fourier Transform. See also [8] for a nice discussion of the fast construction of Clenshaw-Curtis weights. In our implementation, we use the inverse Fourier transform method described in [8] to compute the Clenshaw-Curtis weights. The quadrature points are given by:
$$ x_{li} = \cos\frac{(i-1)\pi}{n_l^1 - 1}, \qquad 1 \le i \le n_l^1, \tag{3.9} $$

for a quadrature rule with $n_l^1$ points. In our implementation, we use the practical Clenshaw-Curtis points [3], i.e. we include the end points {−1, 1}, and the origin is always a member of the quadrature set $\Gamma_l^1$. In this case, $n_l^1$ is given by:

$$ n_l^1 = 2^{l-1} + 1, \qquad l \ge 2, $$

which is the same as for the trapezoidal rule. Like the trapezoidal rule, Clenshaw-Curtis quadrature is nested. It is well known that an $n_l^1$-point Clenshaw-Curtis rule integrates polynomials up to degree $n_l^1 - 1$ exactly. Clenshaw-Curtis quadrature converges to the true integral for any continuous function f. The error for an integrand $f \in C^r$ scales as:

$$ E_l^1 f = O\!\left( 2^{-lr} \right). $$

In fact, Trefethen [3] argues that Clenshaw-Curtis quadrature has an error bounded by $O(2^{-lr}/r)$ for an r-times differentiable f.
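As a sketch, the rule can also be generated by direct summation. The following is a Python port of the clencurt routine in [26]; note that our implementation in this paper instead uses the inverse-FFT construction of [8], which is faster for large n but produces the same nodes and weights:

```python
import numpy as np

def clenshaw_curtis(n):
    """(n+1)-point Clenshaw-Curtis nodes and weights on [-1, 1], n >= 2."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)                       # Chebyshev points, cf. (3.9)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)                      # interior weights, built up below
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:-1]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:-1]) / (4 * k**2 - 1)
    w[1:-1] = 2.0 * v / n
    return x, w

x, w = clenshaw_curtis(16)                  # level l = 5: 2^(l-1) + 1 = 17 points
print(np.dot(w, np.exp(x)))                 # ~ e - 1/e = 2.3504...
```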
3.3. Gauss Quadrature Rule. Gauss quadrature is also an interpolatory quadrature scheme, like the two schemes above. Several Gauss quadrature schemes exist in practice; examples are Gauss-Legendre quadrature, Gauss-Hermite quadrature, etc. Gaussian integration methods approximate more general integrals of the form:

$$ I f = \int_{-1}^{1} w(x) f(x) \, dx, $$

where w(x) is a positive weight function over the interval. Typically, such integrals arise frequently in probability, where w(x) takes the role of a probability density function (PDF). For $w(x) \equiv 1$, we recover the original integral. The Gaussian quadrature nodes are chosen to be the zeros of the polynomials that form an orthogonal family with respect to the inner product:

$$ \langle f, g \rangle = \int_{-1}^{1} w(x) f(x) g(x) \, dx. $$

When w(x) = 1, the family of polynomials orthogonal with respect to this measure is the Legendre family; similarly, when $w(x) \propto \exp(-x^2)$, the orthogonal family is Hermite. In our case, we focus only on w(x) = 1. We note here that this family of orthogonal Legendre polynomials $\{p_n(x)\}$ can be generated by an orthogonalization process such as Gram-Schmidt.

The main idea of Gauss-Legendre quadrature is to set the quadrature points to be the zeros of the n-th degree Legendre polynomial $p_n(x)$. A family of polynomials orthogonal with respect to the above inner product typically satisfies a three-term recurrence relation of the form:

$$ x \, p_n(x) = \beta_{n-1} \, p_{n-1}(x) + \alpha_n \, p_n(x) + \beta_n \, p_{n+1}(x), $$

where the $\alpha_i$ and $\beta_i$ are scalars. In the case of Legendre polynomials, $\alpha_n$ and $\beta_n$ can be computed analytically [26]:

$$ \alpha_n = 0, \qquad \beta_n = \frac{1}{2} \left( 1 - (2n)^{-2} \right)^{-1/2}. $$
The quadrature points (the zeros of $p_n(x)$) then reduce to the eigenvalues of the symmetric tri-diagonal Jacobi matrix with $\{\alpha_n\}$ on the principal diagonal and $\{\beta_n\}$ above and below it. The eigenvalues of this tri-diagonal matrix can be computed to yield the quadrature points. The method of Golub and Welsch [4] describes a way to calculate the quadrature weights efficiently: they observed that each weight is proportional to the squared first component of the corresponding orthonormalized eigenvector of the Jacobi matrix (with proportionality constant $\mu_0 = \int_{-1}^{1} w(x) \, dx = 2$ here). Thus, the eigenvalue decomposition of the Jacobi matrix yields both the quadrature points and the weights. In our implementation, we use the method in [4] to calculate the weights and quadrature points. A table of Gaussian quadrature weights and abscissas and several related works may be found in [1].

The number of Gaussian quadrature points $n_l^1$ at level l is given by:

$$ n_l^1 = 2^l - 1, \qquad l \ge 1. $$

An n-point Gaussian quadrature rule integrates all polynomials up to degree 2n − 1 exactly. However, Gauss quadrature points are not nested, i.e. $\Gamma_l^1 \not\subseteq \Gamma_{l+1}^1$. The error for Gaussian quadrature has the same asymptotic upper bound as Clenshaw-Curtis quadrature, based on the theoretical results in [3].
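A compact sketch of the Golub-Welsch procedure for w(x) = 1 (Gauss-Legendre), using the recurrence coefficients above; the function name is ours:

```python
import numpy as np

def gauss_legendre(n):
    """n-point Gauss-Legendre nodes and weights on [-1, 1] via Golub-Welsch [4].

    The nodes are eigenvalues of the symmetric tridiagonal Jacobi matrix
    (alpha_n = 0 on the diagonal, beta_n off it); each weight equals
    mu_0 = 2 times the squared first component of the corresponding
    orthonormalized eigenvector."""
    k = np.arange(1.0, n)
    beta = 0.5 / np.sqrt(1.0 - (2.0 * k)**(-2.0))
    jacobi = np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(jacobi)
    weights = 2.0 * vecs[0, :]**2
    return nodes, weights

# A 3-point rule is exact for polynomials up to degree 2n - 1 = 5:
x, w = gauss_legendre(3)
print(np.dot(w, x**4))  # 0.4, the exact value of the integral of x^4
```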
3.4. Gauss-Kronrod-Patterson Rule. The Gauss-Kronrod-Patterson rule (referred to as the Gauss-Patterson rule from now on) addresses the issue of non-nestedness of the regular Gauss-Legendre quadrature. Kronrod [5] extended an n-point Gauss quadrature formula by n + 1 points so that the resulting (2n + 1)-point rule exactly integrates polynomials up to degree 3n + 1. This is done in order to make the resulting quadrature rule nested. The new points added to the quadrature set are zeros of Stieltjes polynomials. The difference between the Gaussian quadrature estimate and its Kronrod extension yields an error estimate of the quadrature (and also an estimate of the integral).

Patterson [6] described a method that recursively iterates Kronrod's scheme to obtain a sequence of nested quadrature formulae [16]. The interested reader is referred to [6] and [7] for more details on this construction. It should be noted that Patterson extensions do not exist for all Gauss-Legendre rules.

For our purposes, we set $Q_2^1$ to the 3-point Gauss formula, and $Q_l^1$ for l ≥ 3 equal to its (l − 2)-nd Patterson extension. This yields a total number of quadrature points

$$ n_l^1 = 2^l - 1, \qquad l \ge 2, $$

and a polynomial degree of exactness of nearly $3 n_l^1 / 2$. The error in the integral for $f \in C^r$ is again:

$$ E_l^1 f = O\!\left( 2^{-lr} \right). $$

Thus, among all the nested quadrature formulae seen above, Gauss-Patterson has the highest polynomial exactness. In our implementation, we obtain the weights and points of the Gauss-Patterson rule from the open-source package QUADPACK [11], freely available on the web. In Fig. 3.1, we show the quadrature points for all four rules above, for increasing levels l. As expected, the trapezoid, Clenshaw-Curtis and Gauss-Patterson points are nested, while the Gauss nodes are not. In the following section, we describe methods for extending 1-D quadrature rules to higher dimensions.
Fig. 3.1: Quadrature points (blue dots) for various rules, at various accuracy levels l.
4. Cubature. In this section, we review some methods for multi-dimensional
integration. We start with the naive tensor product rule, and describe the sparse grid
construction of Smolyak [12].
4.1. Full Tensor Product. Given a function $f(\mathbf{x})$ with $\mathbf{x} \in \mathbb{R}^d$, and a 1-D quadrature rule

$$ Q_l^1 f = \sum_{i=1}^{n_l^1} w_{li} f(x_{li}), $$

the full tensor product rule for the approximation of $I_d f$ is given by [16]:

$$ \left( Q_{l_1}^1 \otimes \cdots \otimes Q_{l_d}^1 \right) f := \sum_{i_1=1}^{n_{l_1}^1} \cdots \sum_{i_d=1}^{n_{l_d}^1} w_{l_1 i_1} \cdots w_{l_d i_d} \, f(x_{l_1 i_1}, \ldots, x_{l_d i_d}), \tag{4.1} $$

where $l_i$ is the desired level of accuracy in dimension i. The weights are simply the products of the weights along each dimension. We can see that the total number of quadrature points in this case is $N = \prod_{i=1}^{d} n_{l_i}^1$, which grows exponentially with d. Therefore, the rule is not very practical, as it quickly limits the accuracy one can achieve in each dimension. It is useful only when integrating low-dimensional functions which are highly smooth and do not require a large number of quadrature points in any direction. This leads us to sparse grid methods.
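A sketch of (4.1), reusing the trapezoid_rule sketch from section 3.1 (all names are ours):

```python
import numpy as np
from itertools import product

def tensor_product_rule(rules_1d):
    """Full tensor product (4.1) of d one-dimensional (nodes, weights) pairs.

    The resulting cubature has prod(n_i) points, which is exactly what
    makes the rule impractical for large d."""
    nodes, weights = [], []
    for combo in product(*[list(zip(x, w)) for (x, w) in rules_1d]):
        nodes.append([xi for (xi, _) in combo])
        weights.append(np.prod([wi for (_, wi) in combo]))
    return np.array(nodes), np.array(weights)

# d = 2 example: 17^2 = 289 points for level 4 in each dimension.
pts, wts = tensor_product_rule([trapezoid_rule(4), trapezoid_rule(4)])
est = np.dot(wts, np.cos(pts[:, 0]) * np.cos(pts[:, 1]))
print(len(pts), est)  # ~ (2 sin 1)^2 = 2.8324...
```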
4.2. Smolyak Sparse Grid Construction. Smolyak [12] proposed an algorithm by which the computational cost of evaluating a given multi-dimensional integral can be reduced significantly below that of the full tensor product approximation. The basic idea of the method is that, in order to approximate a multi-dimensional integral to a given accuracy level, the full tensor product performs more computations than necessary: the same level of accuracy can be achieved with far fewer computations by carefully choosing the quadrature points and the quadrature rule. To this end, given a quadrature rule $Q_l^1 f$ in 1-D, a difference quadrature formula is defined as:

$$ \Delta_k^1 f := \left( Q_k^1 - Q_{k-1}^1 \right) f, \qquad \text{with } Q_0^1 f = 0. \tag{4.2} $$

We see that $\Delta_k^1 f$ is another quadrature rule for f. The grid of quadrature points for this rule is the union of the grids $\Gamma_k^1 \cup \Gamma_{k-1}^1$, which is simply $\Gamma_k^1$ if $Q_k^1$ is a nested quadrature rule. We immediately see the advantage of nested quadratures over non-nested ones, as they require fewer (assumed expensive) function evaluations in the approximate integration. Smolyak's sparse grid construction for the integral $I_d f$ is then:

$$ Q_l^d f := \sum_{|\mathbf{k}|_1 \le l + d - 1} \left( \Delta_{k_1}^1 \otimes \cdots \otimes \Delta_{k_d}^1 \right) f, \tag{4.3} $$

where $\mathbf{k} \in \mathbb{N}^d$ is the multi-index containing the accuracy levels in each dimension, while l is the desired accuracy level of the cubature. Hence, the summation is performed only over index sets that lie inside the simplex $|\mathbf{k}|_1 \le l + d - 1$, as opposed to the whole hypercube $1 \le k_i \le l$ for all i, which is what the full tensor product does.
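The construction translates almost directly into code. The following sketch (names ours; it reuses the leveled 1-D rules above, e.g. trapezoid_rule) accumulates the difference-rule weights per node, so that nodes which coincide bit-for-bit across nested levels are merged and the function is evaluated once per grid point:

```python
import numpy as np
from itertools import product

def smolyak(f, d, level, rule_1d):
    """Smolyak cubature (4.3) on [-1, 1]^d built from a leveled 1-D rule."""
    def diff_rule(k):
        # Nodes and weights of the difference rule Delta_k = Q_k - Q_(k-1),
        # with Q_0 = 0 by convention (4.2).
        pairs = {}
        for x, w in zip(*rule_1d(k)):
            pairs[x] = pairs.get(x, 0.0) + w
        if k > 1:
            for x, w in zip(*rule_1d(k - 1)):
                pairs[x] = pairs.get(x, 0.0) - w
        return list(pairs.items())

    total = {}
    # Enumerate the multi-indices k with |k|_1 <= level + d - 1 (the simplex).
    for k in product(range(1, level + 1), repeat=d):
        if sum(k) > level + d - 1:
            continue
        for combo in product(*[diff_rule(ki) for ki in k]):
            pt = tuple(x for (x, _) in combo)
            total[pt] = total.get(pt, 0.0) + np.prod([w for (_, w) in combo])
    return sum(w * f(np.array(pt)) for pt, w in total.items())

# d = 5 example; the exact value is (e - 1/e)^5 = 71.73...
f = lambda x: np.exp(np.sum(x))
print(smolyak(f, d=5, level=4, rule_1d=trapezoid_rule))
```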
In Fig. 4.1, we show the quadrature points for d = 2 at various levels l. We see that the sparse grid needs far fewer function evaluations (N) than the full tensor product, and this difference grows as l increases. In Fig. 4.2, we plot the sparse-grid cubature points corresponding to all four 1-D quadrature rules. We notice, as expected, that the Gauss rule is not nested and requires more function evaluations.

Fig. 4.1: Quadrature points in the Smolyak construction (upper panel) and the full tensor product rule (lower panel) for d = 2 and various levels of accuracy l. A significantly higher number of function evaluations (quadrature points, N) is observed for the full tensor product.

The number of cubature points in a sparse-grid method at level l is [20]:
$$ N_l^d = O\!\left( 2^l \, l^{d-1} \right), $$

whereas for a full tensor product rule, the number of cubature points scales as $O(2^{ld})$. Efficient methods for computing the cubature weights of sparse grids are described in [16, 13, 15]. The error of a sparse-grid cubature for $f \in C^r$ at accuracy level l is given by [22, 14]:

$$ E_l^d f = O\!\left( 2^{-lr} \, l^{(d-1)(r+1)} \right). $$
Several variants of these sparse grid methods exist in the literature. For example, Bungartz and Dirnstorfer [37] extend sparse-grid methods to higher-order quadrature using hierarchical basis functions. Another similar work is [18]. Applications of sparse-grid quadrature in the areas of finance and insurance are explored in [21].
4.3. Adaptive Sparse Grids. Sparse-grid methods for cubature offer a great advantage over regular tensor product rules, and even Monte Carlo methods, when the integrand is smooth. However, for non-smooth functions, sparse-grid methods run into trouble when the number of dimensions increases. This is because 1-D quadrature rules are based on polynomial interpolation, and non-smooth functions require a large-degree polynomial to represent them accurately. Adaptive sparse-grid methods treat each dimension differently: they assess the dimensions according to their importance and thus reduce the dependence of the computational complexity on the dimension. The dimension-adaptive algorithm finds important dimensions automatically and adapts by placing more cubature points in those dimensions [36]. In what follows, we describe the dimension-adaptive strategy of Gerstner and Griebel [36].

Fig. 4.2: Sparse-grid cubature points for d = 2, l = 6. Gauss quadrature points are not nested.
The dimension-adaptive quadrature allows general admissible index sets in the sparse-grid summation, i.e. the summation grid is no longer the simplex $|\mathbf{k}|_1 \le l + d - 1$, but another set that is built adaptively. The self-adaptive algorithm finds the best index set by an iterative procedure. To this end, the notion of an admissible set is defined. An index set $\mathcal{I}$ is said to be admissible if, for all $\mathbf{k} \in \mathcal{I}$, the multi-indices

$$ \mathbf{k} - \mathbf{e}_j \in \mathcal{I}, \qquad \text{for } 1 \le j \le d, \; k_j > 1. $$

Here, $\mathbf{e}_j$ is the j-th unit vector. This condition ensures that isolated multi-indices do not arise as candidates in $\mathcal{I}$. This is done so that the telescoping sum of difference formulas $\Delta_{k_j}^1$ in the sparse grid remains valid. The general sparse-grid construction then is:

$$ Q_{\mathcal{I}}^d f := \sum_{\mathbf{k} \in \mathcal{I}} \left( \Delta_{k_1}^1 \otimes \cdots \otimes \Delta_{k_d}^1 \right) f, $$
as long as $\mathcal{I}$ remains admissible. However, this adaptive strategy is fruitful only if it adds cubature points to dimensions with large error. The error in a particular dimension is a property of the integrand f; thus, the enrichment of the index set in the dimension-adaptive scheme is highly dependent on the nature of f, and asymptotic error bounds for the approximation are not available. However, the algorithm does allow for adaptive detection of the dimensions with large error.
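The following sketch (names ours) implements the admissibility test literally: for every index in the set and every coordinate with $k_j > 1$, the backward neighbor $\mathbf{k} - \mathbf{e}_j$ must also be present:

```python
def is_admissible(index_set):
    """True if every k in the set has all backward neighbors k - e_j
    (for coordinates with k_j > 1) inside the set as well."""
    s = set(index_set)
    return all(
        k[:j] + (k[j] - 1,) + k[j + 1:] in s
        for k in s
        for j in range(len(k))
        if k[j] > 1
    )

print(is_admissible({(1, 1), (2, 1), (1, 2)}))  # True
print(is_admissible({(1, 1), (2, 2)}))          # False: (1,2) and (2,1) missing
```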
The algorithm starts by assuming the only member of the index set is $\mathbf{1} = [1, 1, \ldots, 1]$. Indices are added so that (i) the set remains admissible, and (ii) the largest error reduction is achieved. The quantity

$$ \Delta_{\mathbf{k}} f = \left( \Delta_{k_1} \otimes \cdots \otimes \Delta_{k_d} \right) f $$

indicates the incremental improvement achieved by adding multi-index $\mathbf{k}$ to the index set. A quantity $g_{\mathbf{k}}$ denotes the error indicator associated with a given multi-index $\mathbf{k}$. It combines information from the associated difference term $\Delta_{\mathbf{k}} f$ and the computational effort involved in its estimation, given by the number of cubature points in its evaluation, $n_{\mathbf{k}}$. The form of $g_{\mathbf{k}}$ proposed in [36] is:

$$ g_{\mathbf{k}} = \max \left\{ \omega \, \frac{ | \Delta_{\mathbf{k}} f | }{ | \Delta_{\mathbf{1}} f | }, \; (1 - \omega) \, \frac{ n_{\mathbf{1}} }{ n_{\mathbf{k}} } \right\}, $$

where $\omega \in [0, 1]$ weighs the relative importance of incremental error reduction against the cost of evaluating the function at new cubature nodes. Taking $\omega = 1$ is a greedy approach that assumes that function evaluation is cheap or that the function is well behaved (smooth); $\omega = 0$ disregards the error reduction achieved by adding new multi-indices to $\mathcal{I}$. Typically, $\omega$ is chosen between these extreme limits using rough knowledge of the behavior of the function. The forward neighborhood of an element $\mathbf{k}$ of $\mathcal{I}$ is defined as:

$$ \mathcal{F}_{\mathbf{k}} := \{ \mathbf{k} + \mathbf{e}_j : 1 \le j \le d \}. $$
The next multi-index to be added, $\mathbf{s}$, is selected so that (i) $\mathbf{s} \notin \mathcal{I}$, (ii) $\mathbf{s} \in \bigcup_{\mathbf{k} \in \mathcal{I}} \mathcal{F}_{\mathbf{k}}$, and (iii) $\mathcal{I} \cup \{\mathbf{s}\}$ is admissible [23]. This is implemented by considering two subsets $\mathcal{O}$ and $\mathcal{A}$ of $\mathcal{I}$. The set $\mathcal{O}$ contains the old multi-indices which need not be tested any more, while $\mathcal{A}$ contains those which are under consideration for inclusion in $\mathcal{I}$. The set $\mathcal{O}$ is initialized to $\{\mathbf{1}\}$ and $\mathcal{A}$ to $\mathcal{F}_{\mathbf{1}}$. The multi-index in $\mathcal{A}$ with the highest indicator $g$ is added to $\mathcal{O}$ and removed from $\mathcal{A}$; $\mathcal{A}$ is then completed with the forward neighbors of $\mathbf{k}$ that keep $\mathcal{I} = \mathcal{A} \cup \mathcal{O}$ admissible, and the error indicators $g$ for the newly added indices are computed. This process repeats until the global error indicator $\eta := \sum_{\mathbf{k} \in \mathcal{A}} g_{\mathbf{k}}$ falls below a given tolerance $\varepsilon$. The algorithm is written as:
Initialization:
    set O = {1}
    set A = F_1
    set r = Σ_{k ∈ A ∪ O} Δ_k f
    set η = Σ_{k ∈ A} g_k
while (η > ε) do
    select the k ∈ A with largest g_k
    O ← O ∪ {k}
    A ← A \ {k}
    η ← η − g_k
    for s ∈ F_k such that (s − e_j) ∈ O for all j = 1, …, d with s_j > 1 do
        A ← A ∪ {s}
        r ← r + Δ_s f
        η ← η + g_s
    end for
end while
return r
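The loop above translates into the following sketch (names ours). It assumes the caller supplies delta(k), returning the difference term Δ_k f, and npts(k), the number of points of the corresponding tensor difference rule, e.g. built from the smolyak machinery sketched earlier. As a slight simplification, the active set is seeded with the root index 1 itself, which the first pass then expands into its forward neighbors:

```python
def adaptive_sparse_grid(delta, npts, d, tol, omega=0.5, max_iter=10_000):
    """Sketch of the dimension-adaptive loop of Gerstner and Griebel [36]."""
    one = (1,) * d
    d1 = abs(delta(one))                     # reference size |Delta_1 f|, assumed nonzero

    def indicator(k):
        return max(omega * abs(delta(k)) / d1,
                   (1.0 - omega) * npts(one) / npts(k))

    old, active = set(), {one}
    g = {one: indicator(one)}
    r = delta(one)
    for _ in range(max_iter):
        eta = sum(g[k] for k in active)      # global error indicator
        if eta <= tol:
            break
        k = max(active, key=g.get)           # largest local indicator
        active.remove(k)
        old.add(k)
        for j in range(d):                   # forward neighbors k + e_j
            s = k[:j] + (k[j] + 1,) + k[j + 1:]
            # add s only if all its backward neighbors are already old
            if all(s[:i] + (s[i] - 1,) + s[i + 1:] in old
                   for i in range(d) if s[i] > 1):
                active.add(s)
                g[s] = indicator(s)
                r += delta(s)
    return r
```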
5. Applications and results. In this section, we apply our sparse-grid implementations to three separate cases of multivariate integration arising from physical problems. These examples have been taken from [16] and [43] for easy bench-marking. We compare our results with both published literature and the algorithms included in the open-source multi-dimensional integration library CUBA [44], available on the web. The integrals arising in the chosen applications range from moderate dimensions (3 ≤ d ≤ 6) to high dimensions (d > 7).
5.1. Test functions. We first consider an integral of the form:

$$ I = \left( 1 + \frac{1}{d} \right)^d \int_{[0,1]^d} \prod_{i=1}^{d} x_i^{1/d} \, d\mathbf{x}. $$

The exact value of this integral is 1. This integral is chosen because the integrand exhibits a large sensitivity near the origin. In this case, we set d = 5. Our quadrature rules assume the basic integral to be over the hypercube $[-1, 1]^d$; thus, we perform the change of variables $x_i \mapsto (y_i + 1)/2$ to re-write the integral as:

$$ I = \left( 1 + \frac{1}{d} \right)^d 2^{-(d+1)} \int_{[-1,1]^d} \prod_{i=1}^{d} (y_i + 1)^{1/d} \, d\mathbf{y}. $$
This integral can be evaluated directly using any of the quadrature rules discussed earlier. Here, we only present results comparing our sparse-grid implementation, the adaptive sparse-grid method, and the results obtained using the four solvers available in CUBA (VEGAS, SUAVE, DIVONNE and CUHRE). The reader is referred to [44] for a detailed description of these four algorithms. We only mention here that VEGAS is the primary quasi-Monte Carlo based integration scheme, based on Sobol sequences and importance sampling, whereas SUAVE is a modification of VEGAS that includes an adaptive subdivision strategy. CUHRE is a deterministic integration method based on quadrature rules, while DIVONNE is also a Monte Carlo based method, which uses stratified sampling.
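For reference, the transformed integrand transcribes directly into code (names ours). The rough check below reuses the smolyak and clenshaw_curtis sketches from the earlier sections, with a small wrapper mapping levels to point counts:

```python
import numpy as np

d = 5

def test_integrand(y):
    """Integrand of Section 5.1 after mapping to [-1, 1]^d; exact integral 1."""
    const = (1.0 + 1.0 / d)**d / 2.0**(d + 1)
    return const * np.prod((y + 1.0)**(1.0 / d))

def cc_rule(level):
    """Leveled Clenshaw-Curtis rule: 2^(l-1) + 1 points; level 1 is {0}."""
    if level == 1:
        return np.array([0.0]), np.array([2.0])
    return clenshaw_curtis(2**(level - 1))

for level in range(2, 6):
    q = smolyak(test_integrand, d=d, level=level, rule_1d=cc_rule)
    print(level, abs(q - 1.0))
```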
The results of the above three classes of methods are shown in Fig. 5.1. Here, we plot the absolute error in the integral, $|I_d f - Q_l^d f|$, against the total number of function evaluations required by the corresponding rule. There are other possible comparison criteria, but we limit ourselves to this one in this paper. We observe from panel (a) that the Gauss-Patterson rule performs best among the four quadrature rules discussed. Gauss quadrature also performs well, but Gauss-Patterson achieves lower error for asymptotically large numbers of function evaluations. The main notable point, however, is that the Clenshaw-Curtis rule does not perform well. This is perhaps because the Clenshaw-Curtis rule places many points near the origin, where the function is very sensitive. The superiority of the Gauss-Patterson rule over Clenshaw-Curtis is expected to diminish as the dimension d increases, since the latter requires fewer function evaluations for polynomial exactness when l < d [16].
The Monte Carlo schemes of CUBA do not perform very well, as they exhibit a slower convergence rate than the Gauss and Gauss-Patterson rules. This agrees with the results shown in [16] and [43], and is explained by the fact that the function is smooth in the interior of the unit hypercube, a property that Monte Carlo methods do not take advantage of.

Fig. 5.1: Performance of various cubature rules (5.1): the absolute error in the integral is plotted against the number of function evaluations required. (a) Sparse-grid implementation (all four quadrature rules), (b) results of the CUBA package, and (c) adaptive sparse-grid implementation.
Finally, the sparse-grid construction is found to improve the asymptotic performance of both the trapezoidal rule and the Clenshaw-Curtis rule, but does not significantly affect the Gauss and Gauss-Patterson rules. Our dimension-adaptive strategy is expected to be most useful when different dimensions have unequal importance. In this case, the integrand is symmetric in its arguments, and thus we do not see a notable improvement over the already accurate Gauss rules.
5.2. Absorption Problem. This example arises from a transport problem that describes the behavior of a particle moving through a 1-D slab of unit length [43]. At each step, the particle travels a random distance in [0, 1]. If it does not leave the slab, it may be absorbed with probability 1 − λ. The integral equation describing the motion of this particle is:

$$ y(x) = x + \lambda \int_x^1 y(z) \, dz. $$
The exact solution to this equation is:

$$ y(x) = \frac{1}{\lambda} \left( 1 - (1 - \lambda) \, e^{\lambda (1 - x)} \right). $$

Fig. 5.2: Performance of various cubature rules (5.2): the absolute error in the integral is plotted against the number of function evaluations required. (a) Sparse-grid implementation (all four quadrature rules), (b) results of the CUBA package, and (c) adaptive sparse-grid implementation.
We are interested in the solution y(0) when λ = 0.5. This result may be obtained by evaluating a simplified multi-dimensional integral (see [43] for details):

$$ y(x) = \int_{[0,1]^d} \sum_{n=0}^{d-1} F_n(x, \mathbf{z}) \, d\mathbf{z}, $$

where
$$ F_n(x, \mathbf{z}) = \lambda^n (1 - x)^n \left( \prod_{j=1}^{n-1} z_j^{\,n-j} \right) \left( 1 - (1 - x) \prod_{j=1}^{n} z_j \right). $$
In this problem, we set d = 10. The results obtained by our sparse-grid implementation and the algorithms in CUBA are shown in Fig. 5.2.
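A direct transcription of the sum of the $F_n$ above (names ours; the prefix $\lambda^n (1-x)^n \prod_j z_j^{n-j}$ is accumulated incrementally), together with a crude Monte Carlo check over $[0,1]^d$:

```python
import numpy as np

lam, d = 0.5, 10
y0_exact = (1.0 - (1.0 - lam) * np.exp(lam)) / lam   # y(0) = 2 - sqrt(e)

def absorption_integrand(z, x=0.0):
    """sum_{n=0}^{d-1} F_n(x, z) for z in [0, 1]^d."""
    total, prefix = 0.0, 1.0    # prefix = lam^n (1-x)^n prod_{j<n} z_j^(n-j)
    for n in range(d):
        zprod = np.prod(z[:n])  # prod_{j=1}^{n} z_j (empty product = 1)
        total += prefix * (1.0 - (1.0 - x) * zprod)
        prefix *= lam * (1.0 - x) * zprod
    return total

rng = np.random.default_rng(1)
samples = rng.uniform(size=(200_000, d))
print(np.mean([absorption_integrand(z) for z in samples]), y0_exact)
```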
We observe that the trapezoidal rule is again outperformed by the other three rules, while the Clenshaw-Curtis rule performs as well as the Gauss-Patterson and Gauss quadratures. We ran the adaptive sparse-grid method for this example using all the 1-D rules, but show the results only for the Clenshaw-Curtis rule, as its behavior is similar to that of both Gauss rules. We see that the adaptive sparse grid performs better than any regular sparse-grid rule for this problem, as it requires fewer function evaluations for a given accuracy level.
Similar to the earlier example, the Monte Carlo methods of CUBA do not exhibit a rapid convergence rate. This is due to the smooth nature of the integrand and its high dimensionality. For a non-smooth integrand of equally large dimension, we would expect the performance of Monte Carlo methods to be comparable to that of sparse-grid methods.
5.3. Integral Equation. The second test integral taken from [16] arises from an integral equation obtained using a finite element or boundary element discretization of a physical problem. We do not provide many details about the physical problem; these may be obtained in [16]. Given $\Omega = [0, 1]^2$, we compute the 4-dimensional integral:

$$ a_{\mathbf{b},\mathbf{c}} = \int_{(b_1-1)h}^{(b_1+1)h} \int_{(b_2-1)h}^{(b_2+1)h} \int_{(c_1-1)h}^{(c_1+1)h} \int_{(c_2-1)h}^{(c_2+1)h} \frac{ \phi_h^{\mathbf{b}}(\mathbf{x}) \, \phi_h^{\mathbf{c}}(\mathbf{y}) }{ |\mathbf{x} - \mathbf{y}| } \, d\mathbf{y} \, d\mathbf{x}, $$

where

$$ \phi_h^{\mathbf{b}}(\mathbf{x}) = \begin{cases} \phi_h(\mathbf{x} - h\mathbf{b}) & \text{for } \mathbf{x} - h\mathbf{b} \in [-h, h]^2, \\ 0 & \text{else}, \end{cases} $$

and

$$ \phi_h(\mathbf{x}) = \max \{ (1 - |x_1|/h)(1 - |x_2|/h), \, 0 \}. $$
The exact values of the integrals are obtained from [16]. In this example, we set h = 1/32, $\mathbf{b} = (b_1, b_2) = (0, 0)$ and $\mathbf{c} = (c_1, c_2) = (0, 3)$. For these values, the integrand has sharp edges in the interior of the domain.

Fig. 5.3: Performance of various cubature rules (5.3): the absolute error in the integral is plotted against the number of function evaluations required. (a) Sparse-grid implementation (all four quadrature rules), (b) results of the CUBA package, and (c) adaptive sparse-grid implementation.

The results of the sparse-grid implementation for this integral are shown in Fig. 5.3, and are similar to those for the previous two examples. The trapezoidal rule performs
the worst among all four 1-D quadrature rules. The Gauss and Gauss-Patterson rules yield identical error values in this example, up to the third decimal place. Both rules converge adequately well, although Gauss-Patterson requires far fewer function evaluations to reach the same error level. The Clenshaw-Curtis rule converges at a rate comparable to the Gaussian rules. Since the Gauss rules converge fastest here, we only show results of the adaptive sparse-grid method for Gauss quadrature. As in the first example, the adaptive sparse-grid method only marginally improves on the performance of the regular sparse-grid method.

In this example, we observe that the Monte Carlo methods perform as well as the leading quadrature rules: all four Monte Carlo rules converge at rates comparable to Gauss quadrature. This is because of the non-smooth nature of the integrand in the interior of the domain, which is detrimental to the performance of quadrature rules but does not affect the Monte Carlo based methods. Also, in this case, the dimension of the integrand is low (d = 4), which means the hypercube is well sampled even by a moderate number of points.
6. Conclusions. In this term paper, we have explored various methods for multi-dimensional integration. We performed an exhaustive literature review on this topic, summarizing most of the relevant work and providing references for more obscure works. Our main focus, from an implementation point of view, was the sparse-grid method introduced by Smolyak in 1963. We have implemented the sparse-grid method based on various quadrature rules, and later extended it to adaptive sparse-grid methods after noting the key challenges faced by the regular sparse-grid approach. We tested our sparse-grid implementation by comparing our results against published literature and also against competing Monte Carlo methods, for three problems arising in physics.

We find that for smooth integrands of moderate to large dimension, sparse-grid methods may be significantly better than Monte Carlo methods. For non-smooth integrands, this edge of sparse-grid methods fades, and more efficient adaptive sparse-grid implementations may be possible. For large-dimensional integrands which are highly discontinuous, Monte Carlo methods will also perform poorly, because they cannot possibly sample the whole domain adequately enough to capture all the discontinuities. From our results, we have also found that dimension-adaptive sparse grid methods do perform better than regular sparse grids, but only in some situations; their performance in most cases is comparable to quasi-Monte Carlo methods. In practice, however, the choice of the numerical scheme for cubature is highly dependent on the nature of the integrand - there is no single ultimate method for cubature that circumvents the drawbacks of the rest.
REFERENCES

[1] A. H. Stroud and D. Secrest, Gaussian Quadrature Formulas, Prentice-Hall, Englewood Cliffs, N.J., 1969.
[2] C. W. Clenshaw and A. R. Curtis, A method for numerical integration on an automatic computer, Numerische Mathematik, 2 (1960), pp. 197–205.
[3] L. N. Trefethen, Is Gauss quadrature better than Clenshaw-Curtis?, SIAM Review, 50 (2008), pp. 67–87.
[4] G. H. Golub and J. H. Welsch, Calculation of Gauss quadrature rules, Mathematics of Computation, 23 (1969), pp. 221–230.
[5] A. S. Kronrod, Nodes and Weights of Quadrature Formulas, Consultants Bureau, New York, 1965.
[6] T. N. L. Patterson, The optimum addition of points to quadrature formulae, Math. Comp., 22 (1968), pp. 847–856.
[7] T. N. L. Patterson, Modified optimal quadrature extensions, Numerische Mathematik, 64 (1993), pp. 511–520.
[8] J. Waldvogel, Fast construction of the Fejér and Clenshaw-Curtis quadrature rules, BIT Numerical Mathematics, 46 (2006), pp. 195–202.
[9] W. Gentleman, Implementing the Clenshaw-Curtis quadrature, I - methodology and experience, Commun. ACM, 15 (1972), pp. 337–342.
[10] W. Gentleman, Implementing the Clenshaw-Curtis quadrature, II - computing the cosine transform, Commun. ACM, 15 (1972), pp. 343–346.
[11] R. Piessens, E. de Doncker, C. Überhuber, and D. Kahaner, QUADPACK - a subroutine package for automatic integration, Springer-Verlag, 1993.
[12] S. A. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Soviet Math. Dokl., 4 (1963), pp. 240–243.
[13] K. Petras, Fast calculation of coefficients in the Smolyak algorithm, Numerical Algorithms, 26 (2001), pp. 93–103.
[14] K. Petras, On the Smolyak cubature error for analytic functions, Advances in Computational Mathematics, 12 (2000), pp. 71–93.
[15] K. Petras, Smolyak cubature of given polynomial degree with few nodes for increasing dimension, Numerische Mathematik, 93 (2003), pp. 729–753.
[16] T. Gerstner and M. Griebel, Numerical integration using sparse grids, Numerical Algorithms, 18 (1998), pp. 209–232.
[17] H. J. Bungartz and M. Griebel, Sparse grids, Acta Numerica, 13 (2004), pp. 147–269.
[18] V. Barthelmann, E. Novak, and K. Ritter, High dimensional polynomial interpolation on sparse grids, Advances in Computational Mathematics, 12 (2000), pp. 273–288.
[19] E. Novak and K. Ritter, High dimensional integration of smooth functions over cubes, Numerische Mathematik, 75 (1996), pp. 79–98.
[20] E. Novak and K. Ritter, Simple cubature formulas with high polynomial exactness, Constr. Approx., 15 (1999), pp. 499–522.
[21] M. Holtz, Sparse grid quadrature in high dimensions with applications in finance and insurance, Ph.D. thesis, Rheinische Friedrich-Wilhelms-Universität Bonn, 2008.
[22] G. W. Wasilkowski and H. Wozniakowski, Explicit cost bounds of algorithms for multivariate tensor product problems, J. Complexity, 11 (1995), pp. 1–56.
[23] O. P. Le Maître and O. M. Knio, Spectral Methods for Uncertainty Quantification, first ed., Scientific Computation Series, Springer, 2010.
[24] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, eighth ed., Springer-Verlag, New York, 1980.
[25] P. Davis and P. Rabinowitz, Methods of Numerical Integration, Academic Press, 1975.
[26] L. N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia, 2000.
[27] C. Zenger, Sparse grids, SFB Bericht 342/18/90 A, Institut für Informatik, TU München, 1990.
[28] J. Friedman, Multivariate adaptive regression splines, Annals of Statistics, 19 (1991), pp. 1–141.
[29] T.-X. He, Dimensionality Reducing Expansion of Multivariate Integration, Birkhäuser, 2001.
[30] G. Wahba, Spline Models for Observational Data, SIAM, Philadelphia, 1990.
[31] J. Garcke and M. Griebel, Classification with anisotropic sparse grids using simplicial basis functions, Intelligent Data Analysis, 6 (2002), pp. 483–502.
[32] T. Bonk, A new algorithm for multi-dimensional adaptive numerical quadrature, in: Adaptive Methods - Algorithms, Theory and Applications, eds. W. Hackbusch and G. Wittum, Vieweg, Braunschweig, 1994, pp. 54–68.
[33] A. Genz and A. A. Malik, An adaptive algorithm for numerical integration over an n-dimensional rectangular region, J. Comput. Appl. Math., 6 (1980), pp. 295–302.
[34] F. Nobile, R. Tempone, and C. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal., 46 (2008), pp. 2309–2345.
[35] P. van Dooren and L. de Ridder, An adaptive algorithm for numerical integration over an n-dimensional cube, J. Comput. Appl. Math., 2 (1976), pp. 207–217.
[36] T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature, Computing, 71 (2003), pp. 65–87.
[37] H. J. Bungartz and S. Dirnstorfer, Multivariate quadrature on adaptive sparse grids, Computing, 71 (2003), pp. 89–114.
[38] J. D. Jakeman and S. G. Roberts, Local and dimension adaptive sparse grid interpolation and quadrature, CoRR, (2011).
[39] J. Berntsen, T. O. Espelid, and A. Genz, An adaptive algorithm for the approximate calculation of multiple integrals, ACM Trans. Math. Soft., 17 (1991), pp. 437–451.
[40] C. P. Robert and G. Casella, Monte Carlo Statistical Methods, Springer Texts in Statistics, second ed., Springer, 2004.
[41] I. Sloan and H. Wozniakowski, When are quasi-Monte Carlo algorithms efficient for high-dimensional integrals?, J. Complexity, 14 (1998), pp. 1–33.
[42] M. McKay, W. Conover, and R. Beckman, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics, 21 (1979), pp. 239–245.
[43] W. J. Morokoff and R. E. Caflisch, Quasi-Monte Carlo integration, J. Comput. Phys., 122 (1995), pp. 218–230.
[44] T. Hahn, CUBA - a library for multidimensional numerical integration, Comput. Phys. Commun., 168 (2005), pp. 78–95.
[45] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, Philadelphia, 1992.
[46] I. Sobol, On the distribution of points in a cube and the approximate evaluation of integrals, U.S.S.R. Computational Mathematics and Mathematical Physics, 7 (1967), pp. 86–112.
[47] I. H. Sloan and S. Joe, Lattice Methods for Multiple Integration, Oxford University Press, Oxford, 1994.
[48] H. N. Mhaskar, Neural networks and approximation theory, Neural Networks, 9 (1996), pp. 711–722.
[49] R. Schürer, Parallel high-dimensional integration: quasi-Monte Carlo versus adaptive cubature rules, in: Proceedings of Computational Science - ICCS 2001, International Conference, San Francisco, CA, USA, May 28-30, 2001.
[50] F. Sprengel, Periodic interpolation and wavelets on sparse grids, Numer. Algorithms, 17 (1998), pp. 147–169.
[51] M. Griebel, A parallelizable and vectorizable multi-level algorithm on sparse grids, in: Parallel Algorithms for Partial Differential Equations, ed. W. Hackbusch, Notes on Numerical Fluid Mechanics, Vol. 31, Vieweg, Braunschweig, 1991.
[52] M. Griebel and G. Zumbusch, Adaptive sparse grids for hyperbolic conservation laws, in: Proc. of the 7th International Conference on Hyperbolic Problems, Birkhäuser, Basel, 1998.
[53] T. Gerstner, Adaptive hierarchical methods for landscape representation and analysis, in: Proc. of the Workshop on Process Modeling and Landform Evolution, eds. S. Hergarten and H. Neugebauer, Springer, Berlin, 1998.