
Proceedings of the 1992 Winter Simulation Conference

ed. J. J. Swain, D. Goldsman, R. C. Crain, and J. R. Wilson

LATIN HYPERCUBE SAMPLING AS A TOOL IN UNCERTAINTY ANALYSIS OF COMPUTER MODELS

Michael D. McKay

Statistics Group, Los Alamos National Laboratory


Los Alamos, New Mexico 87545, U.S.A.

ABSTRACT

This paper addresses several aspects of the analysis of uncertainty in the output of computer models arising from uncertainty in inputs (parameters). Uncertainty of this type, which is separate and distinct from the randomness of a stochastic model, most often arises when input values are guesstimates, or when they are estimated from data, or when the input parameters do not actually correspond to observable quantities, e.g., in lumped-parameter models. Uncertainty in the output is quantified in its probability distribution, which results from treating the inputs as random variables. The assessment of which inputs are important with respect to uncertainty is done relative to the probability distribution of the output.

1 INTRODUCTION

Uncertainty in a model output (how big it is and what it is attributable to) is not a new issue. The accuracy of models has always been a concern, and things are no different today with computationally intensive models on computers. Three things make the analysis of computer models usually more difficult than that of other models. First, the nature of the relationship between the output and the inputs is often very complex. Second, there can be very many inputs for which the cost of data collection is high. Finally, when the output is something like a forecast, comparison between predicted and calculated values is essentially impossible. Despite these difficulties, questions like "What is the uncertainty in the calculation?" continue to be heard both in scientific circles and in political ones, where the cost of decisions resting on model calculations can be high.

Although there are several useful questions one might ask about computer models, there seems to be a tendency to use the same kind of method to answer many of them. It is unlikely, however, that any single method of analysis exists that answers all questions in model evaluation. Moreover, an examination of typical questions would likely suggest that different kinds of methods are not only desirable but necessary. To find appropriate methods, one needs precise statements of objectives. Unfortunately for many investigators, what begins as an intuitive concept, like uncertainty or sensitivity, can easily end up as an imprecisely stated and misleading question that suggests an inappropriate method of answering it. This paper tries to address that problem by providing a framework within which questions of uncertainty and importance are posed.

This discussion will be limited to particular considerations from the diversity of issues comprising model analysis. First of all, this paper is not an empirical comparison or evaluation of methods currently used in the analysis of computer models. Examples of such studies are Saltelli and Homma (1992), Saltelli and Marivoet (1990), Iman and Helton (1988) and Downing, Gardner and Hoffman (1985). Secondly, we will consider questions dealing with the values of the output and the input, and not, specifically, with the form or structure of their relationship. In particular, we will focus on the issue of uncertainty in the calculated value due to uncertainty in input values. To provide a setting, we will introduce current methods from two different perspectives related to model analysis. With that background, we will develop a new and useful paradigm for analysis and methods development for issues related to uncertainty in the output value.

2 TWO PERSPECTIVES

Model analysis can be thought of as a collection of questions asked about output and input values. Although a simplification, it is useful to distinguish the questions as arising from one of two perspectives. This idea, previously discussed by McKay (1978, 1988), allows the introduction of a new way of viewing importance of inputs with regard to uncertainty in the output.

The first perspective is from the space of input values, and tends to focus at one point, like a nominal input value. Because of this, quantities of interest, like a derivative, can seem to be treated as constant over the input space, so that the focal point really does not matter.
This perspective is termed a local perspective relative to the input space.

The second perspective is from the space of output values. As such, its focus is not constrained a priori in the input space, so that it is termed a global perspective relative to the input space.

The reason for making the distinction between local and global perspectives has to do with the problem of identifying important inputs. Although "importance" has not yet been defined, it seems reasonable to suppose that qualities that make an input important locally are not necessarily those that make it important globally, and vice versa. Therefore, it is necessary to realize the perspective of interest.

2.1 A Local Perspective

Let us suppose that there is some value of the inputs, x0, worth attention and that we are interested in changes in the output Y for small perturbations in inputs X about x0. A common question in this situation concerns propagation of error, characterized by the derivatives of Y with respect to the components of X. Similarly, one might be interested in the direction, not necessarily parallel to a coordinate axis, in which Y changes most rapidly, or in the change in Y in an arbitrary direction. Issues like these lead one to the concept of a "critical" or "important" variable (or direction) as being one(s) that most accounts for change in Y. For propagation of error, it seems to make sense to talk about each component of X as being important or not important. When the direction becomes arbitrary, it seems natural to talk about subsets of the components, rather than about individual ones.

2.2 A Global Perspective

Suppose that interest lies in the event that Y, the output or prediction, exceeds some specified value. Questions that could arise in this case might be concerned with associating particular inputs (components of X) or segments of their ranges with that event. Objectives of study for this question might be related to controlling the event, or to reducing its probability of occurrence in the real world by adjusting the values of some of the inputs. If costs are associated with the inputs, minimum cost solutions might be sought.

Clearly, both perspectives have a place in model analysis. In the local perspective, interest in X is restricted to a (small) neighborhood of a single point, and the derivative seems to come into play. In the global perspective, interest is in values of Y, which might translate into a subset of, or possibly just a boundary in, the input space. In this case, the role of the derivative is less clear. What tends to blur the distinction between the two perspectives is the use of the derivative to answer questions of a global nature. The practice is defensible if the model is essentially linear, meaning that the derivative does not change substantially with x0; or that, to first order approximation, an "average" derivative is sufficient to characterize the model, again meaning that the model is essentially linear. In what follows, a global approach is taken and the role of the derivative is not paramount.

3 UNCERTAINTY ANALYSIS

We are interested in the type of uncertainty in the output of a model that can be characterized as being due to the values used for the inputs. A related uncertainty, due to the structure or form of the model itself, will not be addressed explicitly. Neither will we be concerned with uncertainty due to errors in implementation of the model on a computer. On the other hand, it is certainly acceptable that the output might have the randomness of a stochastic process. In that case, we will think of the output of the model as being the cumulative distribution function of the observable output value. With this in mind, the purpose of uncertainty analysis is to quantify the variability in the output of a computer model due to variability in the values of the inputs.

We proceed by first describing the variability in the inputs with probability functions. Commonly, input values are uncertain because they are guesstimates, or because they are estimated from data, or because the input parameters do not actually correspond to observable quantities, e.g., in lumped-parameter models. Treating the inputs as random variables introduces another layer of complication, namely, that of assigning to them probability distributions. Everything that follows will depend on the distributions used for the inputs, which means uncertainty in the input probability distributions leads to corresponding uncertainty in the analysis. As an alternative to quantifying that uncertainty, some kind of variational study used to measure the effect of the distributional assumptions is a possibility.

When the inputs are treated as random variables, the output becomes a random variable because it is a transformation of the inputs. Uncertainty in the output, then, is characterized by its probability distribution. Therefore, when we consider questions related to uncertainty in the output, Y, we will look to the probability distribution of Y for answers.

We assume that interest in uncertainty in Y can be summed up in these two questions: "How big is it?" and "Can it be attributed to particular inputs?" An obvious motivation for these questions is a desire to minimize uncertainty in the model output, which might be achieved by reducing the variance of some of the inputs.
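As a minimal sketch of this view of uncertainty analysis, the inputs can be treated as random variables, the model run on sampled input vectors, and the resulting distribution of the output summarized. The model h and the input distributions below are hypothetical stand-ins for an actual computer code, chosen only for illustration.

```python
import random
import statistics

# Hypothetical toy model standing in for an expensive computer code h(X).
def h(x1, x2, x3):
    return x1 * x2 + x3 ** 2

random.seed(1)

# Treat the inputs as random variables with assumed distributions.
n = 10_000
ys = [h(random.gauss(1.0, 0.2),    # X1 ~ N(1, 0.2^2)
        random.uniform(0.5, 1.5),  # X2 ~ U(0.5, 1.5)
        random.gauss(0.0, 0.5))    # X3 ~ N(0, 0.5^2)
      for _ in range(n)]

# Uncertainty in Y is read off its estimated probability distribution.
ys.sort()
q05, q95 = ys[int(0.05 * n)], ys[int(0.95 * n)]
print(f"variance of Y ~ {statistics.variance(ys):.3f}")
print(f"90% probability interval ~ ({q05:.3f}, {q95:.3f})")
```

The 0.05 and 0.95 sample quantiles give a probability interval of the kind used below to answer "How big is the uncertainty?"; the sample variance serves the attribution question.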
Thus, an important problem might be how to find a minimum-cost reduction in the variance of Y by reducing the variances of components of X. This problem presupposes, of course, that reduction in the variance of the inputs makes sense.

4 MEASURING UNCERTAINTY

We look to the probability distribution function of the output Y for information about uncertainty. Questions like "What is the uncertainty in Y?" might be answered using a probability interval constructed with quantiles of the distribution of Y. For example, the 0.05 and the 0.95 quantiles define an interval covering 90% of the probability content of the distribution of Y. Alternatively, the difference in the two quantiles provides a range of 90% coverage. The use of probability intervals as a measure of uncertainty has an advantage over the use of the variance of the distribution in that the variance may not directly relate to coverage. This is not the case, though, in the familiar normal distribution, where quantiles depend in a straightforward manner on the mean and variance of the distribution.

Ideally, the probability distribution of Y would be known once the distribution of X is specified. Realistically, the distribution will have to be estimated, most likely, with a sample of runs using the model which relates Y to X. Simple random sampling (SRS) could be used for this purpose, as could other sampling schemes. Latin hypercube sampling (LHS), introduced by McKay, Conover and Beckman (1979), is a preferred alternative to SRS when the output is a monotone function of the inputs. Additionally, Stein (1987) shows that LHS yields an asymptotic variance smaller than that for SRS. Besides being used to estimate the probability distribution, sample values could be used to construct a tolerance interval, which covers at least a specified portion of the probability distribution of Y with a specified confidence level. (For a short discussion of probability intervals and tolerance intervals, see Tietjen (1986, p. 36).) Generally, a tolerance interval, which corresponds to a probability interval when the probability distribution is estimated, is based on a random sample. If the usual methods for constructing tolerance intervals based on nonparametric techniques or on the normal distribution, e.g., in Bowker and Lieberman (1972, pp. 309-316), where they are called tolerance limits, are applied to the LHS, however, the results are only approximate.

In the remainder of this paper, we will be concerned only with probability distributions and their moments. Furthermore, we will assume that sample sizes are sufficiently large to rule out concern about sampling error in all regions of interest in estimated distribution functions.

Although it has been suggested that a probability interval is a more appropriate measure of uncertainty than is variance, the use of variance to "partition" or allocate uncertainty to components of X cannot be overlooked. In fact, three categories of techniques for the determination of "important" inputs, relative to uncertainty in the output, look primarily at the variance of Y. We will review these techniques using variance as the measure of uncertainty before introducing the new paradigm for uncertainty analysis.

5 PARTITIONING UNCERTAINTY

Statements like "20% of the uncertainty in Y is due to X1" have a nice sound, but may be very misleading without explanation. If we suppose that uncertainty in Y is measured by its variance, then a reasonable interpretation of the statement is that the variance of Y can be written, approximately, as the sum of two functions, one depending on the distribution of X1 alone and the other independent of the distribution of X1. This picture captures a motivation for, but does not limit, the classes of techniques to be discussed.

5.1 Linear Propagation of Error

When we are using variance to measure uncertainty, the problem of partitioning uncertainty reduces to that of finding suitable decompositions for the variance of Y. The simplest of these is the usual propagation of error method, in which Y is expressed as a Taylor series in the inputs X about some point x0. To first order approximation, the variance of Y is expressed as a linear combination of the variances of the components of X by choosing x0 to be mu, the mean value of X:

    Y(X) = Y(x0) + Σi (∂Y/∂xi)(Xi − x0i) + ...

    V[Y] ≈ Σi (∂Y/∂xi)² V[Xi] ,

where the derivatives are evaluated at x0.

When the derivatives of Y are not determined numerically, but estimated by the coefficients from a linear regression of Y on X, one seems to be making a stronger assumption about the linear dependence of Y on X. However, it is generally unknown whether the value of the actual derivative of Y or the value of an average slope is preferred in the variance approximation. In a technique that could be related to linear propagation of error, Wong and Rabitz (1991) look at the principal components of the partial derivative matrix.
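To make the first-order approximation concrete, the sketch below compares the propagation-of-error variance with a Monte Carlo estimate for a small nonlinear model. The model h and the normal input distributions are assumptions chosen for illustration, not part of the paper.

```python
import math
import random
import statistics

# Hypothetical nonlinear model; a real computer code would be expensive.
def h(x1, x2):
    return math.exp(0.3 * x1) * x2

# Assumed input distributions: X1 ~ N(mu1, s1^2), X2 ~ N(mu2, s2^2).
mu1, s1 = 1.0, 0.4
mu2, s2 = 2.0, 0.3

# First-order propagation of error about the mean:
#   V[Y] ~ sum_i (dh/dx_i at mu)^2 * V[X_i]
d1 = 0.3 * math.exp(0.3 * mu1) * mu2  # dh/dx1 evaluated at (mu1, mu2)
d2 = math.exp(0.3 * mu1)              # dh/dx2 evaluated at (mu1, mu2)
v_linear = d1 ** 2 * s1 ** 2 + d2 ** 2 * s2 ** 2

# Monte Carlo estimate of V[Y] for comparison.
random.seed(2)
ys = [h(random.gauss(mu1, s1), random.gauss(mu2, s2)) for _ in range(20_000)]
v_mc = statistics.variance(ys)

print(f"linear propagation of error: {v_linear:.4f}")
print(f"Monte Carlo variance:        {v_mc:.4f}")
```

For this mildly nonlinear model the two estimates are close; making the model more strongly nonlinear, or the input variances larger, widens the gap, which is the caveat raised in the text.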
Although not precisely a variance decomposition, correlation coefficients have been used to indicate relative importance of the inputs. They are mentioned here because they are closely related to linear regression coefficients. In a similar way, rank transformed values of Y and X have been used for rank correlation and rank regression by McKay, Conover and Whiteman (1976), and Iman, Helton and Campbell (1981a, 1981b). Also, Oblow (1978) and Oblow, Pin and Wright (1986) use a technique whereby the capability of calculating derivatives in the model is added using a precompiler called GRESS.

5.2 General Analytical Approximation

The natural extension of linear propagation of error, to add more terms in the Taylor series, makes it difficult to interpret the variance decomposition component-wise for X. That is, the introduction of cross-product terms brings cross-moments into the variance approximation, which makes the approximation no longer separable with respect to the inputs. Nevertheless, one may feel it necessary to use higher order terms in the variance approximation. The adequacy of the approximation to Y might be used as a guide to the adequacy of the variance approximation. However, there is no particular reason to think that one implies the other.

Similarly, the linear approximation of Y used in the regression can be generalized to an arbitrary analytical approximation from which, in theory, the variance of Y can be derived either mathematically or through simulation. Alternatively, one can use a method proposed by Sacks, Welch, Mitchell and Wynn (1989), which looks at the model as a realization of a stochastic process. The difficulties in interpretation and assessing adequacy just mentioned for the higher order Taylor series expansion apply here, too.

5.3 Sampling Methods

This final category of partitioning techniques relies on a sample (usually, some type of random sample) of values of Y whose variability can be partitioned according to the inputs without an apparent assumed functional relation between Y and X. In this category is a Fourier method of Cukier, Levine and Shuler (1978). The procedure says that values of each component of X are to be sampled in a periodic fashion, with different periods for each component. The variability (sum of squares) of the resulting values of Y can be written as a sum of terms corresponding to the different periods, and thus associated with the different components. It is unclear how this relates to linear propagation of error, but it may be just another way to estimate the same quantities. The original Fourier method applies to continuous inputs. It is extended to binary variables by Pierce and Cukier (1981). Again, the relation to linear propagation of error is unclear. Another procedure, suggested by Morris (1991), examines a probability distribution of the partial derivatives of the output arising from particular sampling designs.

Finally, I mention a partition of variance described by Cox (1982). Though not actually a sampling method, the elements of the decomposition are likely to be estimated from sampled data in practice. The identity used involves the variances of conditional expectations of the output given subsets of the inputs. As with general analytical approximation, it is not possible to isolate terms for all the individual components of X.

6 MATHEMATICAL FRAMEWORK

The uncertainty in the output that we focus on is that attributable to the inputs. Specifically, we are ignoring the uncertainty in calculations due to the possibility that the structure of the model might be deficient. We let Y denote the calculated output, which depends on the input vector, X, of length p through the computer model, h(·). Because proper values of the components of X may be unknown or imprecisely known, or because they can only be described stochastically, it seems reasonable to treat X as a random variable and to describe uncertainty about X with a probability function. Uncertainty in the calculation Y is captured by its own probability function, which is what we will study. In summary, then,

    Y = h(X)
    X ~ fX(x), x ∈ R^p
    Y ~ fY(y) .

For now, we will think of fX as known, although in practice, knowledge about it is at best incomplete.

We look to the probability distribution, fY, for answers to the question "What is the uncertainty in Y?" That is to say, we can use the quantiles of the distribution of Y to construct probability intervals. Alternatively, one might use the variance of Y to quantify uncertainty. In either case, under the assumption that fY can be adequately estimated, questions answerable with quantiles or moments are covered. However, as has already been mentioned, the issue of how well fX is known will surely have to be addressed in practice.

We relate questions of importance of inputs to the probability distribution of Y. That is, we will consider questions like "Which variables really contribute to (or affect) the probability distribution of the output?" What it means to be important is defined in somewhat of a backwards way as being the complement of unimportant. We say that a subset of inputs is unimportant if the conditional distribution of the output given the subset is essentially independent of the values of the inputs in the subset. We now examine these ideas in more detail.
Suppose that the vector X of inputs is partitioned into X1 and X2. Corresponding to the partition, we write

    Y = h(X) = h(X1, X2) .

Furthermore, we assume that X1 and X2 are stochastically independent, meaning that

    Xi ~ fi(xi), i = 1, 2
    fX(x) = f1(x1) f2(x2) .

We address the question of the unimportance of X2 by looking at

    fY|x2 = distribution of Y given X2 = x2

as compared to fY. We say that X2 is unimportant if fY and fY|x2 are not substantially different for all values of x2 of interest. Similarly, we say that X1 contains all the important inputs if X2 is unimportant. Of course, the actual way to compare fY and fY|x2 must be determined.

We use the term "screening" to mean an initial process of separating inputs into X1, potentially important ones, and X2, potentially unimportant ones. In the next section, a simple method of partitioning the inputs, following McKay, Beckman, Moore and Picard (1992), will be discussed.

7 A SIMPLE SCREENING HEURISTIC

We now describe a simple, two-step screening process. The first step is to partition X into a set of "important" components, X1, and a set of "unimportant" components, X2. The second step is a partial validation to estimate how the components in X2 actually change fY|x2, to be used to decide if X2 is really unimportant.

7.1 Partitioning the Input Set

We say that X2, a subset of X, is (completely) unimportant when the marginal distribution of Y equals the conditional distribution of Y given X2:

    fY = fY|x2 for all values of x2 .    (1)

A way to get an idea of how closely the equality in (1) holds is through the variance expression (2), which expresses the marginal variance of fY in terms of the conditional mean and variance of fY|x2. The variance of Y can be written as

    V[Y] = E[V[Y | X2]] + V[E[Y | X2]] .    (2)

Equality of the marginal and conditional distributions in (1) implies that the conditional mean and variance are equal to their marginal counterparts for all values of x2. Specifically, the variance (over X2) of the conditional expectation in (2) is zero. It is unlikely, of course, that any (realistic) set of the inputs is completely unimportant. Therefore, the equality between marginal and conditional quantities will be true only in approximation, with the degree of approximation linked to the level of acceptance of the difference between the marginal and conditional distributions of the output, Y.

By inference, if X1, the complement of X2, is (completely, singly) important, the conditional variance of Y given X1 is zero, and the variance of the conditional expectation of Y given X1 is the marginal variance. As before, these relations usually hold only in approximation. Nevertheless, a comparison of terms in (2) will offer a way to look at the degree of importance.

The variance decomposition in (2) suggests a related identity from a one-way analysis of variance, in which the total sum of squares is written as the sum of two components, a "between level" component and a "within level" component. It will be the analysis of variance approach we will use to suggest which components of X belong in X1 and which in X2. What we will do is to use r "replicate" Latin hypercube samples of size k. The same k values of each component of X will appear in each replicate, but the matching within each one will be done independently. The k values will correspond to the k levels in the sum of squares decomposition.

In an LHS as introduced by McKay, Conover and Beckman (1979), when the inputs are continuous and stochastically independent, the range of each component of X is divided into k intervals of equal probability content. Simple modifications can be made to handle discrete inputs (McKay 1988) and dependence (McKay 1988, Stein 1987, Iman and Conover 1982). For a true LHS, a value is selected from each interval according to the conditional distribution of the component on the interval. For this application, it will be sufficient to use the probability midpoint of the interval as the value. The k values for each input are matched (paired) at random to form k input vectors. For the replicates needed in this screening heuristic, r independent matchings of the same values are used to produce the n = k × r input vectors in total.

A design matrix, M, for an LHS is given in (3). Each column contains a random permutation of the k values for an input. Each row of the matrix corresponds to a random match of values for the p inputs used in a computer run.

        [ v11  v12  ...  v1p ]
        [ v21  v22  ...  v2p ]
    M = [  .    .          . ]    (3)
        [ vk1  vk2  ...  vkp ]
A design matrix for any of the r replicates in this application is obtained by randomly and independently permuting the values in every column of M.

After making the necessary n computer runs using the replicated LHS, we begin by looking at the components of X one at a time. Let U denote the component of interest in X, and denote the k values of U by u1, u2, ..., uk. We label the n values of the output as yij to correspond to the ith value ui in the jth replicate (sample). The sum of squares partition corresponding to the input U takes the form

    Σi Σj (yij − ȳ..)² = r Σi (ȳi. − ȳ..)² + Σi Σj (yij − ȳi.)²

    SST = SSB + SSW

where

    ȳi. = (1/r) Σj yij  and  ȳ.. = (1/k) Σi ȳi. .

The statistic we have chosen to use to assess the importance of U is R² = SSB/SST. Although R² is bounded between 0 and 1, the attainment of the bounds is not necessarily a symmetric process. The upper bound is reached if Y depends only on U. In that case, for any fixed value of U, say ui, the value of Y will also be fixed, making SSW equal to 0. As a result, R² will be 1. On the other hand, if Y is completely independent of U, we do not expect SSB (and, therefore, R²) to be 0. We now examine this last point in more detail.

In general, the probability distribution of R² will be unknown. To gain a little insight, however, suppose that we arbitrarily partition a random sample of size n from a normal distribution to form R². (An arbitrary partition would correspond to Y independent of U.) The expected value of R² is (k − 1)/(n − 1), which goes to zero with k/n as n increases. Thus, one might consider (k − 1)/(n − 1) as a working lower bound associated with a completely unimportant input.

Issues that still need to be addressed include the apportionment of n between r and k, the extension of the design and decomposition to more than one component at a time, and the interpretation of values of R².

Whether or not one uses R² or additional methods to develop the sets X1 and X2, there remains the issue of evaluating the partition to see how effective it is in satisfying (1). In fact, iterating between a partition and validation is what one would do in practice. The next section discusses validation.

7.2 Validation of the Partition

Very simply stated, in the validation step we look at X1 and X2 and try to assess how well the partition meets the objective of isolating the important inputs to X1. We propose using a very elementary sequence of steps that begins with a sample design resembling Taguchi's (1986) inner array/outer array.

1. Select a sample, S2, of the X2s and a sample, S1, of the X1s.

2. For each sample element x2 in S2, obtain the sample of Y corresponding to x2 crossed with S1, i.e., runs at (x1, x2) for each x1 in S1.

3. Calculate appropriate statistics for each sample in Step 2, e.g., the sample mean ȳ(x2), the sample variance s²(x2), and the empirical distribution function of Y given x2.

4. Compare the statistics and decide if the difference X2 makes is acceptable.

The differences seen in the statistics in Step 4 are due only to the different values of X2 because the sample values for X1 are the same in each. Hence, the comparisons are reasonable.

The reliability of any validation procedure needs to be evaluated. In this case, S2 may not adequately cover the domain of X2, particularly as the dimension of X2 increases. Merely increasing the size of S2 may not be an acceptable solution if the increase in the number of runs to generate the sample of Ys becomes impossible to accommodate. Inadequate coverage can be due to two reasons. First, regions where the conditional distribution of Y really changes with X2 alone may be missed. Second, there may be regions where the interaction between X2 and X1 in the model has a significant impact on the conditional distribution of Y. Although it has obvious deficiencies, LHS is an appropriate sampling method for generating S2 because it provides marginal stratification for each input in X2, meaning that the individual ranges within the components likely have been sampled adequately. Whether or not interaction between X1 and X2 will be detected is unknown. As an alternative to LHS, one might use an orthogonal array as described by Owen (1992), which provides marginal stratification for all pairs of input variables.

8 APPLICATION

For an application of these methods, the reader is referred to McKay, Beckman, Moore and Picard (1992).
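The screening heuristic of Section 7 can be demonstrated end to end on a hypothetical model: build r replicate LHS designs on probability-midpoint levels, run the model, form the one-way sum of squares partition for each input, and compare R² = SSB/SST with the working floor (k − 1)/(n − 1). Everything here (the model, the uniform inputs, the choices of k and r) is an illustration, not the paper's application.

```python
import random
import statistics

# Hypothetical model: Y depends strongly on X1, weakly on X2, not at all on X3.
def h(x):
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]

k, r, p = 10, 5, 3  # k levels, r replicate LHS designs, p inputs
n = k * r

# Probability-midpoint levels for inputs assumed U(0, 1).
levels = [(i + 0.5) / k for i in range(k)]

random.seed(5)
# perms[j][q] is the random permutation of levels for input q in replicate j.
perms = [[random.sample(range(k), k) for _ in range(p)] for _ in range(r)]
ys = {}  # ys[(input, level, replicate)] -> output value
for j in range(r):
    for row in range(k):
        x = [levels[perms[j][q][row]] for q in range(p)]
        y = h(x)
        for q in range(p):
            ys[(q, perms[j][q][row], j)] = y

def r_squared(q):
    """R^2 = SSB / SST from the one-way sum of squares partition for input q."""
    vals = [[ys[(q, i, j)] for j in range(r)] for i in range(k)]
    grand = statistics.mean(v for row in vals for v in row)
    sst = sum((v - grand) ** 2 for row in vals for v in row)
    ssb = sum(r * (statistics.mean(row) - grand) ** 2 for row in vals)
    return ssb / sst

noise_floor = (k - 1) / (n - 1)  # working lower bound for an inert input
for q in range(p):
    print(f"X{q + 1}: R^2 = {r_squared(q):.3f} (floor ~ {noise_floor:.3f})")
```

On this toy model, R² for X1 sits near 1 while R² for the inert X3 hovers near the floor, so the heuristic would place X1 in the "important" set and flag X3 as a candidate for X2, pending the validation step above.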
ACKNOWLEDGMENTS

This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The author expresses his thanks for many valuable conversations with and help from Richard Beckman, Richard Picard, Lisa Moore, Thomas Bement, Paul Kvam and other members of the Statistics Group at the Los Alamos National Laboratory.

REFERENCES

Bowker, A. H. and Lieberman, G. J. (1972). Engineering Statistics. Prentice-Hall, Englewood Cliffs, New Jersey.

Cox, D. C. (1982). An analytical method for uncertainty analysis of nonlinear output functions, with application to fault-tree analysis. IEEE Transactions on Reliability, R-31(5):265-268.

Cukier, R. I., Levine, H. B., and Shuler, K. E. (1978). Nonlinear sensitivity analysis of multiparameter model systems. Journal of Computational Physics, 26:1-42.

Downing, D. J., Gardner, R. H., and Hoffman, F. O. (1985). An examination of response-surface methodologies for uncertainty analysis in assessment models. Technometrics, 27(2):151-163.

Iman, R. L. and Conover, W. J. (1982). A distribution-free approach to inducing rank correlation among input variables. Communications in Statistics - Simulation and Computation, B11:311-334.

Iman, R. L. and Helton, J. C. (1988). An investigation of uncertainty and sensitivity analysis techniques for computer models. Risk Analysis, 8(1):71-90.

Iman, R. L., Helton, J. C., and Campbell, J. E. (1981a). An approach to sensitivity analysis of computer models: Part I. Introduction, input variable selection and preliminary variable assessment. Journal of Quality Technology, 13(3):174-183.

Iman, R. L., Helton, J. C., and Campbell, J. E. (1981b). An approach to sensitivity analysis of computer models: Part II. Ranking of input variables, response surface validation, distribution effect and technique synopsis. Journal of Quality Technology, 13(4):232-240.

McKay, M. D. (1978). A comparison of some sensitivity analysis techniques. Presented at the ORSA/TIMS annual meeting, New York.

McKay, M. D. (1988). Sensitivity and uncertainty analysis using a statistical sample of input values. In Ronen, Y., editor, Uncertainty Analysis, chapter 4, pages 145-186. CRC Press, Boca Raton, Florida.

McKay, M. D., Beckman, R. J., Moore, L. M., and Picard, R. R. (1992). An alternative view of sensitivity in the analysis of computer codes. In Proceedings of the American Statistical Association Section on Physical and Engineering Sciences, Boston, Massachusetts.

McKay, M. D., Conover, W. J., and Beckman, R. J. (1979). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 21(2):239-245.

McKay, M. D., Conover, W. J., and Whiteman, D. E. (1976). Report on the application of statistical techniques to the analysis of computer codes. Technical Report LA-NUREG-6526-MS, Los Alamos National Laboratory, Los Alamos, NM.

Morris, M. D. (1991). Factorial sampling plans for preliminary computational experiments. Technometrics, 33(2):161-174.

Oblow, E. M. (1978). Sensitivity theory for reactor thermal-hydraulics problems. Nuclear Science and Engineering, 68:322-337.

Oblow, E. M., Pin, F. G., and Wright, R. Q. (1986). Sensitivity analysis using computer calculus: A nuclear waste isolation application. Nuclear Science and Engineering, 94:46-65.

Owen, A. B. (1992). Orthogonal arrays for computer experiments, integration and visualization. Statistica Sinica, 2(2).

Pierce, T. H. and Cukier, R. I. (1981). Global nonlinear sensitivity analysis using Walsh functions. Journal of Computational Physics, 41:427-443.

Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). Design and analysis of computer experiments. Statistical Science, 4(4):409-435.

Saltelli, A. and Homma, T. (1992). Sensitivity analysis for a model output: Performance of black box techniques on three international benchmark exercises. Computational Statistics & Data Analysis, 13:73-94.

Saltelli, A. and Marivoet, J. (1990). Non-parametric statistics in sensitivity analysis for model output: A comparison of selected techniques. Reliability Engineering and System Safety, 28:229-253.

Stein, M. (1987). Large sample properties of simulations using Latin hypercube sampling. Technometrics, 29(2):143-151.

Taguchi, G. (1986). Introduction to Quality Engineering. Kraus International Publications, White Plains, New York.

Tietjen, G. L. (1986). A Topical Dictionary of Statistics. Chapman and Hall, New York.

Wong, C. F. and Rabitz, H. (1991). Sensitivity analysis and principal component analysis in free energy calculations. Journal of Physical Chemistry, 95:9628-9630.
AUTHOR BIOGRAPHY

MICHAEL D. MCKAY has been a technical staff member in the Statistics Group at the Los Alamos National Laboratory for 19 years. His current research interests include uncertainty analysis of computer models and the treatment of categorical variables as random effects in ordinary regression models.