
ISSN 1979-3898

Journal of Theoretical and Computational Studies

**Latin Hypercube Sampling for uncertainty analysis**

N. A. Wahanani, A. Purwaningsih, T. Setiadipura
J. Theor. Comput. Stud. 8 (2009) 0408
Received: July 7, 2009; Accepted for publication: August 30, 2009

Published by
Indonesian Theoretical Physicist Group, http://situs.opi.lipi.go.id/gfti/
Indonesian Computational Society, http://situs.opi.lipi.go.id/mki/

**Journal of Theoretical and Computational Studies**

Journal devoted to theoretical study, computational science and its cross-disciplinary studies
URL: http://situs.jurnal.lipi.go.id/jtcs/

Editors: A. Purwanto (ITS), A. S. Nugroho (BPPT), A. Sopaheluwakan (LabMath), A. Sulaksono (UI), B. E. Gunara (ITB), B. Tambunan (BPPT), F.P. Zen (ITB), H. Alatas (IPB), I.A. Dharmawan (UNPAD), I. Fachrudin (UI), J.M. Tuwankotta (ITB), L.T. Handoko (LIPI), M. Nurhuda (UNIBRAW), M. Sadly (BPPT), M. Satriawan (UGM), P. Nurwantoro (UGM), P. W. Premadi (ITB), R.K. Lestari (ITB), T. Mart (UI), Y. Susilowati (LIPI), Z. Su’ud (ITB)

Honorary Editors: B.S. Brotosiswojo (ITB), M. Barmawi (ITB), M.S. Ardisasmita (BATAN), M.O. Tjia (ITB), P. Anggraita (BATAN), T.H. Liong (ITB)

Guest Editors: H. Zainuddin (UPM), T. Morozumi (Hiroshima), K. Yamamoto (Hiroshima)

Coverage area
1. Theoretical study: employing mathematical models and abstractions of a particular field in an attempt to explain known or predicted phenomena. E.g.: theoretical physics, mathematical physics, biomatter modeling, etc.
2. Computational science: constructing mathematical models and numerical solution techniques, and using computers to analyze and solve natural science, social science and engineering problems. E.g.: numerical simulations, model fitting and data analysis, optimization, etc.
3. Cross-disciplinary studies: inter-disciplinary studies between theoretical study and computational science, including the development of computing tools and apparatus. E.g.: macro development of Matlab, parallel processing, grid infrastructure, etc.

Types of paper
1. Regular: an article containing an original work.
2. Comment: an article responding to another one which has been published before.
3. Review: an article compiling recent knowledge on a particular topic. This type of article is only by invitation.
4. Proceedings: an article which has been presented in a scientific meeting.

Paper Submission
The submitted paper should be written in English using the LaTeX template provided on the web. All communication thereafter should be done only through the online submission page of each paper.

Referees
All submitted papers are subject to a refereeing process by an appropriate referee. The editor has an absolute right to make the final decision on the paper.

Reprints
Electronic reprints, including covers for each article and content pages of the volume, are available from the journal site for free.

Indonesian Theoretical Physicist Group and Indonesian Computational Society. Secretariat Office: c/o Group for Theoretical and Computational Physics, Research Center for Physics LIPI, Kompleks Puspiptek Serpong, Tangerang 15310, Indonesia. http://situs.opi.lipi.go.id/gfti/ http://situs.opi.lipi.go.id/mki/ © 2009 GFTI & MKI. ISSN 1979-3898

J. Theor. Comput. Stud. Volume 8 (2009) 0408

**Latin Hypercube Sampling for uncertainty analysis**

N. A. Wahanani, A. Purwaningsih and T. Setiadipura Pusat Pengembangan Informatika Nuklir BATAN, Kawasan Puspiptek Serpong Gedung 71 (lt. 1), Tangerang 15314, Indonesia

Abstract: The assessment and presentation of the effects of uncertainty are now widely recognized as important parts of analyses for complex systems. Thus, uncertainty analysis is an important tool for a wide range of applications, from nuclear reactor waste analysis to economic calculations. Sampling-based methods are among the most powerful methods for uncertainty analysis. Several steps are needed to run an uncertainty analysis: (i) construction of distributions to characterize subjective uncertainty, (ii) sampling procedures, (iii) propagation of uncertainty through the model, (iv) display of uncertainty in model predictions. In this paper, the development of a Latin Hypercube Sampling (LHS) module at PPIN-BATAN as the sampling procedure in the uncertainty analysis is reported. The LHS module demonstrates the stability of the LHS method in estimating the cumulative distribution function. This development is a platform for further advanced sampling analysis methods, and a step toward building a complete in-house uncertainty analysis tool. A review of the LHS method and its superiority over standard random sampling and importance sampling is also given.
Keywords: sampling-based uncertainty analysis, Latin Hypercube Sampling
E-mail: sintaadi@batan.go.id
Received: July 7, 2009; Accepted for publication: August 30, 2009

1 INTRODUCTION

The assessment and presentation of the effects of uncertainty are now widely recognized as important parts of analyses for complex systems [1]. Thus, uncertainty analysis is an important tool for a wide range of applications, from nuclear reactor waste analysis to economic calculations. Specifically, uncertainty analysis refers to the determination of the uncertainty in analysis results that derives from uncertainty in analysis inputs; it is slightly different from sensitivity analysis, which refers to the determination of the contributions of individual uncertain inputs to the uncertainty in analysis results. The uncertainty under consideration here is often referred to as epistemic uncertainty, which derives from a lack of knowledge about the appropriate value to use for a quantity that is assumed to have a fixed value in the context of a particular analysis. This uncertainty, in the conceptual and computational organization of an analysis, is generally considered to be distinct from aleatory uncertainty, which arises from an inherent randomness in the behaviour of the system under study [2].

The underlying idea is that analysis results y(x) = [y1(x), y2(x), …, ynY(x)] are functions of uncertain analysis inputs x = [x1, x2, …, xnX]. In turn, uncertainty in x results in a corresponding uncertainty in y(x). This leads to two questions: (i) What is the uncertainty in y(x) given the uncertainty in x? (ii) How important are the individual elements of x with respect to the uncertainty in y(x)? The goal of uncertainty analysis is to answer the first question, and the goal of sensitivity analysis is to answer the second. In practice, the implementation of an uncertainty analysis and the implementation of a sensitivity analysis are very closely connected on both a conceptual and a computational level [3]. This problem is illustrated in Fig. 1. There are several steps in a sampling-based uncertainty analysis, as described in the next section; this paper focuses on the sampling phase. The objective of the research is to develop a sampling module using the Latin hypercube method as part of the uncertainty analysis tools, and to demonstrate the stability of the Latin hypercube sampling estimate compared to standard random sampling. This development is a platform for further advanced sampling analysis methods, and a step toward building a complete in-house uncertainty analysis tool. A review of the LHS method and its superiority over standard random sampling and importance sampling is also given.

Figure 1: Illustration of uncertainty analysis of the output which arises from uncertainty of the input [4].

Figure 3: Comparison of SRS (left) and LHS (right) where x1 and x2 have triangular distributions.

2 SAMPLING-BASED UNCERTAINTY ANALYSIS
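The input-to-output mapping underlying this kind of analysis can be made concrete with a small sketch. The following is a hypothetical illustration, not the PPIN-BATAN module: it propagates simple-random-sampling realizations of the paper's example model y(x1, x2) = x1 + x2 through to an empirical cdf, using the normal-distribution parameters of Fig. 2 (x1 with mean 3 and standard deviation 0.1, x2 with mean 5 and standard deviation 0.2).

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of sampling-based uncertainty propagation for y(x1, x2) = x1 + x2,
// with x1 ~ Normal(3, 0.1) and x2 ~ Normal(5, 0.2) as in Fig. 2.
public class SrsPropagation {

    // Draw nS simple-random-sampling realizations of y and return them
    // sorted, so they can be read directly as an empirical cdf estimate.
    static double[] sampleY(int nS, long seed) {
        Random rng = new Random(seed);
        double[] y = new double[nS];
        for (int i = 0; i < nS; i++) {
            double x1 = 3.0 + 0.1 * rng.nextGaussian(); // uncertain input x1
            double x2 = 5.0 + 0.2 * rng.nextGaussian(); // uncertain input x2
            y[i] = x1 + x2;                             // propagate through the model
        }
        Arrays.sort(y);
        return y;
    }

    public static void main(String[] args) {
        double[] y = sampleY(1000, 42L);
        // Display the uncertainty in y as a few empirical-cdf quantiles.
        for (double p : new double[] {0.1, 0.5, 0.9}) {
            int idx = (int) Math.ceil(p * y.length) - 1;
            System.out.printf("P(y <= %.3f) ~ %.1f%n", y[idx], p);
        }
    }
}
```

The sorted sample stands in for the "display of uncertainty" step: plotting index/nS against the sorted values gives the estimated cdf of y.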

Sampling-based methods are based on the use of a probabilistic procedure to select model inputs, and result in a mapping between analysis inputs and analysis outcomes that is then used to produce uncertainty and sensitivity analysis results. Desirable features of Monte Carlo analysis include: (i) extensive sampling from the ranges of the uncertain variables; (ii) uncertainty results that are obtained without the use of surrogate models (e.g., Taylor series in differential analysis and response surfaces in RSM); (iii) extensive modifications of, or manipulations with, the original model are not required (i.e., as is the case for the other techniques); (iv) the extensive sampling from the individual variables facilitates the identification of nonlinearities, thresholds and discontinuities; (v) a variety of sensitivity analysis procedures are available; and (vi) the approach is conceptually simple, widely used, and easy to explain. The major drawback is computational cost. This is especially the case if long-running models are under consideration or probabilities very close to zero or one must be estimated [1].

Figure 2: Comparison of SRS (left) and LHS (right) where x1(100, 3, 0.1) and x2(100, 5, 0.2) have normal distributions.

There are five basic components that underlie the implementation of a sampling-based uncertainty and sensitivity analysis: (i) definition of distributions D1, D2, …, DnX that characterize the uncertainty in the components x1, x2, …, xnX of x; (ii) generation of a sample x1, x2, …, xnS from the x's consistent with the distributions D1, D2, …, DnX; (iii) propagation of the sample through the analysis to produce a mapping [xi, y(xi)], i = 1, 2, …, nS, from analysis inputs to analysis results; (iv) presentation of uncertainty analysis results (i.e., approximations to the distributions of the elements of y constructed from the corresponding elements of y(xi), i = 1, 2, …, nS); and (v) determination of sensitivity analysis results (i.e., exploration of the mapping [xi, y(xi)], i = 1, 2, …, nS) [3]. This paper is limited to the second phase, generation of the sample, particularly using Latin hypercube sampling.

Several types of sampling procedures are used to generate the sample in a Monte Carlo analysis, such as simple random sampling, stratified sampling and Latin hypercube sampling. With random sampling, there is no assurance that a sample element will be generated from any particular subset of the sample space. In particular, important subsets of the sample space with low probability but high consequences are likely to be missed. Stratified sampling has the advantage of forcing the inclusion of specified subsets of the sample space while maintaining the probabilistic character of random sampling, but it carries a burden: the necessity of defining the strata and calculating their probabilities. When the dimensionality of the sample space is high, the determination of strata and strata probabilities becomes a major undertaking. The determinations are further complicated when many analysis outcomes are under consideration; in particular, strata definitions that are appropriate for one analysis outcome may be inappropriate for other analysis outcomes. Latin hypercube sampling is based on a combination of simple random sampling and stratified sampling techniques that leads to statistically significant results with substantially fewer realizations [1].

Figure 4: Comparison of SRS (left) and LHS (right) where x1 and x2 have uniform distributions.

Latin hypercube sampling (LHS) displays properties between simple random sampling, which involves no stratification, and stratified sampling, which stratifies on the sample space. The basis of LHS is a full stratification of the sampled distribution with a random selection inside each stratum; sample values are randomly shuffled among different variables. To generate probability distributions, the LHS was performed using the following steps: (i) assign an inverse cumulative distribution function (cdf) to each input variable; (ii) choose the number of simulations (N) to be performed; (iii) divide the cdf for each variable into N equi-probable intervals; (iv) for each interval, choose a random sample, x, from the inverse cdf and develop a data set for each parameter; (v) randomly select from each parameter input data set to create N model input sets; (vi) use an analytical or numerical model to determine a realization for each model input set. The current development accommodates the uniform, triangular, and normal distributions, which are commonly used in practice [5]. The particular difference between LHS and standard random sampling lies in the third of the above steps.

3 DEVELOPMENT RESULTS AND ANALYSIS
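The development reported here follows the six-step LHS procedure of Sec. 2. As a minimal hypothetical sketch, not the actual PPIN-BATAN module, the core of such a generator could look as follows; the uniform parameters match Fig. 5, while the triangular mode is illustrative.

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

// Minimal Latin hypercube sampler covering steps (i)-(v) of the procedure.
public class Lhs {

    // Steps (iii)-(v) for one variable: split [0,1] into n equi-probable
    // strata, draw one uniform point inside each stratum, map it through the
    // variable's inverse cdf, then shuffle so the pairing with the strata of
    // the other variables is random.
    static double[] lhsColumn(int n, DoubleUnaryOperator invCdf, Random rng) {
        double[] col = new double[n];
        for (int i = 0; i < n; i++) {
            double u = (i + rng.nextDouble()) / n; // one point per stratum
            col[i] = invCdf.applyAsDouble(u);
        }
        for (int i = n - 1; i > 0; i--) {          // Fisher-Yates shuffle
            int j = rng.nextInt(i + 1);
            double t = col[i]; col[i] = col[j]; col[j] = t;
        }
        return col;
    }

    // Step (i): inverse cdf of a uniform distribution on [a, b].
    static DoubleUnaryOperator uniformInv(double a, double b) {
        return u -> a + (b - a) * u;
    }

    // Step (i): inverse cdf of a triangular distribution with minimum a,
    // mode c, and maximum b.
    static DoubleUnaryOperator triangularInv(double a, double c, double b) {
        double fc = (c - a) / (b - a); // cdf value at the mode
        return u -> u < fc
                ? a + Math.sqrt(u * (b - a) * (c - a))
                : b - Math.sqrt((1 - u) * (b - a) * (b - c));
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        int n = 10; // step (ii): the number of simulations
        double[] x1 = lhsColumn(n, uniformInv(2.0, 6.0), rng);
        double[] x2 = lhsColumn(n, triangularInv(2.0, 4.0, 6.0), rng);
        for (int i = 0; i < n; i++) {
            double y = x1[i] + x2[i]; // step (vi): one realization of the model
            System.out.printf("x1=%.3f  x2=%.3f  y=%.3f%n", x1[i], x2[i], y);
        }
    }
}
```

Because every stratum of each input is sampled exactly once, the empirical cdf of y stabilizes with far fewer realizations than simple random sampling, which is the behaviour compared in Figs. 2-5.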

Software implementing both LHS and standard random sampling, based on the above procedure, was developed as a stand-alone application in the Java programming language. The sampling results were tested by producing an estimate of a simple monotonic function, which also demonstrates the advantage of the LHS method over the SRS method. For the normal distribution, the inputs needed are the sample number, mean, and standard deviation. The triangular distribution uses four parameters: the sample number, minimum value, mode, and maximum value. The uniform distribution needs three parameters: the sample number, minimum value, and maximum value. For the purpose of comparing LHS and SRS, a simple monotonic function y(x1, x2) = x1 + x2 is used, where x1 and x2 are assumed to be uncorrelated. The estimates of the cumulative distribution function of y(x1, x2) from both sampling methods are compared. To check the combination of different input distributions, another simple monotonic function, y(x1, x2, x3) = x1 + x2 + x3, is used to accommodate the three distributions involved. The resulting cdf of y is depicted in Fig. 5.

Figure 5: Comparison of SRS (left) and LHS (right) where x1, x2, and x3 are Triangular (100, 2.5, 6), Uniform (100, 2, 6), and Normal (100, 7, 0.1) distributions, respectively.

ACKNOWLEDGMENTS
The authors are very grateful to Dr. Syahril at BATAN for raising the uncertainty problem at the computational division, particularly for the techno-economic problem, and to Mrs. Khairina, chair of the Computational Division, for her support in the development process.

REFERENCES
[1] J.C. Helton and F.J. Davis, Sandia Report SAND2001-0417 (2002).
[2] J.C. Helton, J.D. Johnson, C.J. Sallaberry and C.B. Storlie, Sandia Report SAND2006-2901 (2006).
[3] J.C. Helton, in Sensitivity Analysis of Model Output, K.M. Hanson and F.M. Hemez (eds.), Los Alamos National Laboratory (2005).
[4] S. Tarantola, Global Sensitivity Analysis, JRC-EC (2008).
[5] L.P. Swiler and G.D. Wyss, Sandia Report SAND2004-2439 (2004).
