STOCHASTIC FINITE ELEMENTS
A Spectral Approach
Revised Edition

ROGER G. GHANEM
Johns Hopkins University

POL D. SPANOS
L. B. Ryon Chair in Engineering, Rice University
Mineola, New York
To my Mother and my Father, R.G.G.
Copyright
Copyright © 1991 by Springer-Verlag New York, Inc. All rights reserved.
Bibliographical Note
This Dover edition, first published in 2003, is a newly revised edition of the work originally published by Springer-Verlag New York, Inc., in 1991. A new Preface has been prepared especially for this edition.
To my parents Demetri and Aicaterine, my first mentors in quantitative thinking; to my wife Olympia, my permanent catalyst in substantive living; and to my children Demetri and Evie, a perpetual source of delightful randomness.
P.D.S.
Library of Congress Cataloging-in-Publication Data

Ghanem, Roger.
  Stochastic finite elements: a spectral approach / Roger G. Ghanem, Pol D. Spanos. Rev. ed.
    p. cm.
  Includes index.
  ISBN 0-486-42818-4 (pbk.)
  1. Finite element method. 2. Stochastic processes. I. Spanos, P. D. (Pol D.) II. Title.
  TA347.F5 G56 2003
  620'.001'51535 dc21
  2003046062

Manufactured in the United States of America
Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501
", .. The principal means for ascertaining truthinduction and analogyare based on probabilities; so that the entire system of human knowledge is connected with the theory (of probability) ... " Pierre Simon de Laplace,
A Philosophical Essay on Probability, 1816.
" .. , Nature permits us to calculate only probabilities, yet science has not collapsed." Richard P. Feynman,
QED: The Strange Theory of Light and Matter, 1985.
Preface to the Dover Edition
Since the publication of the first edition of this book in 1991, the field of stochastic finite elements has greatly benefited, in terms of thematic diversity and mathematical foundation, from the concepts of spectral representation of stochastic processes. This, in fact, was anticipated in the epilogue of the first printing of the book. In this context, it has been tempting to proceed with a second edition incorporating certain of these developments. However, it has been felt that the simplicity and tutorial effectiveness of the original version could be compromised by some of the logistic and conceptual details which would have to be incorporated. Therefore, the authors have decided to proceed with this Dover publication, with the hope that the original, and in many respects seminal, concepts would become widely available to a broad audience for a longer period of time. It is hoped that in this manner the many requests, worldwide, that we have had for reprinting will be answered. Thanks are expressed to Dover Publications for accommodating this need and helping disseminate the original contribution towards a rational foundation for uncertainty quantification. Nevertheless, a somewhat exhaustive list of references to the utilization, refinement, and extension of the concepts of this monograph is included as an addendum at the end of the book. It is hoped that this addition will serve as a catalyst for even more accelerated use of spectral approaches in new applications and mathematical formulations.

The Authors
September 2002
Preface

This monograph considers engineering systems with random parameters. Its context, format, and timing are correlated with the intention of accelerating the evolution of the challenging field of Stochastic Finite Elements. The random system parameters are modeled as second order stochastic processes defined by their mean and covariance functions. Relying on the spectral properties of the covariance function, the Karhunen-Loeve expansion is used to represent these processes in terms of a countable set of uncorrelated random variables. Thus, the problem is cast in a finite dimensional setting. Then, various spectral approximations for the stochastic response of the system are obtained based on different criteria. Implementing the concept of Generalized Inverse as defined by the Neumann Expansion leads to an explicit expression for the response process as a multivariate polynomial functional of a set of uncorrelated random variables. Alternatively, the solution process is treated as an element in the Hilbert space of random functions, in which a spectral representation in terms of the Polynomial Chaoses is identified. In this context, the solution process is approximated by its projection onto a finite subspace spanned by these polynomials. The concepts presented in this monograph can be construed as extensions of the spectral formulation of the deterministic finite element method to the space of random functions. These concepts are further elucidated by applying them to problems from the field of structural mechanics. The corresponding results are found in agreement with those obtained by a Monte Carlo simulation solution of the problems.

The authors wish to thank Rice University, the Houston Advanced Research Center (HARC), and the National Center for Supercomputing Applications (NCSA) for the extensive use of their computational facilities during the course of the studies which have led to the conception, development, and integration of the material of this monograph. The financial support, over a period of years, from the National Science Foundation, the Air Force Office of Scientific Research, the National Center for Earthquake Engineering Research at the State University of New York at Buffalo, and Rice University is gratefully acknowledged. Further, the stimulating discussions with a plethora of students and colleagues are greatly appreciated.

R.G. Ghanem
P.D. Spanos
October 1990

Contents

1 INTRODUCTION
  1.1 Motivation
  1.2 Review of Available Techniques
  1.3 The Mathematical Model
  1.4 Outline

2 REPRESENTATION OF STOCHASTIC PROCESSES
  2.1 Preliminary Remarks
  2.2 Review of the Theory
  2.3 Karhunen-Loeve Expansion
      2.3.1 Derivation
      2.3.2 Properties
      2.3.3 Solution of the Integral Equation
  2.4 Homogeneous Chaos
      2.4.1 Preliminary Remarks
      2.4.2 Definitions and Properties
      2.4.3 Construction of the Polynomial Chaos

3 SFEM: Response Representation
  3.1 Preliminary Remarks
  3.2 Deterministic Finite Elements
      3.2.1 Problem Definition
      3.2.2 Variational Approach
      3.2.3 Galerkin Approach
      3.2.4 p-Adaptive Methods, Spectral Methods and Hierarchical Finite Element Bases
  3.3 Stochastic Finite Elements
      3.3.1 Preliminary Remarks
      3.3.2 Monte Carlo Simulation (MCS)
      3.3.3 Perturbation Method
      3.3.4 Neumann Expansion Method
      3.3.5 Improved Neumann Expansion
      3.3.6 Projection on the Homogeneous Chaos
      3.3.7 Geometrical and Variational Extensions

4 SFEM: Response Statistics
  4.1 Preliminary Remarks
  4.2 Statistical Moments
      4.2.1 Moments and Cumulants Equations
      4.2.2 Second Order Statistics
  4.3 Approximation to the Probability Distribution
  4.4 Reliability Theory Background
      4.4.1 Reliability Index and Response Surface Simulation

5 NUMERICAL EXAMPLES
  5.1 Preliminary Remarks
  5.2 One Dimensional Static Problem
      5.2.1 Formulation
      5.2.2 Results
  5.3 Two Dimensional Static Problem
      5.3.1 Formulation
      5.3.2 Results
  5.4 One Dimensional Dynamic Problem
      5.4.1 Description of the Problem
      5.4.2 Implementation
      5.4.3 Results

6 SUMMARY AND CONCLUDING REMARKS

BIBLIOGRAPHY
ADDITIONAL REFERENCES
INDEX
Chapter 1

INTRODUCTION

1.1 Motivation

Randomness can be defined as a lack of pattern or regularity. This feature can be observed in physical realizations of most objects that are defined in a space-time context. Two sources of randomness are generally recognized (Matheron, 1989). The first one is an inherent irregularity in the phenomenon being observed, and the impossibility of an exhaustive deterministic description. Such is the case, for instance, with the Uncertainty Principle of quantum mechanics and the kinetic theory of gas. In this category fall, with a higher level of sophistication, the econometric series whereby the behavior of the financial market is modeled as a stochastic process. This process, in the deterministic sense, can be entirely specified from a complete knowledge of the flow of goods at the smallest levels. The level of uncertainty associated with this class of problems can usually be reduced by recording more observations of the process at hand and by improving the measuring devices through which the process is being observed. The other source of randomness can be related to a generalized lack of knowledge about the processes involved. Another example that is of more direct concern in the present study pertains to the properties of a soil medium. These properties are uniquely defined at a given spatial location within the medium. It is quite impractical, however, to measure them at all points, or even at a relatively large number of points. From a finite number of observations, these properties may be modeled as random variables or, with a higher level of sophistication, as random processes, with the actual medium properties
viewed as a particular realization of these processes.

The functionality of many modern structural systems depends to a large extent on their ability to perform adequately and with a high level of reliability under not absolutely controllable conditions. Such structures include, among others, offshore platforms, nuclear power plants, high-rise buildings, and reentry space vehicles. The randomness in these cases results from such diverse phenomena as soil variability, earthquake-induced ground motion, random ocean waves, thermal and acoustic loadings, and uncertain fatigue effects. These are examples of important structures that should be designed with a negligible probability of failure. In addition to being invaluable for the safe design of such structures, the probabilistic approach provides crucial concepts for the task of code development for general engineering practice. Obviously, the requisite accuracy and hence sophistication of the model depends on the relative importance of the system being investigated. This importance factor is related to such concepts as the generalized cost to be incurred from a potential failure.

With the startling technological growth witnessed during recent decades, the development of accurate and realistic models of engineering systems has become feasible. Paralleling this technology-induced option for more sophisticated models, basic theoretical developments have been underway in relation to the analysis of random processes. These developments were triggered by the need for more accurate models in theoretical statistics and electrical engineering. More will be said in Chapter II concerning that portion of the theory, namely the representation theory for random processes, that is directly related to the theme of this monograph.

Random aspects aside, modern engineering structures are quite complex to analyze. Such complexity results from the intricate interaction among the different parts of the structure, as well as the interaction of the structure with the surrounding medium. Addressing such complexity requires recourse to accurate and efficient numerical algorithms. The finite element method has proven to be well suited for a large class of engineering problems: finite element algorithms have a sound and well developed theoretical basis, and their efficiency has long been proven and tested on a variety of problems. Therefore, it is clear that for any probabilistic modeling of a physical system to be useful, it has to be compatible with the available numerical schemes. The formulation presented herein is, indeed, a natural extension of the basic ideas of the deterministic finite element method to accommodate random functions.

1.2 Review of Available Techniques

From the preceding, it is apparent that deterministic models can be considered as approximations to the corresponding physical problems, in a similar sense that linear models are approximations to the actual nonlinear behavior of these problems. The randomness in the response of a system can be induced either by the input or by the system operator. From a causality perspective, the output cannot exist without the presence of an excitation or some initial conditions on the operator. A parallel probabilistic argument can be made to the extent that the output cannot be random without either the input or the operator being random. Mathematically, the problem can be formulated as

    A u = f ,        (1.1)

where A is a linear stochastic differential operator, u is the random response, and f is the possibly random excitation. The coefficients of A represent the properties of the system under investigation. They can be thought of as random variables or, more accurately and with an increasing level of complexity, as random processes with a specified probability structure. A major factor in this respect is whether the random aspect of the problem is modeled as a random variable or as a random process; the degree of complexity depends again on the extent of sophistication required from the model. Note that equation (1.1) with a deterministic operator A and a random excitation f has been studied extensively (Y.K. Lin, 1967; Lin et al., 1986; Yang and Kapania, 1984). The case of a random input and a deterministic operator has thus been amply studied, and numerous results have been obtained in the field of engineering mechanics. The second case, that of a random operator, is much more complicated, and the underlying mathematical tools are still in the developing stage. This operator-induced randomness is also termed parametric. From an engineering mechanics perspective, the most common stochastic system problem involves a linear differential equation with random coefficients. The case where the operator A is stochastic is considerably more difficult, and only approximate solutions to the problem have been reported in the literature.
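The direct, brute-force treatment of equation (1.1) with a random operator is repeated sampling. The following sketch (added for illustration; the one-dimensional bar, the finite-difference grid, and the independent lognormal element stiffnesses are assumptions, not part of the original text) solves A u = f for many realizations of a random stiffness field and estimates response statistics:

```python
# Monte Carlo treatment of A u = f for a 1-D bar on [0, 1] with random
# stiffness a(x) and unit load, u(0) = u(1) = 0.  All modeling choices
# (grid size, lognormal marginal, independence across elements) are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 20                         # number of elements
h = 1.0 / n
f = np.full(n - 1, h**2)       # unit load scaled by h^2 (interior nodes)

def solve_sample():
    # One realization of the random stiffness field.
    a = rng.lognormal(mean=0.0, sigma=0.1, size=n)
    K = np.zeros((n - 1, n - 1))
    for i in range(1, n):      # assemble the tridiagonal random operator
        K[i - 1, i - 1] = a[i - 1] + a[i]
        if i < n - 1:
            K[i - 1, i] = -a[i]
            K[i, i - 1] = -a[i]
    return np.linalg.solve(K, f)

# Second order statistics of the midspan displacement by repeated sampling.
samples = np.array([solve_sample()[(n - 1) // 2] for _ in range(2000)])
print(samples.mean(), samples.std())
```

For unit stiffness the midspan deflection is 0.125; the sample mean stays near that value while the scatter reflects the input randomness. The cost of this approach, one full solve per realization, is what motivates the approximate methods reviewed next.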
That is, approximate solutions have been sought for equation (1.1) when the coefficients of the operator A possess a prescribed probabilistic structure. The problem in dealing with stochastic equations is twofold. Firstly, the random properties of the system must be modeled adequately as random variables or processes, with a realistic probability distribution, usually as determined by their second order statistics. A good treatment of this modeling phase is presented by Benjamin and Cornell (1970); a compact probabilistic representation has been suggested by Masri and Miller (1982), by Vanmarcke (1983), and by Traina et al. (1986). Secondly, the resulting differential equation must be solved, and response quantities of interest obtained. It is this second stage which is of immediate concern in the present study.

Interest in this class of differential equations has its origins in quantum mechanics, wave propagation, turbulence theory, random eigenvalues, and functional integration. Bharucha-Reid (1959) and Hans (1961) were among the first to investigate the problem from a mathematical point of view. They studied the existence and uniqueness of the solution to random operator equations. Good accounts of these applications are presented by Bharucha-Reid (1968) and by Sobczyk (1985), along with a number of applications.

A rigorous mathematical theory has been developed for the solution process when equation (1.1) is of the Ito type (Ito, 1951). Implicit in the Ito equation is the assumption that the random coefficients associated with the operator are white noise processes. In this case, the solution is a Markov process the probability distribution of which satisfies a Fokker-Planck partial differential equation, from which a complete probability description can be obtained. This equation, however, is in general not amenable to analytical solutions. The relationship between ordinary equations and equations of the Ito type was considered by Wong and Zakai (1965). Kozin (1969) presented a comprehensive review of the stability problem of these equations. The stability of linear stochastic equations with Poisson process coefficients was also investigated (Li and Blankenship, 1986).

A more realistic problem is that of a stochastic operator the random coefficients of which possess smooth sample paths. For this class of problems the Ito calculus breaks down, and no general mathematical theory is available for obtaining exact solutions. Some attempts have been directed at obtaining exact solutions to the problem by functional integration (Hopf, 1952). Hopf suggested the construction of a differential equation for the characteristic functional of the random solution. The method was applied by Hopf to a turbulence problem, and a Fokker-Planck equation was obtained for the characteristic functional of the solution. Later, Lee (1974) applied functional integration to the problem of wave propagation in random media. Kotulski and Sobczyk (1984) developed a scheme to construct the characteristic functional of the stochastic evolution equation and obtained expressions that could be evaluated only under very restrictive conditions. The characteristic functional approaches tend, however, to be of a qualitative and specialized nature, which does not guarantee their validity as a general method of solution.

Along similar lines, the Navier-Stokes equation for flows with high Reynolds numbers was numerically analyzed by Chorin (1973). This solution strategy hinges on an analogy between the diffusive part of the Navier-Stokes equation and random walk. The approach featured a grid-free numerical solution of the governing equations and, through a judicious use of the relationship between diffusion and random walk, permitted the simulation of the process of vorticity generation and dispersal by using computer-generated pseudo-random numbers, a process which would otherwise require a prohibitively fine discretization. Various convergence studies of this method have since been carried out, among others by Roberts, especially in connection with turbulence related problems (Ghoniem and Oppenheim, 1984).

From the above discussion it becomes clear that the current state of knowledge concerning stochastic differential equations precludes exact solutions except in some rare instances. The problem becomes even more intractable when applying the results described above to engineering problems with intricate geometries and boundary conditions and various types of excitations.

The most widely used technique for analyzing random systems in engineering is the perturbation method. This fact is mainly due to the mathematical simplicity of the method. Comprehensive reviews of the current trends for analyzing random systems in engineering are given by Iyengar and Dash (1976), Ibrahim (1987), Shinozuka (1987b), and Benaroya and Rehak (1988). The perturbation scheme consists of expanding all the random quantities around their respective mean values via a Taylor series. The methods of solution range from solving the averaged equations for an equivalent solution, to perturbing the random fields and keeping one or two terms in the expansion, or using a Born approximation for the random fields. Secular terms that cause instability of the approximate solution appear in the higher order terms; thus, computations beyond the first or second terms are not practical. These facts restrict the applicability of the method to problems involving small randomness.
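The first-order perturbation scheme described above can be sketched in a few lines. The following example is added for illustration only; the two-degree-of-freedom system K0, K1, the load f, and the single Gaussian fluctuation xi are made-up assumptions, not taken from the monograph:

```python
# First-order perturbation: expand the response about the mean of the
# random quantity xi ~ N(0, sigma^2), keeping only the linear term:
#   u(xi) ~= u0 + xi * u1,  with K0 u0 = f and K0 u1 = -K1 u0.
# K0, K1, f, sigma are illustrative assumptions.
import numpy as np

K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])   # mean stiffness
K1 = np.array([[1.0,  0.0], [ 0.0, 1.0]])   # sensitivity to the fluctuation
f = np.array([1.0, 1.0])
sigma = 0.05                                 # "small randomness"

u0 = np.linalg.solve(K0, f)                  # zeroth-order (mean) response
u1 = -np.linalg.solve(K0, K1 @ u0)           # first-order correction

mean_u = u0                                  # E[xi] = 0
cov_u = sigma**2 * np.outer(u1, u1)          # first-order covariance

# Check against direct Monte Carlo for the same small sigma.
rng = np.random.default_rng(1)
xs = rng.normal(0.0, sigma, 5000)
mc = np.array([np.linalg.solve(K0 + x * K1, f) for x in xs])
print(mean_u, mc.mean(axis=0))
```

For sigma = 0.05 the first-order moments agree closely with sampling; pushing sigma up quickly degrades the agreement, which is precisely the small-randomness restriction noted in the text.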
Such approaches are, consequently, not very popular in connection with the class of problems treated herein.

Boyce and Goodwin (1964) used the perturbation approach for the solution of the eigenvalue problem of random strings and beams. They used the integral equation approach to invert the governing differential operator. Soong and Bogdanoff (1963) used transfer matrix techniques to investigate the behavior of disordered linear chains. They expressed the frequency response of the chain as a perturbation type expansion in terms of the small deviations of the random variables involved. They concluded that even for small levels of disorder in the system, the higher frequencies showed considerable spread around their mean values. Collins and Thompson (1969) used the perturbation approach to treat the general problem of computing eigenvalue and eigenvector statistics of a system whose parameters are random variables described by their covariance matrix. Hart and Collins (1970) and Hasselman and Hart (1972) extended the work to develop a perturbation scheme compatible with the finite element method. Nakagiri and Hisada (1982) and Hisada and Nakagiri (1985) investigated again the random eigenvalue problem, allowing for random boundary conditions. A second order Taylor series about the mean value was used to represent the eigenvalues, eigenvectors and the random parameters of the system. They concluded that the second order perturbation was too untractable to be of any practical interest in solving real physical problems. They also investigated the stability problem of a column and concluded that stability problems were more sensitive to cross-sectional area uncertainty than frequency problems. Dendrou and Houstis coupled results from an inference statistical analysis of a soil medium (1978a) with the finite element method to solve a soil-structure interaction problem (1978b). They used a first order perturbation scheme which again restricted their approach to rather small levels of random fluctuations. Using the same perturbation type expansion, Liu et al. (1985) introduced a new implementation scheme for the perturbation based finite element method, and considered randomness caused both by material properties and boundary conditions. The perturbation expansion was carried out at a different stage of the finite element assembly process. The resulting equations, however, are identical with those obtained from the more standard method mentioned previously. Later, Liu et al. applied their approach to analyzing random nonlinear and transient problems (1985, 1986, 1988).

All the methods discussed above are based on a perturbation type expansion of the random quantities involved. Their validity is restricted to cases where the random elements exhibit small fluctuations about their mean values, in which case a first order approximation yields fairly good results. Also, a solution to the problem based on the perturbation approach does not provide higher order statistics without a prohibitively large amount of additional analytical and computational effort.

Another, more elegant, method, as witnessed in the available literature, is the hierarchy closure approximation (Bharucha-Reid, 1968). The method is based on expressing joint statistical moments of the output and of the system as functions of lower order moments. Consider, for example, equation (1.1) where the stochastic operator A is split into a deterministic part L and a random part Π,

    (L + Π) u = f .        (1.2)

Solving for u results in

    u = L^{-1} f - L^{-1} Π u .        (1.3)

Note that on averaging equation (1.3), the moments of u depend on those of Π u. To obtain these latter moments, apply the operator Π to equation (1.3), to derive

    Π u = Π L^{-1} f - Π L^{-1} Π u .        (1.4)

Substituting equation (1.4) into equation (1.3) gives

    u = L^{-1} f - L^{-1} Π L^{-1} f + L^{-1} Π L^{-1} Π u ,        (1.5)

and so on. At some point in the substitution, an approximation has to be made of the form

    < (L^{-1} Π)^n u > = < (L^{-1} Π)^n > < u > ,        (1.6)

where <.> denotes the operation of mathematical expectation. The uncoupling in equation (1.6) has no rigorous basis. It is often justified intuitively by a local independence argument. It was suggested by Adomian (1983) that closure at a certain level of the hierarchy is equivalent to the same order of perturbation in a perturbation based solution.
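Continuing the substitution in equations (1.3)-(1.5) indefinitely yields the Neumann series u = Σ_n (-L^{-1} Π)^n L^{-1} f. A small numerical sketch of that iteration follows (the 3 x 3 matrices are made-up assumptions; convergence requires the spectral radius of L^{-1} Π to be below one, which is another way of saying the random part must be "small"):

```python
# Neumann-type iteration behind equations (1.2)-(1.5): with A = L + P
# (P standing in for the random part, one realization of which is fixed
# here), accumulate u = sum_n (-L^{-1} P)^n L^{-1} f.
# L, P, f are illustrative assumptions.
import numpy as np

L = np.diag([4.0, 5.0, 6.0])                            # deterministic part
P = 0.3 * np.array([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])                   # random part (one sample)
f = np.array([1.0, 2.0, 3.0])

Linv = np.linalg.inv(L)
u = Linv @ f                         # n = 0 term
term = u.copy()
for _ in range(50):                  # accumulate (-L^{-1} P)^n L^{-1} f
    term = -Linv @ (P @ term)
    u += term

print(np.allclose(u, np.linalg.solve(L + P, f)))        # prints True
```

Averaging such a series term by term is exactly where the closure approximation (1.6) enters: moments of products like <(L^{-1} Π)^n u> must be uncoupled to obtain a usable hierarchy.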
The difficulties in formulating higher order closure approximations are substantial enough to limit the applicability of the method to cases of small random fluctuations.

Adomian (1964) introduced the concept of stochastic Green's function for the solution of differential equations involving stochastic coefficients. The concept was further developed by Adomian and Malakian (1980) and Adomian (1983), who suggested using a decomposition method for solving nonlinear stochastic differential equations. In this approach, the inverse of the random operator is expanded in a Neumann series (Courant and Hilbert, 1953), and the random resolvent kernel of the problem is computed. The result can be viewed as a series approximation to the random solution process. Closed form solutions were obtained for the statistics of the response. However, difficulties were experienced in including more than two terms in the expansion, even with the assistance of MACSYMA, a symbolic manipulation package; this mathematical complexity handicaps the method when dealing with sizable randomness. The method was applied by Benaroya (1984) and Benaroya and Rehak (1987) for the solution of single-degree-of-freedom structural dynamics problems.

Shinozuka and Nomoto (1980) developed the delta method, again a Neumann expansion based solution, for solving problems with random media. The method was subsequently applied by Shinozuka and Deodatis (1986) for the solution of problems involving plates and beams. Later, Yamazaki et al. (1985) introduced the Monte Carlo Expansion (MCE) method, which constitutes a blend of the Neumann expansion with the Monte Carlo simulation. It is based on the Neumann expansion for the inverse of the stochastic coefficient matrix, whereby digital simulation techniques are used to generate the random matrix. The method was applied to problems with large coefficients of variation, and good agreement was observed with the standard Monte Carlo simulation. However, the method requires simulating the random field several times in order to provide reliable results, and obtaining higher order moments for the solution process is quite cumbersome since it involves averaging products of random matrices.

Vanmarcke and Grigoriu (1983) presented a finite element analysis of a simple shear beam with random rigidity. They implemented a first order approximation which gave acceptable results only for the case of very small fluctuations; good results were obtained for small values of the coefficient of variation of the random elements. However, the very restrictive assumptions concerning the characteristics of the beam make the method of a rather limited interest.

1.3 The Mathematical Model

The class of problems dealt with in this study is not of the conventional engineering kind in that it involves concepts of a rather abstract and mathematical nature. It may seem, at first, that a mathematical abstraction of the problem is gratuitous. It is believed, however, that such an approach is vital for a complete and mature understanding of the problem, as well as for directing any future research aimed at extending the present formulation. It is both necessary and instructive to introduce at this point the mathematical concepts which are used in the sequel. The degree of abstraction is kept to a necessary minimum to avoid obscuring the engineering aspect of the problem. Also, a notation is introduced that simplifies, to some extent, the presentation.

Let (Ω, F, P) denote a complete probability space. The domain D represents the physical space over which the problem is defined. Let x be an element of D, and let θ be an element of Ω. The Hilbert space of functions (Oden, 1979) defined over the domain D, with values on the real line, is denoted by H. Similarly, the space of functions mapping Ω onto the real line is denoted by Θ. Capital letters are used to denote algebraic structures and spaces as well as operators defined on these spaces, with greek letters referring to those operators defined on spaces of random functions. Elements of H and Θ are denoted by roman and greek letters, respectively. The inner products over H and over Θ are defined using the Lebesgue measure and the probability measure, respectively. That is, for any two elements h_i(x) and h_j(x) in H, their inner product ( h_i(x), h_j(x) ) is defined as

    ( h_i(x), h_j(x) ) = ∫_D h_i(x) h_j(x) dx .        (1.7)

Also, given any two elements α(θ) and β(θ) in Θ, their inner product is defined as

    ( α(θ), β(θ) ) = ∫_Ω α(θ) β(θ) dP ,        (1.8)

where dP is the probability measure. Under very general conditions, the integral in equation (1.8) is equivalent to the average of the integrand with respect to the probability measure dP.
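Because the integral in equation (1.8) is an expectation, it can be estimated by sampling. The short sketch below (an illustration added here; the Gaussian measure and the two test functions, the Hermite polynomials ξ and ξ² - 1, are our choices, not the book's) shows the inner product over Θ computed this way, and incidentally exhibits two orthogonal elements of Θ:

```python
# The inner product over Theta, equation (1.8): (alpha, beta) = E[alpha*beta],
# estimated by Monte Carlo under a standard Gaussian measure.  The test
# functions are the first two (probabilists') Hermite polynomials, which
# should come out orthogonal.
import numpy as np

rng = np.random.default_rng(2)
xi = rng.standard_normal(200_000)    # samples of theta under the measure dP

alpha = xi                           # He_1(xi)
beta = xi**2 - 1                     # He_2(xi)

inner = np.mean(alpha * beta)        # estimate of (alpha, beta) -> 0
norm_sq = np.mean(alpha * alpha)     # estimate of (alpha, alpha) -> 1
print(inner, norm_sq)
```

The near-zero inner product illustrates the orthogonality notion introduced next, which underlies the Polynomial Chaos projections of later chapters.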
The physical model under consideration involves a medium whose properties exhibit random spatial fluctuations and which is subjected to a random external excitation. The mathematical representation of this problem involves an operator equation

A(x, θ) u(x, θ) = f(x, θ),  (1.10)

where A(x, θ) is some operator defined on H × Θ. Here, H denotes the Hilbert space of functions defined over the domain D of the physical problem, and Θ denotes the Hilbert space of functions defined on the σ-field of events generated by the physical problem. Any two elements of the Hilbert spaces defined above are said to be orthogonal if their inner product vanishes; in Θ, the inner product is taken with respect to the probability measure dP,

(α(θ), β(θ)) = ∫ α(θ) β(θ) dP.  (1.9)

In equation (1.10), A is assumed to be a differential operator with coefficients exhibiting random fluctuations with respect to one or more of the independent variables, and f(x, θ) is a random process. The aim, then, is to solve for the response u(x, θ) as a function of both its arguments.

A random process may be described as a function defined on the product space D × Ω. Alternatively, if the possible realizations of the random process were numbered continuously on the interval [0, 1], and these numbers were assigned to the variable θ, with range the interval [0, 1], then for a fixed θ = θ* ∈ [0, 1], a_k(x, θ*) is a deterministic function of x: a realization of the process. For a given x, a_k(x, ·) is a random variable, and distribution theory can be used to construct the distribution of the process along the θ dimension. Clearly, for a complete description of the process, the joint distribution at all x ∈ D is required.

The usually limited number of observed realizations of a random process cannot, in general, suggest any definite distribution. However, invoking the central limit theorem, the Gaussian distribution appears to be the most likely candidate for many physical applications; if the process is assumed to be Gaussian, all the finite dimensional distribution functions are also Gaussian.

With no loss of generality, each one of the coefficient processes a_k(x, θ) can be decomposed into a purely deterministic component and a purely random component in the form

a_k(x, θ) = ā_k(x) + α_k(x, θ),  (1.11)

where ā_k(x) is equal to the mathematical expectation of the process a_k(x, θ), and α_k(x, θ) is a zero-mean random process having the same covariance function as the process a_k(x, θ). Equation (1.10) can then be written as

[ L(x) + Π(x, θ) ] u(x, θ) = f(x, θ),  (1.12)

where L(x) is a deterministic differential operator and Π(x, θ) is a differential operator whose coefficients are zero-mean random processes. The random coefficients are restricted to being second order random processes. This is not a severe restriction for practical problems, since most physically measurable processes are of the second order type.

Before a solution to equation (1.12) is sought, the main question remains as to what is meant by a solution to the problem. Obviously, the discussion and comments just made concerning the coefficient processes apply to the solution process as well. A quite general form of the solution process can be expressed as

u = g[ α_k(x, θ), f(x, θ); x ],  (1.13)

where g[.] is some nonlinear functional of its arguments. Clearly, a complete description of the response would involve the prescription of its joint distribution with the various processes appearing in equation (1.13). This information could form the basis for a rational reliability and risk assessment. However, such a task seems to exceed the capability of currently used methods.

A finite-dimensional description of the processes involved is required if the solution is to proceed in a computational setting. Given the infinite dimensional structure of the random processes appearing in equation (1.13), and given the abstract nature of the functional spaces over which random processes are defined, a finite dimensional representation cannot be achieved through partitioning of these spaces as is usually done with the deterministic finite element analysis. A conceptual modification can, though, be introduced. Viewed from this perspective, a random process can be regarded as a curve in either of H or Θ, and an abstraction of the discretization process can be introduced which is mathematically equivalent to a discretization with respect to a spectral measure.
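The need for a finite-dimensional description can be made concrete with a small numerical sketch. The following Python fragment (illustrative only: the one-random-variable coefficient model, grid sizes, and all names are assumptions of this note, not taken from the text) treats a one-dimensional analogue of equation (1.12), solving -(a(x,θ) u')' = f realization by realization and averaging the response.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_bvp(a_iface, f_int, h):
    """Conservative finite-difference solve of -(a u')' = f on (0,1), u(0)=u(1)=0.

    a_iface holds the coefficient at the n+1 cell interfaces, f_int the load
    at the n interior nodes."""
    main = (a_iface[:-1] + a_iface[1:]) / h**2
    off = -a_iface[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f_int)

n = 99                                   # interior nodes; node 49 sits at x = 0.5
h = 1.0 / (n + 1)
x_iface = (np.arange(n + 1) + 0.5) * h   # cell-interface abscissae
f_int = np.ones(n)

# Caricature of eq. (1.11): a(x,theta) = abar(x) + alpha(x,theta), alpha zero mean.
u_samples = np.empty((2000, n))
for k in range(u_samples.shape[0]):
    xi = rng.standard_normal()           # a single random variable, for simplicity
    a_iface = 1.0 + 0.3 * np.tanh(xi) * np.cos(np.pi * x_iface)  # stays positive
    u_samples[k] = solve_bvp(a_iface, f_int, h)

u_mean = u_samples.mean(axis=0)          # Monte Carlo estimate of <u(x, .)>
u_det = solve_bvp(np.ones(n + 1), f_int, h)  # response with a frozen at its mean
```

In line with equation (1.13), each sample of u is a nonlinear functional of the coefficient sample; even this toy version, with a single random variable standing in for a whole process, shows why a denumerable parametrization of a_k(x, θ) is the prerequisite for any systematic computational treatment.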
1.4 Outline

In Chapter I, mathematical concepts useful in the ensuing analysis are presented, and a number of the most widely used techniques for treating the model developed in this chapter, that is, available techniques for addressing problems with stochasticity, are reviewed. In Chapter II, relevant elements of the representation theory for continuous stochastic processes are presented. In this context, a number of spectral representations are introduced which permit the algebraic manipulation of random processes through that of an equivalent discrete set of random variables. These are the Karhunen-Loeve expansion and the Homogeneous Chaos expansion. The theory is reviewed, and the two representations used in the ensuing development of the stochastic finite element method are discussed in detail. Chapter III includes a review of deterministic finite element methods, emphasizing the features that can be extended to the stochastic case; these methods are shown to provide a sound theoretical extension of the deterministic finite element method to problems involving random system properties. Attention is focused on methods based on the spectral representation of stochastic processes as outlined in Chapter II. In Chapter IV, reliability methods in structural engineering are briefly reviewed, and it is shown how the developments of the previous chapters can be used in a number of approaches to compute approximations for the probability density function of the response of the system. In Chapter V, detailed numerical examples are given which demonstrate the usefulness of the spectral approaches introduced in Chapter III. Finally, in Chapter VI, pertinent conclusions and a broad perspective of the problem at hand are presented.

As the title indicates, this monograph is a study in the application of spectral discretizations, taken in the context just explained, to problems of mechanics treatable by the finite element method. This is, indeed, the main impetus of the book. In a more applied sense, it is sought to describe random processes in such a manner that they can be implemented in a finite element formulation of the physical problem.

CHAPTER 2. REPRESENTATION OF STOCHASTIC PROCESSES

2.1 Preliminary Remarks

Similarly to the case of the deterministic finite element method, whereby functions are represented by a denumerable set of parameters consisting of the values of the function and its derivatives at the nodal points, the problem encountered in the stochastic case is that of representing a random process by a denumerable set of random variables, thereby discretizing the process. In the deterministic case, discretization of the domain has a physical appeal; the domain in the stochastic case does not have a physical meaning that permits a sensible discretization. In this context, the abstract Hilbert space foundation of the finite element method becomes useful, as it can be extended to deal with random functions.

The requisite mathematical rigor for a complete understanding of the theory of representation for stochastic processes is beyond the scope of this book. However, a brief review of the theory will provide more insight into the problem at hand and a broad perspective of the relevant mathematical concepts. The heuristic local averaging representation and the more rigorous spectral representation are reviewed in the next section.
2.2 Review of the Theory

In the context of the finite element analysis of a given system, the random processes involved are substituted by random variables that are so chosen as to coincide with some local average of the process over each element. The finite element analysis of physical systems usually requires recourse to curved elements and nonuniform spacing of the nodal points, the shape and size of the finite element mesh being dictated by the stress distribution within the structure and its irregular geometry. This stress distribution is usually independent of the random behavior of the process. Such local averaging is rather heuristic, yet it is probably the most widely used method for representing random processes in finite element analyses.

Local averages are usually of two kinds: the straight average over a subset of the indexing set, whereby the process over the subset is replaced by its value at some point in the subset, and the weighted average over each such subset. It is noted that local averaging parallels pointwise approximations of deterministic functions. Stretching mathematical rigor further, a random process can be represented by its values at a discrete set of points in its domain of definition; the process can then be approximated as closely as desired by restricting the index to a set dense in the indexing set. It is obvious that a relatively large number of random variables is required to represent a random process in this fashion, and the associated computational problem is, in general, of prohibitive dimensions, since the dimension of the problem, as reflected by the number of random variables used to represent the underlying processes, is quite large.

The size of the individual subsets used for the averaging process does in general depend on the frequency content of the process: the broader the frequency content of the process, the smaller the region over which the process shows a definite pattern, and thus the smaller the size of the necessary subset to meet a certain precision criterion. In the limit, as the size of each subset becomes vanishingly small, the representations resulting from the two approaches should converge to the exact process. It is expected, however, that the result would depend to a notable extent on the averaging method used: the first approach smoothes the random process, whereas the second approach introduces additional irregularities. This suggests that the two approaches provide lower and upper bounds, respectively, to the exact representation. This problem with local averaging is particularly crucial in the context of the finite element analysis of structures with curved boundaries and irregular geometry. This fact necessitates either the use of an independent mesh for the representation of the random material properties, or the use of a mesh size such that both stresses and material properties are adequately and consistently represented.

Alternatively to the heuristic arguments associated with the local averaging approach, a rigorous exposition of the basic concepts of the theory of representation for random processes can be formulated (Parzen, 1959). This theory is a quite rich and mature mathematical subject, and has had its origin in the need for more sophisticated models in applied statistics. The development of the theory parallels that of the modern theory of random processes (Gel'fand and Vilenkin, 1964). Most of the related results have been derived for the class of second order processes. A continuous random process is formally defined as an indexed set of random variables, the index belonging to some continuous uncountable set.

Perhaps the most important result is the spectral representation, which, in its most general form, can be stated as

w(x, θ) = ∫ g(x) dμ(θ),  (2.1)

with

C_ww(x1, x2) = ∫∫ g(x1) g(x2) ⟨ dμ1(θ) dμ2(θ) ⟩.  (2.2)

In equation (2.1), μ(θ) is an orthogonal set function, also termed an orthogonal stochastic measure, defined on the σ-field W of random events, and ⟨.⟩ denotes mathematical expectation. An important specialization of the spectral decomposition occurs if the process w(x, θ) is wide-sense stationary. In this case, equation (2.1) can be shown to reduce to the Wiener-Khintchine relation (Yaglom, 1962), and the following equations hold:

w(x, θ) = ∫_{-∞}^{+∞} e^{i x^T ω} dμ(ω, θ)  (2.3)

and

C_ww(x1, x2) = ∫_{-∞}^{+∞} e^{i (x1 - x2)^T ω} S(ω) dω,  (2.4)

where ω is the usual wave number vector, S(ω) is the spectral density of the stationary process, and the symbol T denotes vector transposition.

It is then clear how the foregoing discussion for random variables can be extended to the case of random processes. In other words, w(x, θ) can be expanded in terms of a denumerable set of orthogonal random variables in the form

w(x, θ) = Σ_{i=1}^{∞} μ_i(θ) g_i(x),  (2.5)

where {μ_i(θ)} is a set of orthogonal random variables and the g_i(x) are deterministic functions. Note that since equation (2.5) constitutes a representation of the random process in terms of a denumerable set of random variables, it may be regarded as an abstract discretization of the random process. Further, it is important to note that this equation can be viewed as a representation of the process w(x, θ) as a curve in the Hilbert space spanned by the set {ξ_i(θ)}, namely random variables defined on the σ-field of random events.

The representations discussed thus far can be thought of as linear operators or filters acting on processes with independent increments (Doob, 1953). However, these concepts can be generalized to allow for the representation of nonlinear functionals of the orthogonal stochastic measures dμ. The theory of nonlinear functionals was developed by Volterra (1913), who generalized the Taylor expansion of functions to the case of functionals. It was Wiener who first applied Volterra's ideas to stochastic analysis. Using the ideas he had developed on Differential Spaces (1923) and Homogeneous Chaos (1938), Wiener used the Wiener process, which is by definition Gaussian, as a basis for expanding nonlinear functionals, and developed what is now known as the Homogeneous Chaos. Based on Wiener's work, Cameron and Martin (1947) developed the Fourier-Hermite expansion, which is a Fourier-type expansion for nonlinear functionals. The resulting expansion, the Wiener-Hermite expansion, was applied by a number of researchers to a large variety of problems; their applications, however, have been restricted to randomly excited deterministic systems. Subsequent researchers attempted to generalize Wiener's work to accommodate functionals of other processes (Ogura, 1972; Segall and Kailath, 1976). More recent applications of the Volterra and Wiener-Hermite expansions can be found in the field of time series analysis: bilinear time series models (Rao and Gabr, 1984) and state-dependent models (SDM) (Priestley, 1988) have been shown to be closely related to the Volterra series expansion (Brockett, 1976), and are based on replacing the infinite-dimensional process by one with finite memory. An application oriented treatment of the Volterra-Wiener theory for nonlinear systems can be found in the monographs by Schetzen (1980) and Rugh (1981). Note that in contrast to the well established linear theory, the nonlinear theory is quite recent and much more intricate to deal with; it does, however, hold the promise for substantial extensions of both theoretical and applied nature.

The preceding representations have had a strong impact on the subsequent development of the theory of random processes. The major conceptual difficulty, from the viewpoint of the class of problems considered here, is largely attributed to the fact that all of these representations involve differentials of random functions, and are therefore set in an infinite dimensional space. They involve the treatment of functions defined on abstract measure spaces that have limited physical, intuitive support, and they are not readily amenable to computational algorithms. This is one of the major difficulties associated with the numerical incorporation of random processes in finite element analyses. The most widely used method for dealing with them, the Monte Carlo simulation, consists of sampling these functionals at randomly chosen points of this σ-field in a random, collocation-like, scheme. Obviously, a quite large number of points needs to be sampled if a good approximation is to be achieved.

An alternative formulation of the spectral representation, and one which is extensively used in the sequel, is the Karhunen-Loeve expansion, whereby a random process w(x, θ) is expressed as a direct sum of orthogonal projections in a Hilbert space, the magnitudes of the projections on successive basis vectors being proportional to the corresponding eigenvalues of the covariance function associated with the process w(x, θ). Given the crucial role that the Karhunen-Loeve expansion has in relation to the methods discussed in this monograph, it will be treated in greater detail in section (2.3).
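The Wiener-Khintchine pair (2.3)-(2.4) lends itself to a direct numerical check. The sketch below (illustrative: the choice of spectral density, frequency grid, and sample count are assumptions of this note, not of the text) synthesizes a stationary process from an orthogonal-increment sum over frequency, a discrete analogue of dμ(ω, θ), and compares the sample covariance with the kernel e^{-c|x1-x2|} whose spectral density is c/(π(c² + ω²)).

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1.0                                       # inverse correlation length
S = lambda w: c / (np.pi * (c**2 + w**2))     # spectral density of exp(-c|tau|)

K, wmax = 2000, 60.0                          # frequency discretization
w = (np.arange(K) + 0.5) * (wmax / K)
amp = np.sqrt(2.0 * S(w) * (wmax / K))        # one-sided grid covers +/- omega

x = np.linspace(0.0, 4.0, 81)
Cmat, Smat = np.cos(np.outer(x, w)), np.sin(np.outer(x, w))

# w(x,theta) ~ sum_k amp_k [ xi_k cos(w_k x) + eta_k sin(w_k x) ], xi, eta iid N(0,1):
# orthogonal random increments, so the ensemble covariance is the W-K integral (2.4)
nsamp = 5000
paths = Cmat @ (amp[:, None] * rng.standard_normal((K, nsamp))) \
      + Smat @ (amp[:, None] * rng.standard_normal((K, nsamp)))

C_est = paths @ paths.T / nsamp               # sample covariance, (81, 81)
C_exact = np.exp(-c * np.abs(np.subtract.outer(x, x)))
```

The agreement improves as the frequency grid refines and the sample count grows; the residual discrepancy here combines the truncation of the frequency axis at wmax with ordinary Monte Carlo scatter.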
2.3 Karhunen-Loeve Expansion

Let w(x, θ) be a random process, a function of the position vector x defined over the domain D, with θ belonging to the space of random events Ω. Let w̄(x) denote the expected value of w(x, θ) over all possible realizations of the process, and let C(x1, x2) denote its covariance function. By definition of the covariance function, it is bounded, symmetric, and positive definite. Thus, it has the spectral decomposition (Courant and Hilbert, 1953)

C(x1, x2) = Σ_{n=0}^{∞} λn fn(x1) fn(x2),  (2.6)

where λn and fn(x) are the eigenvalue and the eigenfunction of the covariance kernel. That is, they are the solution to the integral equation

∫_D C(x1, x2) fn(x2) dx2 = λn fn(x1).  (2.7)

Due to the symmetry and the positive definiteness of the covariance kernel (Loeve, 1977), its eigenfunctions are orthogonal and form a complete set. They can be normalized according to the following criterion

∫_D fn(x) fm(x) dx = δnm,  (2.8)

where δnm is the Kronecker delta. Clearly, w(x, θ) can be decomposed into a purely deterministic component and a purely random component in the form

w(x, θ) = w̄(x) + α(x, θ),  (2.9)

where α(x, θ) is a process with zero mean and covariance function C(x1, x2). The process α(x, θ) can be expanded in terms of the eigenfunctions fn(x) as

α(x, θ) = Σ_{n=0}^{∞} ξn(θ) √λn fn(x),  (2.10)

where {ξn(θ)} is a set of random variables to be determined, λn is assumed to be a constant, and fn(x) is an orthonormal set of deterministic functions. Then, the random process w(x, θ) can be written as

w(x, θ) = w̄(x) + Σ_{n=0}^{∞} ξn(θ) √λn fn(x).  (2.11)

Second order properties of the random variables ξn can be determined by multiplying both sides of equation (2.10) by α(x2, θ) and taking the expectation on both sides. Specifically, by definition of the covariance function, it is found that

⟨α(x1, θ) α(x2, θ)⟩ = C(x1, x2) = Σ_{n=0}^{∞} Σ_{m=0}^{∞} ⟨ξn(θ) ξm(θ)⟩ √(λn λm) fn(x1) fm(x2).  (2.12)

Then, multiplying both sides of equation (2.12) by fk(x2), integrating over the domain D, and making use of the orthogonality of the eigenfunctions, yields

λk fk(x1) = Σ_{n=0}^{∞} ⟨ξn(θ) ξk(θ)⟩ √(λn λk) fn(x1).  (2.13)

Multiplying once more by fl(x1) and integrating over D gives

⟨ξk(θ) ξl(θ)⟩ √(λk λl) = λk δkl.  (2.15)

Equation (2.15) can be rearranged to give

⟨ξk(θ) ξl(θ)⟩ = δkl.  (2.16)

Thus, the random process w(x, θ) can be written as

w(x, θ) = w̄(x) + Σ_{n=0}^{∞} ξn(θ) √λn fn(x),  (2.17)

where

⟨ξn(θ)⟩ = 0,  ⟨ξn(θ) ξm(θ)⟩ = δnm,  (2.18)

and λn, fn(x) are the solution to the integral equation (2.7). The expansion was derived independently by a number of investigators (Karhunen, 1947; Kac and Siegert, 1947; Loeve, 1948).
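The eigenpairs of equation (2.7) can be approximated by a quadrature (Nystrom-type) discretization, a standard numerical device that is not part of the text's development. The Python sketch below (kernel, domain, and grid sizes are choices of this note) symmetrizes the discretized operator with the square roots of the quadrature weights, then checks the orthonormality criterion (2.8) and the Mercer-type decomposition (2.6) on the grid for the exponential kernel treated later in this chapter.

```python
import numpy as np

a, b = 0.5, 1.0                                # domain [-a, a], correlation length
C = lambda x1, x2: np.exp(-np.abs(x1 - x2) / b)

m = 200
nodes, wts = np.polynomial.legendre.leggauss(m)
x, w = a * nodes, a * wts                      # Gauss rule mapped to [-a, a]
K = C(x[:, None], x[None, :])

# Nystrom: diag(sqrt(w)) K diag(sqrt(w)) is symmetric and shares the eigenvalues
# of the quadrature-discretized integral operator K diag(w)
sqw = np.sqrt(w)
lam, v = np.linalg.eigh(sqw[:, None] * K * sqw[None, :])
lam, v = lam[::-1], v[:, ::-1]                 # descending eigenvalues
f = v / sqw[:, None]                           # eigenfunction values at the nodes

gram = (f * w[:, None]).T @ f                  # discrete version of eq. (2.8)
C20 = (f[:, :20] * lam[:20]) @ f[:, :20].T     # 20-term sum of eq. (2.6)
```

The eigenvalue sum equals the quadrature approximation of the trace, here exactly 2a because C(x, x) = 1, and the truncated bilinear sum converges to the kernel as more terms are kept.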
An explicit expression for ξn(θ) can be obtained by multiplying equation (2.10) by fn(x) and integrating over D, which gives

ξn(θ) = (1/√λn) ∫_D α(x, θ) fn(x) dx.  (2.19)

Equation (2.19) is an expression for the congruence that maps the Hilbert space spanned by the functions fn(x) to the Hilbert space spanned by the random process, or, equivalently, to the space spanned by the set of random variables {ξn(θ)}. It is this congruence, along with the covariance function of the process, that determines uniquely the random process w(x, θ). Viewed from a Reproducing Kernel Hilbert Space (RKHS) point of view (Aronszajn, 1950; Parzen, 1959), it can be shown that if a function f can be represented in terms of linear operations on the family {C(., x2), x2 ∈ Z}, then f belongs to the RKHS corresponding to the kernel C(x1, x2), and the congruence between the two Hilbert spaces may be expressed in terms of an orthogonal family spanning this RKHS by means of the same linear operations used to represent f in terms of {C(., x2), x2 ∈ Z}.

Truncating the series (2.17) at the M-th term, the process w(x, θ) can be approximated in a convergent series of the form

w(x, θ) = w̄(x) + Σ_{n=0}^{M} ξn(θ) √λn fn(x),  (2.20)

where ξn(θ) and fn(x) are given by equations (2.19) and (2.7), respectively. Observe the similarity of equations (2.20) and (2.5). Equation (2.20) can be used in a numerical simulation scheme to obtain numerical realizations of the random process. It is optimal in the Fourier sense, as it minimizes the mean square error resulting from truncation after a finite number of terms. In fact, this simulation procedure is used in conjunction with the Monte Carlo method for one of the illustrative examples in Chapter IV.

It is worth noting at this point that the Karhunen-Loeve expansion was independently derived in connection with stochastic turbulence problems (Lumley, 1970). In that context, the associated eigenfunctions can be identified with the characteristic eddies of the turbulence field. The expansion is also used extensively in the fields of detection, estimation, pattern recognition, and image processing as an efficient tool to store random processes (Devijver and Kittler, 1982).

It is well known from functional analysis that the steeper a bilinear form decays to zero as a function of one of its arguments, the more terms are needed in its spectral representation in order to reach a preset accuracy. Noting that the Fourier transform operator is a spectral representation, it may be concluded that the faster the autocorrelation function tends to zero, the broader is the corresponding spectral density, and the greater the number of requisite terms to represent the underlying random process by the Karhunen-Loeve expansion. In the same context, it is reminded that a necessary and sufficient condition for a process to have a finite dimensional Markov realization is that its spectrum be rational (Kree and Soize, 1986). For the special case of a random process possessing a rational spectrum, the integral eigenvalue problem can be replaced by an equivalent differential equation that is easier to solve (Van Trees, 1968). Further, note that analytical solutions for the integral equation (2.7) are obtainable for some quite important and practical forms of the kernel C(x1, x2) (Juncosa, 1945; Slepian and Pollak, 1961; Van Trees, 1968).

It may seem that any complete set of functions could be used in lieu of the eigenfunctions of the covariance kernel in the expansion (2.11). However, it will now be shown that the Karhunen-Loeve expansion as described above has some desirable properties that make it a preferable choice for some of the objectives of the present approach.
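Before turning to these properties, the simulation use of equations (2.19) and (2.20) noted above can be sketched concretely. In the Python fragment below (the quadrature-based eigensolve, grid sizes, and sample counts are assumptions of this note), uncorrelated unit-variance coordinates ξn are drawn and superposed on the weighted eigenfunctions of the exponential kernel; the sample covariance of the resulting paths is then compared against the M-term covariance Σ λn fn(x1) fn(x2).

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, M = 0.5, 1.0, 10                  # domain [-a, a], correlation length, M terms
C = lambda x1, x2: np.exp(-np.abs(x1 - x2) / b)

# eigenpairs of the covariance kernel on a Gauss grid (cf. eq. 2.7)
m = 150
nodes, wts = np.polynomial.legendre.leggauss(m)
x, w = a * nodes, a * wts
sqw = np.sqrt(w)
lam, v = np.linalg.eigh(sqw[:, None] * C(x[:, None], x[None, :]) * sqw[None, :])
lam = lam[::-1][:M]
f = (v[:, ::-1] / sqw[:, None])[:, :M]  # top-M eigenfunctions at the nodes

# truncated KL, eq. (2.20): w_M(x,theta) = sum_n xi_n(theta) sqrt(lam_n) f_n(x)
nsamp = 20000
xi = rng.standard_normal((nsamp, M))    # uncorrelated, zero mean, unit variance
paths = xi @ (np.sqrt(lam)[:, None] * f.T)

C_est = paths.T @ paths / nsamp         # sample covariance of the simulated paths
C_M = (f * lam) @ f.T                   # exact covariance of the M-term truncation
C_full = C(x[:, None], x[None, :])
```

With Gaussian ξn, the simulated paths are themselves Gaussian; the gap between C_M and the full kernel is the truncation error governed by the discarded eigenvalues, while the gap between C_est and C_M is ordinary Monte Carlo scatter.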
.23) for ~m(e) back into equation (2. it is interesting to note that some investigators (e. X2) hm(Xl) hm(X2) dXl dX2· (2.f dx = m=M+l t iD iD Rww(X1. then.27) = _2_ Am JD r k Rww(X1' X2) hi(X2) dX2 (2.Y.25) Am fm(X1) The problem. C\ 0/ . Lawrence.3. Then w(x. To show the "only if" part._J'.11). the meansquare error E].X2) dx  hm(X1) hm(X2) dX1 dX2 D In the context of this last theorem. e) has the KarhunenLoeve decomposition given by equation (2.23) which is satisfied when dX2 = 0 (2. equation (2.1'1/ [uP}r ~I))\ l\jVj u. e» hn(x) hm(X1) hn(X2) The random variables appearing in an expansion of the kind gwen by equation (2. X2) hm(X1) hn(xz) dX1 dX2 .o/I yl l' Expansion of Gaussian Processes . KARHUNENLOEVE EXPANSION 23 equation (2. nfc/'/d ' u? D i::» .28) where use is made of the orthogonality property of the set hn(x).X2).26) with respect to hi(X) equal to zero. It is obvious that such an expansion cannot form a basis for the representation of random o I processes.10) are orthonormal if and only if the orthonormal [unctions {fn(x)} and the constants {An} are respectively the eiqenjunciions and the eigenvalues of the couaruince kernel as gwen by equation (2.24) n=J'vl+1 . gives and setting the result Eivl = ~ An ~n( e) hn(x) .30) J~ E~.f.11) with orthogonal random variables and orthogonal deterministic functions that do not satisfy equation (2. Substituting equation (2..22). (2.12) can be used with <~n(e)~m(e» = omn to obtain 00 L m=M+1 iD iD L hm(x) n=M+1 C(X1' X2) = LAn n=O fn(X1) fn(x2) over D gives An fn(X1) 0 (2. In other words.22 CHAPTER 2. II.22) Differentiating equation (2..29) Rww(X1.j Am [ iD hm(x)hm(x) 1 J. the solution minimizes the functional given by the equation F[hdx)] m=M+l f lr lr D Rww(X1.21) at the kIth term results in an error 00 equal to (2.24) over D and using the orthonormality {hi(x)} yields of the set ~ n=O Omn (2. 
e) S(X2 dX1 dX2] L m=lvI+1 <S(X1 00 .f can be written as Uniqueness of the Expansion If kiD 00 (2.8).e) be a Gaussian process with covariance function C(X1.17) with ~ . is to minimize JD E11subject to the orthonormality of the functions hn(x). 1987) have used an expansion of the kind given by equation (2.26) LetJU(x.g. . REPRESENTATION Truncating OF STOCHASTIC PROCESSES EivI 2. e) hm(x) dx throughout gives (2.21) by hm(x) ~m(e) and integrating w(x.8). Multiplying both sides by fm(X2) and integrating 00 Integrating equation (2. The "if" part is an immediate consequence of equation (2. n=!vI+1 Multiplying equation (2.
1977) that for Gaussian expansion is almost surely convergent. 5._. These properties.\ ~ r1 ~. 1957)." expansion ~ .34) In addition to the meansquare error minimizing property. This fact simplifies the ensuing analysis considerably in Of special interest in engineering applications is the class of onedimensional random processes that can be realized as the stationary output of a linear filter to white noise excitation. <6(8) . Specifically. there correspond linearly independent eigenfunctions.. Loosely speaking. REPRESENTATION OF STOCHASTIC PROCESSES 2.35) (Kree and Soize. Being an autocovariance function. Specifically.35) where N(.3.31 ) 2. The eigenvalues are all positive real numbers. at most a finite number of (2. however. The kernel C(Xl.X2) admits of the following uniformly convergent where the summation extends into sets of two elements. Of these.24 CHAPTER 2. 3.3. and partition. The theory underlying this kind of equations has been extensively investigated and is well documented in a number of monographs (Mikhlin. These properties are independent of the stochastic nature of the process involved.3 . any subset of ~i(e) is jointly Gaussian. the kernel C(Xl' X2) is bounded. For each eigenvalue Ak.. . the KarhunenLoeve Other Properties H~l = k=l L Ak 00 fk(Xl) fk(X2) (2. Some important consequences derive from this property. it can processes. 4. The set fi(X) of eigenfunctions is orthogonal and complete. 1986). the KarhunenLoeve expansion has some additional desirable properties._. symmetric. which allows the expansion to be applied to a wide range of processes including nonstationary and multidimensional processes.1. Since these random variables are uncorrelated. the Markovian property of a process implies that the effect of the infinite past <>: . their Gaussian property implies their independence.32) over all the partitions of the set {~i(8) the product is over all such sets in a given be shown (Loeve. 
the minimum representation entropy property is worth mentioning. These processes have a spectral density of the form (2.) are polynomial operators of order nand d respectively. 1.~~ii~i. A detailed study of the properties of the KarhunenLoeve expansion is given by Devijver and Kittler (1982). and positive definite.X2) expansion C(Xl.~~YR~~~~ The ~of Solution of the Integral E~ _.33). are of no relevance to the present study and will not be discussed any further. KARHUNENLOEVE EXPANSION 25 and the the random variables ~i(8) forming a Gaussian vector.33) is a homogeneous Fredholm integral equation of the second kind. Rational Spectra ~~~~  1·~) ~ 0~ <2. Equation (2.33) where C(XL X2) is an auto covariance function. . There are at most a countably infinite set of eigenvalues. The KarhunenLoeve expansion of a process was derived based on the preceding analytical properties of its covariance function. That is.\k the KarhunenLoeve on the . 6n+l(8» and o (2.J. Furthermore. that it guarantees a number of properties for the eigenfunctions eigenvalues that are solution to equation (2.:bility to solve the integral equation of the form (2. The interest in this class of processes stems from the fact that a necessary and sufficient condition for a process to be realizable as a finite dimensional Markovian process is that its spectral density function be of the form expressed by equation (2.) and D(.
Loosely speaking, the Markovian property of a process implies that the effect of the infinite past on the present is negligible; that is, the process has a finite memory. Taking into consideration the one-dimensional form of equation (2.4), and substituting for S(ω) from equation (2.35), the eigenfunctions and eigenvalues of the covariance function are the solutions to the following integral equation (Van Trees, 1968)

∫_D f(x2) ∫_{-∞}^{+∞} e^{iω(x1 - x2)} [ N(ω²) / D(ω²) ] dω dx2 = λ f(x1).  (2.36)

When the domain D covers the whole real line, the above equation is equivalent to the Wiener-Hopf integral equation, the solution of which may be found explicitly (Paley and Wiener, 1934; Noble, 1958). The case where D is finite, however, is more relevant to the context of the monograph, and the solution of the associated integral equation is next detailed. Differentiating equation (2.36) twice with respect to x1 is equivalent to multiplying the integrand by −ω². Thus, applying the differential operator D[−d²/dx1²] to equation (2.36), and noting that

∫_{-∞}^{+∞} e^{iω(x1 - x2)} dω = 2π δ(x1 − x2),

where δ(.) denotes the Dirac delta function, yields the homogeneous differential equation

λ D[−d²/dx1²] f(x1) = 2π N[−d²/dx1²] f(x1).

This equation may be solved in terms of the parameter λ and of 2d arbitrary constants, which are calculated by back-substituting the resulting solution into equation (2.36). Note, parenthetically, that explicit expressions for the transcendental characteristic equation associated with this differential equation, for a number of kernel functions, are given in Youla (1957).

In the remainder of this section, the preceding treatment is applied to the important kernel representing the first order Markovian process. This kernel is given by the equation

C(x1, x2) = e^{−c |x1 − x2|},  c = 1/b,  (2.37)

where b is a parameter with the same units as x and is often termed the correlation length, since it reflects the rate at which the correlation decays between two points of the process. This kernel has been used extensively to model processes in a variety of fields (Yaglom, 1962). Further, C(x1, x2) can be made rapidly attenuating versus |x1 − x2| by selecting a suitable value of the parameter b. It is noted, parenthetically, that higher order Markovian kernels may be expressed as linear combinations of first order ones.

It is assumed that the process is defined over the one-dimensional interval [−a, a]. Then equation (2.33) becomes

∫_{−a}^{+a} e^{−c |x1 − x2|} f(x2) dx2 = λ f(x1),  (2.38)

which may be split as

λ f(x1) = ∫_{−a}^{x1} e^{−c (x1 − x2)} f(x2) dx2 + ∫_{x1}^{+a} e^{−c (x2 − x1)} f(x2) dx2.  (2.41)

Differentiating equation (2.41) with respect to x1 and rearranging gives

λ f'(x1) = −c ∫_{−a}^{x1} e^{−c (x1 − x2)} f(x2) dx2 + c ∫_{x1}^{+a} e^{−c (x2 − x1)} f(x2) dx2.  (2.42)

Differentiating once more with respect to x1, the following equation is obtained

λ f''(x1) = −2c f(x1) + c² λ f(x1).  (2.43)

Introducing the new variable

ω² = (2c − c² λ) / λ,  (2.44)

equation (2.43) becomes

f''(x) + ω² f(x) = 0,  −a < x < +a.  (2.45)

To find the boundary conditions associated with the differential equation (2.45), equations (2.41) and (2.42) are evaluated at x = −a and x = +a.
f Li s.46) Third Eigenfunction Fourth Eigenfunction Second Eigenfunction f (a) o.4. / r '/ t .45) with appended boundary conditions given by equations (2.45).5.1: Eigenfunctions fn(x).\* = n w*2 n + c2 . Further.45) is solvable. n = 1.49) is equal to zero.w tan (wa) + a2 (w + a2 (w c tan c tan (wa)) (wa)) o o. It can be shown that w2 ~ 0 is the only range of w(§forwhich equation (2. x 0. gives conditions specified by equations (2.47) /.45) is transformed into the ordinary differential equation (2. the resulting (2. equations (2. by w*. Covariance.47).5 :S x :S 0. \ Thus.3.5 0. the integral equation given by equation (2.2.5 w tan (wa) .48) f(x) = al cos (w x) + a2 sin (w x) .\ I 1 I \ / . f '<.0 LENGTH. (2. (2. Correlation Length=l.41) and (2. Exponential + The corresponding 2 w2 n C eigenvalues are (2. (2.46) and \ I I o \ \ \ . 0. KARHUNENLOEVE EXPANSION 29 (2. the boundary conditions become First Eigenfunction cf(a) c + f'(a) !'(a) =0 = (2. Setting this determinant equal to zero gives the following transcendental equations \:! ~ l and \r '"' e I . (2. After rearrangement.54) = 0.52) . . REPRESENTATION OF STOCHASTIC PROCESSES 2..3. .49) Figure 2.\. applying the boundary (2._ "\ 0.42) are evaluated at x = a and x = +a.46) and (2.47).: ' \ '\ \\ \ \ \ \ '.50) ~'\~ .\.53) Nontrivial solutions exist only if the determinant of the homogeneous system in equation (2.Q_ (2.28 CHAPTER 2. the solution being given by the equation.: \\ / / \ /\ . An + c2 2c ~ w tan (wa) and c tan (wa) =0 ~ 13 N .51) Denoting the solution of the second of these equations eigenfunctions are cos (wnx) a + sin (2wna) and f~(x) a sin (w~x) sin (2w~a) 2 w~ (2. for even n and odd n respectively.
Figure 2.2: Trends of the Eigenvalues λ_n of the Exponential Kernel for Various Values of the Correlation Length b; λ_n assumes values for n = 1, 2, ... only.

Figure 2.3: Exact Covariance Surface versus x_1 and x_2; |x_1| <= a, |x_2| <= a; Correlation Length = 1.

Figure 2.4: 4-Term Approximation of Covariance Surface versus x_1 and x_2; |x_1| <= a, |x_2| <= a; Correlation Length = 1.

Figure 2.5: 4-Term Relative Error Surface of Covariance Approximation versus x_1 and x_2; |x_1| <= a, |x_2| <= a; Maximum Error = 0.1126; Correlation Length = 1.
Figure 2.6: 10-Term Approximation of Covariance Surface versus x_1 and x_2; |x_1| <= a, |x_2| <= a; Correlation Length = 1.

Figure 2.7: 10-Term Relative Error Surface of Covariance Approximation versus x_1 and x_2; |x_1| <= a, |x_2| <= a; Maximum Error = 0.0425; Correlation Length = 1.

Figures (2.3)-(2.7) show the exact kernel, its four-term approximation, its ten-term approximation, and the associated errors, respectively, for several values of b. Note that the smaller the value of b, the larger the contribution that should be expected from terms associated with smaller eigenvalues.

Nonrational Spectra. There is no general method for the solution of the integral equation (2.33) corresponding to kernels with nonrational spectra. Several of these equations have, however, been investigated, and explicit solutions have been obtained for certain covariance kernels. The method described in the previous section may be applied successfully to a number of these kernels, as will be demonstrated for the case of the triangular kernel given by the equation

C(x_1, x_2) = 1 - d |x_1 - x_2|,   0 <= x_1, x_2 <= a.   (2.56)

Here, d is a parameter which can be used to adjust the distance |x_1 - x_2| of null correlation between w(x_1, θ) and w(x_2, θ). Consider realizations of this process on the interval [0, a]. The eigenfunctions and eigenvalues of C(x_1, x_2) are obtained as the solution to the integral equation

∫_0^a C(x_1, x_2) f(x_2) dx_2 = λ f(x_1).   (2.57)

Differentiating equation (2.57) twice with respect to x_1, the following equivalent differential equation is obtained

f''(x) + ω^2 f(x) = 0,   ω^2 = 2d / λ,   0 <= x <= a,   (2.58)

whose associated boundary conditions, given by equations (2.59)-(2.60), follow from evaluating equation (2.57) and its first derivative at x = 0 and x = a.
The solution of equation (2.58), subjected to the boundary conditions described by equations (2.59) and (2.60), is of the form

f(x) = a_1 cos(ω x) + a_2 sin(ω x),   (2.61)

and enforcing the first boundary condition gives a_2 = a_1 tan(ω a / 2), so that the eigenfunctions take the form

f_n(x) = [ cos(ω_n x) + tan(ω_n a / 2) sin(ω_n x) ] / sqrt(N_n),   (2.62)

where N_n denotes the normalization integral of the bracketed function over [0, a]. The admissible frequencies ω_n are the positive solutions of the transcendental equation obtained from the remaining boundary condition,

ω tan(ω a / 2) = 2d / (2 - d a),   (2.64)

together with the values ω = (2n+1)π/a, n = 0, 1, 2, ..., which also satisfy the boundary conditions and for which the eigenfunctions reduce to pure cosines. The corresponding eigenvalues follow directly from equation (2.58) as

λ_n = 2d / ω_n^2.   (2.67)

Figure (2.8) shows a plot of the first four eigenfunctions associated with this kernel, and the eigenvalues are shown in Figure (2.9) for various values of the parameter. Figures (2.10)-(2.14) show the exact kernel, its four-term approximation, its ten-term approximation, and the corresponding errors, respectively.

Figure 2.8: Eigenfunctions f_n(x), Triangular Covariance, Correlation Length = 1.

Figure 2.9: Trends of the Eigenvalues λ_n of the Triangular Kernel for Various Values of the Correlation Length b; λ_n assumes values for n = 1, 2, ... only.

Another kernel that may be treated by the same method is the kernel of the Wiener process. It is given by the equation

C(x_1, x_2) = min(x_1, x_2).   (2.68)
Figure 2.10: Exact Covariance Surface (Triangular Covariance) versus x_1 and x_2; 0 <= x_1 <= a, 0 <= x_2 <= a; Correlation Length = 1.

Figure 2.11: 4-Term Approximation of Covariance Surface (Triangular Covariance) versus x_1 and x_2; 0 <= x_1 <= a, 0 <= x_2 <= a; Correlation Length = 1.

Figure 2.12: 4-Term Relative Error Surface of Covariance Approximation versus x_1 and x_2; 0 <= x_1 <= a, 0 <= x_2 <= a; Maximum Error = 0.1226; Correlation Length = 1.

Figure 2.13: 10-Term Approximation of Covariance Surface (Triangular Covariance) versus x_1 and x_2; 0 <= x_1 <= a, 0 <= x_2 <= a; Correlation Length = 1.
Note that the Wiener process is an example of a nonstationary process, a fact that emphasizes the generality of the Karhunen-Loeve expansion and its applicability to such processes. The final solution for the Wiener kernel (Juncosa, 1945) has eigenvalues

λ_n = 4 T^2 / ( π^2 (2n+1)^2 ),   n = 0, 1, 2, ...,   (2.73)

with eigenfunctions proportional to sin( (2n+1) π x / (2T) ), where [0, T] denotes the interval of observation.

Another process that is useful in engineering applications is a uniformly modulated nonstationary process (Liu, 1970); its modulating function makes it useful in modeling processes whose correlation function has a maximum value decaying with time. Differentiating equation (2.33) twice with respect to x_1, after substituting for the corresponding covariance kernel, leads to a homogeneous differential equation of the Bessel type, whose solution (Juncosa, 1945) is expressed in terms of the Bessel function J_k[.] of order k.

Another process that has attracted much attention in the literature is the truncated white noise process, whose power spectral density vanishes outside a certain interval over which it is constant. The associated eigenfunctions and eigenvalues were obtained in explicit form by Slepian and Pollak (1961); they are the angular prolate spheroidal wave functions and the radial prolate spheroidal wave functions, respectively. These functions have some quite interesting properties that are, however, beyond the scope of the present work (Slepian and Pollak, 1961; Landau et al., 1961, 1962; Slepian, 1964, 1965).

In addition to the examples discussed above, the same method may be applied to kernels that are integrals of the triangular kernel, or kernels that are sums of exponential and triangular kernels (Kailath, 1966).

A final remark is in order concerning the choice of the domain D of definition of the random process being investigated. Taking D to be the finite domain over which the process is being observed may often be the most obvious choice. Clearly, this choice does not induce the ergodic assumption (Lin, 1967) for the process, which would involve observing infinite length records. This is by no means a handicap of the approach, since the ergodic assumption is usually introduced for convenience and is not necessary for the present study. If ergodicity is needed for some particular problem, then it may be recovered by extending the limits of integration in equation (2.33) to infinity.
For the covariance kernel leading to a differential equation of the Bessel type, the admissible eigenvalues are determined by the positive roots m_n of the condition

J_{1/b}[m_n] = 0,   m_n > 0,   n = 1, 2, ...,

where J_k[.] is the Bessel function of order k, the corresponding eigenvalues λ_n being expressed in terms of the correlation parameter b and the roots m_n. The eigenfunctions and eigenvalues of this covariance kernel can, in turn, be related to those of an associated nonsymmetric kernel (Juncosa, 1945).

Figure 2.14: 10-Term Relative Error Surface of Covariance Approximation versus x_1 and x_2; 0 <= x_1 <= a, 0 <= x_2 <= a; Maximum Error = 0.0425; Correlation Length = 1.
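The closed-form Karhunen-Loeve pairs quoted above for the Wiener kernel can be checked against a direct discretization of the integral equation. The sketch below (not from the monograph; it assumes the unit-intensity kernel C(x_1, x_2) = min(x_1, x_2) on [0, T] with T = 1 and a midpoint-rule Nystrom discretization) compares the analytic eigenvalues 4T^2/(π^2 (2n+1)^2) with the numerical ones:

```python
# Numerical sketch: Nystrom discretization of the Wiener-process kernel
# min(x1, x2) on [0, T], checked against the closed-form eigenvalues.
import numpy as np

T, m = 1.0, 400
x = (np.arange(m) + 0.5) * T / m            # midpoint quadrature nodes
K = np.minimum.outer(x, x) * (T / m)        # symmetric kernel matrix times weight
num = np.sort(np.linalg.eigvalsh(K))[::-1]  # numerical eigenvalues, descending
ana = np.array([4*T**2 / (np.pi**2 * (2*n + 1)**2) for n in range(5)])
```

The largest eigenvalue approaches 4T^2/π^2 ≈ 0.405 as the mesh is refined, illustrating that the expansion applies to this nonstationary kernel exactly as it does to stationary ones.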
This modification may be convenient numerically if, for instance, an explicit solution of the integral equation is available for the infinite domain but not for the finite one.

Figure 2.15: Eigenvalues λ_n of the Exponential Covariance Kernel; Exact Eigenvalues (*****) and Approximate Eigenvalues (+++++) from the Numerical Solution.

In this section, a Galerkin type procedure is described for the numerical solution of the Fredholm integral equation (2.33). In Chapter IV, this procedure will be illustrated through its application to a curved geometry, two-dimensional problem. The procedure can be implemented using piecewise polynomials as the basis for the expansion.

Let h_i(x) be a complete set of functions in the Hilbert space H. Each eigenfunction f_k(x) of the kernel C(x_1, x_2) may be represented as

f_k(x) = Σ_{i=1}^{N} d_i^{(k)} h_i(x) + ε_N,   (2.74)

with an error ε_N resulting from truncating the summation after the Nth term. Substituting equation (2.74) into equation (2.33) yields the following expression for the error

ε_N = Σ_{i=1}^{N} d_i^{(k)} [ ∫_D C(x_1, x_2) h_i(x_2) dx_2 - λ_k h_i(x_1) ].   (2.75)

Requiring the error to be orthogonal to the approximating space yields equations of the following form

( ε_N, h_j ) = 0,   j = 1, ..., N.   (2.76)

Denoting

C_ij = ∫_D ∫_D C(x_1, x_2) h_i(x_1) h_j(x_2) dx_1 dx_2,   (2.78)

B_ij = ∫_D h_i(x) h_j(x) dx,   D_ij = d_i^{(j)},   Λ_ij = δ_ij λ_j,   (2.79)-(2.81)

equation (2.76) becomes

C D = Λ B D,   (2.82)

where C, B and D are N-dimensional matrices whose elements are given by equations (2.78)-(2.81). Equation (2.82) represents a generalized algebraic eigenvalue problem which may be solved for the matrix D and the eigenvalues λ_k. Back-substituting into equation (2.74) yields the eigenfunctions of the covariance kernel.
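The generalized eigenvalue problem C D = Λ B D can be assembled and solved directly. The following sketch (not from the monograph) uses piecewise-constant basis functions on [-a, a] with midpoint-rule element integrals, which is an assumption of this illustration rather than the book's choice of basis, applied to the exponential kernel so the result can be compared with the analytic eigenvalues:

```python
# Numerical sketch: Galerkin solution of the Fredholm equation with a
# piecewise-constant basis, posed as a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

a, c, N = 1.0, 1.0, 200
dx = 2*a / N
xm = -a + (np.arange(N) + 0.5)*dx                          # element midpoints
C = np.exp(-c*np.abs(np.subtract.outer(xm, xm))) * dx*dx   # C_ij (midpoint rule)
B = np.eye(N) * dx                                         # B_ij = <h_i, h_j>
lam, D = eigh(C, B)                                        # generalized problem
lam, D = lam[::-1], D[:, ::-1]                             # sort descending
```

The columns of D sample the eigenfunctions at the element midpoints, and the eigenvalue sum reproduces the trace ∫ C(x, x) dx = 2a, a convenient consistency check.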
Obviously, the eigenvectors and eigenvalues computed on the basis of the above scheme provide convergent estimates to the true values, and certain properties of these particular estimates make the scheme computationally attractive. In particular, the Galerkin scheme described above can be shown to be equivalent to a variational treatment of the problem. Figure (2.15) shows the exact eigenvalues of the kernel given by equation (2.39) together with the results from the algorithm described above; note the excellent agreement. Further, note that the accuracy in estimating the eigenvalues is better than that achieved for the eigenfunctions (Delves and Mohamed, 1985).

For the solution process, whose covariance function is not known a priori, an alternative expansion is clearly needed. Such an expansion could involve a basis of known random functions with deterministic coefficients to be found by minimizing some norm of the error resulting from a finite representation, whereby the series coefficients are determined so as to satisfy some optimality criterion. This is exactly what the concept of Homogeneous Chaos provides; the approach should be construed as similar to the Fourier series solution of deterministic differential equations.

The concept was first introduced by Wiener (1938) and consists of an extension of the then obscure work of Volterra on the generalization of Taylor series to functionals (Volterra, 1913). Wiener's contributions were the result of his investigations of nonlinear functionals of the Brownian motion. Based on Wiener's ideas, Cameron and Martin (1947) constructed an orthogonal basis for nonlinear functionals in terms of Fourier-Hermite functionals. Engels (1982) and Kallianpur (1980) have attempted to develop a unified treatment of the Volterra series, the Wiener series, the Cameron-Martin expansion and the Ito approach; they have concluded that the last three approaches are equivalent, and that they are superior to the Volterra series in terms of their convergence properties and their applicability. The theory has drawn the interest of investigators in such fields as statistical physics (Imamura et al., 1965ab), neuroscience (Palm and Poggio, 1977), and engineering mechanics (Jahedi and Ahmadi, 1983).
About the same time that this analytical and measure-theoretic development of the theory was being pursued, Murray and von Neumann (1936) were establishing the parallel algebraic structure of rings of operators. Wiener's Homogeneous Chaos was subsequently refined by Ito (1951) into what is known as the "Multiple Wiener Integral". In the same manner that the Homogeneous Chaos can be viewed as an orthogonal development for nonlinear functionals with respect to the Gaussian measure, the Discrete Chaos (Wiener and Wintner, 1943) provides a counterpart development. The theory has also attracted investigators in the fields of communication (Yasui, 1979) and mathematics (Hida and Ikeda, 1965; McKean, 1973; Huang and Cambanis, 1978).

As far as the system under consideration is concerned, the Karhunen-Loeve expansion can be used for the random coefficients in the operator equation, since their covariance functions are known. To clarify this important idea further, equation (1.13) is rewritten as

u(θ) = h[ {ξ_i(θ)}_{i=1}^{∞} ],   (2.83)

where h[.] is a nonlinear functional of its arguments and the random processes involved have all been replaced by their corresponding Karhunen-Loeve representations. If the processes defining the operator are Gaussian, equation (2.83) involves functionals of the Brownian motion. It is clear now that what is required is a nonlinear expansion of h[.] in terms of the set of random variables ξ_i(θ), whereby the series coefficients are determined so as to satisfy some optimality criterion.
When the basis functions are associated with a discretization mesh, the columns of the matrix D become the eigenvectors computed at the respective nodal points of the induced mesh, and the ij element of the matrix C becomes the weighted correlation between the process at nodes i and j. Note that both matrices C and B are symmetric positive definite, a fact that substantially simplifies the numerical solution. Further, the convergence of each eigenvalue is monotonic in N; this property ensures that the computed eigenvalues are a lower bound of the correspondingly numbered exact eigenvalues.

Wiener's theory was further developed through research efforts that led to a series of reports (Bose, 1956; George, 1959), in which numerical implementation of the basic ideas as well as convergence properties were addressed. In these developments the underlying set of random variables is a sampled derivative of the Wiener process (Doob, 1953).

2.4 Homogeneous Chaos

2.4.1 Preliminary Remarks

It is clear from the preceding discussion that the implementation of the Karhunen-Loeve expansion requires knowing the covariance function of the process being expanded. However, the expansion cannot be implemented for the solution process, since its covariance function, and therefore the corresponding eigenfunctions, are not known.
2.4.2 Definitions and Properties

Let {ξ_i(θ)}_{i=1}^{∞} be a set of orthonormal Gaussian random variables. Consider the space Γ̄_p of all polynomials in {ξ_i(θ)}_{i=1}^{∞} of degree not exceeding p. Let Γ_p represent the set of all polynomials in Γ̄_p orthogonal to Γ̄_{p-1}, and let Γ̂_p be the space spanned by Γ_p. Then, the subspace Γ̂_p of Θ is called the pth Homogeneous Chaos, and Γ_p is called the Polynomial Chaos of order p. Based on these definitions, any element μ(θ) from the space Θ admits the following representation

μ(θ) = a_0 Γ_0 + Σ_{i_1=1}^{∞} a_{i_1} Γ_1(ξ_{i_1}(θ))
     + Σ_{i_1=1}^{∞} Σ_{i_2=1}^{i_1} a_{i_1 i_2} Γ_2(ξ_{i_1}(θ), ξ_{i_2}(θ))
     + Σ_{i_1=1}^{∞} Σ_{i_2=1}^{i_1} Σ_{i_3=1}^{i_2} a_{i_1 i_2 i_3} Γ_3(ξ_{i_1}(θ), ξ_{i_2}(θ), ξ_{i_3}(θ))
     + Σ_{i_1=1}^{∞} Σ_{i_2=1}^{i_1} Σ_{i_3=1}^{i_2} Σ_{i_4=1}^{i_3} a_{i_1 i_2 i_3 i_4} Γ_4(ξ_{i_1}(θ), ξ_{i_2}(θ), ξ_{i_3}(θ), ξ_{i_4}(θ)) + ...,   (2.85)

where Γ_p(.) are successive Polynomial Chaoses of their arguments. The Polynomial Chaoses of any order will be assumed to be symmetrical with respect to their arguments; such a symmetrization is always possible, since a symmetrical polynomial can be obtained from a nonsymmetrical one by taking the average of the polynomial over all permutations of its arguments. The upper limits on the summations in equation (2.85) reflect this symmetry, and the repeated subscripts provide for the possibility of repeated arguments in the argument lists of the Polynomial Chaoses. Polynomial Chaoses of different order are orthogonal to each other, and so are Polynomial Chaoses of the same order with different argument lists. The Polynomial Chaoses of order greater than one have zero mean.
Thus preserving the generality of the representation, it will prove notationally expedient at times in the ensuing developments to rewrite equation (2.85) in the form

μ(θ) = Σ_{j=0}^{∞} a_j Ψ_j[ξ(θ)],   (2.86)

where there is a one-to-one correspondence between the functionals Ψ[.] and Γ[.], and also between the coefficients a_j and the coefficients a_{i_1 i_2 ...} of equation (2.85).

The set of Polynomial Chaoses is a linear subspace of the space of square-integrable random variables Θ, and is a ring with respect to the functional multiplication Γ̂_p Γ̂_l(ω) = Γ̂_p(ω) Γ̂_l(ω). Denoting the Hilbert space spanned by the set {ξ_i(θ)} by Θ(ξ), the resulting ring is denoted by Φ_Θ(ξ) and is called the ring of functions generated by Θ(ξ). It can be shown that, under some general conditions, the ring Φ_Θ(ξ) is dense in the space Θ (Kakutani, 1961); this means that any square-integrable random function can be approximated as closely as desired by elements from Φ_Θ(ξ). Since random variables are themselves functions, it becomes clear that Polynomial Chaoses are functions of functions and are therefore functionals. Further, the number of Polynomial Chaoses of order p which involve a specific random variable out of the set {ξ_i(θ)}_{i=1}^{∞} increases with p; this fact plays an important role in connection with the finite dimensional Polynomial Chaoses to be introduced in the sequel. Implicit in equation (2.85) is the assumption that the expansion is convergent in the mean-square sense.
In the same way that the Homogeneous Chaos constitutes an orthogonal development with respect to the Gaussian measure, the Discrete Chaos constitutes an orthogonal development with respect to the Poisson measure; extensions to general measures have been investigated by Segall and Kailath (1976). In the present context, square integrability must be construed to be with respect to the probability measure defining the random variables.

Briefly stated, the Polynomial Chaos of order p consists of all orthogonal polynomials of order p involving any combination of the random variables {ξ_i(θ)}_{i=1}^{∞}, such that the total number of random variables involved, counting repetitions, is equal to the order p of the Polynomial Chaos. Thus, when a Polynomial Chaos appearing in equation (2.84) involves r distinct random variables out of the set {ξ_i(θ)}_{i=1}^{∞}, with the kth random variable ξ_k(θ) having multiplicity n_k, the superscript n_k refers to the number of occurrences of ξ_k(θ) in the argument list of Γ_p(.), and the multiplicities sum to p; the representation given by equation (2.84) can then be simplified accordingly.
Note that, in this setting, the functionals Ψ[.] and Γ[.] are identical except for a different indexing convention. In a computational setting, the infinite set {ξ_i(θ)} has to be replaced by a finite one, and it therefore seems logical to introduce the concept of a finite dimensional Polynomial Chaos. Formally, the n-dimensional Polynomial Chaos of order p is the subset of the Polynomial Chaos of order p, as defined above, which is a function of only n of the uncorrelated random variables ξ_i. As n tends to infinity, the Polynomial Chaos as defined previously is recovered. Obviously, the convergence properties of a representation based on the n-dimensional Polynomial Chaoses depend on n, as well as on the choice of the subset {ξ_{i_k}}_{k=1}^{n} out of the infinite set; in the ensuing developments, this choice will be based on the Karhunen-Loeve expansion of an appropriate random process. Since the finite dimensional Polynomial Chaos is a subset of the (infinite dimensional) Polynomial Chaos, the same symbol will be used for both, with the dimension being specified. In this regard, the infinite upper limits on the summations in equation (2.85) are replaced by a number equal to the dimension of the polynomials involved. For clarity, the two-dimensional counterpart of equation (2.85), carried out in the order indicated by that equation, is written in fully expanded form as

μ(θ) = a_0 Γ_0 + a_1 Γ_1(ξ_1) + a_2 Γ_1(ξ_2)
     + a_{11} Γ_2(ξ_1, ξ_1) + a_{21} Γ_2(ξ_2, ξ_1) + a_{22} Γ_2(ξ_2, ξ_2)
     + a_{111} Γ_3(ξ_1, ξ_1, ξ_1) + a_{211} Γ_3(ξ_2, ξ_1, ξ_1) + a_{221} Γ_3(ξ_2, ξ_2, ξ_1) + a_{222} Γ_3(ξ_2, ξ_2, ξ_2) + ... .   (2.87)

Equation (2.87) can be recast in terms of Ψ_j[.] as

μ(θ) = Σ_{j=0}^{∞} a_j Ψ_j[ξ_1, ξ_2],   (2.88)

so that, for instance, the term a_{211} Γ_3(ξ_2, ξ_1, ξ_1) of equation (2.87) is identified with the term a_7 Ψ_7 of equation (2.88). It is felt that, although somewhat cumbersome, this notation is helpful in clarifying the ensuing developments.
Here, and throughout the previous theoretical development, the symbol θ has been used to emphasize the random character of the quantities involved; this notation underlines the fact that a random variable is a function defined over the space of events, of which θ is an element. Having noted this, the symbol θ will be deleted in the ensuing development whenever the random nature of a certain quantity is obvious from the context. Throughout, orthogonality is understood to be with respect to the inner product defined by equation (1.8).

2.4.3 Construction of the Polynomial Chaos

A direct approach to construct the successive Polynomial Chaoses is to start with the set of homogeneous polynomials in {ξ_i(θ)} and to proceed through a sequence of orthogonalization procedures, whereby the contribution of polynomials of lower order is accounted for first. The zeroth order polynomial is a constant and can be chosen to be

Γ_0 = 1.   (2.89)

The first order polynomial has to be chosen so that it is orthogonal to all zeroth order polynomials, that is, so that its mean vanishes. Since the set {ξ_i} consists of zero-mean elements, this gives

Γ_1(ξ_{i_1}) = ξ_{i_1}.   (2.90)

The second order Polynomial Chaos consists of second order polynomials in {ξ_i} that are orthogonal to both constants and first order polynomials. A general second order polynomial can be written as

Γ_2(ξ_{i_1}, ξ_{i_2}) = a_0 + a_{i_1} ξ_{i_1} + a_{i_2} ξ_{i_2} + a_{i_1 i_2} ξ_{i_1} ξ_{i_2},   (2.91)

where the constants are so chosen as to satisfy the orthogonality conditions

⟨ Γ_2(ξ_{i_1}, ξ_{i_2}) ⟩ = 0   (2.92)

and

⟨ Γ_2(ξ_{i_1}, ξ_{i_2}) ξ_{i_3} ⟩ = 0   for all i_3.   (2.93)
The first of these conditions implies that

a_0 = - a_{i_1 i_2} δ_{i_1 i_2},   (2.95)

while the second condition implies that

a_{i_1} δ_{i_1 i_3} + a_{i_2} δ_{i_2 i_3} = 0   for all i_3,   (2.96)

so that a_{i_1} = a_{i_2} = 0. The polynomial can be normalized by requiring that the coefficient of the highest order term be unity, a_{i_1 i_2} = 1. Then, the second Polynomial Chaos can be expressed as

Γ_2(ξ_{i_1}, ξ_{i_2}) = ξ_{i_1} ξ_{i_2} - δ_{i_1 i_2}.   (2.98)

Due to the Gaussian property of the set {ξ_i}, the following equation holds

⟨ ξ_{i_1} ξ_{i_2} ξ_{i_3} ξ_{i_4} ⟩ = δ_{i_1 i_2} δ_{i_3 i_4} + δ_{i_1 i_3} δ_{i_2 i_4} + δ_{i_1 i_4} δ_{i_2 i_3}.   (2.99)

In a similar manner, the third order Polynomial Chaos has the general form

Γ_3(ξ_{i_1}, ξ_{i_2}, ξ_{i_3}) = a_0 + a_{i_1} ξ_{i_1} + a_{i_2} ξ_{i_2} + a_{i_3} ξ_{i_3} + a_{i_1 i_2} ξ_{i_1} ξ_{i_2} + a_{i_1 i_3} ξ_{i_1} ξ_{i_3} + a_{i_2 i_3} ξ_{i_2} ξ_{i_3} + a_{i_1 i_2 i_3} ξ_{i_1} ξ_{i_2} ξ_{i_3},   (2.103)

with coefficients chosen to satisfy the conditions of orthogonality to all constants, all first order polynomials, and all second order polynomials. Orthogonality to constants requires

a_0 + a_{i_1 i_2} δ_{i_1 i_2} + a_{i_1 i_3} δ_{i_1 i_3} + a_{i_2 i_3} δ_{i_2 i_3} = 0.   (2.104)

Orthogonality to first order polynomials, with the expectations evaluated by means of equation (2.99), requires

a_{i_1} δ_{i_1 i_4} + a_{i_2} δ_{i_2 i_4} + a_{i_3} δ_{i_3 i_4} + a_{i_1 i_2 i_3} ( δ_{i_1 i_2} δ_{i_3 i_4} + δ_{i_1 i_3} δ_{i_2 i_4} + δ_{i_2 i_3} δ_{i_1 i_4} ) = 0   for all i_4,   (2.105)

while orthogonality to all second order polynomials requires

a_{i_1 i_2} ( δ_{i_1 i_4} δ_{i_2 i_5} + δ_{i_1 i_5} δ_{i_2 i_4} ) + a_{i_1 i_3} ( δ_{i_1 i_4} δ_{i_3 i_5} + δ_{i_1 i_5} δ_{i_3 i_4} ) + a_{i_2 i_3} ( δ_{i_2 i_4} δ_{i_3 i_5} + δ_{i_2 i_5} δ_{i_3 i_4} ) = 0   for all i_4, i_5.   (2.106)

Thus, the quadratic coefficients can be evaluated as

a_{i_1 i_2} = a_{i_1 i_3} = a_{i_2 i_3} = 0,   and hence a_0 = 0.   (2.107)
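The orthogonality conditions derived above can be checked numerically. The sketch below (not from the monograph) evaluates the Gaussian expectations exactly using probabilists' Gauss-Hermite quadrature; the specific chaoses Γ_2(ξ_1, ξ_2) = ξ_1 ξ_2, Γ_2(ξ_1, ξ_1) = ξ_1^2 - 1 and Γ_3(ξ_1, ξ_1, ξ_2) = ξ_1^2 ξ_2 - ξ_2 follow from the construction:

```python
# Numerical sketch: exact Gaussian expectations of low-order Polynomial
# Chaoses via probabilists' Gauss-Hermite quadrature (weight exp(-x^2/2)).
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from itertools import product

nodes, wts = hermegauss(8)
wts = wts / np.sqrt(2*np.pi)        # normalize weights to the N(0,1) measure

def E(f):
    """Two-dimensional Gaussian expectation E[f(x1, x2)] by tensor quadrature."""
    return sum(w1*w2*f(x1, x2) for (x1, w1), (x2, w2)
               in product(zip(nodes, wts), repeat=2))

G2 = lambda x1, x2: x1*x2            # Gamma_2(xi_1, xi_2), distinct arguments
G2r = lambda x1, x2: x1*x1 - 1.0     # Gamma_2(xi_1, xi_1), repeated argument
G3 = lambda x1, x2: x1*x1*x2 - x2    # Gamma_3(xi_1, xi_1, xi_2)
```

Since the integrands are polynomials of degree at most six, an eight-point rule evaluates all moments without error; the chaoses are mutually orthogonal, with variances 1, 2 and 2 respectively.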
Using equation (2.99) again, and normalizing the third order polynomial by requiring that a_{i_1 i_2 i_3} = 1, it is found that a_0 = 0 and

a_{i_1} = -δ_{i_2 i_3},   a_{i_2} = -δ_{i_1 i_3},   a_{i_3} = -δ_{i_1 i_2}.   (2.114)

The third order Polynomial Chaos can then be written as

Γ_3(ξ_{i_1}, ξ_{i_2}, ξ_{i_3}) = ξ_{i_1} ξ_{i_2} ξ_{i_3} - ξ_{i_1} δ_{i_2 i_3} - ξ_{i_2} δ_{i_1 i_3} - ξ_{i_3} δ_{i_1 i_2}.   (2.116)

After laborious algebraic manipulations, the fourth order Polynomial Chaos can be expressed as

Γ_4(ξ_{i_1}, ξ_{i_2}, ξ_{i_3}, ξ_{i_4}) = ξ_{i_1} ξ_{i_2} ξ_{i_3} ξ_{i_4}
  - ξ_{i_1} ξ_{i_2} δ_{i_3 i_4} - ξ_{i_1} ξ_{i_3} δ_{i_2 i_4} - ξ_{i_1} ξ_{i_4} δ_{i_2 i_3}
  - ξ_{i_2} ξ_{i_3} δ_{i_1 i_4} - ξ_{i_2} ξ_{i_4} δ_{i_1 i_3} - ξ_{i_3} ξ_{i_4} δ_{i_1 i_2}
  + δ_{i_1 i_2} δ_{i_3 i_4} + δ_{i_1 i_3} δ_{i_2 i_4} + δ_{i_1 i_4} δ_{i_2 i_3}.   (2.117)

It is readily seen that, in general, the nth order Polynomial Chaos can be written as

Γ_n(ξ_{i_1}, ..., ξ_{i_n}) = Σ_{r=0}^{⌊n/2⌋} (-1)^r Σ_{π(i_1, ..., i_n)} Π_{k=1}^{r} ⟨ ξ_{i_{2k-1}} ξ_{i_{2k}} ⟩ Π_{l=2r+1}^{n} ξ_{i_l},   (2.118)

Note that the Polynomial Chaoses as obtained in equations (2.116) and (2.117) are orthogonal with respect to the Gaussian probability measure, which makes them identical with the corresponding multidimensional Hermite polynomials (Grad, 1949). This fact suggests another method for constructing the Polynomial Chaoses, namely from the generating function of the Hermite polynomials. These polynomials have been used extensively in relation to problems in turbulence theory (Imamura et al., 1965ab). Tables (2.1)-(2.4) display expressions for the one, two, three and four-dimensional Polynomial Chaoses up to the fourth order, along with the values of their variances; the term Ψ_j in these tables refers to the quantity appearing in equation (2.86).
where π(.) denotes a permutation of its arguments, and the summation is taken over all such permutations that produce distinct terms, that is, permutations which modify the set of arguments paired under the expectation operator. The expectations involved, and hence the coefficients of the resulting polynomials, can be readily evaluated symbolically using the symbolic manipulation program MACSYMA (1986). The equivalence with the Hermite polynomials noted above is implied by the orthogonality of the Polynomial Chaoses with respect to the inner product defined by equation (1.8),

⟨ Γ_p Γ_q ⟩ = ∫ Γ_p Γ_q dP,

where dP is the Gaussian measure e^{-ξ^T ξ / 2} dξ and ξ denotes the vector of n random variables {ξ_{i_k}}_{k=1}^{n}. This measure is exactly the weighing function with respect to which the multidimensional Hermite polynomials are orthogonal in the L_2 sense (Oden, 1979).
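In one dimension these are the probabilists' Hermite polynomials He_n. The following sketch (not from the monograph; it uses the standard three-term recurrence He_{n+1}(x) = x He_n(x) - n He_{n-1}(x), which follows from the generating function mentioned above) reproduces the entries of Table 2.1:

```python
# Numerical sketch: probabilists' Hermite polynomials from the three-term
# recurrence, matching the one-dimensional Polynomial Chaoses of Table 2.1.
import numpy as np

def hermite_e(n):
    """Coefficients of He_n, lowest degree first."""
    h0, h1 = np.array([1.0]), np.array([0.0, 1.0])
    if n == 0:
        return h0
    for k in range(1, n):
        nxt = np.r_[0.0, h1]          # multiply He_k by x (shift coefficients)
        nxt[:len(h0)] -= k * h0       # subtract k * He_{k-1}
        h0, h1 = h1, nxt
    return h1

he4 = hermite_e(4)                    # x^4 - 6x^2 + 3, as in Table 2.1
```

The variance of He_n under the standard Gaussian measure is n!, which is exactly the ⟨Ψ_j^2⟩ column of Table 2.1.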
Table 2.1: One-Dimensional Polynomial Chaoses and Their Variances, n = 1.

j    p    Ψ_j                    ⟨Ψ_j^2⟩
0    0    1                      1
1    1    ξ_1                    1
2    2    ξ_1^2 - 1              2
3    3    ξ_1^3 - 3ξ_1           6
4    4    ξ_1^4 - 6ξ_1^2 + 3     24

Table 2.2: Two-Dimensional Polynomial Chaoses and Their Variances, n = 2.

j    p    Ψ_j                                  ⟨Ψ_j^2⟩
0    0    1                                    1
1    1    ξ_1                                  1
2    1    ξ_2                                  1
3    2    ξ_1^2 - 1                            2
4    2    ξ_1 ξ_2                              1
5    2    ξ_2^2 - 1                            2
6    3    ξ_1^3 - 3ξ_1                         6
7    3    ξ_1^2 ξ_2 - ξ_2                      2
8    3    ξ_1 ξ_2^2 - ξ_1                      2
9    3    ξ_2^3 - 3ξ_2                         6
10   4    ξ_1^4 - 6ξ_1^2 + 3                   24
11   4    ξ_1^3 ξ_2 - 3ξ_1 ξ_2                 6
12   4    ξ_1^2 ξ_2^2 - ξ_1^2 - ξ_2^2 + 1      4
13   4    ξ_1 ξ_2^3 - 3ξ_1 ξ_2                 6
14   4    ξ_2^4 - 6ξ_2^2 + 3                   24

Table 2.3: Three-Dimensional Polynomial Chaoses and Their Variances, n = 3.

j    p    Ψ_j                                  ⟨Ψ_j^2⟩
0    0    1                                    1
1    1    ξ_1                                  1
2    1    ξ_2                                  1
3    1    ξ_3                                  1
4    2    ξ_1^2 - 1                            2
5    2    ξ_1 ξ_2                              1
6    2    ξ_1 ξ_3                              1
7    2    ξ_2^2 - 1                            2
8    2    ξ_2 ξ_3                              1
9    2    ξ_3^2 - 1                            2
10   3    ξ_1^3 - 3ξ_1                         6
11   3    ξ_1^2 ξ_2 - ξ_2                      2
12   3    ξ_1^2 ξ_3 - ξ_3                      2
13   3    ξ_1 ξ_2^2 - ξ_1                      2
14   3    ξ_1 ξ_2 ξ_3                          1
15   3    ξ_1 ξ_3^2 - ξ_1                      2
16   3    ξ_2^3 - 3ξ_2                         6
17   3    ξ_2^2 ξ_3 - ξ_3                      2
18   3    ξ_2 ξ_3^2 - ξ_2                      2
19   3    ξ_3^3 - 3ξ_3                         6
20   4    ξ_1^4 - 6ξ_1^2 + 3                   24
21   4    ξ_1^3 ξ_2 - 3ξ_1 ξ_2                 6
22   4    ξ_1^3 ξ_3 - 3ξ_1 ξ_3                 6
23   4    ξ_1^2 ξ_2^2 - ξ_1^2 - ξ_2^2 + 1      4
24   4    ξ_1^2 ξ_2 ξ_3 - ξ_2 ξ_3              2
25   4    ξ_1^2 ξ_3^2 - ξ_1^2 - ξ_3^2 + 1      4
26   4    ξ_1 ξ_2^3 - 3ξ_1 ξ_2                 6
27   4    ξ_1 ξ_2^2 ξ_3 - ξ_1 ξ_3              2
28   4    ξ_1 ξ_2 ξ_3^2 - ξ_1 ξ_2              2
29   4    ξ_1 ξ_3^3 - 3ξ_1 ξ_3                 6
30   4    ξ_2^4 - 6ξ_2^2 + 3                   24
31   4    ξ_2^3 ξ_3 - 3ξ_2 ξ_3                 6
32   4    ξ_2^2 ξ_3^2 - ξ_2^2 - ξ_3^2 + 1      4
33   4    ξ_2 ξ_3^3 - 3ξ_2 ξ_3                 6
34   4    ξ_3^4 - 6ξ_3^2 + 3                   24
Table 2.4: Four-Dimensional Polynomial Chaoses and Their Variances, n = 4.

j    p    Ψ_j                                  ⟨Ψ_j^2⟩
0    0    1                                    1
1    1    ξ_1                                  1
2    1    ξ_2                                  1
3    1    ξ_3                                  1
4    1    ξ_4                                  1
5    2    ξ_1^2 - 1                            2
6    2    ξ_1 ξ_2                              1
7    2    ξ_1 ξ_3                              1
8    2    ξ_1 ξ_4                              1
9    2    ξ_2^2 - 1                            2
10   2    ξ_2 ξ_3                              1
11   2    ξ_2 ξ_4                              1
12   2    ξ_3^2 - 1                            2
13   2    ξ_3 ξ_4                              1
14   2    ξ_4^2 - 1                            2
15   3    ξ_1^3 - 3ξ_1                         6
16   3    ξ_1^2 ξ_2 - ξ_2                      2
17   3    ξ_1^2 ξ_3 - ξ_3                      2
18   3    ξ_1^2 ξ_4 - ξ_4                      2
19   3    ξ_1 ξ_2^2 - ξ_1                      2
20   3    ξ_1 ξ_2 ξ_3                          1
21   3    ξ_1 ξ_2 ξ_4                          1
22   3    ξ_1 ξ_3^2 - ξ_1                      2
23   3    ξ_1 ξ_3 ξ_4                          1
24   3    ξ_1 ξ_4^2 - ξ_1                      2
25   3    ξ_2^3 - 3ξ_2                         6
26   3    ξ_2^2 ξ_3 - ξ_3                      2
27   3    ξ_2^2 ξ_4 - ξ_4                      2
28   3    ξ_2 ξ_3^2 - ξ_2                      2
29   3    ξ_2 ξ_3 ξ_4                          1
30   3    ξ_2 ξ_4^2 - ξ_2                      2
31   3    ξ_3^3 - 3ξ_3                         6
32   3    ξ_3^2 ξ_4 - ξ_4                      2
33   3    ξ_3 ξ_4^2 - ξ_3                      2
34   3    ξ_4^3 - 3ξ_4                         6
35   4    ξ_1^4 - 6ξ_1^2 + 3                   24
36   4    ξ_1^3 ξ_2 - 3ξ_1 ξ_2                 6
37   4    ξ_1^3 ξ_3 - 3ξ_1 ξ_3                 6
38   4    ξ_1^3 ξ_4 - 3ξ_1 ξ_4                 6
39   4    ξ_1^2 ξ_2^2 - ξ_1^2 - ξ_2^2 + 1      4
40   4    ξ_1^2 ξ_2 ξ_3 - ξ_2 ξ_3              2
41   4    ξ_1^2 ξ_2 ξ_4 - ξ_2 ξ_4              2
42   4    ξ_1^2 ξ_3^2 - ξ_1^2 - ξ_3^2 + 1      4
43   4    ξ_1^2 ξ_3 ξ_4 - ξ_3 ξ_4              2
44   4    ξ_1^2 ξ_4^2 - ξ_1^2 - ξ_4^2 + 1      4
45   4    ξ_1 ξ_2^3 - 3ξ_1 ξ_2                 6
46   4    ξ_1 ξ_2^2 ξ_3 - ξ_1 ξ_3              2
47   4    ξ_1 ξ_2^2 ξ_4 - ξ_1 ξ_4              2
48   4    ξ_1 ξ_2 ξ_3^2 - ξ_1 ξ_2              2
49   4    ξ_1 ξ_2 ξ_3 ξ_4                      1
50   4    ξ_1 ξ_2 ξ_4^2 - ξ_1 ξ_2              2
51   4    ξ_1 ξ_3^3 - 3ξ_1 ξ_3                 6
52   4    ξ_1 ξ_3^2 ξ_4 - ξ_1 ξ_4              2
53   4    ξ_1 ξ_3 ξ_4^2 - ξ_1 ξ_3              2
54   4    ξ_1 ξ_4^3 - 3ξ_1 ξ_4                 6
55   4    ξ_2^4 - 6ξ_2^2 + 3                   24
56   4    ξ_2^3 ξ_3 - 3ξ_2 ξ_3                 6
57   4    ξ_2^3 ξ_4 - 3ξ_2 ξ_4                 6
58   4    ξ_2^2 ξ_3^2 - ξ_2^2 - ξ_3^2 + 1      4
59   4    ξ_2^2 ξ_3 ξ_4 - ξ_3 ξ_4              2
60   4    ξ_2^2 ξ_4^2 - ξ_2^2 - ξ_4^2 + 1      4
61   4    ξ_2 ξ_3^3 - 3ξ_2 ξ_3                 6
62   4    ξ_2 ξ_3^2 ξ_4 - ξ_2 ξ_4              2
63   4    ξ_2 ξ_3 ξ_4^2 - ξ_2 ξ_3              2
64   4    ξ_2 ξ_4^3 - 3ξ_2 ξ_4                 6
65   4    ξ_3^4 - 6ξ_3^2 + 3                   24
66   4    ξ_3^3 ξ_4 - 3ξ_3 ξ_4                 6
67   4    ξ_3^2 ξ_4^2 - ξ_3^2 - ξ_4^2 + 1      4
68   4    ξ_3 ξ_4^3 - 3ξ_3 ξ_4                 6
69   4    ξ_4^4 - 6ξ_4^2 + 3                   24
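The variance column of these tables can be generated mechanically: for a chaos whose leading monomial has per-variable degrees (n_1, ..., n_d), the variance ⟨Ψ_j^2⟩ equals the product n_1! n_2! ... n_d!. The sketch below (not from the monograph; it assumes the graded, descending-first-index ordering used in Table 2.2) reproduces the two-dimensional variance column:

```python
# Numerical sketch: variances of the multidimensional Polynomial Chaoses
# as products of factorials of the per-variable degrees.
from math import factorial
from itertools import product

def multi_indices(dim, order):
    """Multi-indices of total degree == order, in the tables' ordering."""
    return [m for m in product(range(order + 1), repeat=dim)
            if sum(m) == order][::-1]

var2 = [factorial(d1) * factorial(d2)
        for p in range(5) for (d1, d2) in multi_indices(2, p)]
```

The resulting list matches the ⟨Ψ_j^2⟩ column of Table 2.2 entry by entry.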
Note that equation (2.85) is a convergent series representation of the nonlinear functional h[.] appearing in equation (2.83). The first two terms in equation (2.85) represent the Gaussian component of the function μ(θ); therefore, for a Gaussian process, the expansion reduces to a single summation, the coefficients a_{i_1} being the coefficients in the Karhunen-Loeve expansion of the process. For a given non-Gaussian process defined by its probability distribution function, a representation in the form given by equation (2.85) can be obtained by projecting the process on the successive Homogeneous Chaoses; this can be achieved by using the inner product defined by equation (1.8) to determine the requisite coefficients. This concept has been successfully applied in devising efficient variance reduction techniques to be coupled with the Monte Carlo simulation method (Chorin, 1971; Maltz and Hitzl, 1979).

The expressions corresponding to these polynomials were obtained using the MACSYMA programs shown in Table (2.5) for the two-dimensional polynomials and Table (2.6) for the four-dimensional polynomials.

/* kronecker : */
kdelta(x,y) ::= buildq([x,y], if x=y then 1 else 0)$

/* average : */
average(p) ::= buildq([p],
 ( logexpand : super,
   order : 0,
   for i:1 thru n do
     ( z : coeff(log(p), log(x[i])),
       if z#0 then for j:1 thru z do y[j+order] : x[i],
       order : order+z ),
   hom : p/prod(y[i], i, 1, order),
   if oddp(order) then 0
   else ( if order=0 then hom
          else if order=2 then hom*kdelta(y[1], y[2])
          else hom*sum(kdelta(y[1], y[i])*prod(y[k], k, 2, order)/y[i],
                       i, 2, order) ) ))$

Table 2.5: MACSYMA Macro to Generate the Two-Dimensional Polynomial Chaoses and Evaluate Their Variance.
/* herm2 : */
ind:0$
G2[0]:1$
gen:exp(-sum(x[i]^2/2, i, 1, 2))$
for i:1 thru 2 do
  ( ind:ind+1, G2[ind]:expand(-diff(gen, x[i])/gen) )$
for i:1 thru 2 do (for j:i thru 2 do
  ( ind:ind+1, G2[ind]:expand(diff(diff(gen, x[i]), x[j])/gen) ))$
for i:1 thru 2 do (for j:i thru 2 do (for k:j thru 2 do
  ( ind:ind+1,
    G2[ind]:expand(-diff(diff(diff(gen, x[i]), x[j]), x[k])/gen) )))$
for i:1 thru 2 do (for j:i thru 2 do (for k:j thru 2 do (for l:k thru 2 do
  ( ind:ind+1,
    G2[ind]:expand(diff(diff(diff(diff(gen, x[i]), x[j]), x[k]), x[l])/gen) ))))$

/* square2 : */
square2(num) ::= (buildq([num],
 ( n:2, load(kronecker), load(average), load(herm2),
   for j:1 thru num do
     ( pp : expand(G2[j]*G2[j]),
       p[0] : pp,
       for m:1 while integerp(p[m-1])=false do
         ( mm : m,
           p[m] : if length(p[m-1])=2 and integerp(part(p[m-1],2))=true
                     and length(part(p[m-1],1))=1
                  then average(p[m-1])
                  else map(average, p[m-1]) ),
       var2[j] : p[mm],
       print(j, var2[j], pp) ) )))$

Table 2.5 (continued): MACSYMA Macro to Generate the Two-Dimensional Polynomial Chaoses and Evaluate Their Variance.
    /* kronecker : */
    kdelta(x,y)::=buildq([x,y], if x=y then 1 else 0)$

    /* average : */
    average(p)::=buildq([p],
      ( logexpand:super,
        order:0,
        for i:1 thru n do
          ( z:coeff(log(p),log(x[i])),
            if z#0 then for j:1 thru z do y[j+order]:x[i],
            order:order+z ),
        hom:p/prod(y[i],i,1,order),
        if oddp(order) then 0
        else ( if order=0 then hom
               else if order=2 then hom*kdelta(y[1],y[2])
               else hom*sum(kdelta(y[1],y[i])
                            *prod(y[k],k,2,order)/y[i],
                            i,2,order) ) ))$

    /* herm4 : */
    ( ind:0$
      G4[0]:1$
      gen:exp(-sum(x[i]^2/2,i,1,4))$
      for i:1 thru 4 do
        ( ind:ind+1,
          G4[ind]:expand(diff(gen,x[i])/gen) )$
      for i:1 thru 4 do ( for j:i thru 4 do
        ( ind:ind+1,
          G4[ind]:expand(diff(diff(gen,x[i]),x[j])/gen) ))$
      for i:1 thru 4 do ( for j:i thru 4 do ( for k:j thru 4 do
        ( ind:ind+1,
          G4[ind]:expand(diff(diff(diff(gen,x[i]),x[j]),x[k])/gen) )))$
      for i:1 thru 4 do ( for j:i thru 4 do ( for k:j thru 4 do
        ( for l:k thru 4 do
          ( ind:ind+1,
            G4[ind]:expand(diff(diff(diff(diff(gen,
                           x[i]),x[j]),x[k]),x[l])/gen) ))))$

Table 2.6: MACSYMA Macro to Generate the Four-Dimensional Polynomial Chaoses and Evaluate Their Variance.
    /* square4 : */
    square4(num)::=(buildq([num],
      ( n:4,
        load(kronecker), load(average), load(herm4),
        for j:1 thru num do
          ( pp:expand(G4[j]*G4[j]),
            p[0]:pp,
            mm:0,
            for m:1 while integerp(p[m-1])=false do
              ( mm:m,
                p[m]:if length(p[m-1])=2 and integerp(part(p[m-1],2))=true
                        and length(part(p[m-1],1))=1
                     then average(p[m-1])
                     else map(average,p[m-1]) ),
            var4[j]:p[mm],
            print(j,var4[j],pp) ) )))$

Table 2.6 (continued): MACSYMA Macro to Generate the Four-Dimensional Polynomial Chaoses and Evaluate Their Variance.
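The variances computed by the square macros above can be cross-checked numerically. The following Python sketch (not part of the original text; all helper names are ours) verifies, for the one-dimensional case, that the variance E[Psi^2] of the nth Hermite polynomial chaos He_n(xi), with xi a standard Gaussian variable, equals n!. Expectations are taken exactly using the Gaussian moments E[xi^(2k)] = (2k-1)!!.

```python
# Python analogue (not from the book) of what the MACSYMA "square"
# macros compute: the variance E[Psi^2] of each polynomial chaos.
from math import factorial

def hermite(n):
    """Coefficient list (low order first) of the probabilists'
    Hermite polynomial He_n, via He_{k+1} = x He_k - k He_{k-1}."""
    h_prev, h = [1.0], [0.0, 1.0]          # He_0, He_1
    if n == 0:
        return h_prev
    for k in range(1, n):
        nxt = [0.0] + h                     # multiply by x (shift)
        for i, c in enumerate(h_prev):
            nxt[i] -= k * c                 # subtract k * He_{k-1}
        h_prev, h = h, nxt
    return h

def gaussian_expectation(poly):
    """E[p(xi)] for xi ~ N(0,1), using E[xi^(2k)] = (2k-1)!!."""
    total = 0.0
    for p, c in enumerate(poly):
        if p % 2 == 0:                      # odd moments vanish
            moment = 1.0
            for m in range(1, p, 2):
                moment *= m
            total += c * moment
    return total

def square(poly):
    """Coefficient list of poly(x)^2 (polynomial convolution)."""
    out = [0.0] * (2 * len(poly) - 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(poly):
            out[i + j] += a * b
    return out

for n in range(6):
    var = gaussian_expectation(square(hermite(n)))
    assert abs(var - factorial(n)) < 1e-9   # E[He_n^2] = n!
```

For instance, He_2 = x^2 - 1 squares to x^4 - 2x^2 + 1, whose Gaussian expectation is 3 - 2 + 1 = 2 = 2!, in agreement with the variance printed by square2.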
Chapter 3
STOCHASTIC FINITE ELEMENT METHOD: Response Representation
3.1 Preliminary Remarks
Recent analytical methods for addressing the problem of system stochasticity in the context of structural mechanics have involved either solving for the second order statistics of the response or implementing a First-Order or a Second-Order Reliability Method (FORM/SORM) (Der Kiureghian et al., 1987a) algorithm in conjunction with a finite element code. Monte Carlo simulation methods have also been investigated towards improving their efficiency and versatility (Shinozuka and Astill, 1972; Shinozuka and Lenoe, 1976; Polhemus and Cakmak, 1984; Shinozuka, 1987; Spanos and Mignolet, 1987). However, due to the substantial requisite computational effort, the Monte Carlo simulation method is still used mainly to verify other approaches. The perturbation method (Nakagiri and Hisada, 1982; Liu, Besterfield and Belytschko, 1988) and the Neumann expansion method (Shinozuka and Nomoto, 1980; Adomian and Malakian, 1980; Benaroya and Rehak, 1987; Shinozuka, 1987) have been developed and shown to provide acceptable results for small random fluctuations in the material properties. Results from these two methods have been restricted to second order statistics of the response quantities. Obviously, this information does not suffice for a meaningful risk assessment and failure analysis. In a different
context, a stochastic finite element method has been developed that couples response surface techniques with deterministic finite element formulations. The emphasis in these methods has been on developing efficient search algorithms for locating the point, termed the design point, at which the response surface is to be expanded in a first or second order Taylor series (Der Kiureghian et al., 1986, 1987; Cruse et al., 1988) and on formulating the problem using equivalent Gaussian variables (Wu, 1987; Wirsching and Wu, 1987; Wu and Wirsching, 1987). Following this approach, a number of finite element simulations are performed, the solutions of which define an approximation to the response surface. From this approximation, the design point is subsequently determined. Once located, the distance from this point to the origin can be used to obtain a rough approximation to the probability of failure. The accuracy of this approximation deteriorates rapidly as the dimension of the space increases. A number of stochastic finite element codes have emerged from these research efforts. They all utilize a deterministic finite element code which is coupled with either a standard optimization algorithm or a search algorithm that is customized to implement the specific reliability concept being adopted. Of interest are the codes NESSUS/EXPERT, being developed at the Southwest Research Institute in collaboration with NASA-Lewis Research Center (Millwater et al., 1988; Millwater et al., 1989; Cruse and Chamis, 1989), and CALREL-FEAP, being developed at UC Berkeley (Liu et al., 1989). Note that alternative methods to describe the response surface that do not rely on a Taylor series expansion have been proposed in the literature. Of these, the polynomial approximation suggested by Grigoriu (1982) is worth mentioning. It relies on fitting a polynomial to a number of points on the response surface. Each one of these points, however, is obtained as the solution to a finite element simulation.
Alternatively, methods based on the optimal feature extraction properties of the Karhunen-Loeve expansion have recently been added to the arsenal of approaches for addressing the associated class of problems. The Karhunen-Loeve decomposition has been coupled with either a Neumann expansion scheme (Spanos and Ghanem, 1989) or a Polynomial Chaos expansion in conjunction with a Galerkin projection (Ghanem and Spanos, 1990) to achieve an efficient implementation of the randomness into the solution procedure. In this chapter the deterministic finite element method is first briefly reviewed. Then, the nonspectral methods for solving a class of stochastic problems are discussed. Finally, two spectral methods involving the Karhunen-Loeve expansion and the concept of Polynomial Chaos for stochastic finite element analysis are examined in detail. It is noted that the prime theme of this chapter is an expeditious representation of the system response itself; the determination of the statistics of the response will be addressed in Chapter IV.
3.2 Deterministic Finite Elements

3.2.1 Problem Definition
In the deterministic case, the space Omega of elementary events is reduced to a single element, coinciding with the actual realization of the problem. It is obvious, then, that this is a special case of the problem defined in Chapter 1. For convenience, and omitting the randomness argument theta, equation (1.12) is rewritten as

    L(x) [ u(x) ] = f(x) ,   x in D .   (3.1)
In the following, two equivalent formulations for deterministic finite element analysis are presented. They are the variational formulation and the Galerkin formulation. The stochastic finite element formulation as introduced in the sequel is based on a Galerkin projection in the space Theta of random variables. However, the variational formulation is presented to draw a physically appealing analogy between the mechanical energy of a system, as given by its potential energy, and the uncertainty energy of a system, as given by its information entropy. Note that the exposition in this section is meant to outline some of the basic concepts of the deterministic finite element method that are relevant to the stochastic case; it is not meant to provide an account of the state-of-the-art of finite element techniques. Equation (3.1) can be viewed as a mapping from the space over which the response u(x) is defined to the space over which the excitation f(x) is defined. All the excitations dealt with in the sequel, as well as in most engineering applications, have finite energy. That is,
    Int_D f(x)^2 dx < infinity .   (3.2)
The range of the operator L(x) is thus the space of all squareintegrable functions. The domain of L(x), that is the space spanned by the solution, is
obviously determined by the special form of L(x), as well as by the boundary conditions and the initial conditions associated with the physical problem. Assuming that L(x) is an mth order differential operator, let C^m denote the space of all functions that are m times differentiable. Then, the solution space is some subspace of C^m whose elements satisfy the associated homogeneous essential boundary conditions of the problem. The exact nature of this subspace depends on the specific finite element formulation employed and will not be dwelled upon any further. The solution function u(x) can be expanded along a basis in this space, and equation (3.1) becomes

    L(x) [ Sum_{i=1}^infinity u_i g_i(x) ] = Sum_{i=1}^infinity u_i L(x) [ g_i(x) ] = f(x) ,   (3.3)

where u_i is the component of the solution u(x) along the basis function g_i(x). In the sequel, the above summations are truncated at the Nth term, and the problem then is to compute the coordinates {u_i} of the response u(x) with respect to the finite-dimensional basis {g_i(x)}, i = 1, ..., N.

3.2.2 Variational Approach

Variational principles were introduced and studied well before the introduction of the finite element method. The theory has become an integral part of functional analysis with a solid mathematical foundation. The physical meaning of the corresponding minimization problem, although quite helpful, is no longer necessary, as long as the operator describing the system satisfies certain conditions of self-adjointness (Rektorys, 1980). In fact, it may be shown that to every self-adjoint operator equation is associated a quadratic functional whose stationary value coincides with the solution of the equation. Physically speaking, a self-adjoint operator results from situations where reciprocity, as given by Betti's law (Shames and Dym, 1985) for example, is applicable. Such is the case with most linear differential operators of even order. Then, the solution to equation (3.1) is that function u(x) which minimizes the functional

    I[v(x)] = ( L(x) [ v(x) ] , v(x) ) - 2 ( f(x) , v(x) ) ,   (3.4)

where ( . , . ) denotes a suitable inner product. Indeed, setting the functional derivative of I[v(x)] with respect to v(x) to zero results in

    L(x) [ v(x) ] - f(x) = 0 .   (3.5)

Physically, the functional I[v(x)] corresponds to a measure of the energy in the system, which assumes its stationary value at the true response. Substituting equation (3.3) into equation (3.4) gives

    I[v(x)] = Sum_{i=1}^N Sum_{j=1}^N [ v_i v_j ( L(x) [ g_i(x) ] , g_j(x) ) ] - 2 Sum_{i=1}^N v_i ( f(x) , g_i(x) ) .   (3.6)

Setting the variation of I[v(x)] to zero, and recognizing that u(x) = v(x) at the stationary point of I[.], gives

    dI[v(x)] / dv_j | v(x)=u(x)  =  2 Sum_{i=1}^N ( L(x) [ g_i(x) ] , g_j(x) ) u_i - 2 ( f(x) , g_j(x) ) = 0 ,   j = 1, ..., N .   (3.7)

Equation (3.7) can be expressed as a system of algebraic equations,

    L u = f ,   (3.8)

where

    L_ij = ( L(x) [ g_i(x) ] , g_j(x) )   (3.9)

and

    f_i = ( f(x) , g_i(x) ) .   (3.10)

Note that as given by equation (3.9), the basis {g_i(x)}, i = 1, ..., N, must span an N-dimensional subspace of C^m. At this point, it may be noted that by performing the integration indicated by equation (3.9) by parts, the operator L(x)[.] can be split into two lower order operators as in

    L_ij = ( L_1(x) [ g_i(x) ] , L_2(x) [ g_j(x) ] ) ,   (3.11)

where L_1(x)[.] and L_2(x)[.] are two appropriate operators of lower order than L(x)[.].
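The discrete system (3.8)-(3.10) may be made concrete with a short numerical sketch (not part of the original text). For the model operator L = -d^2/dx^2 on D = (0,1) with u(0) = u(1) = 0, f(x) = 1, and piecewise-linear hat functions g_i, the split of equation (3.11) gives L_ij = Int g_i'(x) g_j'(x) dx, a tridiagonal matrix; the node count and tolerances below are illustrative choices.

```python
# Sketch (not from the book): assemble and solve L u = f, equations
# (3.8)-(3.10), for -u'' = 1 on (0,1) with hat-function basis g_i.
# After integration by parts (3.11): L_ij = int g_i' g_j' dx,
# giving 2/h on the diagonal and -1/h off it; f_i = int 1 * g_i dx = h.

def solve_fem(n):
    """Galerkin/variational solution at n interior nodes."""
    h = 1.0 / (n + 1)
    diag = [2.0 / h] * n
    off = [-1.0 / h] * (n - 1)
    rhs = [h] * n                       # exact load integral for f = 1
    # Thomas algorithm for the symmetric tridiagonal system.
    for i in range(1, n):
        w = off[i - 1] / diag[i - 1]
        diag[i] -= w * off[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]
    return u

u = solve_fem(9)                        # nodes at x = 0.1, ..., 0.9
exact = [x * (1 - x) / 2 for x in [0.1 * k for k in range(1, 10)]]
# For this 1-D problem the Galerkin solution is exact at the nodes.
assert all(abs(a - b) < 1e-12 for a, b in zip(u, exact))
```

The nodal exactness observed here is a known 1-D superconvergence property; in general the discrete solution only minimizes the energy functional (3.4) over the subspace spanned by the g_i.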
Thus, integration by parts can be used to enlarge the admissible space.

The variational approach just outlined is used later in this chapter to generate a set of algebraic equations from the governing differential equation. In Chapter IV, an analogy is drawn to this principle for the stochastic case, where the energy norm is replaced by an uncertainty norm. The main point to carry over from this section to the stochastic case is that the solution of the problem can be obtained as the element of a certain Hilbert space of admissible functions that minimizes a certain norm of the energy of the system.

3.2.3 Galerkin Approach

A condition associated with the variational formulation of the finite element method is that the operator L(x)[.] be self-adjoint. This restriction excludes a class of important practical problems. Further, in some cases the variational principle for a given problem may lack the intuitive physical association with the energy principle. It may then be advantageous, or even necessary, to resort to the Galerkin formulation of the finite element method (Zienkiewicz and Taylor, 1989). The method consists of expanding the response function along a basis of a finite dimensional subspace of an admissible Hilbert space (Oden, 1979), and requiring that the error resulting from taking a finite number of terms in the expansion be orthogonal to another Hilbert space, the "test space", whose functions are called "test functions". Ordinarily, the test space is chosen to coincide with the admissible space. Indeed, expanding u(x) in equation (3.1) in terms of a finite dimensional basis {g_i(x)}, i = 1, ..., N, of a subspace of the space C^m introduces an error e_N of the form

    e_N = Sum_{i=1}^N u_i L(x) [ g_i(x) ] - f(x) .   (3.12)

Requiring this error to be orthogonal to the subspace spanned by {g_i(x)}, i = 1, ..., N, yields a set of N algebraic equations,

    Sum_{i=1}^N u_i ( L(x) [ g_i(x) ] , g_j(x) ) = ( f(x) , g_j(x) ) ,   j = 1, ..., N ,   (3.13)

which is similar to equation (3.8). Whether equation (3.9) or equation (3.11) is used to compute L_ij, the result is an equation of the form

    Sum_{i=1}^N L_ij u_i = f_j ,   j = 1, ..., N .   (3.14)

This equation is equivalent to equation (3.8). This method constitutes the most widely used version of the finite element method, and the Galerkin method will form the basis for the spectral extension of the finite element method to problems involving stochastic operators.

3.2.4 p-Adaptive Methods, Spectral Methods and Hierarchical Finite Element Bases

It is customary in finite element analysis to regard the coefficients of the basis functions as representing physically meaningful nodal variables. Usually, the set g_i(x) is chosen to be a set of piecewise polynomials of low order, a choice which generates a natural discretization of the domain D. New degrees of freedom are introduced through mesh refinement, with the new coefficients representing the physical quantities at the new nodal points; thus, as more degrees of freedom are introduced, the approximation error diminishes. This is the so-called h-method, whereby the approximation error is reduced through successive mesh refinement. Recently, a new error reduction technique with better convergence properties has been actively investigated. The so-called p-method consists of reducing the approximation error by using higher order interpolation within each element (Babuska, 1986). Here, the same discrete mesh is used in successive approximations, and the successive interpolation bases over each element are introduced in a hierarchical manner, permitting results from previous approximations to be efficiently used in computing higher order approximations. In the limit, as the number of finite elements is reduced to one, a global approximation is obtained; this limiting case is often referred to as a spectral approximation. In other words, discretization here refers to expanding the functions using a set of basis functions that are globally defined. This approach has some obvious advantages when dealing with functions that are defined over domains that are either too difficult to discretize or too abstract for such a discretization to be intuitively feasible.
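The hierarchical, spectral flavor of p-refinement can be illustrated with a small sketch (not part of the original text). On a single "element" (0,1), the global sine basis sin(k pi x) is hierarchical: enriching the approximation adds new modes while keeping all previous coefficients. For -u'' = x with homogeneous boundary conditions, the Galerkin system happens to be diagonal, so each coefficient can be written in closed form; the sample grid and the chosen p values are illustrative.

```python
# Sketch (not from the book): error reduction under hierarchical
# enrichment for -u'' = x on (0,1), u(0) = u(1) = 0, exact solution
# u(x) = (x - x^3)/6.  With phi_k = sin(k pi x), Galerkin gives the
# diagonal system ((k pi)^2 / 2) u_k = (-1)^(k+1) / (k pi), hence
# u_k = 2 (-1)^(k+1) / (k pi)^3.
import math

def galerkin_solution(p, x):
    """Approximation using the first p hierarchical modes."""
    return sum(2 * (-1) ** (k + 1) / (k * math.pi) ** 3
               * math.sin(k * math.pi * x)
               for k in range(1, p + 1))

def max_error(p):
    pts = [i / 200 for i in range(201)]
    return max(abs(galerkin_solution(p, x) - (x - x ** 3) / 6) for x in pts)

errors = [max_error(p) for p in (1, 2, 4, 8)]
assert errors[0] > errors[1] > errors[2] > errors[3]   # error drops as p grows
```

Raising p enriches the basis without altering the previously computed coefficients, which is precisely the property exploited by hierarchical finite element bases.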
The space Theta of random variables is just such a space. The measure associated with this space is a probability measure that cannot be made to correspond with the Lebesgue measure used in spatial discretization procedures. Still, functions defined over this space can be discretized using an adequate spectral measure. An additional appeal of the spectral formulation is the fact that global approximations can make efficient use of hierarchical basis functions. Interestingly, the set of basis functions used in the ensuing spectral stochastic finite element formulation is hierarchical, and the associated solution procedure can be optimized to make use of this fact.

3.3 Stochastic Finite Elements

3.3.1 Preliminary Remarks

In this section three of the methods used for treating problems involving random media are first discussed. Then, two recently developed methods are presented. The three methods have been selected due to their popularity with various investigators and their compatibility with the finite element method. The theoretical foundation and the numerical implementation for each of these methods are discussed. All three methods are based on performing direct operations on equation (1.12). Examining realizations of a random process, two important features can be observed: the frequency content of the random fluctuations, and the magnitude of the fluctuations. The former feature can be used to situate the random process with respect to white noise; the broader the frequency content, the closer is the process to white noise (Lin, 1967). This feature reflects the level of correlation of the process at two points in its domain. The second feature reflects, to a certain extent, the degree of uncertainty associated with the process and can be related to its coefficient of variation. In view of that, a meaningful definition of the range of applicability of a given method is the range of coefficients of variation and the range of frequency content that the method can accommodate.

3.3.2 Monte Carlo Simulation (MCS)

The Monte Carlo method is a quite versatile mathematical tool capable of handling situations where all other methods fail to succeed. The method has been known and used extensively in various fields such as health care, agriculture, and econometrics. However, in engineering mechanics it has attracted intense attention only recently, following the widespread availability of inexpensive computational systems. This computational availability has triggered the development of sophisticated and efficient simulation algorithms. The usefulness of the Monte Carlo Simulation method (MCS) is based on the fact that the next best situation to having the probability distribution function of a certain random quantity is to have a corresponding large population. The implementation of the method consists of numerically simulating a population corresponding to the random quantities in the physical problem, solving the deterministic problem associated with each member of that population, and obtaining a population corresponding to the random response quantities. This population can then be used to obtain statistics of the response variables. Most of the uses of the MCS have been in the study of random vibrations of deterministic media. Shinozuka and Jan (1972) have had a pioneering role in introducing simulation to the field of engineering mechanics. They have suggested simulating a random process as the superposition of a large number of sinusoids having a uniformly distributed random phase angle. This approach has been successfully used in a variety of problems for the simulation of earthquake records, sea-wave elevations, and various other random phenomena. Later, Shinozuka used an FFT algorithm in conjunction with the MCS to achieve a more efficient implementation of the simulation procedure (Shinozuka, 1974). Subsequent developments have involved the application of Wiener's filtering theory to problems of structural mechanics. The main idea here is to view the process to be realized as the output of a linear filter excited by white noise. The problem is then reduced to computing a set of coefficients that specify the filter, which can then be excited with simulated white noise to produce the desired realizations. Various optimality criteria can be used in the design of these filters (Rabiner and Gold, 1975; Spanos and Hansen, 1981; Spanos and Mignolet, 1986); for a review, see Spanos and Mignolet (1989). One of the first applications involving a random medium was presented by Shinozuka and Lenoe (1976), whereby a two-dimensional FFT algorithm was used to translate a two-dimensional random plate problem into a format compatible with the finite element method.
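The simulation idea attributed above to Shinozuka and Jan (1972) can be sketched in a few lines of Python (not part of the original text). A zero-mean stationary process is realized as a superposition of sinusoids with uniformly distributed random phases; its variance is the sum of A_k^2 / 2 over the retained amplitudes. The amplitudes, frequencies, sample point, and tolerances below are illustrative assumptions.

```python
# Sketch (not from the book) of random-phase sinusoid superposition:
#   a(x, theta) = sum_k A_k cos(w_k x + phi_k),  phi_k ~ U(0, 2 pi),
# whose pointwise variance is sum_k A_k^2 / 2.
import math, random

random.seed(0)
amps = [0.5, 0.3, 0.2]              # hypothetical spectral weights
freqs = [1.0, 2.5, 4.0]             # hypothetical frequencies
target_var = sum(a * a / 2 for a in amps)

def realization():
    """One sample function a(., theta) of the process."""
    phis = [random.uniform(0, 2 * math.pi) for _ in amps]
    return lambda x: sum(a * math.cos(w * x + p)
                         for a, w, p in zip(amps, freqs, phis))

# Monte Carlo estimate of the mean and variance of a(x) at x = 0.7.
samples = [realization()(0.7) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

assert abs(mean) < 0.02             # zero-mean process
assert abs(var - target_var) < 0.02 # variance matches the spectrum
```

In an MCS study of a random medium, each such realization would feed one deterministic finite element solve, and the resulting population of responses would be processed statistically.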
Subsequent applications to problems of statics and dynamics involving random media were again carried out by Shinozuka and a number of co-workers (1980, 1985, 1986, 1987). Further, the digital simulation of non-Gaussian processes has been investigated (Yamazaki, 1987), and results have also been obtained for the simulation of two-dimensional processes (Mignolet, 1987). As far as using the MCS in problems involving random media is concerned, it involves numerically generating realizations of the random processes alpha_k(x, theta), either as local averages, as discussed in section (2.2), or by representing each process as the response of a linear filter to a white noise excitation. The first approach is usually used when spatial randomness is involved, whereas the second approach is preferred for temporal random fluctuations. Realizations of the processes are obtained for a fixed value of theta, and the resulting deterministic equation is solved for u(alpha(x, theta), x); this is obviously a collocation scheme in the space Omega of elementary random events. The procedure is repeated a number of times for different values of theta, and the substantial computational cost of the approach is apparent. Ordinarily, the MCS is used as a brute force technique for assessing the validity of other approaches. Over a period of years, the application of the MCS to problems of structural mechanics has involved one-dimensional Gaussian processes. Liu et al. (1985, 1986, 1988) have applied the method to linear and nonlinear problems of statics and dynamics.

3.3.3 Perturbation Method

The perturbation approach as applied to problems of random media is an extension of the method used in nonlinear analysis (Nayfeh, 1973; Jordan and Smith, 1977). To outline the method, it is assumed that the random process representing the system variability has been modeled by r random variables and that the excitation is deterministic. In this case, equation (3.15) becomes

    [ L(x) + Pi(alpha(theta), x) ] [ u(alpha(theta), x) ] = f(x) .   (3.16)

Assuming small random deviations of the variables alpha_k(theta) from their mean values, the functions and operators involved can be expanded in a Taylor series about their respective mean values. Expanding Pi(alpha(theta), x) and u(alpha(theta), x) about their mean values, and noting that alpha(theta) is a zero-mean random vector, leads to

    Pi(alpha(theta), x) = Pi(x) + Sum_{i=1}^r alpha_i(theta) d Pi(alpha(theta), x) / d alpha_i(theta) + ...   (3.17)

and

    u(alpha(theta), x) = u(x) + Sum_{i=1}^r alpha_i(theta) d u(alpha(theta), x) / d alpha_i(theta)
                              + Sum_{i=1}^r Sum_{j=1}^r alpha_i(theta) alpha_j(theta) d^2 u(alpha(theta), x) / d alpha_i(theta) d alpha_j(theta) + ... ,   (3.18)

where all the partial derivatives are evaluated at the mean value (zero) of the random variables {alpha_i(theta)}, i = 1, ..., r. The larger the magnitude of the random fluctuations, the more terms should be included in equations (3.17) and (3.18). Substituting back into equation (3.16), a multidimensional polynomial in alpha_k(theta) is obtained; the right-hand side is obviously a polynomial of order zero in alpha. Equating polynomials of the same order on both sides yields a set of equations to be solved sequentially for the successive derivatives of the response. The first two terms of this succession are

    L(x) [ u(x) ] = f(x)   (3.19)

and

    L(x) [ d u(alpha(theta), x) / d alpha_i(theta) ] + [ d Pi(alpha(theta), x) / d alpha_i(theta) ] [ u(x) ] = 0 ,   i = 1, ..., r .   (3.20)
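The flavor of the first order perturbation estimates can be conveyed by a minimal sketch (not part of the original text) for a single degree of freedom, K(theta) u = F with K(theta) = K0 (1 + alpha), alpha a zero-mean random variable of small standard deviation. To first order, u is approximately u0 (1 - alpha) with u0 = F / K0, so E[u] is approximately u0 and Var[u] is approximately u0^2 Var[alpha]. All numerical values below are illustrative assumptions.

```python
# Sketch (not from the book): first-order perturbation statistics for
# K(theta) u = F, K(theta) = K0 (1 + alpha), compared against a
# brute-force Monte Carlo reference with Gaussian alpha.
import random

K0, F = 2.0, 1.0
u0 = F / K0
sigma = 0.05                          # small fluctuation: perturbation regime

# First-order perturbation estimates of the response statistics.
mean_pert = u0
var_pert = u0 ** 2 * sigma ** 2

# Monte Carlo reference population.
random.seed(1)
samples = [F / (K0 * (1 + random.gauss(0.0, sigma))) for _ in range(200000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / len(samples)

assert abs(mean_mc - mean_pert) < 5e-3        # small bias, of order sigma^2
assert abs(var_mc - var_pert) / var_pert < 0.05
```

Repeating the experiment with a larger sigma makes the neglected higher order terms visible, which is the regime in which the perturbation method loses accuracy.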
Once several of the sequential equations (3.19) and (3.20) have been solved, the response process can be written formally as

    u(alpha(theta), x) = u(x) + Sum_{i=1}^r alpha_i(theta) d u(alpha(theta), x) / d alpha_i(theta) + ... ,   (3.21)

where all the partial derivatives are evaluated at the mean value (zero) of the random variables {alpha_i(theta)}, i = 1, ..., r. The second moment of the response process can be obtained, at least conceptually, upon multiplying equation (3.21) with its conjugate and averaging. It is noted at this point that the convergence of the series (3.21) is not guaranteed, even for small levels of random fluctuations, due to the presence of secular terms in higher order expansions; these are terms the magnitude of which increases with increasing approximation order, thus causing the expansion to diverge. The difficulties associated with the computation are such that only first order second moment statistics are usually computed, and it is obvious that the method cannot be readily extended to compute the probability distribution function of the response process. Good results have been obtained for narrow band random fluctuations of small magnitude, thus greatly limiting the applicability of the method.

The case where the random aspect of the problem is modeled as a random process can be reduced to the previous case by recalling the discussion in Chapter II concerning the local average representation of random processes. That is, prior to performing the perturbation expansion, a random process can be replaced by the random variables corresponding to its spatial averages over a number of subdomains of its domain of definition. Usually, these subdomains are the same as the elements used in the finite element analysis. Note that the number of random variables involved in this case is equal to the number of subdomains over which the averaging is performed. Further, note that the higher the frequency of the random fluctuations, the more random variables are needed to represent the process; more elements are then required, and the problem can become quite large. For implementing the perturbation method into a finite element analysis, either the variational principle or the Galerkin approach can be used directly on equations (3.19) and (3.20), yielding a sequential system of algebraic equations to be solved for the successive derivatives of the response. The amount of computation required to perform the indicated operations becomes prohibitively large rather quickly as more terms are included in equations (3.19) and (3.20). Nakagiri and Hisada (1982) and Hisada and Nakagiri (1985), analyzing a single-degree-of-freedom system and using the symbolic manipulation program MACSYMA, were not able to include more than the second term in the expansion; the implementation of higher order terms, even second order terms, proved quite laborious.

3.3.4 Neumann Expansion Method

From an operator-theoretic perspective, equation (3.15) reflects the problem of computing the inverse of a given operator. It is well known that when it exists, the inverse of an operator, called the resolvent, can be expanded in a convergent series in terms of the iterated kernels (Mikhlin, 1957). The theory was developed by Neumann and was further investigated by Fredholm (1903). Subsequent extensions of the theory include the concept of the generalized inverse (Nashed, 1976). The concept was applied to the solution of stochastic operator equations by Bharucha-Reid (1959), and subsequent contributions were made by Adomian (1983), Adomian and Malakian (1980), and Benaroya and Rehak (1987). The treatment was largely theoretical until Shinozuka and Nomoto (1980) introduced the concept to the field of structural mechanics. The Neumann expansion method consists of expressing the solution of equation (3.15) as the series

    u(alpha(x, theta), x) = Sum_{i=0}^infinity (-1)^i [ L^{-1}(x) Pi(alpha(x, theta)) ]^i L^{-1}(x) [ f(x, theta) ] .   (3.22)

To guarantee the convergence of the series (3.22), it is necessary that the following criterion be satisfied,

    || L^{-1}(x) Pi(alpha(x, theta)) || < 1 .   (3.23)

In terms of the comparison criterion defined at the beginning of this chapter, order here refers to the order of the polynomial in alpha_i(x, theta) included in the expansion. Shinozuka and co-workers coupled the Neumann expansion with the Monte Carlo simulation method to produce an efficient algorithm, and the method has been applied extensively in recent years to problems involving random media. Yamazaki et al. (1985) suggested applying a Monte Carlo approach to equation (3.22) to simulate the processes alpha_k(x, theta), and hence the operator Pi(alpha(x, theta)), appearing in equation (3.15).
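A small numerical sketch (not part of the original text) of the matrix form of the Neumann series, equations (3.24)-(3.25) below, may be helpful. For one realization of a hypothetical fluctuation matrix Pi with || L^{-1} Pi || < 1, the partial sums of the series converge to the direct solution of (L + Pi) U = F; the matrices and the number of retained terms are illustrative.

```python
# Sketch (not from the book) of the Neumann expansion
#   U = sum_k (-1)^k (L^-1 Pi)^k L^-1 F,  valid when ||L^-1 Pi|| < 1,
# for a 2x2 example, checked against a direct solve.

def mat_vec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]

def solve2(a, b):
    """Direct 2x2 solve by Cramer's rule (reference and L^-1 action)."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

L = [[4.0, 1.0], [1.0, 3.0]]          # deterministic mean operator
Pi = [[0.4, 0.1], [0.0, 0.3]]         # one realization of the fluctuation
F = [1.0, 2.0]

# Accumulate the series: term_{k+1} = -L^-1 Pi term_k, term_0 = L^-1 F.
term = solve2(L, F)
U = term[:]
for _ in range(50):
    term = [-t for t in solve2(L, mat_vec(Pi, term))]
    U = [u + t for u, t in zip(U, term)]

U_exact = solve2([[L[i][j] + Pi[i][j] for j in range(2)] for i in range(2)], F)
assert all(abs(a - b) < 1e-12 for a, b in zip(U, U_exact))
```

Note that only the deterministic matrix L is ever inverted; the random fluctuation Pi enters solely through matrix-vector products, which is the computational appeal of the expansion.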
simulate the processes ak(x,θ) appearing in equation (3.15), which is rewritten here for convenience. At this point, in view of the discussion of section (3.3.1), the same discussion presented for the perturbation method is pertinent. Namely, the processes are replaced by their local averages over the finite elements; replacing each random process by its local average over each finite element yields the random matrices appearing in the discretized equations. With regard to representing the random processes, simulation can be used to numerically generate realizations of the random matrices, producing a statistical population for the response vector U, from which various statistics can be obtained. Note that, with this implementation sequence, the method still requires simulation, a fact that necessitates several runs of the simulated problem in order to assess the reliability of the results. Further, it can be seen that it is quite difficult to extend the method to obtain higher order moments than the first two.

Specifically, the equations of the discretized system take the form

[ L + Π ] U = F ,   (3.24)

where L is an n x n deterministic matrix, Π is an n x n random matrix function of the random variables {ai(θ)}, and F and U are n-dimensional vectors representing the excitation and the response, respectively; here, n denotes the number of degrees of freedom in the finite element mesh. A Neumann expansion of the inverse operator may then be performed, leading to

U = Σ_{k=0}^{∞} ( - L^{-1} Π )^k L^{-1} F .   (3.25)

In this manner, only the deterministic average operator L needs to be inverted, and the successive orders in the expansion can be obtained numerically. Equation (3.25) is valid provided that

|| L^{-1} Π || < 1 ,   (3.26)

where ||.|| denotes some norm in R^{n x n}. The result is an explicit expansion for the response process. In the next section an improved Neumann expansion is presented in conjunction with a Galerkin based finite element method. The method makes it possible to compute the moments of the response in an explicit form; in particular, the simulation required by the method described in this section is circumvented.

3.3.5 Improved Neumann Expansion

In this section the Karhunen-Loeve expansion presented in Chapter II is implemented in the Galerkin formulation of the finite element method. Let D denote a domain in R^n and let w(x,θ), θ ∈ Θ, denote a vector of random properties of this domain. Here, n refers to the physical dimension of the problem, and D to the actual domain occupied by the object being investigated. These properties are assumed to be the realization of a second order vector random process. Specifically, it is assumed that w(x,θ) is represented by its mean value w̄(x) and its covariance matrix [ Cww(x1,x2) ]. The ij-th component of this matrix represents the cross-correlation function between the processes wi(x,θ) and wj(x,θ) taken at x = x1 and x = x2, respectively. Also, it is assumed that the medium is subjected to a set of deterministic boundary conditions.

Let the domain D be subjected to a general external excitation denoted by f(x,θ). In general, the response vector u(x,θ) is related to the external excitation through a transformation defined by the medium D and its physical properties, prescribed by the operator equation

Λ(x,θ) [ u(x,θ) ] = f(x,θ) ,  x ∈ D ,   (3.27)

subject to a set of boundary conditions

B(x,θ) [ u(x,θ) ] = 0 ,  x ∈ ∂D ,   (3.28)

where Λ is a random operator and ∂D denotes the boundary of the domain D. To avoid obscuring the notation, and to be able to proceed without digressing to a specific problem, it is assumed that only one parameter of the medium is random and that it appears as a multiplicative factor in the operator. In this case the covariance matrix reduces to a single function Cww(x1,x2), and equations (3.27)-(3.28) become

[ L(x) + a(x,θ) R(x) ] [ u(x,θ) ] = f(x,θ) ,  x ∈ D ,   (3.29)

such that equation (3.28) is satisfied on ∂D. It is obvious that u(x,θ) belongs to the Hilbert space of functions satisfying equation (3.29) in D and equation (3.28) on ∂D. However, this space may be extended, through integration by parts, to include functions not smooth enough to be operated upon as required by equation (3.29).
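The truncated Neumann series (3.25) and its convergence condition (3.26) can be illustrated with a small numerical sketch. The matrices below are illustrative stand-ins (not taken from the text): a diagonal mean operator L and a small random perturbation Π for one realization of the random variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative deterministic mean operator L and random perturbation Pi
# for a single realization; n = 4 degrees of freedom.
n = 4
L = np.diag([4.0, 5.0, 6.0, 7.0])
Pi = 0.1 * rng.standard_normal((n, n))
F = np.ones(n)

# Convergence condition (3.26): spectral norm of L^{-1} Pi must be < 1.
Linv = np.linalg.inv(L)
assert np.linalg.norm(Linv @ Pi, 2) < 1.0

# Truncated Neumann expansion (3.25): U = sum_k (-L^{-1} Pi)^k L^{-1} F.
U = np.zeros(n)
term = Linv @ F
for _ in range(30):
    U += term
    term = -Linv @ Pi @ term

# The series converges to the direct solution of [L + Pi] U = F.
U_direct = np.linalg.solve(L + Pi, F)
print(np.allclose(U, U_direct))
```

Only the deterministic matrix L is ever inverted, which is the computational appeal of the expansion noted above.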
In equation (3.29), the randomness appears through the process a(x,θ), and R(x) is a deterministic operator. For the solution, either the variational or the Galerkin method may be used. Here, a(x,θ) is expressed in its Karhunen-Loeve expansion, as discussed in section (3.1), in the form

a(x,θ) = Σ_{n=1}^{M} sqrt(λn) ξn(θ) an(x) ,   (3.31)

where (λn, an(x)) are the eigenvalue/eigenvector doublets corresponding to the covariance function of a(x,θ), and {ξn} is a set of orthonormal random variables. Substituting equation (3.31) into equation (3.29), and dropping the argument θ for convenience, gives

[ L(x) + Σ_{n=1}^{M} sqrt(λn) ξn an(x) R(x) ] u(x) = f(x) .   (3.32)

Denote by gi(x) a basis for the admissible space corresponding to the operators L(x) and R(x). Specifically, let gi(x) be the set of piecewise polynomials of degree high enough to allow it to span the admissible space. This choice for the basis induces a discretization of the domain D and makes the coefficients of the generalized coordinates gi(x) directly related to the components of u(x,θ) at the nodes of the induced mesh. Multiplying equation (3.32) throughout by gj(x) and integrating over D yields

∫_D [ L(x) u(x) ] gj(x) dx + Σ_{n=1}^{M} sqrt(λn) ξn ∫_D an(x) [ R(x) u(x) ] gj(x) dx = ∫_D f(x) gj(x) dx .   (3.34)

The response can be written as

u(x,θ) = Σ_{i=1}^{N} Ui(θ) gi(x) ,   (3.35)

where Ui(θ) denotes the random magnitude of the ith degree of freedom. Substituting equation (3.35) into equation (3.34), and writing the resulting equation for all values of j = 1, ..., N, leads to a set of N equations to be solved for the random response at the various nodes,

Σ_{i=1}^{N} [ Rij + Σ_{n=1}^{M} ξn K_{ij}^{(n)} ] Ui = fj ,   (3.36)

where the matrices are defined by equations (3.37)-(3.39),

Rij = ∫_D [ L(x) gi(x) ] gj(x) dx ,   (3.37)

K_{ij}^{(n)} = sqrt(λn) ∫_D an(x) [ R(x) gi(x) ] gj(x) dx ,   (3.38)

and

fj = ∫_D f(x) gj(x) dx .   (3.39)

Note that, with the choice of the set gi(x) as piecewise polynomials, the integrals over D in the previous equations reduce to integrals over a small number of elements. At this stage the boundary conditions may be imposed on R and each of the K^{(n)} matrices individually. Note also that if, instead, a(x,θ) is expanded along the same basis as u(x,θ), in terms of its pointwise realization, the resulting equation would involve a random matrix with fully correlated elements. This approach forms the basis for the methods discussed at the beginning of this chapter. A detailed account of this procedure is deferred to the next section, where the problem is again encountered.
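In practice, the eigenpairs (λn, an(x)) required by the expansion (3.31) are computed numerically from the covariance function. The following is a minimal discrete sketch, assuming an exponential covariance model on a one-dimensional unit interval; the model, correlation length, and grid are illustrative choices, not taken from the text.

```python
import numpy as np

# Discretize an assumed exponential covariance C(x1, x2) = sigma^2 *
# exp(-|x1 - x2| / b) on a 1-D grid; sigma and b are illustrative.
N, sigma, b = 200, 1.0, 0.5
x = np.linspace(0.0, 1.0, N)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / b)

# Discrete analogue of the eigenvalue problem for the covariance kernel:
# the eigenvectors play the role of a_n(x), the eigenvalues of lambda_n.
lam, a = np.linalg.eigh(C)
lam, a = lam[::-1], a[:, ::-1]          # sort in descending order

# A truncated expansion with M terms captures most of the variance.
M = 10
print(lam[:M].sum() / lam.sum())        # fraction of total variance
```

A realization of the process is then obtained by drawing orthonormal variates ξn and forming Σ sqrt(λn) ξn an(x), exactly as in equation (3.31).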
The development in the previous section may be considered as a useful modification of existing techniques. However, it is limited in its applicability by equation (3.26). Clearly, given the range of applications encountered in practice, it is desirable to devise a formulation which is more versatile. In addition, all the methods described previously lack the geometrical appeal that underlies the deterministic finite element method. In this section a formulation of the problem exhibiting this attribute is introduced as a natural extension of the geometrical concepts of Hilbert spaces that form the basis for the deterministic finite element method. Two approaches are described that embody these concepts and lead to identical final equations.

3.3.6 Projection on the Homogeneous Chaos

Equations (3.29) and (3.30) constitute the starting point. Expanding a(x,θ) in a Karhunen-Loeve series gives

[ L(x) + Σ_{n=1}^{M} ξn gn(x) R(x) ] u(x,θ) = f(x,θ) ,

where gn(x) = sqrt(λn) an(x). Symbolically, the behavior of the discretized system is governed by an equation formally written as

K u = g .   (3.40)

Normalizing this equation, by multiplying throughout by the inverse of the deterministic part of K, gives

[ I + Σ_{n=1}^{M} ξn Q^{(n)} ] u = g .   (3.41)

Equation (3.41) suggests that, in general, the response vector u is a nonlinearly filtered version of the ξn. Formally,

[ I + W[{ξn}] ] u = g ,   (3.44)

where W[{ξn}] is some functional of its arguments. A straightforward scheme for obtaining the response vector from equation (3.41) relies on performing a Neumann expansion for the inverse operator, to derive

u = g - Σ_{n=1}^{M} ξn Q^{(n)} g + Σ_{m=1}^{M} Σ_{n=1}^{M} ξm ξn Q^{(m)} Q^{(n)} g - Σ_{k=1}^{M} Σ_{m=1}^{M} Σ_{n=1}^{M} ξk ξm ξn Q^{(k)} Q^{(m)} Q^{(n)} g + ... .   (3.47)

In terms of the Q^{(n)}, the results obtained by the preceding expansion are identical, in the limit as M → ∞, with the results obtained from a direct Neumann expansion solution as described in section (3.3.4). The present formulation, however, provides an expression which is computationally tractable and amenable to automation.

Assuming, without loss of generality, that u(x,θ) is a second order process, it lends itself to a Karhunen-Loeve expansion of the form

u(x,θ) = Σ_{j=1}^{∞} χj(θ) bj(x) ,   (3.49)

which, truncated at the Lth term, becomes

u(x,θ) = Σ_{j=1}^{L} χj(θ) bj(x) ,   (3.50)

where the bj(x) are the eigenfunctions of the covariance function of the response and the χj(θ) are the corresponding random coefficients.
Obviously, the covariance function Cuu(x1,x2) of the response process is not known at this stage. Therefore, the eigenfunctions bj(x) and the random variables χj(θ) are also not known. Further, u(x,θ), not being a Gaussian process, the set χj(θ) is not a Gaussian vector. Thus, equation (3.50) is of little use in its present form. Relying on the discussion of section (2.4) concerning the Homogeneous Chaos, the second order random variables χj(θ) can be represented by the mean-square convergent expansion

χj(θ) = a0^{(j)} Γ0 + Σ_{i1=1}^{∞} a_{i1}^{(j)} Γ1(ξi1) + Σ_{i1=1}^{∞} Σ_{i2=1}^{i1} a_{i1 i2}^{(j)} Γ2(ξi1, ξi2) + Σ_{i1=1}^{∞} Σ_{i2=1}^{i1} Σ_{i3=1}^{i2} a_{i1 i2 i3}^{(j)} Γ3(ξi1, ξi2, ξi3) + ... ,   (3.52)

where the a_{i1 ... ip}^{(j)} are deterministic constants independent of θ and Γp(ξi1, ..., ξip) is the pth order Homogeneous Chaos. Given the number M of terms used in the Karhunen-Loeve expansion and the order p of the Homogeneous Chaos used, equation (3.52) is truncated after the pth polynomial and rewritten, for convenience, in the form

χj(θ) = Σ_{i=0}^{P} x_i^{(j)} Ψi[{ξr}] ,   (3.53)

where the x_i^{(j)} and the Ψi[{ξr}] are identical to the a_{i1...ip}^{(j)} and the Γp(ξi1, ..., ξip), respectively, and P + 1 is the total number of Polynomial Chaoses used in the expansion, including the zeroth order term. This number may be determined by the equation

P + 1 = 1 + Σ_{s=1}^{p} (1/s!) Π_{r=0}^{s-1} (M + r) .   (3.54)

Table 3.1 shows the resulting size of the expansion for selected combinations of p and M.

Table 3.1: Size of the extended system corresponding to selected values of M and p (M = order of the Karhunen-Loeve expansion, p = order of the Homogeneous Chaos expansion).

    p      M = 2    M = 4    M = 6
    0        1        1        1
    1        3        5        7
    2        6       15       28
    3       10       35       84
    4       15       70      210

Substituting equation (3.53) for χj(θ), equation (3.50) becomes

u(x,θ) = Σ_{j=1}^{L} Σ_{i=0}^{P} x_i^{(j)} Ψi[{ξr}] bj(x) .   (3.55)

Changing the order of summation in equation (3.55) gives

u(x,θ) = Σ_{i=0}^{P} Ψi[{ξr}] Σ_{j=1}^{L} x_i^{(j)} bj(x) ,   (3.56)

which may be written as

u(x,θ) = Σ_{i=0}^{P} Ψi[{ξr}] di(x) ,   (3.57)

where

di(x) = Σ_{j=1}^{L} x_i^{(j)} bj(x) .   (3.58)

The response u(x,θ) can be completely determined once the functions di(x) are known. Substituting equation (3.57) for u(x,θ) into equation (3.32) gives

[ L(x) + Σ_{n=1}^{M} sqrt(λn) ξn an(x) R(x) ] Σ_{j=0}^{P} Ψj[{ξr}] dj(x) = f(x,θ) .   (3.59)
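The count of Polynomial Chaoses given by equation (3.54) is easy to check programmatically; the sketch below also verifies that it agrees with the equivalent binomial-coefficient form (M + p choose p), reproducing the entries of Table 3.1.

```python
from math import comb, factorial, prod

def chaos_terms(M, p):
    """Number of Polynomial Chaoses of order <= p in M variables,
    1 + sum_{s=1}^{p} (1/s!) prod_{r=0}^{s-1} (M + r), as in (3.54)."""
    return 1 + sum(prod(M + r for r in range(s)) // factorial(s)
                   for s in range(1, p + 1))

# The count agrees with the closed binomial form (M + p choose p);
# e.g. M = 2, p = 4 gives 15 and M = 6, p = 4 gives 210 (Table 3.1).
for M in (2, 4, 6):
    for p in range(5):
        assert chaos_terms(M, p) == comb(M + p, p)
print(chaos_terms(6, 4))   # → 210
```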
In equation (3.59), reference to the parameter θ is eliminated from this point on for notational simplicity. The equation may be written in the more suggestive form

Σ_{j=0}^{P} Ψj[{ξr}] L(x) dj(x) + Σ_{j=0}^{P} Σ_{i=1}^{M} sqrt(λi) ξi ai(x) Ψj[{ξr}] R(x) dj(x) = f(x) .   (3.60)

This form of the equation shows that dj(x) belongs to the intersection of the domains of the deterministic operators R(x) and L(x). Note also, in view of the discussion of the finite element method in section (3.2), that if u(x,θ) satisfies homogeneous deterministic boundary conditions on some section ∂D1 of the boundary ∂D, then Cuu(x1,x2) ≡ 0 for x1 ∈ ∂D1 or x2 ∈ ∂D1; then, according to equation (3.50), bj(x) ≡ 0 for x ∈ ∂D1. It can thus be deduced that the set bk(x) satisfies the homogeneous boundary conditions imposed on the problem, and so do the elements of the set di(x), by virtue of equation (3.58). Accordingly, each function dj(x) may be expanded along the basis gk(x) as

dj(x) = Σ_{k=1}^{N} d_k^{(j)} gk(x) ,   (3.61)

where the index j spans the number of Polynomial Chaoses used, while the index k spans the number of basis vectors used. Substituting equation (3.61), equation (3.60) becomes

Σ_{j=0}^{P} Σ_{k=1}^{N} d_k^{(j)} [ Ψj[{ξr}] L(x) gk(x) + Σ_{i=1}^{M} sqrt(λi) ξi ai(x) Ψj[{ξr}] R(x) gk(x) ] = f(x) .   (3.62)

Multiplying both sides of equation (3.62) by gl(x) and integrating throughout gives

Σ_{j=0}^{P} Σ_{k=1}^{N} d_k^{(j)} [ Ψj[{ξr}] ∫_D [ L(x) gk(x) ] gl(x) dx + Σ_{i=1}^{M} sqrt(λi) ξi Ψj[{ξr}] ∫_D ai(x) [ R(x) gk(x) ] gl(x) dx ] = ∫_D f(x) gl(x) dx ,  l = 1, ..., N .   (3.63)

Setting

Lkl = ∫_D [ L(x) gk(x) ] gl(x) dx ,   (3.64)

Rikl = sqrt(λi) ∫_D ai(x) [ R(x) gk(x) ] gl(x) dx ,   (3.65)

and

fl = ∫_D f(x) gl(x) dx ,   (3.66)

equation (3.63) may be rearranged to give

Σ_{j=0}^{P} Σ_{k=1}^{N} d_k^{(j)} [ Ψj[{ξr}] Lkl + Σ_{i=1}^{M} ξi(θ) Ψj[{ξr}] Rikl ] = fl ,  l = 1, ..., N .   (3.69)

Multiplying equation (3.69) by Ψm[{ξr}] and averaging throughout, one can derive

Σ_{j=0}^{P} Σ_{k=1}^{N} d_k^{(j)} [ <Ψj[{ξr}] Ψm[{ξr}]> Lkl + Σ_{i=1}^{M} <ξi(θ) Ψj[{ξr}] Ψm[{ξr}]> Rikl ] = <fl Ψm[{ξr}]> ,  l = 1, ..., N ;  m = 0, ..., P .   (3.71)
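The averages <ξi Ψj Ψm> appearing in equation (3.71) can be evaluated numerically as well as symbolically. Below is a one-dimensional sketch using probabilists' Hermite polynomials (the one-dimensional Polynomial Chaoses) and Gauss-Hermite quadrature; the normalization here is the standard Hermite one, an assumption of this sketch rather than the convention of the tables that follow.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite-e nodes/weights integrate against exp(-x^2/2); dividing
# the weights by sqrt(2*pi) makes them average against the standard
# Gaussian density.
x, w = hermegauss(20)
w = w / np.sqrt(2.0 * np.pi)

def He(j, x):
    """Probabilists' Hermite polynomial He_j evaluated at x."""
    c = np.zeros(j + 1)
    c[j] = 1.0
    return hermeval(x, c)

def c_ijk(j, k):
    """<xi He_j(xi) He_k(xi)> for a standard Gaussian xi."""
    return float(np.sum(w * x * He(j, x) * He(k, x)))

# The triple average vanishes unless |j - k| = 1; for instance
# <xi He_1 He_2> = <x^4 - x^2> = 2, while <xi He_1 He_3> = 0.
print(round(c_ijk(1, 2), 6), round(c_ijk(1, 3), 6))
```

In several dimensions the same quadrature applies variable by variable, which is what the symbolic macros of Chapter IV automate exactly.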
Introducing the coefficients

cijm = <ξi Ψj[{ξr}] Ψm[{ξr}]> ,   (3.72)

and assuming, without loss of generality, that the Polynomial Chaoses have been normalized, equation (3.71) becomes

Σ_{k=1}^{N} Lkl d_k^{(m)} + Σ_{j=0}^{P} Σ_{k=1}^{N} Σ_{i=1}^{M} cijm Rikl d_k^{(j)} = <fl Ψm[{ξr}]> ,  l = 1, ..., N ,  m = 0, ..., P .   (3.73)

Equation (3.72) was implemented using the symbolic manipulation program MACSYMA (1986). For a large number of index combinations the coefficients cijm are identically zero; note also the symmetry cijk = cikj. The nonzero values corresponding to the first four Homogeneous Chaoses are shown in Tables 3.2 and 3.3 for two and four terms in the Karhunen-Loeve expansion, respectively.

[Table 3.2: Nonzero coefficients cijk = <ξi Ψj Ψk>, equation (3.72), for the two-dimensional Polynomial Chaoses; entries list the indices (i, j, k) of the Polynomial Chaoses and the corresponding coefficient values.]

[Table 3.3: Nonzero coefficients c^{(4)}_{ijk} = <ξi Ψj Ψk>, equation (3.72), for the four-dimensional Polynomial Chaoses; entries list the indices (i, j, k) of the Polynomial Chaoses and the corresponding coefficient values.]

Forming equation (3.73) for all P + 1 values of m produces a set of N x (P + 1) algebraic equations of the form

[ G + R ] d = h ,   (3.74)

where G and R are block matrices whose mjth blocks are N-dimensional square matrices given by the equations

Gmj = <Ψm[{ξr}] Ψj[{ξr}]> L   (3.75)

and

Rmj = Σ_{i=1}^{M} cijm Ri ,   (3.76)

in which L and Ri denote the N-dimensional square matrices whose klth elements are Lkl and Rikl, respectively. Further, h signifies the block vector whose mth block is given by the equation

hm = < f Ψm[{ξr}] > .   (3.77)

The N-dimensional vectors dm can be obtained as the subvectors of the solution to the deterministic algebraic problem given by equation (3.74). Once these coefficients are obtained, back substituting into equation (3.57) yields an expression of the response process in terms of the Polynomial Chaoses of the form

u = Σ_{j=0}^{P} dj Ψj[{ξr}] .   (3.78)

Equation (3.74) can also be derived by a different approach, which is a simple variation of what has been presented above. Specifically, if the computations are carried out in the same manner as above, starting from the discretized equation (3.41), an equation is obtained involving a set of unknown random variables representing the response at the nodal points of the finite element mesh. For convenience, equation (3.41) is rewritten in the form

[ I + Σ_{k=1}^{M} ξk(θ) Q^{(k)} ] u = g .   (3.79)
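The structure of the block system (3.74) can be illustrated on a deliberately tiny case: one Karhunen-Loeve term (M = 1) and first order chaos, so the chaoses are Ψ0 = 1 and Ψ1 = ξ. For normalized chaoses the only nonzero triple averages are then <ξ Ψ0 Ψ1> = <ξ Ψ1 Ψ0> = 1, while <ξ Ψ0 Ψ0> = <ξ Ψ1 Ψ1> = 0. All matrices below are illustrative stand-ins, not from the text.

```python
import numpy as np

N = 3                                    # nodal unknowns
L0 = np.diag([4.0, 5.0, 6.0])            # mean stiffness, role of Lkl
R1 = 0.2 * np.eye(N)                     # first KL stiffness, role of R1kl
f = np.ones(N)                           # deterministic load

# Block system (3.74) for M = 1, p = 1: the mj-th block of G is
# <Psi_m Psi_j> L0 = delta_mj L0, and the mj-th block of R is
# sum_i <xi Psi_j Psi_m> R_i, which puts R1 on the off-diagonal blocks.
A = np.block([[L0, R1],
              [R1, L0]])
# For a deterministic load only the m = 0 block of h is nonzero.
h = np.concatenate([f, np.zeros(N)])

d = np.linalg.solve(A, h)
d0, d1 = d[:N], d[N:]                    # chaos coefficients of the response
print(d0, d1)                            # u = d0 Psi_0 + d1 Psi_1
```

Larger M and p only enlarge the block pattern: the sparsity of the cijm coefficients tabulated above determines which blocks of R are nonzero.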
Each element of the vector u, being a second order random variable, can be expanded as

ui = Σ_{j=0}^{P} cij Ψj[{ξr}] ,   (3.80)

or, in vector form,

u = Σ_{j=0}^{P} cj Ψj[{ξr}] ,   (3.81)

where cj is a vector of the same dimension as u. Substituting equation (3.81) back into equation (3.79) gives

[ I + Σ_{i=1}^{M} ξi(θ) Q^{(i)} ] Σ_{j=0}^{P} cj Ψj[{ξr}] = g .   (3.82)

Forming the inner product of equation (3.82) with {Ψm[{ξr}]}_{m=0}^{P} gives

Σ_{j=0}^{P} cj <Ψj[{ξr}] Ψm[{ξr}]> + Σ_{j=0}^{P} Σ_{i=1}^{M} <ξi Ψj[{ξr}] Ψm[{ξr}]> Q^{(i)} cj = <g Ψm[{ξr}]> .   (3.84)

Carrying out the P + 1 indicated inner products yields an equation identical to equation (3.74), with G ≡ I and with the blocks of R assembled from the coefficients cijm and the matrices Q^{(i)}.

3.3.7 Geometrical and Variational Extensions of the Solution Process

Geometry

The development in the last section hinges on the fact that the Polynomial Chaoses form a complete basis in the Hilbert space Θ of second order random variables, or, equivalently, that the Homogeneous Chaos ring defined in Chapter II is dense in Θ. Obviously, any complete basis in Θ can be used as an alternative to the Polynomial Chaoses in the development of the stochastic finite element method. Using such a set in the previous development would yield an expansion of the response process u(x,θ) along that basis. For instance, one such set is the set of the homogeneous polynomials formed by simple products of the ξi. These polynomials are not orthogonal, but they are easier to generate than the Polynomial Chaoses. Unlike the deterministic case, where non-orthogonal bases may have the merit of being flexible so as to satisfy the smoothness and boundary conditions of the problem, there is no merit in using a non-orthogonal expansion in Θ, since such conditions do not exist along the random dimension. Instead, an orthogonal expansion yields much simpler forms for the response statistics, as will be observed in the next chapter by comparing the resulting expressions for the covariance matrices of the response. The orthogonality property of the Polynomial Chaoses is not a requirement of the Galerkin procedure; it is the result of the construction procedure followed in section (2.3).

Viewed from this geometrical perspective, the methods presented in this Chapter lend themselves to interesting interpretations. The Neumann expansion is an expression of the response in terms of a complete non-orthogonal basis in the space Θ. It can then be viewed as a special case of a more general procedure, whereby the response process is expressed in terms of a complete set in the space Θ of second order random variables. The perturbation approach, by contrast, is a Taylor series expansion, the validity and convergence of which depend upon the behavior of the response process in the neighborhood of its mean value. Clearly, each of these methods is based on a different error minimization criterion; in the Neumann expansion case, however, the optimality criterion, in terms of some norm of the error, is not well defined. Also, it is noted in Chapter V that the Homogeneous Chaos approach has better convergence properties than the Neumann expansion approach. Finally, it is worth pointing out the similarity of the approximation scheme employed here to the p-method of the deterministic finite element method discussed earlier. Indeed, the interval along the random dimension is not discretized; a single element is used in the approximation, and the refinement in the results is achieved by increasing the order of the functional polynomial approximation.

Variational Extensions

It may be observed that the development of the Homogeneous Chaos expansion may be considered as an extension of the Galerkin finite element method to the stochastic case. That is, the error resulting from a finite expansion is made orthogonal, in the sense of the inner product in Θ, to the subspace spanned by the approximating basis. A question that arises in this context involves the possibility of developing an analogous theory based on the extension of the variational principles to the stochastic case. The variational approach in the deterministic case usually expresses the constraint that the true solution to a given problem is that function which minimizes a certain norm, or measure, of the energy released by the system. The stochastic parallel to this energy norm would be a measure of the stochastic energy dissipated by the system. Stochastic energy may be interpreted as being the uncertainty associated with the system under consideration. Such an uncertainty is well known in the context of information theory, and it has an associated measure, namely the information-entropy of the system. In this context, a related variational principle can be formulated based on the concept of maximum entropy. It may then seem plausible to develop a stochastic variational principle based on the concepts presented above. At this point, however, there is no indication as to how to assemble these ideas into a coherent theory.

Chapter 4

STOCHASTIC FINITE ELEMENT METHOD: Response Statistics

4.1 Reliability Theory Background

An important objective of a stochastic finite element analysis of an engineering system should be the determination of a set of design criteria which can be implemented in a probabilistic context. In other words, it is important that any method that is used for the numerical treatment of stochastic systems be compatible with some rationale that permits a reliability analysis. Traditionally, this problem has been addressed by relying on the second order statistical moments of the response process. These moments offer useful statistical information regarding the response of the system, and equations will be developed in an ensuing section for their determination. However, second order moments do not suffice for a complete reliability analysis. For this, higher order moments and related information are required.

The response of any system to a certain excitation may be viewed as a point in the space defined by the parameters describing the system. Indeed, for every set of such parameters, the response is uniquely determined as a function of the independent variables. When these parameters are regarded as random variables, the response function may be described as a point in the space spanned by these variables, defining a surface that corresponds to all the possible realizations of the random parameters.
For the operation of a given system, it is usually required that the response be confined to a certain region of this space, termed the safe region. Formally, this condition can be expressed as

g[ξ1, ..., ξr] ≥ 0 ,   (4.1)

where

g[ξ1, ..., ξr] = 0   (4.2)

is the equation of the failure surface in the ξ-space. Note that if a function g[.] satisfies equation (4.1), so does any function that is a power of g[.], as well as any function that shares the same zeros with g[.]. The case where the parameters are modeled as random processes may be reduced, as discussed in Chapter II, to the case of random variables.

Of special interest in reliability assessment is the failure probability, that is, the probability that equation (4.1) is not satisfied. This equals the volume, with respect to some probability measure, of the region in the ξ-space over which equation (4.1) is not satisfied. Formally,

Pf = ∫_{g[x] ≤ 0} fξ(x) dx ,   (4.4)

where fξ(ξ) is the joint probability distribution of {ξ1, ..., ξr}. The problem remains, however, of efficiently implementing these distributions into numerical algorithms. Further, realizations of the set {ξi} are often scarce and do not permit inference about their joint distribution beyond their joint second order moments; this is a matter that persistently poses severe limitations on the amount of usable probabilistic information. Grigoriu et al. (1979) have suggested treating the selection of this joint distribution as a problem of decision theory and proposed a number of optimality criteria to resolve this issue. Moreover, the dimension of the ξ-space is usually quite large, and the domain of integration is rarely available in an analytical form, to the extent that the requisite integration can seldom be carried out either analytically or numerically (Schueller and Stix, 1987).

Given these computational and theoretical difficulties, it became desirable to introduce a standard measure for reliability analysis. Such a measure would be based on readily available second moment properties of the random variables involved and would reflect a safety measure associated with the operation of the system. The basic idea behind the reliability index concept is the fact that the less uncertainty there is concerning the limit state of a system, the less likely is the response to wander beyond this state. The reliability index was introduced as an attempt to meet these needs. As an intuitive measure, it was first defined as (Freudenthal, 1947; Freudenthal et al., 1966; Cornell, 1969)

β = <M> / σM ,   (4.3)

where M denotes the safety margin, which is the difference between resistance and load effects, <M> denotes its expected value, and σM its standard deviation. That is, the greater the mean of the safety margin relative to its standard deviation, the safer the system is. The appeal of equation (4.3) in engineering practice is obvious. Note that if the random variables {ξi} are Gaussian and the limit state function given by equation (4.1) is linear in its arguments, the safety margin has a Gaussian distribution, and β, as given by equation (4.3), equals the distance from the mean value of M to the hyperplane representing the limit state. For nonlinear limit states, the limit state surface is usually linearized using the first term in a Taylor series about some point in the ξ-space; it is well known (Ditlevsen, 1981) that the mean value does not represent the best linearization point. Further, in this case the standard deviation of the safety margin is related to higher order moments of the system random variables, which often are not known. This so-called lack of invariance problem was resolved by introducing a new reliability index (Hasofer and Lind, 1974) defined by the equation

βHL = min_{x ∈ Lξ} sqrt[ (x - <ξ>)T Cξ^{-1} (x - <ξ>) ] ,   (4.5)

where Lξ denotes the limit surface and Cξ is the covariance matrix of the design variables {ξi}. Given the set {ξi}, βHL is uniquely determined as the shortest distance to the limit state surface. Specifically, the design point is the point on that surface that is closest to the mean value after the design variables have been transformed to a set of uncorrelated variables; it is seen that the optimal point for the linearization of the limit state function is this design point (ThoftChristensen and Baker, 1982; Madsen et al., 1986; Melchers, 1987). A deficiency of the Hasofer-Lind index is a lack of comparativeness (Ditlevsen, 1981), associated with the fact that any surface that is tangent to the failure surface at the design point
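For a linear limit state with correlated Gaussian design variables, the minimization (4.5) admits a closed form, which also recovers the mean-over-standard-deviation index (4.3) and the first order approximation of the failure probability. The numbers below are illustrative, not from the text.

```python
import numpy as np
from math import erf, sqrt

# Assumed linear limit state g(x) = a.x + b with correlated Gaussian
# design variables of the given mean and covariance (illustrative).
a = np.array([1.0, -2.0])
b = 4.0
mean = np.array([0.5, -0.5])
C = np.array([[1.0, 0.3],
              [0.3, 2.0]])

# Hasofer-Lind index (4.5): shortest distance from the mean to the limit
# surface after whitening the variables.  For a linear g this reduces to
# beta = g(mean) / sqrt(a^T C a), the Cornell index (4.3) of the safety
# margin M = g(xi).
gm = a @ mean + b
beta = gm / np.sqrt(a @ C @ a)

# First order (FORM) approximation of the failure probability.
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
pf = Phi(-beta)
print(beta, pf)
```

For nonlinear limit states the minimization in (4.5) must be carried out numerically, and the tangent-hyperplane approximation of the failure probability becomes approximate, as discussed below.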
has the same βHL. When the failure surface is a hyperplane, the Hasofer-Lind index coincides with the previously defined index given by equation (4.3). In order to use βHL with non-Gaussian design variables, the variables may be transformed to a new set having the Gaussian distribution (Rosenblatt, 1952), which resolves the issue. Then, the failure probability may be approximated by the expression

Pf ≈ Φ(-β) ,   (4.6)

where Φ is the normal probability function. Equation (4.6) is exact for linear limit states and becomes approximate for nonlinear limit states. The design point obtained in this way can be shown to be the point of maximum failure likelihood when the design variables are uncorrelated and Gaussian (Shinozuka, 1983). The main advantage of the first order second moment method as reviewed herein is its simplicity and ease of implementation. However, the method has some serious shortcomings. The magnitude of the error associated with the replacement of the failure surface by a tangent hyperplane is hard to estimate, but it is known to increase drastically with an increasing number of design variables (Schueller and Stix, 1987).

From the preceding review it becomes clear that the statistical moments of the response not only can provide bounds about its expected range, but they can also provide meaningful information about the reliability of the random system. Thus, it becomes desirable to develop equations for the determination of the statistical moments, up to an arbitrary order, of the response process. The general series representation of the response associated with the stochastic finite element method presented in Chapter III is quite well suited for an efficient computation of the corresponding statistical moments.

4.2 Statistical Moments

4.2.1 Moments and Cummulants Equations

As obtained either from the improved Neumann expansion or from the Homogeneous Chaos formulation, the response process vector may be represented by the series

u = Σ_{k=0}^{∞} a_{λ0 ... λk} Π_{λ0 ... λk}(ξ) ,   (4.7)

where Π_{λ0 ... λk}(ξ) is a kth order polynomial in the variables {ξ1, ξ2, ...}, the a_{λ0 ... λk} are deterministic vectors, and, following McCullagh (1987), indicial notation is used whereby a repeated index implies summation with respect to that index over its range. Specifically, in the improved Neumann expansion, the polynomials Π are simply the homogeneous polynomials given by the equation

Π_{λ0 ... λk}(ξ) = Π_{i=1}^{k} ξ_{λi} ,   (4.9)

whereas for the Homogeneous Chaos formulation, these are the Polynomial Chaoses.

McCullagh (1984, 1987) describes methods to obtain the cummulant generating function for a polynomial of the form given by equation (4.7). The rth component of u can be written as

u^{(r)} = P^{(r)}[ξ] ,   (4.11)

where P^{(r)} is the operator indicated by equation (4.7). Then, the cummulant generating function for u^{(r)} is given by the expression exp(Xr P^{(r)}) κξ, where exp(Xr P^{(r)}) is an operator yielding, upon expanding,

[ I + Xr P^{(r)} + (1/2) Xr Xs P^{(r)} P^{(s)} + ... ] κξ ,   (4.12)

and κξ represents the cummulants of ξ. The result of the operation is obtained by compounding the action of the individual components appearing in equation (4.12). In this manner the generalized cummulants of u, of order (k, m), can be obtained, from which ordinary cummulants and moments may be deduced.

Alternatively, cross moments between elements of the response vector u may be obtained by direct manipulations of equation (4.7). Specifically, the (m+n)th order moment involving u^{(r)} and u^{(s)} is

< [u^{(r)}]^m [u^{(s)}]^n > = < [ Σ_{k=0}^{∞} a^{(r)}_{λ0 ... λk} Π_{λ0 ... λk}(ξ) ]^m [ Σ_{k=0}^{∞} a^{(s)}_{λ0 ... λk} Π_{λ0 ... λk}(ξ) ]^n > .   (4.17)

The expectation of the product on the right hand side of equation (4.17) involves deterministic tensor multiplication and averages of polynomials of orthonormal Gaussian variates, which may be expedited by using the recursion

< ξ1 ... ξ2k > = Σ_{p=2}^{2k} < ξ1 ξp > < Π_{m=2, m≠p}^{2k} ξm > .   (4.18)

By including additional terms in the expansion (4.7), and additional random variables in the set {ξi}, any level of accuracy can be achieved for any order moment. The algebraic manipulations involved may be quite laborious, but they lend themselves to treatment with a symbolic manipulation program. Table (4.1) displays a computer code written for the symbolic manipulation package MACSYMA which evaluates the averaged products of two-dimensional Polynomial Chaoses up to a certain order, and Table (4.2) shows the code corresponding to the four-dimensional Polynomial Chaoses. It is noted that a trivial modification of these codes is required to permit the evaluation of averaged products of Polynomial Chaoses of arbitrary order.
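The recursion (4.18) is the familiar pairing rule for Gaussian moments (the Isserlis, or Wick, theorem) and is straightforward to implement directly; the sketch below states it for a general covariance, with orthonormal variates as the special case of an identity covariance.

```python
def gaussian_moment(cov, idx):
    """<xi_{i1} ... xi_{ik}> for jointly Gaussian zero-mean variables.

    Implements the recursion (4.18): pair the first index with each
    remaining index p in turn and recurse on the rest.  cov is the
    covariance matrix as a list of lists; idx is a tuple of indices.
    """
    if len(idx) == 0:
        return 1.0
    if len(idx) % 2 == 1:
        return 0.0          # odd-order Gaussian moments vanish
    first, rest = idx[0], idx[1:]
    return sum(cov[first][rest[p]]
               * gaussian_moment(cov, rest[:p] + rest[p + 1:])
               for p in range(len(rest)))

# Orthonormal variates: cov = identity.  Then <xi_1^4> = 3 and
# <xi_1^2 xi_2^2> = 1, the familiar Gaussian moments.
I = [[1.0, 0.0], [0.0, 1.0]]
print(gaussian_moment(I, (0, 0, 0, 0)), gaussian_moment(I, (0, 0, 1, 1)))
```

Averaging a product of Polynomial Chaoses then amounts to expanding the product into monomials in the ξi and applying this rule term by term, which is precisely what the symbolic macros listed next automate.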
x[i])/gen) )$ for i:1 thru 2 do (for j:i thru 2 do (ind:ind+1. order:order+z ). G2[ind] :expand (diff(diff(diff( diff(gen.2.k. if oddp(order) then 0 else (if order=O then hom else if order=2 then hom*kdelta(y[1] .x[j])/gen»)$ for i:1 thru 2 do (for j:i thru 2 do (for k:j thru 2 do (Lnd:ind+1. if x=y then 1 else 0)$ \* average : *\ average (p)::= buildq ([p]. of ind:O$ G2 [0]:1$ gen:exp(sum(x[i]2/2. for i: 1 thru n do z:coeff(log(p) . of . if z#O then for j:1 thru z do y[j+order] :x[i].x[i]).log(x[i]». G2[ind] :expand(diff(diff( diff(gen. order:O.x[k]).2.1. G2[ind] :expand(diff(diff(gen.i.y): :=buildq([x. ( logexpand:super.x[j]).x[k])/gen»»$ for i:1 thru 2 do (for j:i thru 2 do (for k:j thru 2 do (for l:k thru 2 do (ind:ind+1.y].1. IvlACSYMA Macro to Average Products TwoDimensional Polynomial Chaoses.y[2J) else hom*sum(kdelta(y[l] . G2[ind] :expand(diff(gen. (continued):.x[i]).order).order)/y[i] . hom:p/prod(y[i] . SFEM: RESPONSE STATISTICS 4.x[l])/gen »»)$ Table 4.2»$ for i:1 thru 2 do (ind:ind+1.i.x[j]).x[i]).order»»$ Table 4.102 \* kronecker : *\ CHAPTER 4.1:.2.i.l. MACSYMA Macro to Average Products TwoDimensional Polynomial Chaoses.y[i]) *prod(y[k] . STATISTICAL \* herm2 : *\ MOMENTS 103 kdelta(x.
/* product2 : */
product2(num)::=(buildq([num], (
    n:2,
    load(kronecker), load(herm2), load(average),
    for i:1 thru num do (
     for j:i thru num do (
      for k:j thru num do (
        pp:expand(G2[i]*G2[j]*G2[k]),
        p[0]:pp, mm:0,
        for m:1 while integerp(p[m-1])=false do (
            mm:m,
            p[m]:if length(p[m-1])=2 and integerp(part(p[m-1],2))=true
                    and length(part(p[m-1],1))=1
                 then average(p[m-1])
                 else map(average,p[m-1]) ),
        c2[i,j,k]:p[mm],
        if c2[i,j,k]#0 then print(i,j,k,c2[i,j,k],pp) )))) ))$

Table 4.1 (continued): MACSYMA Macro to Average Products of Two-Dimensional Polynomial Chaoses.
/* herm4 : */
ind:0$
G4[0]:1$
gen:exp(-sum(x[i]^2/2,i,1,4))$
for i:1 thru 4 do (ind:ind+1,
    G4[ind]:expand(diff(gen,x[i])/gen))$
for i:1 thru 4 do (for j:i thru 4 do (ind:ind+1,
    G4[ind]:expand(diff(diff(gen,x[i]),x[j])/gen)))$
for i:1 thru 4 do (for j:i thru 4 do (for k:j thru 4 do (ind:ind+1,
    G4[ind]:expand(diff(diff(diff(gen,x[i]),x[j]),x[k])/gen))))$
for i:1 thru 4 do (for j:i thru 4 do (for k:j thru 4 do (for l:k thru 4 do (
    ind:ind+1,
    G4[ind]:expand(diff(diff(diff(diff(gen,x[i]),x[j]),x[k]),x[l])/gen)))))$

/* product4 : */
product4(num)::=(buildq([num], (
    n:4,
    load(kronecker), load(herm4), load(average),
    for i:1 thru num do (
     for j:i thru num do (
      for k:j thru num do (
        pp:expand(G4[i]*G4[j]*G4[k]),
        p[0]:pp, mm:0,
        for m:1 while integerp(p[m-1])=false do (
            mm:m,
            p[m]:if length(p[m-1])=2 and integerp(part(p[m-1],2))=true
                    and length(part(p[m-1],1))=1
                 then average(p[m-1])
                 else map(average,p[m-1]) ),
        c4[i,j,k]:p[mm],
        if c4[i,j,k]#0 then print(i,j,k,c4[i,j,k],pp) )))) ))$

Table 4.2: MACSYMA Macro to Average Products of Four-Dimensional Polynomial Chaoses.

4.2.2 Second Order Statistics

As already discussed in section (4.1), the second order statistics of response quantities are of particular importance in reliability analysis; they can provide preliminary estimates of the values of the reliability index and of the probability of failure. The stochastic finite element method described in Chapter III can be used quite efficiently to produce accurate approximations to these statistics. Specifically, incorporating the improved expansion method described in section (3.5) into the procedure of the previous section, one obtains the following equation for the mean of the response process.
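The symbolic averaging performed by product2 and product4 can be cross-checked numerically: under the standard Gaussian measure, an average such as ⟨Ψ_i Ψ_j Ψ_k⟩ in one dimension is an integral that Gauss-Hermite quadrature evaluates exactly for polynomial integrands. A Python sketch using numpy (the helper name is ours, not the text's):

```python
import numpy as np

def avg_triple(ci, cj, ck, npts=20):
    """<He_i(x) He_j(x) He_k(x)> under N(0,1): substitute x = sqrt(2)*t into
    the Gauss-Hermite rule (weight e^{-t^2}) and divide by sqrt(pi)."""
    t, w = np.polynomial.hermite.hermgauss(npts)
    x = np.sqrt(2.0) * t
    # np.polyval expects high -> low degree, hence the reversal.
    vals = (np.polyval(ci[::-1], x) * np.polyval(cj[::-1], x)
            * np.polyval(ck[::-1], x))
    return float(np.sum(w * vals) / np.sqrt(np.pi))

He1 = [0.0, 1.0]           # x
He2 = [-1.0, 0.0, 1.0]     # x^2 - 1
print(avg_triple(He1, He1, He2))   # ~2.0, since <x*x*(x^2-1)> = 3 - 1
```

With 20 quadrature points the rule is exact for polynomial integrands of degree up to 39, which covers all the triple products arising from low order chaoses.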
In terms of the matrices Q^(i) introduced in Chapter III, the mean response vector is given by

ū ≡ ⟨u⟩ = [ I + Σ_{i=1}^{M} Q^(i) Q^(i) + Σ_{i=1}^{M} Σ_{j=1}^{M} ( Q^(i) Q^(j) Q^(i) Q^(j) + Q^(i) Q^(j) Q^(j) Q^(i) + Q^(i) Q^(i) Q^(j) Q^(j) ) + ... ] g ,   (4.19)

where the successive groups of terms arise from the even order averages of the variates ξ_i. In deriving equation (4.19), use is made of equation (4.18) in the form

⟨ξ_i ξ_j ξ_k ξ_l⟩ = δ_ij δ_kl + δ_ik δ_jl + δ_il δ_jk ,   (4.20)

while the sixth order term relies on

⟨ξ_1 ξ_2 ξ_3 ξ_4 ξ_5 ξ_6⟩ = ⟨ξ_1 ξ_2⟩⟨ξ_3 ξ_4 ξ_5 ξ_6⟩ + ⟨ξ_1 ξ_3⟩⟨ξ_2 ξ_4 ξ_5 ξ_6⟩ + ⟨ξ_1 ξ_4⟩⟨ξ_2 ξ_3 ξ_5 ξ_6⟩ + ⟨ξ_1 ξ_5⟩⟨ξ_2 ξ_3 ξ_4 ξ_6⟩ + ⟨ξ_1 ξ_6⟩⟨ξ_2 ξ_3 ξ_4 ξ_5⟩ .   (4.21)

The covariance matrix R_uu of the response can then be obtained by evaluating the averaged outer product of the response vector with itself,

R_uu = ⟨ (u - ū)(u - ū)^H ⟩ ,   (4.22)

where the superscript H over a vector denotes its hermitian transpose. When the response is expressed in the form of equation (4.7), certain properties of the Polynomial Chaoses can be used to simplify considerably the calculation of the associated response statistics. Specifically, relying on the orthogonality of the Polynomial Chaoses, the mean response vector is given by the equation

ū ≡ ⟨u⟩ = d_0 ,   (4.23)

and, recalling that the Polynomial Chaoses of order one and higher have zero mean, one can derive the following simple expression for the covariance matrix of the response,

R_uu = Σ_{j=1}^{P} ⟨ Ψ_j[{ξ_r}] Ψ_j[{ξ_r}] ⟩ d_j d_j^H ,   (4.26)

where the notation conforms to that introduced in section (3.6). In this regard, recall that values of the variances of some of the Polynomial Chaoses appearing in equation (4.26) are displayed in Tables (2.5)-(2.6), whereas MACSYMA symbolic manipulation programs to compute them are shown in the tables of the previous section.

The moments computed according to the previous sections can be used either directly to estimate probabilities of failure, or in conjunction with some expansion, such as the Edgeworth expansion (McCullagh, 1987), to approximate the probability distribution function of the response process.

4.3 Approximation to the Probability Distribution

An alternative approach to obtaining these probability distributions consists of directly estimating the coefficients appearing in their expansion, thus bypassing the moment calculation stage.
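As a quick numerical illustration of equation (4.26), the following Python fragment assembles the covariance of a two-component response; the coefficient vectors d_j and the single Gaussian dimension are hypothetical stand-ins, not values taken from the text:

```python
import numpy as np

# Hypothetical PC coefficient vectors d_j for a 2-dof response, with the
# one-dimensional expansion u = d0 + d1*xi + d2*(xi^2 - 1).
d = [np.array([1.0, 2.0]),      # d_0: the mean, by equation (4.23)
     np.array([0.5, -0.3]),     # d_1, multiplying Psi_1 = xi
     np.array([0.1, 0.2])]      # d_2, multiplying Psi_2 = xi^2 - 1
psi_sq = [1.0, 1.0, 2.0]        # <Psi_j^2> for Psi = 1, xi, xi^2 - 1

# Equation (4.26): the zero-mean chaoses contribute; the j = 0 term drops out.
R = sum(psi_sq[j] * np.outer(d[j], d[j]) for j in range(1, len(d)))
print(R[0, 0])   # ~0.27 = 0.5^2 * 1 + 0.1^2 * 2
```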
Grigoriu (1982) has called attention to methods for estimating the probability distribution of polynomials as weighted averages of some judiciously chosen probability distributions. A different approach is followed herein, suggested by the fact that the entire probabilistic information concerning the response process is actually condensed in the coefficients of the expansion (4.7).

Let u denote any element of the vector u, and let p_{u,ξ}(u, ξ) denote the (K + 1)-dimensional joint density of u and ξ. This density function can be expanded in a multidimensional Edgeworth series as

p_{u,ξ}(u, ξ) = Σ_{k=0}^{∞} b_{λ0...λk} Γ_{λ0...λk}(u, ξ) Φ(u, ξ) ,   (4.28)

where Φ(u, ξ) is the (K + 1)-dimensional Gaussian density derived from the response u and the vector ξ; writing z for the vector obtained by augmenting ξ with u,

Φ(z) = (2π)^{-(K+1)/2} |C|^{-1/2} exp[ -(1/2) (z - z̄)^T C^{-1} (z - z̄) ] ,  z̄ = ⟨z⟩ ,  C = ⟨(z - z̄)(z - z̄)^T⟩ .   (4.29)

This distribution is completely determined by the joint second order moments of the response process and of the variables {ξ_i}. These moments can be readily obtained by multiplying equation (4.7) by ξ_i and averaging; clearly, ξ_i represents a first order Polynomial Chaos and is therefore orthogonal to all the other Polynomial Chaoses. Further, assuming without any loss of generality that the polynomials in equation (4.7) are normalized, the covariance matrix in equation (4.29) takes the bordered form

C = [ σ_u²   a_1 ... a_K ]
    [ a_1               ]
    [  ⋮        I_K     ]
    [ a_K               ]   (4.30)

where I_K is the K-th order identity matrix, σ_u is the standard deviation of the response process, and a_i is the coefficient of ξ_i in equation (4.7).

Multiplying equation (4.28) by Γ_{λ0...λp}(u, ξ) and averaging gives

Σ_k b_{λ0...λk} ∫ Γ_{λ0...λk}(u, ξ) Γ_{λ0...λp}(u, ξ) Φ(u, ξ) du dξ = ⟨ Γ_{λ0...λp}(u, ξ) ⟩ .   (4.31)

Note that the polynomials used in equation (4.28) have K + 1 variables, to accommodate the response u, whereas the polynomials appearing in the expansion (4.7) have only K variables. The integrands in equation (4.31) involve products of multidimensional polynomials with the Gaussian density; consequently, the value of each integral is equal to sums of moments of the multidimensional normal distribution defined by equation (4.29). Equation (4.31) can therefore be cast in the matrix form

A b = a ,   (4.32)

where b is the vector of unknown coefficients in equation (4.28), a is the vector of known coefficients in equation (4.7), and A is a rectangular matrix: the number of unknowns is greater than the number of available equations. Additional equations may be obtained by requiring the vector ξ to be jointly normal, so that the system (4.32) is augmented to involve a coefficient matrix which is square. Solving the augmented matrix equation, the vector b can be found and used in equation (4.28) to express the joint density p_{u,ξ}(u, ξ). The marginal density, or probability distribution function, of the response process may then be obtained from equation (4.28) by integration over the ξ-space, where the K-dimensional factor φ(ξ) is the normal density of independent, identically distributed, zero mean Gaussian variables.

4.4 Reliability Index and Response Surface Simulation

As defined in the introductory section to this Chapter, the generalized reliability index is the shortest distance from the mean design variables to the limit state surface; the point on the limit surface corresponding to this shortest distance was described as being the most probable failure point. Once the coefficients for the representation of the response process in the form (4.7) have been computed, equation (4.7), with its left hand side replaced by u_failure, may be regarded as an explicit expression of the failure surface in the space spanned by the ξ_i. This surface is obviously nonlinear. Standard optimization algorithms may be used to compute the reliability index and the corresponding most probable failure point; the Karhunen-Loeve expansion may then be used to recover the physical value of the failure point in the space spanned by the random process itself. Computing the reliability index, or some other measure of the reliability of the system, is, however, unnecessary, as the calculation of the probability distribution of the response is itself within reach. Indeed, to each realization of the set {ξ_1, ..., ξ_P} there corresponds a unique realization of the response process u(x, θ); once the coefficients of the expansion (4.7) are available, it becomes possible to obtain points on the response surface using simulation techniques.
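A minimal sketch of this response-surface simulation, in Python with a hypothetical scalar PC response (the coefficient values are illustrative, not taken from the text): each draw of the Gaussian variate yields one sample of the response, and the resulting population can feed a density estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar PC response u = d0 + d1*xi + d2*(xi^2 - 1),
# a one-dimensional, second order chaos.
d0, d1, d2 = 1.0, 0.4, 0.05
xi = rng.standard_normal(200_000)          # realizations of xi
u = d0 + d1 * xi + d2 * (xi**2 - 1.0)      # points on the response surface

# Sample statistics reproduce the analytical PC values:
# mean = d0, variance = d1^2 <xi^2> + d2^2 <(xi^2 - 1)^2> = d1^2 + 2*d2^2.
print(u.mean())   # ~1.0
print(u.var())    # ~0.165
```

A histogram of the samples (np.histogram with density=True) gives a crude estimate of the probability density; a kernel density estimator, as suggested in the text, sharpens it.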
This procedure yields a sample population for the response process which can be used to estimate its probability density function; nonparametric kernel density estimation methods (Beckers and Chambers, 1984) are particularly useful for this purpose.

Chapter 5

NUMERICAL EXAMPLES

5.1 Preliminary Remarks

It is reminded that the ultimate goal of a stochastic finite element analysis is the calculation of certain statistics of the response process. These statistics can be in the form of either statistical moments or probability distribution functions. As a first step in the solution procedure, the variational formulation of the finite element method is used to obtain a spatially discrete form of the problem. Following that, the methods introduced in sections (3.5) and (3.6), namely the Neumann expansion for the inverse and the Polynomial Chaos expansion, are used to derive a representation of the response process. Statistical moments and probability distribution functions are then obtained as discussed in Chapter IV. These methods are exemplified in this chapter by considering three problems reflecting some interesting applications from engineering mechanics: a beam with random rigidity, a plate with random rigidity, and a beam resting on a random elastic foundation and subjected to a random dynamic excitation.
5.2 One Dimensional Static Problem

5.2.1 Formulation

Consider the Euler-Bernoulli beam shown in Figure (5.1), of length L, clamped at one end and subjected to a deterministic static transverse load P. It is assumed that the bending rigidity w ≡ EI of the beam, which involves the modulus of elasticity E and the cross-sectional moment of inertia I, is, as a function of x, the realization of a Gaussian random process indexed over the spatial domain occupied by the beam. The random bending rigidity is represented by its mean value w̄ and its covariance function C(x_1, x_2).

Figure 5.1: Beam with Random Bending Rigidity; Exponential and Triangular Covariance Models. Cantilever beam under uniform load; ⟨EI⟩ = 1.0, L = 1.0, P = 1.0; correlation length = 1.0 for the exponential model and 2.0 for the triangular model.

Assuming the material to be linear, and making use of Hooke's law, the strain energy V stored in the beam can be written as

V = (1/2) ∫_L σ(x) ε(x) dx ,   (5.1)

where σ(x) and ε(x) represent the stress and the strain as a function of the location on the beam. In terms of the bending rigidity, equation (5.1) becomes

V = (1/2) ∫_L w(x, θ) ( ∂²u(x, θ)/∂x² )² dx ,   (5.2)

where w(x, θ) is the random bending rigidity described above and u(x, θ) is the random transverse displacement of the beam. It is reminded here that θ ∈ Ω, where Ω denotes the space of random events. In a similar manner, the work W performed by the applied force may be put in an analogous integral form.

The transverse displacement u(x, θ) may be expanded along a basis of piecewise polynomials as

u(x, θ) = Σ_{e=1}^{N} u_e(θ) p_e^{(n_e)}(x) ,   (5.4)

where u_e(θ) is a vector of unknown random coefficients, representing the transverse displacement and slope at some point in l_e, and p_e^{(n_e)}(x) is a vector of polynomials identically zero except over a subdomain l_e, where they are of order n_e such that certain compatibility conditions are satisfied on the boundaries of l_e. These subdomains represent the induced finite element mesh over the beam; the subscript e on the superscript n_e provides for different order polynomials over different subdomains, and N is the total number of degrees of freedom in the discrete model. Note that the order n_e of the polynomials used in this problem needs to be at least equal to two for the integrand in equation (5.2) to remain finite. Substituting equation (5.4) into equation (5.2) and into the expression for the external work expresses the total potential energy in terms of the nodal variables u_e(θ).

Minimization of the total potential energy (V - W) requires

∂(V - W)/∂u_e(θ) = 0 ,  e = 1, ..., N .   (5.6)

Carrying out the differentiation gives

Σ_{e=1}^{N} K_{es}(θ) u_e(θ) = f_s ,  K_{es}(θ) = ∫_L w(x, θ) ( d²p_e^{(n_e)}(x)/dx² ) ( d²p_s^{(n_s)}(x)/dx² ) dx ,   (5.7)

for all s ∈ [1, N]. These integrals involve the random process w(x, θ), whose spatial variation is not suitably defined for the purpose of carrying out the integrations. This problem can be resolved by expanding w(x, θ) in its truncated Karhunen-Loeve series,

w(x, θ) = w̄(x) + Σ_{k=1}^{M} ξ_k(θ) √λ_k f_k(x) ,   (5.12)

where ξ_k(θ), λ_k and f_k(x) are as defined in section (2.3). Substituting equation (5.12) into equation (5.7) leads to

K U = f ,  K = K^(0) + Σ_{k=1}^{M} ξ_k(θ) K^(k) ,   (5.13)

where the elements of K^(k), k = 1, ..., M, are given by

K_{es}^(k) = √λ_k ∫_L f_k(x) ( d²p_e^{(n_e)}(x)/dx² ) ( d²p_s^{(n_s)}(x)/dx² ) dx .   (5.14)

The integration in equation (5.14) may be performed either analytically, if the eigenfunctions of the covariance kernel are known, or using a quadrature scheme, if the eigenvectors have been computed numerically at discrete spatial points. To expedite the calculation of these integrals, a change of variable is introduced involving the local coordinate r = x - x_le, where x_le denotes the coordinate of the left end of the element (e). Since the polynomials p_e^{(n_e)}(x) have bounded support, the integrals over L may be replaced by integrals over the corresponding supporting subdomains, and they can be viewed as combinations of integrals of the form

∫ f_k(r + x_le) r^n dr ,   (5.18)

where n is some integer. In the remainder of this chapter, the argument θ will be dropped except where it is needed to emphasize the random nature of a certain quantity. Further, the covariance kernel of the random process is modeled using both the exponential model defined by equation (2.39) and the triangular model defined by equation (2.52).
For the exponential model, the eigenfunctions are given in section (2.3), and equation (5.18) admits closed form evaluation; with a = L/2 and c = 1/b, b being the correlation length, it reduces to expressions of the form

(1 / √( a + sin(2ω_k a)/(2ω_k) )) ∫ cos( ω_k (r + x_le) ) r^n dr ,  k odd ,   (5.19)

(1 / √( a - sin(2ω_k a)/(2ω_k) )) ∫ sin( ω_k (r + x_le) ) r^n dr ,  k even .   (5.20)

For the triangular kernel, the corresponding expressions follow from the eigenfunctions given by equations (2.62)-(2.63). Relying on equations (5.19) and (5.20), closed form expressions for the elements of the matrices K^(k) defined by equation (5.14) are derived.

5.2.2 Results

Improved Neumann Expansion

Following the discussion in section (3.5), equation (5.13) is rewritten as

[ I + Σ_{k=1}^{M} ξ_k Q^(k) ] U = g ,   (5.21)

where

Q^(k) = [K^(0)]^{-1} K^(k)   (5.22)

and

g = [K^(0)]^{-1} f .   (5.23)

Due to the boundary conditions imposed at the clamped end of the beam, the first two elements of the vector U are equal to zero; these elements represent the displacement and the slope at the fixed end. Substructuring the matrices in equation (5.21) to reflect the boundary conditions leads to

[ I + Σ_{k=1}^{M} ξ_k Q_22^(k) ] U_2 = g_2 ,   (5.25)

where the substructuring is performed such that U_2 represents the displacements and the rotations of the beam at the free nodes. The 2 x 1 vector g_1 corresponds to the reactions at the fixed end and may be recovered once the vector U_2 is computed; upon solving equation (5.25) for U_2, the remaining partitioned rows of equation (5.21) yield a direct expression for the unknown reactions. Expanding equation (5.25) in a Neumann series, the average response may be expressed as in equation (4.19) and the covariance matrix of the response as in section (4.2.2).

In the numerical implementation of the preceding analysis, which produced the numerical results that follow, the beam was discretized into ten finite elements; piecewise cubic polynomials were used, n_e = 3, resulting in twenty two (N = 22) degrees of freedom.
Figure 5.2: Beam Tip Deflection Normalized Standard Deviation versus Bending Rigidity Standard Deviation, σ_EI; Exponential Covariance; Neumann Expansion Solution; σ_T,max = 0.297; 5000-Sample MCS shown for comparison.

Figure 5.3: Standard Deviation of the Displacement Along the Beam, σ_EI = 0.3; Exponential Covariance; Neumann Expansion Solution; Numerical Values Determined Only for the Ten Nodes.

Figure 5.4: Standard Deviation of the Displacement Along the Beam, σ_EI = 0.3; Triangular Covariance; Neumann Expansion Solution; Numerical Values Determined Only for the Ten Nodes.

Figure 5.5: Beam Tip Deflection Normalized Standard Deviation versus Bending Rigidity Standard Deviation, σ_EI; Triangular Covariance; Neumann Expansion Solution; σ_T,max = 0.298; 5000-Sample MCS shown for comparison.
Figures (5.2) and (5.5) show the standard deviation of the displacement at the tip of the beam, plotted against the standard deviation of the bending rigidity of the beam, for several values of the order p of the Neumann expansion and for various values of the order M of the KL expansion; Figure (5.2) corresponds to the exponentially decaying covariance model and Figure (5.5) to the triangular model. Figures (5.3) and (5.4) show the standard deviation of the response along the beam, while Figures (5.6) and (5.7) show the corresponding results for the triangular model.

Figure 5.6: Standard Deviation of the Displacement Along the Beam, σ_EI = 0.3; Triangular Covariance; Neumann Expansion Solution; Numerical Values Determined Only for the Ten Nodes.

Figure 5.7: Standard Deviation of the Displacement Along the Beam, σ_EI = 0.3; Triangular Covariance; Neumann Expansion Solution; Numerical Values Determined Only for the Ten Nodes.

Figure 5.8: Beam Tip Deflection Normalized Standard Deviation versus Bending Rigidity Standard Deviation, σ_EI; Exponential Covariance; Polynomial Chaos Solution; σ_T,max = 0.297.

5.2.3 Projection on the Homogeneous Chaos

As discussed in section (3.6), the response process may also be expanded as in equation (3.81). Incorporating this expansion into equation (5.25) gives

[ I + Σ_{k=1}^{M} ξ_k Q_22^(k) ] Σ_{j=0}^{P} c_j Ψ_j[{ξ_l}] = g_2 .   (5.28)

Multiplying equation (5.28) by Ψ_i[{ξ_l}] and taking the mathematical expectation gives

Σ_{j=0}^{P} [ ⟨Ψ_i Ψ_j⟩ I + Σ_{k=1}^{M} ⟨ξ_k Ψ_i Ψ_j⟩ Q_22^(k) ] c_j = ⟨Ψ_i⟩ g_2 ,  i = 0, ..., P .   (5.29)
Figures (5.8) and (5.9) show the standard deviation of the displacement at the tip of the beam obtained with the Polynomial Chaos solution, plotted against the standard deviation of the bending rigidity, for several values of the order p of the Homogeneous Chaos and of the order M of the KL expansion.

Figure 5.9: Beam Tip Deflection Normalized Standard Deviation versus Bending Rigidity Standard Deviation, σ_EI; Triangular Covariance; Polynomial Chaos Solution; σ_T,max = 0.298.

Figure 5.10: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, 1, 2; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1.

Figure 5.11: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, ..., 5; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1, 2.

Figure 5.12: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, ..., 9; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1, 2, 3.
Figure 5.13: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, ..., 14; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1, 2, 3, 4.

Figure 5.14: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, 1, 2; Slope Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1.

Figure 5.15: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, ..., 5; Slope Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1, 2.

Figure 5.16: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, ..., 9; Slope Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1, 2, 3.

Figure 5.17: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 0, ..., 14; Slope Representation; 2 Terms in KL Expansion, M = 2; p = 0, 1, 2, 3, 4.

Figure 5.18: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 1; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 3, 6, 10, 15.

Figure 5.19: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 2; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 3, 6, 10, 15.

Figure 5.20: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 3; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 6, 10, 15.

Figure 5.21: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 4; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 6, 10, 15.

Figure 5.22: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 5; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 6, 10, 15.

Figure 5.23: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 6; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 10, 15.

Figure 5.24: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 7; Displacement Representation; 2 Terms in KL Expansion, M = 2; p = 10, 15.
Requiring the representation error to be orthogonal to the space spanned by the approximating Polynomial Chaoses, equation (5.29) leads to a set of algebraic equations to be solved for the vectors c_j. The variables {ξ_r} are uncorrelated zero-mean Gaussian random variables with unit variance. Due to the term ⟨ξ_k Ψ_i[{ξ_r}] Ψ_j[{ξ_r}]⟩ in equation (5.29), the resulting system of equations, although sparse, is coupled; this fact causes the projections to require updating as more terms are included in the summations of equation (5.29).

Figures (5.8) and (5.9) show the standard deviation of the displacement at the tip of the beam versus the standard deviation of the bending rigidity, for a value of the coefficient of variation of the bending rigidity equal to 0.3 and for a correlation length equal to 1.0, corresponding to the exponential covariance model. Comparing figures (5.2) and (5.6), it is observed that a third order approximation with the Polynomial Chaoses achieves a convergence comparable to that of a fourth order Neumann expansion.

Figures (5.10)-(5.13) show the magnitude of the projections of the response process u(x, θ) onto the spaces spanned by the successive Polynomial Chaoses Ψ_i for the displacement process; Figures (5.14)-(5.17) show similar results pertaining to the slope process along the beam. Note the rapid decrease in the magnitude of the projections on the higher order polynomials, and note that the leading terms provide the largest contribution to the response process. The figures suggest that a reasonable approximation may be obtained by assuming, for the higher order polynomials, equal contributions from polynomials of the same order; this assumption may then be relaxed sequentially as more accuracy is required. The convergence of these projections with increasing order of the Polynomial Chaos used is shown in Figures (5.18)-(5.26) for the displacement process and in Figures (5.27)-(5.35) for the slope process, using two, three, and four terms in the Karhunen-Loeve expansion for the variability of the system. Observe the excellent convergence achieved by the leading projections with only a second order expansion.

Figures (5.36)-(5.43) show the probability distributions of the response; up to third order Polynomial Chaos is used in the associated computations. Once the coefficients in the Polynomial Chaos expansion have been computed, a statistical population corresponding to the response process can be readily generated upon noting that to each realization of the set of random variables {ξ_i} there corresponds a realization of the response vector. Realizations of these random variables are obtained from realizations of the random process representing the material variability using equation (2.20). Using nonparametric density estimation techniques, the probability distribution function of the response can then be approximated, as noted in section (4.4).

Monte Carlo Simulation

To assess the validity of the results obtained from the analytical methods, and to test their convergence properties, the same problem is treated by the Monte Carlo simulation method. Realizations of the bending rigidity of the beam are numerically simulated using digital filtering techniques (Spanos and Hansen, 1981; Spanos and Mignolet, 1986). An autoregressive filter of order twenty is designed to match the spectral density of the process corresponding to the exponential covariance model. As expected, only the first coefficient of the filter is non-negligible, effectively yielding a first order filter.

Figure 5.25: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 8; Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 10, 15.
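The digital-filter simulation just described can be sketched as follows. For the exponential covariance, the autoregressive recursion closes at first order exactly, which is consistent with the observation that only the first filter coefficient is non-negligible. The sketch below (illustrative code, not the authors' filter design; all names and parameter values are assumptions) therefore uses the first order recursion directly.

```python
import numpy as np

def simulate_exponential_field(n_points, dx, corr_length, sigma, n_samples, seed=0):
    """Simulate 1-D Gaussian processes with covariance sigma^2 * exp(-|x-y|/b)
    using the first-order autoregressive recursion that this covariance admits
    exactly (the process is first-order Markov)."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-dx / corr_length)          # lag-one correlation
    w = np.empty((n_samples, n_points))
    w[:, 0] = sigma * rng.standard_normal(n_samples)
    for k in range(1, n_points):
        w[:, k] = (rho * w[:, k - 1]
                   + sigma * np.sqrt(1.0 - rho**2) * rng.standard_normal(n_samples))
    return w

# e.g. 5000 realizations of the rigidity fluctuation at 10 equally spaced points
fields = simulate_exponential_field(10, 0.1, 1.0, 0.3, 5000)
```

Each row is one realization of the fluctuating part of the rigidity; adding the mean rigidity to a row gives one input sample for the deterministic solver.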
Figure 5.26: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 9; Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 10, 15.

Figure 5.27: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 1; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 3, 6, 10, 15.

Figure 5.28: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 2; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 3, 6, 10, 15.

Figure 5.29: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 3; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 6, 10, 15.
Figure 5.30: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 4; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 6, 10, 15.

Figure 5.31: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 5; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 6, 10, 15.

Figure 5.32: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 6; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 10, 15.

Figure 5.33: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 7; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 10, 15.
Figure 5.34: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 8; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 10, 15.

Figure 5.35: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.29) for the Beam Bending Problem, i = 9; Slope Representation; 2 Terms in KL Expansion, M = 2; P = 10, 15.

Figure 5.36: Probability Density Function of the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.

Figure 5.37: Tail of the Probability Density Function of the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.
Figure 5.38: Cumulative Distribution Function for the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.

Figure 5.39: Tail of the Cumulative Distribution Function for the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.

Figure 5.40: Probability Density Function for the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and a Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.

Figure 5.41: Tail of the Probability Density Function for the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and a Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.
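The sampling procedure behind distribution curves such as those of Figures (5.36)-(5.43) can be sketched as follows: to each realization of the Gaussian variables there corresponds a realization of the response, and a nonparametric (kernel) density estimate is formed from the resulting population. For brevity the sketch uses a single Gaussian variable with one-dimensional Hermite chaoses, and the coefficient values are hypothetical, chosen only for illustration.

```python
import numpy as np

# One-dimensional Hermite Polynomial Chaoses Psi_0..Psi_3 evaluated at xi.
def psi(xi):
    return np.stack([np.ones_like(xi), xi, xi**2 - 1.0, xi**3 - 3.0 * xi])

c = np.array([0.5, 0.15, 0.04, 0.01])    # hypothetical chaos coefficients

rng = np.random.default_rng(0)
xi = rng.standard_normal(10_000)         # realizations of the Gaussian variable
u = c @ psi(xi)                          # corresponding response realizations

# Nonparametric density estimate with a Gaussian kernel.
grid = np.linspace(u.min(), u.max(), 200)
h = 1.06 * u.std() * len(u) ** (-1 / 5)  # Silverman's bandwidth rule
pdf = (np.exp(-0.5 * ((grid[:, None] - u[None, :]) / h) ** 2).mean(axis=1)
       / (h * np.sqrt(2.0 * np.pi)))
```

Since the higher chaoses have zero mean, the sample mean of u recovers the leading coefficient, which provides a quick sanity check on the generated population.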
Figure 5.42: Cumulative Distribution Function for the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and a Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.

Figure 5.43: Tail of the Cumulative Distribution Function for the Displacement at the Tip of the Beam Using 10,000-Sample Monte Carlo Simulation and a Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Correlation Length = 1; Standard Deviation = 0.3.

This agrees with the fact that the process being simulated, which corresponds to the covariance function given by equation (2.45), is a first order Markov process. The filter is then used to simulate five thousand realizations of the bending rigidity. For each of the simulated realizations, the associated deterministic problem is solved, and a data bank for the response process is generated to compute its statistics. The process is repeated for various values of the standard deviation of the bending rigidity σ_EI, and the results from the simulation are shown on the same plots as the results obtained from the analytical analysis.

For σ_EI in the neighborhood of 0.3, the problem of dealing with realizations involving negative values of the bending rigidity becomes crucial. This problem is a consequence of the Gaussian assumption, which permits negative values for the process EI. During the simulation, the small fraction of the realizations with negative EI is excluded. In essence, a truncated Gaussian distribution is used for EI, to the extent that equation (2.20) is only approximately correct for the corresponding ξ_i. In physical problems of related interest, the coefficient of variation corresponding to a medium is usually well below this level, so that the Gaussian assumption is even more appropriate.

5.3 Two Dimensional Static Problem

5.3.1 Formulation

In this example, a thin square plate is first considered that is clamped along one edge and is subjected to a uniform in-plane tension along its opposite edge. The problem is depicted in Figure (5.44). The modulus of elasticity of the plate is assumed to be the realization of a two-dimensional Gaussian random process with known mean value Ē and known covariance function C(x1, x2). Further, it is assumed that the external excitation is deterministic and of unit magnitude. The same problem is then formulated and solved for the plate with the curved geometry shown in Figure (5.45), thus demonstrating the applicability of this stochastic finite element method to problems with arbitrary geometry. This is done to demonstrate the flexibility of the Karhunen-Loeve expansion and its compatibility with various formulations of the finite element method; in treating this problem, a slightly different variational formulation is followed than the one used in the previous example.

Let the domain A of the plate be discretized into N four-noded quadrilateral finite elements, each element having eight degrees of freedom. The strain energy V_e stored in each element A_e can be expressed as

V_e = (1/2) ∫_{A_e} σ^T(x) ε(x) dA_e ,   (5.30)

where dA_e is a differential element in A_e, and σ(x) and ε(x) denote the stress and the strain vectors respectively. Assuming linear elastic material behavior, the stress may be expressed in terms of the strain as

σ = D^e ε ,   (5.31)

where D^e is the matrix of constitutive relations. Here σ and ε are the vectors of stress and strain as given by the equations

σ = { σ_x1 , σ_x2 , σ_x1x2 }^T   (5.32)

and

ε = { ε_x1 , ε_x2 , ε_x1x2 }^T ,   (5.33)

where σ_xi is the stress along direction x_i and ε_xi is the strain along that same direction. For the plane stress problem considered herein, D^e is given by the equation

D^e = ( E^e(x, θ) / (1 - ν_e²) ) [ 1 , ν_e , 0 ; ν_e , 1 , 0 ; 0 , 0 , (1 - ν_e)/2 ] = E^e(x, θ) P^e ,   (5.34)

where ν_e is the elemental Poisson ratio, E^e(x, θ) is the elemental modulus of elasticity, a function of the location x within each element, and P^e is a deterministic matrix. The two dimensional displacement vector u(x, θ), representing the longitudinal and transverse displacements within each element, may be expressed in terms of the nodal displacements of the element in the form

u(x, θ) = H^e(r1, r2) U^e(θ) ,   (5.35)

where H^e(r1, r2) is the local interpolation matrix, U^e(θ) is the random nodal response vector, and r1 and r2 are local coordinates over the element.

Figure 5.44: Plate with Random Rigidity; Exponential Covariance Model; ⟨EI⟩ = 1, P = 1; Correlation Length in x-Direction = Lx, Correlation Length in y-Direction = Ly.

Figure 5.45: Plate with Random Rigidity (Curved Geometry); Exponential Covariance Model.
(5.42) as (5. In performing the numerical integration. IX1 J IX2 IX2 (5. a fourpoint quadrature rule is used. The interpolation matrix H (Tl. The total strain energy V is obtained by summing the contributions from all the elements. and a procedure identical to the one described by Akin (1982) is employed to compute the value of the Jacobian at each integration point. The integration is then performed over this masteTelement. and transverse. (5. (5. 0 11'1 . is the dimension of the side of the rectangular element along the ith dire~ti~n.. (5. and Xic is the coordinate along t?at ~irectio~ of the lower left corner of the element. Using equation (5.zi..146 the rectangular CHAPTER 5. T2) IJeldT1 dT2 ue . NUMERICAL EXAMPLES elements used herein. IX1 0 zi. e) = Be U" .43) _ T2) .45) Using equation (5.41) U" = C" V. e) = \ I a aXI 0 0 [) 1 aX2 u(x.45).36) 5. e) BeT(T1.3..30) gives (5. T2J can be expressed (1+1'2) IX1 0 (1+1'I) IX2 (1+1'2) IX1 (1+1'2) IX1 0 _Ll.38) Substituting equation (5.40) is rewritten as where C" is a rectangular permutation matrix of zeros and ones reflecting the connectivity of the elements and the topology of the mesh. lXl 0 (11'I) IX2 0 (1+1'1) IX2 0 _Ll.46) €(x. (5. IX2 _n.35). iX2 0 .37) Substituting equation (5.34) into equation (5.40) The local representation of the response is related to the global representation through the following transformation L a a aX2 aXI J (5.za.39) where IJei denotes the determinant of the Jacobian of the transformation that maps an arbitrary element (e) onto the fournoded square with sides equal to one. T2.. e) . This procedure gives The strain within an element is related to the displacements. IX2 (1+1'2) IX1 za.41) back into equation (5. IX1 _n. through the relation longitudinal €(x. T2) r= Be(T1. equation (5. the following expression for the total energy stored in the system is obtained (5.39) leads to vwhere (1 _ Tl)(l = ~ veT fa1 fa1 E(T1. TWO DIMENSIONAL STATIC PROBLEM with 147 r Be = Further I . .
The next stage in the computations involves solving the integral eigenvalue problem associated with the covariance kernel, equation (5.48). The kernel used in this example is defined by the equation

C(x1, x2; y1, y2) = exp( -c1 |x1 - y1| - c2 |x2 - y2| ) ,   (5.49)

where c1 = 1/b1, c2 = 1/b2, and b1 and b2 are the correlation distances in the x1 and x2 directions respectively. Substituting equation (5.49) for the covariance kernel, and assuming that the eigenfunctions and eigenvalues are of the separable form

f_n(x1, x2) = f^(1)(x1) f^(2)(x2) ,  λ_n = λ^(1) λ^(2) ,

the solution of equation (5.48) reduces to the solution of the two one-dimensional equations

∫_{-l_x1/2}^{l_x1/2} exp( -c1 |x1 - y1| ) f^(1)(y1) dy1 = λ^(1) f^(1)(x1)   (5.54)

and

∫_{-l_x2/2}^{l_x2/2} exp( -c2 |x2 - y2| ) f^(2)(y2) dy2 = λ^(2) f^(2)(x2) ,   (5.55)

the two-dimensional solution being the product of the individual solutions. Equation (5.55) is identical to equation (5.54) with c1 replaced by c2. Differentiating each of equations (5.54) and (5.55) twice, two second order differential equations are obtained, along with their associated boundary conditions. The solution of the first of these equations produces the following eigenvalues and normalized eigenfunctions. Setting a = l_x1/2,

λ_i^(1) = 2 c1 / (ω_i² + c1²) ,   (5.56)

with

f_i^(1)(x1) = cos(ω_i x1) / ( a + sin(2 ω_i a)/(2 ω_i) )^{1/2}   for i odd   (5.57)

and

f_i^(1)(x1) = sin(ω_i x1) / ( a - sin(2 ω_i a)/(2 ω_i) )^{1/2}   for i even ,   (5.58)

where the symbol ω_i refers to the solution of the following transcendental equations

c1 - ω_i tan(ω_i a) = 0   for i odd ,   and   ω_i + c1 tan(ω_i a) = 0   for i even .   (5.59)
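The transcendental equations (5.59) are easily solved numerically, since exactly one cosine-type and one sine-type root lie in each half-period of tan(ωa). The sketch below (illustrative code, not from the text) brackets each root accordingly and bisects; the eigenvalues then follow from equation (5.56).

```python
import numpy as np

def _bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder; f(lo) and f(hi) must differ in sign."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo > 0.0:
            lo, flo = mid, f(mid)
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def kl_frequencies(c, a, n):
    """First n frequencies omega and eigenvalues 2c/(omega^2 + c^2) for the
    kernel exp(-c|x-y|) on (-a, a).  Cosine modes solve c - w tan(wa) = 0,
    sine modes solve w + c tan(wa) = 0; one root of each type lies in every
    half-period of tan(wa)."""
    eps = 1e-9
    roots = []
    for k in range((n + 2) // 2):
        lo, hi = k * np.pi / a + eps, (k + 0.5) * np.pi / a - eps
        roots.append(_bisect(lambda w: c - w * np.tan(w * a), lo, hi))   # cosine
        lo, hi = (k + 0.5) * np.pi / a + eps, (k + 1) * np.pi / a - eps
        roots.append(_bisect(lambda w: w + c * np.tan(w * a), lo, hi))   # sine
    w = np.sort(np.array(roots))[:n]
    return w, 2.0 * c / (w**2 + c**2)
```

A useful check is that the eigenvalues of the integral operator sum to the trace of the kernel, i.e. to the domain length 2a, so a truncated sum should approach that value from below.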
The functions f^(2) and eigenvalues λ^(2) are given by identical expressions with c1 replaced by c2 and l_x1 replaced by l_x2. The complete normalized eigenfunctions are then given by the equation

f_n(x1, x2) = f_i^(1)(x1) f_j^(2)(x2) .   (5.60)

Note that if c1 = c2, then to each eigenvalue there correspond two eigenfunctions of the form given by equation (5.60), the second function being obtained from the first one by permuting the subscripts; in this case the two are combined, from here on, into a single normalized eigenfunction proportional to f_i^(1)(x) f_j^(2)(y) + f_j^(1)(x) f_i^(2)(y). In the expansion of the random process, the terms are ordered in descending order of the magnitude of the eigenvalues λ_n.

Curved Plate - Numerical Solution

Subdividing the domain A of the plate into N finite elements A_e, equation (5.48) becomes

λ_n f_n(x1, y1) = Σ_{e=1}^{N} ∫_{A_e} C(x1, y1; x2, y2) f_n(x2, y2) dA_e .   (5.61)

Interpolating for the value of the unknown function within an element in terms of its nodal values results in the following expression

f(x, y) = H^e(r1, r2) f_N^e ,   (5.62)

where H^e(r1, r2) is the interpolation matrix in terms of the local coordinates r1 and r2, and f_N^e is the vector of nodal values of the unknown function associated with element (e). For this particular problem, bilinear interpolation is used over the four-noded quadrilateral elements, the four shape functions being (1 - r1)(1 - r2)/4, (1 + r1)(1 - r2)/4, (1 + r1)(1 + r2)/4, and (1 - r1)(1 + r2)/4. Substituting equation (5.62) into equation (5.61), and performing a transformation to local coordinates with |J^e| the Jacobian of the coordinate transformation, leads to equation (5.64). A system of algebraic equations is then obtained from equation (5.64) by requiring the corresponding error to be orthogonal to all the interpolation functions used. The result is an equation identical to equation (2.82), namely

C D = Λ B D ,

where the j-th column of D is the j-th eigenfunction calculated at the nodal points, Λ is the diagonal matrix of eigenvalues, and the matrices C and B are obtained by assembling the element matrices

C^e = ∫_{A_e} ∫_{A_e} C(x1, y1; x2, y2) H^{eT}(x1, y1) H^e(x2, y2) dA_e dA_e

and

B^e = ∫_{A_e} H^{eT}(x, y) H^e(x, y) dA_e .

The integrations indicated may be performed either analytically or using some numerical quadrature scheme; the assembly procedure just mentioned consists of combining entries corresponding to the same node (Akin, 1982). Once the eigenvalues and eigenfunctions have been calculated, the solution procedure is similar to the one employed for the square plate example.

The Karhunen-Loeve expansion for the modulus of elasticity may now be substituted into equation (5.46) for the strain energy, giving

V = (1/2) U^T [ K^(0) + Σ_{k=1}^{M} ξ_k(θ) K^(k) ] U ,   (5.63)

where K^(0) is the mean stiffness matrix and K^(k) is the stiffness matrix associated with the k-th term of the Karhunen-Loeve expansion. The work performed by the externally applied forces is

W = U^T Σ_{e=1}^{N} C^{eT} ∫_{A_e} H^{eT}(x) f(x) dA_e ,   (5.65)

where f(x) denotes the externally applied load.
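The discretized eigenproblem can be sketched as follows. For brevity the sketch uses a one-dimensional Nystrom (quadrature) discretization rather than the bilinear-element Galerkin assembly of the matrices C and B described above; both reduce the integral equation to a matrix eigenvalue problem, and for the exponential kernel the computed eigenvalues can be checked against equation (5.56). All names and grid parameters are assumptions.

```python
import numpy as np

def kl_nystrom(cov, grid, weights, n_modes):
    """Discretize the integral eigenproblem  int C(x,y) f(y) dy = lam f(x)
    on a quadrature grid (Nystrom method).  The kernel matrix is symmetrized
    with sqrt(weights) so that a standard symmetric eigensolver applies."""
    C = cov(grid[:, None], grid[None, :])
    s = np.sqrt(weights)
    A = s[:, None] * C * s[None, :]            # symmetrized kernel matrix
    lam, V = np.linalg.eigh(A)
    idx = np.argsort(lam)[::-1][:n_modes]      # descending eigenvalues
    return lam[idx], V[:, idx] / s[:, None]    # eigenvalues, nodal eigenfunctions

# Exponential covariance on (-1/2, 1/2), correlation distance b = 1
b = 1.0
m = 200
x = (np.arange(m) + 0.5) / m - 0.5             # midpoint quadrature nodes
w = np.full(m, 1.0 / m)                        # midpoint quadrature weights
lam, f = kl_nystrom(lambda s_, t_: np.exp(-np.abs(s_ - t_) / b), x, w, 4)
```

The returned eigenfunctions are normalized so that their weighted sum of squares over the grid equals one, mirroring the continuous normalization of equations (5.57)-(5.58).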
Minimizing the total potential energy (V - W) with respect to U leads to the equation

[ K^(0) + Σ_{k=1}^{M} ξ_k(θ) K^(k) ] U = f ,   (5.72)

where

f = Σ_{e=1}^{N} C^{eT} ∫_{A_e} H^{eT}(x) f(x) dA_e .

From this point on, the treatment of this two dimensional problem is identical to that for the previous one-dimensional beam problem. Specifically, the response vector U is expanded as

U = Σ_{i=0}^{P} c_i Ψ_i[{ξ_l}] ,   (5.73)

with the vector coefficients c_i evaluated as the solution of the following system of algebraic equations

Σ_{i=0}^{P} [ Σ_{k=0}^{M} ⟨ ξ_k Ψ_i[{ξ_l}] Ψ_j[{ξ_l}] ⟩ K^(k) ] c_i = ⟨ Ψ_j[{ξ_l}] f ⟩ ,  j = 0, 1, ..., P ,   (5.74)

where ξ_0 ≡ 1.

5.3.2 Results

Square Plate - Analytical Solution

The square plate is divided into sixteen elements, four along each direction, and the nodes are numbered in increasing order away from the clamped edge, counting from left to right. Figures (5.46) and (5.47) show the standard deviation of the response at the free corner of the plate versus that of the modulus of elasticity, for the improved Neumann expansion and for the Homogeneous Chaos expansion, respectively. Observe that better convergence is achieved with a third order Homogeneous Chaos expansion than with a fourth order Neumann expansion.

Figures (5.48)-(5.50) show the projections of the response on the Homogeneous Chaoses for various orders of the expansion; these are the vectors c_i appearing in equation (5.73). Note that the fluctuations of the nodal values of these vectors reflect the fact that the two-dimensional geometry of the plate is represented by a one-dimensional figure. Figures (5.51)-(5.55) show the convergence of the individual projections as the order of the expansion is increased, that is, as the value of P in equation (5.73) is increased. It is seen that the third order Chaos contributes a small amount to the total variation as compared to the first two.

Figures (5.56)-(5.63) show the results corresponding to the probability distribution of one of the response variables at the free corner of the plate, using two terms and four terms in the KL expansion for the material stochasticity; the method described in section (4.4) is used here again.

The curved plate is shown in Figure (5.45). The length of the straight edges is equal to 1, and the curved side is a ninety degree arc of a circle of unit radius. The standard deviation of the longitudinal and transverse displacements at node A, plotted against the standard deviation of the modulus of elasticity, is shown in Figures (5.64) and (5.65), respectively; the results corresponding to four and six terms in the KL expansion are shown in Figures (5.66) and (5.67).

Figure 5.46: Normalized Standard Deviation of Longitudinal Displacement at Corner A of the Rectangular Plate, versus Standard Deviation of the Modulus of Elasticity; Exponential Covariance; Neumann Expansion Solution (P = Order of the Neumann Expansion, M = Order of KL Expansion; 5000-Sample MCS shown for comparison); σ_max = 0.433.

Figure 5.47: Normalized Standard Deviation of Longitudinal Displacement at Corner A of the Rectangular Plate, versus Standard Deviation of the Modulus of Elasticity; Exponential Covariance; Polynomial Chaos Solution (P = Order of the Homogeneous Chaos, M = Order of KL Expansion; 5000-Sample MCS shown for comparison); σ_max = 0.433.

Figure 5.48: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 0, 1; Zeroth and First Order Chaoses; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2.
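The structure of the coupled system (5.74) can be illustrated on a hypothetical single degree of freedom with one Karhunen-Loeve term, where the stiffness matrices reduce to scalars k0 and k1 and the expectations ⟨ξ Ψ_i Ψ_j⟩ follow in closed form from the Hermite recurrence. This is a minimal sketch under those assumptions, not the plate computation itself.

```python
import numpy as np
from math import factorial

def solve_pc_1dof(k0, k1, f, P):
    """Galerkin Polynomial Chaos solution of (k0 + k1*xi) u = f for a single
    degree of freedom and a single Gaussian variable xi, using Hermite
    chaoses He_0..He_P.  From xi*He_i = He_{i+1} + i*He_{i-1} together with
    <He_i He_j> = i! delta_ij, the nonzero moments are
    <xi He_{j+1} He_j> = (j+1)! and <xi He_{j-1} He_j> = j!."""
    A = np.zeros((P + 1, P + 1))
    b = np.zeros(P + 1)
    b[0] = f                                     # <He_j f> = f only for j = 0
    for j in range(P + 1):
        A[j, j] = k0 * factorial(j)              # k0 <He_j He_j>
        if j + 1 <= P:
            A[j, j + 1] = k1 * factorial(j + 1)  # couples c_{j+1} into row j
        if j - 1 >= 0:
            A[j, j - 1] = k1 * factorial(j)      # couples c_{j-1} into row j
    return np.linalg.solve(A, b)                 # chaos coefficients c_0..c_P
```

The off-diagonal blocks are exactly the coupling terms noted earlier for the beam problem: the system is sparse (tridiagonal here) but not block-diagonal, so all coefficients are updated as P increases.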
Figure 5.49: Linear Interpolation of the Nodal Values of the Vectors c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 0, ..., 5; Second Order Chaos; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2.

Figure 5.50: Linear Interpolation of the Nodal Values of the Vectors c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 0, ..., 9; Third Order Chaos; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2.
Figure 5.51: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 1; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 3, 6, 10.

Figure 5.52: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 2; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 3, 6, 10.
Figure 5.53: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 3; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 6, 10.

Figure 5.54: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 4; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 6, 10.
Figure 5.55: Linear Interpolation of the Nodal Values of the Vector c_i of Equation (5.73) for the Rectangular Plate Stretching Problem, i = 5; Longitudinal Displacement Representation; 2 Terms in KL Expansion, M = 2; P = 6, 10.

Figure 5.56: Longitudinal Displacement at the Free End of the Rectangular Plate; Probability Density Function Using 30,000-Sample MCS and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Exponential Covariance.
Figure 5.57: Longitudinal Displacement at the Free End of the Rectangular Plate; Tail of the Probability Density Function Using 30,000-Sample MCS and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Exponential Covariance.

Figure 5.58: Longitudinal Displacement at the Free End of the Rectangular Plate; Cumulative Distribution Function Using 30,000-Sample MCS and up to Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Exponential Covariance.
Figure 5.59: Longitudinal Displacement at the Free End of the Rectangular Plate; Tail of the Cumulative Distribution Function Using 30,000-Sample MCS and Using Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Exponential Covariance.

Figure 5.60: Longitudinal Displacement at the Free End of the Rectangular Plate; Probability Density Function Using 30,000-Sample MCS and up to Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Exponential Covariance.
Figure 5.61: Longitudinal Displacement at the Free End of the Rectangular Plate; Tail of the Probability Density Function Using 30,000-Sample MCS and Using Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Exponential Covariance.

Figure 5.62: Longitudinal Displacement at the Free End of the Rectangular Plate; Cumulative Distribution Function Using 30,000-Sample MCS and up to Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Exponential Covariance.
Figure 5.63: Longitudinal Displacement at the Free End of the Rectangular Plate; Tail of the Cumulative Distribution Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Exponential Covariance.

the square plate example. This behavior is attributed to the fact that in this case the random field involves intricate geometric boundaries; therefore, more terms are required for its adequate representation. Results corresponding to up to six terms in the Karhunen-Loeve expansion and third order Polynomial Chaos are shown. The probability distribution functions corresponding to one of the response variables at node A are depicted in Figures (5.70)-(5.75).

Figure 5.64: Normalized Standard Deviation of Longitudinal Displacement at Corner A of the Curved Plate, versus Standard Deviation of the Modulus of Elasticity; Two Terms in the KL Expansion; Exponential Covariance; σmax = 19. (p = order of the Homogeneous Chaos; M = order of the KL expansion; curves: 500-Sample MCS and p = 1, 2, 3.)

Figure 5.65: Normalized Standard Deviation of Transverse Displacement at Corner A of the Curved Plate, versus Standard Deviation of the Modulus of Elasticity; Two Terms in the KL Expansion; Exponential Covariance; σmax = 20.

Figure 5.66: Normalized Standard Deviation of Longitudinal Displacement at Corner A of the Curved Plate, versus Standard Deviation of the Modulus of Elasticity; Four Terms in the KL Expansion; Exponential Covariance; σmax = 19.

Figure 5.67: Normalized Standard Deviation of Transverse Displacement at Corner A of the Curved Plate, versus Standard Deviation of the Modulus of Elasticity; Four Terms in the KL Expansion; Exponential Covariance; σmax = 20.

Figure 5.68: Normalized Standard Deviation of Longitudinal Displacement at Corner A of the Curved Plate, versus Standard Deviation of the Modulus of Elasticity; Six Terms in the KL Expansion; Exponential Covariance; σmax = 19.

Monte Carlo Simulation

The two-dimensional process representing the modulus of elasticity of the plate is simulated as follows. First, the covariance matrix is constructed, the ij-th element of which corresponds to the correlation of the process at points i and j. Following that, the Cholesky decomposition of the positive definite covariance matrix is obtained (Golub and Van Loan, 1984). The columns of the Cholesky factor are then used as the basis for simulating the process. In other words, the process is obtained by premultiplying the Cholesky factor with a vector consisting of uncorrelated Gaussian variates with zero mean and unit variance.

For the example involving the curved geometry, a different simulation technique is used. It is prompted by the fact that the nodal points are not uniformly distributed over the domain of the plate. The issue is addressed by using the Karhunen-Loeve expansion to simulate a truly continuous random field. To this end, the eigenvalues and eigenfunctions of the covariance kernel are computed as described in the previous section. The random field is then simulated using equation (2.6), with the number of terms equal to the number of nodes in the system. The orthogonal random variables appearing in that equation are obtained as pseudorandom, computer-generated uncorrelated variates with zero mean and unit variance. The resulting simulated random field is not as sensitive to the mesh size and nodal point distribution as the field obtained using the more conventional procedure described for the previous example. The results from using the Monte Carlo simulation method are superimposed on the same plot as the analytical results. Observe the good agreement between the analytical and the simulated results, even for large values of the coefficient of variation.

Figure 5.70: Longitudinal Displacement at the Free End of the Curved Plate; Probability Density Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Exponential Covariance; Correlation Length = 1.
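The conventional Cholesky-based simulation procedure described above can be sketched as follows. The grid geometry, the exponential covariance form, and the numerical parameters are illustrative assumptions, not values taken from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-D grid of nodal points (the book's plate mesh is not reproduced).
x, y = np.meshgrid(np.linspace(0.0, 1.0, 8), np.linspace(0.0, 1.0, 8))
pts = np.column_stack([x.ravel(), y.ravel()])

# Covariance matrix: exponential kernel, correlation length b = 1, unit variance;
# the (i, j) entry is the correlation of the process at points i and j.
b = 1.0
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-d / b)

# Cholesky factor of the (positive definite) covariance matrix,
# cf. Golub and Van Loan (1984); a tiny jitter guards against round-off.
L = np.linalg.cholesky(C + 1e-10 * np.eye(C.shape[0]))

# One realization: premultiply the Cholesky factor by uncorrelated
# zero-mean, unit-variance Gaussian variates.
field = L @ rng.standard_normal(C.shape[0])

# Sanity check: the sample covariance over many realizations approaches C.
samples = L @ rng.standard_normal((C.shape[0], 20_000))
C_hat = samples @ samples.T / samples.shape[1]
print("max covariance error:", np.abs(C_hat - C).max())
```

Because the covariance is built only at the nodal points, the quality of the simulated field inherits the mesh and nodal point distribution, which is the sensitivity the Karhunen-Loeve alternative below avoids.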
Figure 5.69: Normalized Standard Deviation of Transverse Displacement at Corner A of the Curved Plate, versus Standard Deviation of the Modulus of Elasticity; Six Terms in the KL Expansion; Exponential Covariance; σmax = 20.
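The Karhunen-Loeve sampling strategy used for the curved-plate simulations, which samples the field through the eigenpairs of its covariance kernel rather than through a nodal covariance matrix, can be sketched numerically as below. The one-dimensional exponential kernel, its Nystrom (midpoint quadrature) discretization, and the truncation level are illustrative assumptions; the book computes the eigenpairs as described in its earlier sections.

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretize a 1-D exponential covariance kernel C(x1, x2) = exp(-|x1 - x2| / b)
# on [0, 1] with a simple Nystrom (midpoint quadrature) scheme.
n, b = 200, 1.0
xq = (np.arange(n) + 0.5) / n          # quadrature points
w = 1.0 / n                            # uniform quadrature weight
C = np.exp(-np.abs(xq[:, None] - xq[None, :]) / b)

# Eigenpairs of the kernel (symmetric eigenproblem on the weighted matrix),
# sorted by decreasing eigenvalue and L2-normalized.
lam, phi = np.linalg.eigh(C * w)
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order] / np.sqrt(w)

# Truncated KL expansion: field(x) = sum_k sqrt(lam_k) * xi_k * phi_k(x),
# with xi_k uncorrelated zero-mean, unit-variance pseudorandom variates.
M = 6
xi = rng.standard_normal(M)
field = (phi[:, :M] * np.sqrt(lam[:M])) @ xi

print("largest eigenvalues:", lam[:4])
```

Since the eigenfunctions are defined over the whole domain, the realization can be evaluated at any point, not just at mesh nodes, which is what makes the simulated field insensitive to the nodal point distribution.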
Figure 5.71: Longitudinal Displacement at the Free End of the Curved Plate; Tail of the Probability Density Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Two Terms in the KL Expansion; Exponential Covariance.

Figure 5.72: Longitudinal Displacement at the Free End of the Curved Plate; Probability Density Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Exponential Covariance; Correlation Length = 1.

5.4 One Dimensional Dynamic Problem

5.4.1 Description of the Problem

The last example to be considered involves the Euler-Bernoulli beam shown in Figure (5.76), of length L, modulus of elasticity E, area moment of inertia I, and mass density m. Let the beam be supported by an elastic foundation having a reaction modulus k(x, θ), which is considered to be a one-dimensional Gaussian random process with mean value k̄ and covariance function C_kk(x1, x2). The beam is assumed to be acted upon by a zero-mean stationary random process f(x, t, θ), featuring both temporal and spatial random fluctuations, as reflected by its spectral density function S_ff(x1, x2, ω), given by equation (5.75) for |ωb| > 0.1, where x1 and x2 denote two locations on the beam, ω is the wave number associated with time, and b is the correlation length of the excitation process.
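The role of a frequency dependent spatial correlation can be explored numerically. The kernel below is an assumed stand-in, not equation (5.75) itself, which is not reproduced here; the sketch shows that, at a fixed frequency, the cross-spectral matrix sampled along the beam is well captured by a few terms of its eigen (spectral) expansion, the property exploited later in the chapter through equation (5.96).

```python
import numpy as np

# Assumed frequency-dependent cross-spectral kernel (a stand-in for the book's
# equation (5.75)): S_ff(x1, x2, w) = exp(-|x1 - x2| * |w| / b).
def sff(x1, x2, w, b=1.0):
    return np.exp(-np.abs(x1 - x2) * abs(w) / b)

n = 100
x = np.linspace(0.0, 10.0, n)   # illustrative beam-length-scale interval
wfreq = 2.0                      # a fixed frequency

S = sff(x[:, None], x[None, :], wfreq)

# Spectral (eigen) expansion of the kernel at this frequency:
# S_ff(x1, x2, w) ~ sum_i mu_i(w) g_i(x1, w) g_i(x2, w).
mu, g = np.linalg.eigh(S)
mu, g = mu[::-1], g[:, ::-1]     # sort by decreasing eigenvalue

# Relative Frobenius error of the truncated expansion with a few terms.
for m in (1, 2, 4, 8):
    S_m = (g[:, :m] * mu[:m]) @ g[:, :m].T
    err = np.linalg.norm(S - S_m) / np.linalg.norm(S)
    print(f"{m} terms: relative error {err:.3f}")
```

As the correlation length of the excitation shrinks (large |ω| here), more eigen-terms are needed, which anticipates the observation in the results section that the required number of terms depends on the correlation length of the excitation process.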
Figure 5.73: Longitudinal Displacement at the Free End of the Curved Plate; Tail of the Probability Density Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Four Terms in the KL Expansion; Exponential Covariance; Correlation Length = 1.

Figure 5.74: Longitudinal Displacement at the Free End of the Curved Plate; Probability Density Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Six Terms in the KL Expansion; Exponential Covariance; Correlation Length = 1.

5.4.2 Implementation

The differential equation governing the motion of the beam with constant bending rigidity is

    m ∂²u(x,t,θ)/∂t² + c ∂u(x,t,θ)/∂t + EI ∂⁴u(x,t,θ)/∂x⁴ + k(x,θ) u(x,t,θ) = f(x,t,θ) ,   (5.76)

where c is a coefficient of viscous damping. In the present analysis, the correlation length b is assumed to be a constant, although any dependence on frequency can be accommodated. Adopting the discretization of the previous examples, the beam may be divided into N finite elements. Then, an equation involving 2N x 2N matrices is obtained of the form

    M Ü(t,θ) + C U̇(t,θ) + K U(t,θ) + K_f(θ) U(t,θ) = f(t,θ) ,   (5.77)
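The elemental matrices entering the discretized equation of motion can be sketched with the standard Euler-Bernoulli (Hermite cubic) beam element. The closed-form element matrices below are the textbook ones; the numerical values of EI, m, k̄, and the beam length are illustrative, not the book's. Each node carries two degrees of freedom (a deflection and a rotation), so imposing boundary conditions on the assembled system yields a 2N x 2N equation of the form of (5.77).

```python
import numpy as np

def beam_element_matrices(EI, m, kbar, L):
    """Euler-Bernoulli element matrices for Hermite cubic shape functions:
    bending stiffness Ke, consistent mass Me, and the mean elastic-foundation
    matrix Kfe = kbar * integral(He^T He dx), which shares the mass pattern."""
    Ke = (EI / L**3) * np.array([
        [ 12.0,    6*L, -12.0,    6*L],
        [  6*L,  4*L*L,  -6*L,  2*L*L],
        [-12.0,   -6*L,  12.0,   -6*L],
        [  6*L,  2*L*L,  -6*L,  4*L*L]])
    pattern = np.array([
        [156.0,   22*L,  54.0,  -13*L],
        [ 22*L,  4*L*L,  13*L, -3*L*L],
        [ 54.0,   13*L, 156.0,  -22*L],
        [-13*L, -3*L*L, -22*L,  4*L*L]])
    Me = (m * L / 420.0) * pattern
    Kfe = (kbar * L / 420.0) * pattern
    return Ke, Me, Kfe

def assemble(n_el, EI, m, kbar, L_total):
    """Assemble global matrices from n_el identical elements
    (2 DOF per node: deflection and rotation)."""
    Le = L_total / n_el
    ndof = 2 * (n_el + 1)
    K = np.zeros((ndof, ndof))
    M = np.zeros((ndof, ndof))
    Kf = np.zeros((ndof, ndof))
    Ke, Me, Kfe = beam_element_matrices(EI, m, kbar, Le)
    for e in range(n_el):
        i = 2 * e
        K[i:i+4, i:i+4] += Ke
        M[i:i+4, i:i+4] += Me
        Kf[i:i+4, i:i+4] += Kfe
    return K, M, Kf

K, M, Kf = assemble(n_el=10, EI=1.0, m=1.0, kbar=5.0, L_total=10.0)
print("global size:", K.shape)
```

With a spatially varying k(x, θ) the foundation integral would be evaluated element by element against the realized modulus, which is exactly where the Karhunen-Loeve substitution of the following equations enters.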
Figure 5.75: Longitudinal Displacement at the Free End of the Curved Plate; Tail of the Probability Density Function Using 30,000-Sample MCS, and Using Third Order Homogeneous Chaos; Six Terms in the KL Expansion; Exponential Covariance; Correlation Length = 1.

where a dot denotes differentiation with respect to time, and M, K, and K_f are generated by assembling the elemental matrices

    M_e = ∫_le He^T m He dx ,   (5.78)

with the elemental bending stiffness matrix K_e defined analogously in equation (5.79), and the elemental foundation matrix

    K_fe = ∫_le k(x, θ) He^T He dx ,   (5.80)

where He denotes the matrix of element shape functions. Further, for simplicity, the damping matrix C is assumed to be of the form

    C = C_M M + C_K K ,   (5.81)

where C_M and C_K are constants. Replacing the process k(x, θ) in equation (5.80) by its Karhunen-Loeve series gives

    K_fe(θ) = Σ_{k=1}^{M} ξ_k(θ) √λ_k ∫_le f_k(x) He^T He dx ,   (5.82)

where f_k(x) is the kth eigenfunction of the covariance kernel and λ_k is the corresponding kth eigenvalue. Assembling these elemental contributions into global matrices K^(k), equation (5.77) then becomes

    M Ü(t) + C U̇(t) + [ K + K̄_f + Σ_{k=1}^{M} ξ_k K^(k) ] U(t) = f(t) ,   (5.83)

in which K̄_f denotes the contribution of the mean k̄ of the foundation modulus, and the argument θ is deleted for notational simplicity. Taking the Fourier transform of equation (5.83) yields

    [ −ω² M + iω C + K + K̄_f + Σ_{k=1}^{M} ξ_k K^(k) ] U(ω) = F(ω) ,   (5.84)

where U(ω) and F(ω) denote the Fourier transforms of U(t) and f(t), equations (5.85)-(5.86). Equation (5.84) may be rewritten as

    [ I + Σ_{k=1}^{M} ξ_k Q^(k)(ω) ] U(ω) = P(ω) ,   (5.87)

where

    H(ω) = −ω² M + iω C + K + K̄_f ,   (5.88)

    Q^(k)(ω) = H⁻¹(ω) K^(k) ,  and  P(ω) = H⁻¹(ω) F(ω) .

Note that the spectral density matrix of P(ω) is related to the spectral density matrix of F(ω) by the equation

    S_PP(ω) = H⁻¹(ω) S_FF(ω) H⁻ᴴ(ω) ,

where the superscript H denotes hermitian transposition. The spectral density of the response process can then be written as

    S_UU(ω) = ⟨ [ I + Σ_k ξ_k Q^(k) ]⁻¹ S_PP(ω) [ I + Σ_k ξ_k Q^(k) ]⁻ᴴ ⟩ .   (5.89)

Equation (5.89) has the same form as equation (5.28). Indeed, using the Neumann expansion for the inverse operator, equation (5.89) can be put in the form

    S_UU(ω) = Σ_{i=0}^{∞} Σ_{j=0}^{∞} (−1)^{i+j} ⟨ [ Σ_k ξ_k Q^(k) ]^i S_PP(ω) ( [ Σ_k ξ_k Q^(k) ]^j )ᴴ ⟩ .

Equivalently, using the Homogeneous Chaos approach, the spectral density of the response can be represented in the form of equation (5.95), involving the deterministic matrix H(ω) and coefficient vectors that are obtained as the solution to a system of linear equations as indicated earlier.

In addition, it is numerically efficient to expand S_ff(x1, x2, ω) in its spectral series and to perform the spatial discretization on the eigenfunctions of the expansion. That is,

    S_ff(x1, x2, ω) = Σ_{i=1} μ_i g_i(x1, ω) g_i(x2, ω) ,   (5.96)

where μ_i refers to the eigenvalues associated with the kernel S_ff(x1, x2, ω) given by equation (5.75), which may be regarded as a frequency dependent spatial correlation, while g_i(x, ω) denote the corresponding eigenfunctions. This procedure is adopted in the finite element code developed to solve this problem.

5.4.3 Results

In the numerical implementation of this problem, the exponential covariance kernel is used for the reaction modulus k(x, θ), with a correlation length equal to 1. The numerical values of the other parameters are included in Figure (5.76). Figure (5.77) shows the spectral density S_UU(ω) of the displacement at one end of the beam, along with pertinent Monte Carlo simulation results for a frequency increment equal to 0.1. The Monte Carlo simulation is obtained, as described in the previous section, using the Cholesky decomposition of the covariance matrix. Note the excellent agreement of the two solutions until the exact damped natural frequency of the beam is approached, causing the norm of the matrix [ I + Σ_{k=1}^{M} ξ_k Q^(k) ] to increase beyond the radius of convergence of the Neumann expansion. This problem is not encountered with the Homogeneous Chaos approach, a consequence of the completeness of the Polynomial Chaos basis in the space of random variables. Figures (5.78) and (5.79) show the results corresponding to successive orders of approximation using the improved Neumann expansion and the Homogeneous Chaos methods, respectively. It is noted that using more than four terms in the spectral expansion of S_ff(x1, x2, ω), equation (5.96), does not improve significantly the quality of the derived solutions. Clearly, the number of terms needed for a particular problem depends on the magnitude of the correlation length of the excitation process.

Figure 5.76: Beam on Random Elastic Foundation Subjected to a Random Dynamic Excitation; Correlation Length = 1.0.

Figure 5.77: Spectral Density of the Displacement at the End of the Beam on Random Elastic Foundation; Exponential Covariance Model; 5000-Sample MCS and 1st, 2nd, and 3rd Order Neumann Expansion.

Figure 5.78: Spectral Density of the Displacement at the End of the Beam on Random Elastic Foundation; 5000-Sample MCS, 3rd Order Neumann Expansion, and 3rd Order Homogeneous Chaos.
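The convergence behavior of the Neumann expansion can be illustrated with a toy version of equation (5.89): a small system with a single random variable ξ, where the exact expectation over ξ is computed by quadrature and compared with truncated Neumann expansions of increasing order. The matrices are arbitrary illustrative choices, and ξ is taken here as a zero-mean, unit-variance uniform variable (rather than the Gaussian variables of the book) so that the expectation exists and the series converges for every realization.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 3-DOF version of the frequency-domain problem (I + xi*Q) U(w) = P(w).
n = 3
A = rng.standard_normal((n, n))
Qs = (A + A.T) / 2.0
Q = 0.3 * Qs / np.abs(np.linalg.eigvalsh(Qs)).max()   # spectral radius 0.3
Spp = np.eye(n)                                       # white input, for simplicity

# "Exact" S_uu = E[(I + xi*Q)^-1 Spp (I + xi*Q)^-H] by Gauss-Legendre quadrature
# over xi ~ Uniform(-sqrt(3), sqrt(3)) (zero mean, unit variance).
a = np.sqrt(3.0)
nodes, weights = np.polynomial.legendre.leggauss(60)
nodes, weights = a * nodes, weights / 2.0
S_exact = np.zeros((n, n))
for xv, wv in zip(nodes, weights):
    B = np.linalg.inv(np.eye(n) + xv * Q)
    S_exact += wv * B @ Spp @ B.T

# Truncated Neumann expansion of the same quantity:
# S_uu ~ sum_{i,j<=p} (-1)^(i+j) E[xi^(i+j)] Q^i Spp (Q^j)^T.
def neumann_suu(p):
    mom = [1.0, 0.0, 1.0, 0.0, 9.0 / 5.0, 0.0, 27.0 / 7.0]  # uniform moments
    S = np.zeros((n, n))
    for i in range(p + 1):
        for j in range(p + 1):
            S += (-1)**(i + j) * mom[i + j] * (
                np.linalg.matrix_power(Q, i) @ Spp @ np.linalg.matrix_power(Q, j).T)
    return S

errs = [np.linalg.norm(neumann_suu(p) - S_exact) / np.linalg.norm(S_exact)
        for p in (1, 2, 3)]
print("relative errors for Neumann orders 1, 2, 3:", errs)
```

The error shrinks with the expansion order only while the perturbation stays inside the radius of convergence; near a resonance, where H⁻¹(ω) and hence Q grow large, the series fails in exactly the way the results above describe.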