
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING

Int. J. Numer. Meth. Engng 2006; 68:401–424


Published online 30 March 2006 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nme.1712

Iterative solution of the random eigenvalue problem with application to spectral stochastic finite element systems

C. V. Verhoosel, M. A. Gutiérrez∗, † and S. J. Hulshoff


Faculty of Aerospace Engineering, Delft University of Technology, 2600 GB Delft, The Netherlands

SUMMARY
A new algorithm for the computation of the spectral expansion of the eigenvalues and eigenvectors of
a random non-symmetric matrix is proposed. The algorithm extends the deterministic inverse power
method using a spectral discretization approach. The convergence and accuracy of the algorithm
are studied for both symmetric and non-symmetric matrices. The method turns out to be efficient
and robust compared to existing methods for the computation of the spectral expansion of random
eigenvalues and eigenvectors. Copyright © 2006 John Wiley & Sons, Ltd.

KEY WORDS: random eigenvalue problem; stochastic finite elements; inverse power method

1. INTRODUCTION

Algebraic eigenvalue problems play an important role in a variety of fields. In structural
mechanics, eigenvalue problems commonly appear in the context of, e.g. vibrations and buckling.
Currently, the computation of eigenvalues and eigenvectors is well understood for deterministic
problems [1, 2]. In many practical cases, however, physical characteristics are not deterministic.
For example, the stiffness of a plate can locally be reduced by material imperfections, or the
velocity of a flow can be influenced by turbulence. The traditional approach to dealing with
these kinds of uncertainties in structures or loading conditions is to use safety factors. Since
the influence of uncertainties is in general unknown, the safety factors that are used need
to be conservative, leading to overdimensioned structures and decreased economic efficiency.
This overdimensioning can be reduced by describing the uncertain problem characteristics more
realistically using random variables, since more insight into the effect of uncertainties is then
obtained.
When the input parameters of a physical problem are described as random variables, the
desired output will also be random. The methods to compute these random results are in general
referred to as stochastic finite element methods (SFEM) [3, 4].

∗ Correspondence to: M. A. Gutiérrez, Faculty of Aerospace Engineering, Delft University of Technology, P.O. Box 5058, 2600 GB, Delft, The Netherlands.
† E-mail: m.gutierrez.tudelft@gmail.com

Received 19 October 2005
Revised 9 February 2006
Accepted 9 February 2006
Two types of analysis
can be carried out by means of SFEM. The first type is uncertainty analysis, which focuses
on the computation of the statistical moments (mean, standard deviation, etc.) of the random
output. The second type is reliability analysis [5–7], which focuses on the computation of the
probability of some rare event (e.g. failure). In this paper a stochastic finite element method
for uncertainty analysis will be considered.
Methods for the purpose of uncertainty analysis are commonly divided into two groups. The
first group contains the simulation-based methods, in which the stochastic moments of the
eigenvalues and eigenvectors are obtained by performing computations for various realizations
of the problem. This process is commonly referred to as sampling. The Monte-Carlo method
is the most important simulation-based method. The second group of SFEM for uncertainty
analysis contains the expansion-based methods. Within this group, a distinction is made between
the perturbation methods [8] and the spectral methods [9]. In the case of the perturbation
methods, the random eigenvalues and eigenvectors are approximated using Taylor series expansions.
In the case of the spectral methods, the random eigenvalues and eigenvectors are approximated
by projecting them onto a global basis.
The perturbation methods are in general computationally inexpensive, but also relatively
inaccurate. The simulation-based methods are more accurate than the perturbation methods, but
require considerably more computational effort. The advantage of the spectral methods with
respect to the perturbation methods is that the accuracy for a given order of basis functions
is considerably better, since perturbation methods use information at a single point, whereas
the spectral methods use information over the whole domain. Although the spectral methods are
computationally more expensive than the perturbation methods, they are in general considerably
less expensive than the simulation-based methods.
SFEM for uncertainty analysis are well developed for linear algebraic systems, but are
less developed for the random eigenvalue problem. The research that has been done has
mainly focused on perturbation methods [10–12] or on simulation-based methods [11, 13, 14].
Attempts have been made to approximate the stochastic moments of both the eigenvalues
and eigenvectors using spectral methods [15, 16]. The method proposed in Reference [15] is
computationally expensive due to the use of sampling. The method proposed in Reference
[16] rewrites the eigenvalue problem as a set of non-linear equations. This method requires
considerable computational effort and a good initial estimate of the eigenvalues and eigenvectors,
which can be obtained using sampling. In this paper, an algorithm to determine the spectral
expansions of the eigenvalues and eigenvectors without the use of sampling is proposed. The
method inherits its efficiency and robustness from the underlying inverse power method. The
proposed method is also able to compute complex pairs of eigenvalues and eigenvectors, which
can appear in the case of non-symmetric matrices.

2. PROBLEM STATEMENT

2.1. The random general eigenvalue problem


The random algebraic eigenvalue problem can be written as

$$\tilde{S}\tilde{u}_i = \tilde{\lambda}_i \tilde{u}_i \quad \text{for } i = 1 \ldots N \tag{1}$$


where S̃ is a random, real, non-symmetric matrix of dimension N. The tilde is used to
indicate that a variable is random. Since the matrix S̃ is random, the
eigenvalues {λ̃i} and eigenvectors {ũi} are also random. Due to the non-symmetry of the matrix S̃,
these eigenvalues and eigenvectors are in general complex. Since the method proposed in this
paper deals with each pair of eigenvalue and eigenvector individually, the subscript i, which
denotes a member of the set of eigenvalues and eigenvectors, will be dropped.
In this paper it is assumed that the matrix S̃ is normally distributed according to

$$\tilde{S} = S_0 + \sum_{i=1}^{m} S_i \tilde{z}_i \tag{2}$$

with {z̃i} being a set of m independent and identically distributed standard normal random
variables, which have the joint probability density function

$$p_z(z) = \frac{1}{(2\pi)^{m/2}} \exp\left(-\frac{1}{2} z^T z\right) \tag{3}$$

In the case that the Karhunen–Loève (K–L) expansion [9] is used to discretize random stiffness
properties, the resulting matrix S̃ will be of form (2).
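As an illustration of form (2), a realization of S̃ can be drawn and analysed with standard dense linear algebra. The following minimal Python sketch uses arbitrary small matrices chosen here for illustration only:

```python
import numpy as np

def sample_matrix(S0, S_list, rng):
    # One realization of S(z) = S0 + sum_i S_i * z_i (Eq. (2)),
    # with z_i independent standard normal variables.
    z = rng.standard_normal(len(S_list))
    return S0 + sum(zi * Si for zi, Si in zip(z, S_list))

rng = np.random.default_rng(0)
S0 = np.diag([1.0, 2.0, 3.0])                       # illustrative mean matrix
S_list = [0.05 * rng.standard_normal((3, 3)) for _ in range(2)]
S = sample_matrix(S0, S_list, rng)
print(np.linalg.eigvals(S))   # one realization of the random eigenvalues
```

Sampling realizations in this way is what the Monte-Carlo benchmark used later in this paper does; the method proposed here avoids it.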

2.2. Spectral representation


The stochastic eigenvalues and eigenvectors can be approximated by using a spectral expansion.
This expansion for both the random eigenvalues and random eigenvectors is of the form

$$\check{\lambda}(\tilde{z}) = \sum_{i=0}^{n} \lambda_i \Psi_i(\tilde{z}) \tag{4}$$

$$\check{u}(\tilde{z}) = \sum_{i=0}^{n} u_i \Psi_i(\tilde{z}) \tag{5}$$

where the •̌ is used to indicate a spectral expansion. In the expansions, {Ψ̃i} are the generalized
Hermite polynomials [3, 9], often referred to as polynomial chaoses. These polynomials satisfy
the properties

$$E[\tilde{\Psi}_i] = \delta_{i0} \quad \text{for } i = 0 \ldots n \tag{6}$$

$$E[\tilde{\Psi}_i \tilde{\Psi}_j] = c_{ii} \delta_{ij} \quad \text{for } i, j = 0 \ldots n \tag{7}$$

For notational brevity, the expectations of products of the generalized Hermite polynomials will be written
as

$$E[\tilde{\Psi}_i \tilde{\Psi}_j] = c_{ii} \delta_{ij} \tag{8}$$

$$E[\tilde{\Psi}_i \tilde{\Psi}_j \tilde{\Psi}_k] = c_{ijk} \tag{9}$$

$$E[\tilde{\Psi}_i \tilde{\Psi}_j \tilde{\Psi}_k \tilde{\Psi}_l] = c_{ijkl} \tag{10}$$


These coefficients have to be computed only once for each basis and can be tabulated. The
exact coefficients can be computed efficiently by means of Gauss–Hermite quadrature.
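As a sketch of such a tabulation in the univariate case, the coefficients can be evaluated with probabilists' Hermite polynomials and Gauss–Hermite quadrature; the function below is a minimal illustration, not the authors' implementation:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def expectation_coeff(orders, ndeg=20):
    # E[Psi_i Psi_j ...] for univariate probabilists' Hermite polynomials
    # He_i(z), z ~ N(0, 1), evaluated by Gauss-Hermite(e) quadrature.
    x, w = He.hermegauss(ndeg)          # nodes/weights for weight exp(-z^2/2)
    vals = np.ones_like(x)
    for i in orders:
        coeffs = np.zeros(i + 1)
        coeffs[i] = 1.0
        vals = vals * He.hermeval(x, coeffs)  # multiply He_i(x) into product
    return (w @ vals) / sqrt(2.0 * pi)  # normalize to the Gaussian density

# c_ii = E[He_i^2] = i! (Eq. (7)), and e.g. c_123 = E[He_1 He_2 He_3] = 6
assert abs(expectation_coeff((3, 3)) - factorial(3)) < 1e-8
assert abs(expectation_coeff((1, 2, 3)) - 6.0) < 1e-8
```

With ndeg = 20 the quadrature is exact for all polynomial products of total degree up to 39, which covers the coefficients needed here.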

2.3. Moments of complex random variables


In practice, the spectral expansions of the eigenvalues and eigenvectors are not directly useful.
To make interpretation possible, the moments of the eigenvalues and eigenvectors need to be
computed. For complex variables, the definitions of the stochastic moments differ from the
definitions for real random variables. The expectation and standard deviation of the complex
scalar eigenvalue are given by

$$\mu_\lambda = E\left[\sum_{i=0}^{n} \lambda_i \tilde{\Psi}_i\right] = \lambda_0 \tag{11}$$

$$\sigma_\lambda^2 = E\left[\left(\sum_{i=0}^{n} \lambda_i \tilde{\Psi}_i\right)\left(\sum_{j=0}^{n} \bar{\lambda}_j \tilde{\Psi}_j\right)\right] - E\left[\sum_{i=0}^{n} \lambda_i \tilde{\Psi}_i\right] E\left[\sum_{i=0}^{n} \bar{\lambda}_i \tilde{\Psi}_i\right] = \sum_{i=1}^{n} c_{ii} \lambda_i \bar{\lambda}_i \tag{12}$$

where the bar represents the complex conjugate. The mean vector and covariance matrix of the
random eigenvector can be computed using

$$\mu_u = E\left[\sum_{i=0}^{n} u_i \tilde{\Psi}_i\right] = u_0 \tag{13}$$

$$\Sigma_{uu} = E\left[\left(\sum_{i=0}^{n} u_i \tilde{\Psi}_i\right)\left(\sum_{j=0}^{n} u_j^H \tilde{\Psi}_j\right)\right] - E\left[\sum_{i=0}^{n} u_i \tilde{\Psi}_i\right] E\left[\sum_{i=0}^{n} u_i^H \tilde{\Psi}_i\right] = \sum_{i=1}^{n} c_{ii} u_i u_i^H \tag{14}$$

with the superscript H indicating the Hermitian (conjugate) transpose.
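A direct transcription of Equations (11)–(14) into code is straightforward once the spectral coefficients and the norms c_ii are available; the array layout below (coefficient vectors stored column-wise) is a choice made for this sketch:

```python
import numpy as np

def eigenvalue_moments(lam, c):
    # lam[0..n]: complex spectral coefficients of the eigenvalue,
    # c[i] = c_ii with c[0] = 1 for Psi_0 = 1.
    mean = lam[0]                                 # Eq. (11)
    var = np.sum(c[1:] * np.abs(lam[1:]) ** 2)    # Eq. (12)
    return mean, np.sqrt(var)

def eigenvector_moments(U, c):
    # U stores one spectral coefficient vector u_i per column.
    mean = U[:, 0]                                # Eq. (13)
    cov = sum(c[i] * np.outer(U[:, i], np.conj(U[:, i]))   # Eq. (14)
              for i in range(1, U.shape[1]))
    return mean, cov
```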

3. SPECTRAL METHOD FOR THE LINEAR ALGEBRAIC PROBLEM

To demonstrate the difficulties that are encountered when determining the spectral solution of
the random general eigenvalue problem (1), the spectral approach applied to the linear algebraic
system

$$\tilde{A}\tilde{x} = b \tag{15}$$

will briefly be discussed. A detailed discussion of the spectral method for the linear algebraic
problem can be found in References [3, 9]. In analogy with the considered matrix for the
random eigenvalue problem, the matrix Ã is assumed to be of the form

$$\tilde{A} = A_0 + \sum_{i=1}^{m} A_i \tilde{z}_i \tag{16}$$


The purpose of the spectral method is to determine the spectral expansion of x̃, which is of
the form

$$\check{x}(\tilde{z}) = \sum_{i=0}^{n} x_i \Psi_i(\tilde{z}) \tag{17}$$

where {Ψ̃i} are the generalized Hermite polynomials.

3.1. Definition of stochastic inner product and stochastic norm


Since the spectral method is a projection method, it is necessary to define a stochastic inner
product. The stochastic equivalent of the L₂-inner product

$$\langle \tilde{x}, \tilde{y} \rangle = \int_{\mathbb{R}^m} x(z)\, y(z)\, p_z(z) \, dz = E[\tilde{x}\tilde{y}] \tag{18}$$

is used for the projection, since a projection based on this inner product optimizes the ap-
proximation of the second moment [9]. The random variables x̃ and ỹ are assumed to be a
function of {z̃i }. In the case that x̃ and ỹ are vectors, the inner product is defined as

$$\langle \tilde{x}, \tilde{y} \rangle = E[\tilde{x}^H \tilde{y}] \tag{19}$$

For spectral expansions of x̃ and ỹ, this can be rewritten as

$$\langle \check{x}, \check{y} \rangle = \sum_{i=0}^{n} c_{ii}\, x_i^H y_i \tag{20}$$

with the corresponding norm

$$\|\check{x}\| = \sqrt{\sum_{i=0}^{n} c_{ii}\, x_i^H x_i} \tag{21}$$
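In terms of spectral coefficients, Equations (20) and (21) reduce to weighted sums over the basis; a minimal sketch, again with coefficient vectors stored column-wise:

```python
import numpy as np

def stochastic_inner(X, Y, c):
    # <x, y> = sum_i c_ii x_i^H y_i (Eq. (20)); np.vdot conjugates its
    # first argument, which supplies the Hermitian transpose.
    return sum(c[i] * np.vdot(X[:, i], Y[:, i]) for i in range(X.shape[1]))

def stochastic_norm(X, c):
    # ||x|| = sqrt(<x, x>) (Eq. (21)); the inner product of x with
    # itself is real and non-negative.
    return np.sqrt(stochastic_inner(X, X, c).real)
```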

3.2. Solution procedure for the linear algebraic problem


The spectral expansion of the random variable x̃ is found by applying Galerkin’s method using
the inner product as defined in the previous section. The projection of the random variable x̃
can be found using

$$\langle \tilde{A}\tilde{x}, \tilde{\Psi}_r \mathbf{1} \rangle = \langle b, \tilde{\Psi}_r \mathbf{1} \rangle \quad \text{for } r = 0 \ldots n \tag{22}$$

with 1 being a vector of ones of size N. Substitution of expression (16) for Ã and the spectral
representation (17) for x̃ yields

$$E\left[\left(A_0 + \sum_{i=1}^{m} A_i \tilde{z}_i\right)\left(\sum_{j=0}^{n} x_j \tilde{\Psi}_j\right)\tilde{\Psi}_r\right] = E[b\,\tilde{\Psi}_r] \quad \text{for } r = 0 \ldots n \tag{23}$$


Rewriting this expression gives

$$\sum_{j=0}^{n} \left(A_0 c_{jj} \delta_{jr} + \sum_{i=1}^{m} A_i c_{ijr}\right) x_j = b\, \delta_{r0} \quad \text{for } r = 0 \ldots n \tag{24}$$

with the coefficients following from Equations (8) and (9). The result of this projection is a set of
(n + 1) × N equations in (n + 1) × N unknowns, which can be written as a linear system of
equations
$$\sum_{i=0}^{m} \begin{bmatrix} A_i c_{i00} & \cdots & A_i c_{i0n} \\ \vdots & \ddots & \vdots \\ A_i c_{in0} & \cdots & A_i c_{inn} \end{bmatrix} \begin{pmatrix} x_0 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b \\ 0 \\ \vdots \\ 0 \end{pmatrix} \tag{25}$$

For notational convenience, the mean matrix of Ã has been included in the summation. The
spectral expansion x̌ of the random vector x̃ can then be found by solving this linear system.
This can be done using a direct solver, but the solution can be determined more efficiently
using an iterative solver [17–19].
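For small problems, system (25) can be assembled and solved directly, as in the following sketch. The expectation table C, with C[i, j, r] = c_ijr, is assumed to be precomputed; its layout is a choice made here:

```python
import numpy as np

def solve_spectral_linear(A_list, b, C):
    # Assemble the block system (25) and solve it with a direct solver.
    # A_list = [A_0, ..., A_m]; returns the coefficients x_0..x_n as columns.
    N, n1 = b.size, C.shape[1]          # n1 = n + 1 basis polynomials
    K = np.zeros((n1 * N, n1 * N))
    for i, Ai in enumerate(A_list):
        for j in range(n1):
            for r in range(n1):
                K[r*N:(r+1)*N, j*N:(j+1)*N] += C[i, j, r] * Ai
    rhs = np.zeros(n1 * N)
    rhs[:N] = b                         # E[b Psi_r] = b delta_r0: block r = 0
    x = np.linalg.solve(K, rhs)
    return x.reshape(n1, N).T
```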

3.3. Application of standard solution procedure to the random general eigenvalue problem
In the case that the spectral method as applied to the linear algebraic problem is used for the
random general eigenvalue problem, it follows that

$$\langle \tilde{S}\tilde{u}, \tilde{\Psi}_r \mathbf{1} \rangle = \langle \tilde{\lambda}\tilde{u}, \tilde{\Psi}_r \mathbf{1} \rangle \quad \text{for } r = 0 \ldots n \tag{26}$$

As can be seen, the randomness in the problem is now on both the left- and right-hand side of
the equation. Furthermore, the randomness is in the eigenvectors as well as in the eigenvalues.
Substitution of the spectral expansion for the eigenvalue (4) and for the eigenvector (5) yields

$$\sum_{j=0}^{n} \left(S_0 c_{jj} \delta_{jr} + \sum_{i=1}^{m} S_i c_{ijr}\right) u_j = \sum_{i=0}^{n} \sum_{j=0}^{n} \lambda_i c_{ijr} u_j \quad \text{for } r = 0 \ldots n \tag{27}$$

which can be written as the linear system

$$\sum_{i=0}^{m} \begin{bmatrix} S_i c_{i00} & \cdots & S_i c_{i0n} \\ \vdots & \ddots & \vdots \\ S_i c_{in0} & \cdots & S_i c_{inn} \end{bmatrix} \begin{pmatrix} u_0 \\ \vdots \\ u_n \end{pmatrix} = \sum_{i=0}^{n} \lambda_i \begin{bmatrix} c_{i00} I & \cdots & c_{i0n} I \\ \vdots & \ddots & \vdots \\ c_{in0} I & \cdots & c_{inn} I \end{bmatrix} \begin{pmatrix} u_0 \\ \vdots \\ u_n \end{pmatrix} \tag{28}$$

This is a system with (n + 1) × (N + 1) unknowns but only (n + 1) × N equations. As a consequence,
no unique solution can be found. In Reference [16] it is proposed to avoid this uniqueness
problem by introducing additional equations that prescribe the norm of the eigenvectors. A non-
linear system of (n + 1) × (N + 1) equations in (n + 1) × (N + 1) unknowns is then obtained, which can


be solved using any suitable iterative technique. It turns out that convergence of this method
is only obtained if a good initial estimate of the spectral components of the eigenvalue and
corresponding eigenvector is available. In general it is therefore necessary to use the Monte-
Carlo method to obtain this initial estimate of the spectral components [15, 20], which is not
attractive from the point of view of computational effort.

4. STOCHASTIC INVERSE POWER METHOD

The inverse power method computes the eigenvalues and eigenvectors of a
general matrix. Although it is no longer competitive with more elaborate
algorithms for the computation of eigenvalues, e.g. QR, Arnoldi and Lanczos, it is still used as
an intrinsic part of these methods. The inverse power method consists solely of simple matrix
operations, which makes it possible to extend it such that the spectral expansions of the
eigenvalues (4) and eigenvectors (5) can be computed.
The inverse power method is an iterative procedure that efficiently computes
an eigenvalue and the corresponding eigenvector if a good initial estimate is available [1, 2].
Since the coefficients of variation considered here are in general moderate (V < 20%), the
deterministic eigenvalues and eigenvectors can be taken as a proper initial guess for the
stochastic eigenvalues and eigenvectors. In contrast to currently available methods, it is therefore
in general possible to compute the spectral expansion of the eigenvalues and eigenvectors
without the use of sampling.
To explain the stochastic inverse power method, first the deterministic inverse power method
will be considered. In order to make the extension of the deterministic method to the stochastic
case consistent, the deterministic algorithm is treated in a slightly different way from the
available literature.

4.1. Deterministic inverse power method


The deterministic inverse power method computes the eigenvalue λ closest to an initial guess
q using an iterative procedure [1, 2]. Given an initial guess u⁰ for the eigenvector, normalized
with respect to the L₂-norm, the eigenvalue and corresponding eigenvector closest
to q can be computed using Algorithm 1. In this algorithm, the converged solution is indicated
by the superscript •*.
Algorithm 1. The deterministic inverse power method

Initialize: u⁰, q
While ε > ε*
    Step 1: λ^(k+1) = (u^k)^H S u^k
    Step 2: u^(k+1) = (λ^(k+1) − q)[S − qI]^(−1) u^k
    Step 3: u^(k+1) → u^(k+1) / ‖u^(k+1)‖_L₂
    Step 4: ε = ‖[S − λ^(k+1) I] u^(k+1)‖_L₂
End While
Result: λ*, u*


Step 1: The eigenvalue is updated using the Rayleigh quotient of the current eigenvector
estimate u^k. In the case that the eigenvector is normalized with respect to the
L₂-norm, the denominator of this quotient drops out of the computation. Since an eigenvector
can be multiplied by an arbitrary constant, it is not strictly necessary to update the eigenvalue
in the deterministic algorithm. In the stochastic case, however, this step turns out to be crucial,
as will be shown in Section 4.2. In the deterministic case it is still useful to perform this step,
since the eigenvalue is required for the error definition.
Step 2: The eigenvector is updated using inverse iteration. Since eigenvectors can be multi-
plied by an arbitrary constant, this equation can be rewritten as

$$u^{k+1} = [S - qI]^{-1} u^k \tag{29}$$

The convergence of this method to the eigenvector corresponding to the eigenvalue closest to
q can be proved by writing the initial guess of the eigenvector as

$$u^0 = \sum_{i=1}^{N} \alpha_i u_i^* \tag{30}$$

Under the condition that the initial eigenvector has a component in the direction of the searched
eigenvector, the eigenvector after iteration k can be written as

$$u^k = \alpha_j u_j^* + \sum_{i=1,\, i \neq j}^{N} \left(\frac{\lambda_j^* - q}{\lambda_i^* - q}\right)^{k} \alpha_i u_i^* \tag{31}$$

Since q is closest to λ*_j, it follows that

$$\lim_{k \to \infty} u^k = \alpha_j u_j^* \tag{32}$$

hence the method converges to the eigenvector corresponding to the eigenvalue that is closest
to q in magnitude. The rate of convergence depends on the eigenvalue separation. In the case
that eigenvalues are well separated, the convergence of the method is good. In the case of
eigenvalue multiplicity there are problems with the convergence of the method. The method
discussed in Section 3.3 suffers from the same problem. For the cases considered here, the
eigenvalues are sufficiently separated.
Step 3: Normalize the updated eigenvector with respect to the L2 -norm. Although not strictly
necessary in the deterministic case, it is convenient to do this. It prevents the magnitude of the
eigenvector from increasing with each iteration step, which can lead to numerical overflow.
Step 4: The convergence of the iterative method is checked using the L₂-norm of the residual.
In the case that the condition ε ≤ ε* is satisfied, the procedure is terminated. Otherwise the
next iteration step is performed.
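A compact Python sketch of Algorithm 1 follows; the tolerances and the use of a dense solver are choices made here, and a practical implementation would factorize S − qI once instead of re-solving from scratch:

```python
import numpy as np

def inverse_power(S, q, u0, tol=1e-10, maxit=100):
    # Converges to the eigenpair whose eigenvalue lies closest to the
    # shift q, following the steps of Algorithm 1.
    u = u0 / np.linalg.norm(u0)
    A = S - q * np.eye(S.shape[0])
    lam = q
    for _ in range(maxit):
        lam = np.vdot(u, S @ u)                  # Step 1: Rayleigh quotient
        u = np.linalg.solve(A, u)                # Step 2: inverse iteration;
        u = u / np.linalg.norm(u)                # Step 3: the (lam - q) factor
                                                 # is absorbed by normalization
        res = np.linalg.norm(S @ u - lam * u)    # Step 4: residual check
        if res < tol:
            break
    return lam, u
```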

4.2. Stochastic inverse power method


In order to solve a random eigenvalue problem, the deterministic inverse power method
(Algorithm 1) can be rewritten to yield the stochastic inverse power method (Algorithm 2).
Here the spectral method will be used to perform all the individual steps in this algorithm.


Algorithm 2. The stochastic inverse power method

Initialize: ũ⁰, q
While ε > ε*
    Step 1: λ̃^(k+1) = (ũ^k)^H S̃ ũ^k
    Step 2: ũ^(k+1) = (λ̃^(k+1) − q)[S̃ − qI]^(−1) ũ^k
    Step 3: ũ^(k+1) → ũ^(k+1) / ‖ũ^(k+1)‖
    Step 4: ε₁^(k+1) = ‖[S̃ − λ̃^(k+1) I] ũ^(k+1)‖
            ε₂^(k+1) = |V^(k+1) − V^k| / |V^k|
End While
Result: λ̃*, ũ*, ε₁*

Step 1: The Rayleigh quotient is used to update the eigenvalue based on the normalized
eigenvector ũk of the previous iteration

$$\tilde{\lambda}^{k+1} = (\tilde{u}^k)^H \tilde{S}\, \tilde{u}^k \tag{33}$$

The projection of λ̃^(k+1) on {Ψ̃i} is found using Galerkin's method to yield

$$\langle \tilde{\lambda}^{k+1}, \tilde{\Psi}_r \rangle = \langle (\tilde{u}^k)^H \tilde{S}\, \tilde{u}^k, \tilde{\Psi}_r \rangle \quad \text{for } r = 0 \ldots n \tag{34}$$

Substitution of the spectral expansions of the eigenvalue (4) and eigenvector (5) into (34) gives

$$\sum_{i=0}^{n} \lambda_i^{k+1} \langle \tilde{\Psi}_i, \tilde{\Psi}_r \rangle = \left\langle \sum_{i=0}^{n} \sum_{j=0}^{n} (u_i^k)^H \left( [S_0 - qI]\, \tilde{\Psi}_i \tilde{\Psi}_j + \sum_{p=1}^{m} S_p \tilde{\Psi}_i \tilde{\Psi}_j \tilde{\Psi}_p \right) u_j^k,\; \tilde{\Psi}_r \right\rangle \quad \text{for } r = 0 \ldots n \tag{35}$$

Since the Hermite polynomials {Ψ̃i} are orthogonal with respect to the defined stochastic inner
product (18), it follows that

$$\lambda_r^{k+1} = \frac{1}{c_{rr}} \sum_{i=0}^{n} \sum_{j=0}^{n} (u_i^k)^H \left( [S_0 - qI]\, c_{ijr} + \sum_{p=1}^{m} S_p c_{ijpr} \right) u_j^k \quad \text{for } r = 0 \ldots n \tag{36}$$

The update of the eigenvalue can explicitly be obtained using the Rayleigh quotient, and is
therefore inexpensive.
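A direct transcription of update (36), as printed above, might look as follows. The expectation tables C3 (with C3[i, j, r] = c_ijr) and C4 (with C4[i, j, p, r] = c_ijpr), and the convention that the first-order chaoses Ψ₁ … Ψ_m occupy basis indices 1 … m, are assumptions of this sketch:

```python
import numpy as np

def eigenvalue_update(U, S0, S_list, C3, C4, q):
    # Galerkin projection of the Rayleigh quotient, Eq. (36).
    # U holds u_0^k .. u_n^k column-wise.
    N, n1 = U.shape
    Sq = S0 - q * np.eye(N)
    lam = np.zeros(n1, dtype=complex)
    for r in range(n1):
        acc = 0.0
        for i in range(n1):
            for j in range(n1):
                M = Sq * C3[i, j, r]
                for p, Sp in enumerate(S_list, start=1):
                    M = M + Sp * C4[i, j, p, r]
                acc = acc + np.vdot(U[:, i], M @ U[:, j])
        lam[r] = acc / C3[r, r, 0]   # divide by c_rr (note Psi_0 = 1)
    return lam
```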
Step 2: The eigenvector is updated by solving the linear system

$$[\tilde{S} - qI]\,\tilde{u}^{k+1} = (\tilde{\lambda}^{k+1} - q)\,\tilde{u}^k \tag{37}$$

Application of Galerkin’s method gives

$$\langle [\tilde{S} - qI]\,\tilde{u}^{k+1}, \tilde{\Psi}_r \mathbf{1} \rangle = \langle (\tilde{\lambda}^{k+1} - q)\,\tilde{u}^k, \tilde{\Psi}_r \mathbf{1} \rangle \quad \text{for } r = 0 \ldots n \tag{38}$$


After substitution of the spectral expansions (4) and (5), this gives

$$\sum_{j=0}^{n} \left( [S_0 - qI]\, c_{jj} \delta_{jr} + \sum_{i=1}^{m} S_i c_{ijr} \right) u_j^{k+1} = \sum_{j=0}^{n} \left( (\lambda_0^{k+1} - q)\, c_{jj} \delta_{jr} + \sum_{i=1}^{n} \lambda_i^{k+1} c_{ijr} \right) u_j^k \quad \text{for } r = 0 \ldots n \tag{39}$$

The update of the eigenvector can then be found by solving the linear system

$$\begin{bmatrix} \Theta_{00} & \cdots & \Theta_{0n} \\ \vdots & \ddots & \vdots \\ \Theta_{n0} & \cdots & \Theta_{nn} \end{bmatrix} \begin{pmatrix} u_0^{k+1} \\ \vdots \\ u_n^{k+1} \end{pmatrix} = \begin{pmatrix} \theta_0 \\ \vdots \\ \theta_n \end{pmatrix} \tag{40}$$

with

$$\Theta_{rj} = [S_0 - qI]\, c_{jj} \delta_{jr} + \sum_{i=1}^{m} S_i c_{ijr} \tag{41}$$

$$\theta_r = \sum_{j=0}^{n} \left( (\lambda_0^{k+1} - q)\, c_{jj} \delta_{jr} + \sum_{i=1}^{n} \lambda_i^{k+1} c_{ijr} \right) u_j^k \tag{42}$$

The system to be solved is of size (n + 1) × N , which requires the same computational effort
per iteration step as the method described in Section 3.3.
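A matching sketch of the assembly of system (40) with blocks (41) and right-hand side (42), under the same assumed coefficient layout as in the previous snippet:

```python
import numpy as np

def eigenvector_update(U, lam, S0, S_list, C3, q):
    # U holds u_0^k .. u_n^k column-wise, lam the spectral coefficients
    # of lambda^{k+1}; C3[i, j, r] = c_ijr with c_jj = C3[j, j, 0].
    N, n1 = U.shape
    Sq = S0 - q * np.eye(N)
    A = np.zeros((n1 * N, n1 * N), dtype=complex)
    rhs = np.zeros(n1 * N, dtype=complex)
    for r in range(n1):
        for j in range(n1):
            blk = sum(Si * C3[i + 1, j, r] for i, Si in enumerate(S_list))
            coef = sum(lam[i] * C3[i, j, r] for i in range(1, n1))
            if j == r:
                blk = blk + Sq * C3[j, j, 0]              # c_jj term, Eq. (41)
                coef = coef + (lam[0] - q) * C3[j, j, 0]  # c_jj term, Eq. (42)
            A[r*N:(r+1)*N, j*N:(j+1)*N] = blk
            rhs[r*N:(r+1)*N] += coef * U[:, j]
    return np.linalg.solve(A, rhs).reshape(n1, N).T
```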
Step 3: As in the deterministic case, the updated eigenvector is normalized. Since the
approximation of the random eigenvector ũ is a spectral expansion ǔ, norm (21) is used to
normalize the eigenvector as

$$\tilde{u}^{k+1} \rightarrow \frac{\tilde{u}^{k+1}}{\|\tilde{u}^{k+1}\|} \tag{43}$$
Step 4: The convergence of the method is checked based on two errors. The first error is
defined as the norm of the residual

$$\epsilon_1^{k+1} = \|[\tilde{S} - \tilde{\lambda}^{k+1} I]\,\tilde{u}^{k+1}\| \tag{44}$$

Although the random matrix S̃ depends linearly on the random variables {z̃i}, this is in
general not the case for the exact random eigenvalue λ̃ and corresponding eigenvector ũ. As a
consequence, it is possible that the solution is not spanned by the spectral basis. In general,
ε₁ will therefore not become equal to zero.
The second error is defined as the relative change of the coefficient of variation of the
eigenvalue

$$\epsilon_2^{k+1} = \frac{|V^{k+1} - V^k|}{|V^k|} \tag{45}$$


This error definition is more practical to use as a stopping criterion since it will go to zero in
the case of convergence. The first error definition is a good indication for the accuracy of the
converged solution.

4.3. Algorithmic aspects


Although the implementation of the stochastic inverse power method is straightforward, some
aspects need extra attention.

4.3.1. Selection of the initial settings. Since the convergence of the deterministic inverse power method
depends on the quality of the eigenvalue estimate q, it would be natural to take the deterministic
eigenvalue as the initial estimate for the stochastic iteration. It turns out, however, that this
choice leads to a singular system. In the case that q is very close to the deterministic eigenvalue,
the system will not be singular, but the algorithm has difficulty converging to the stochastic
solution. In the case that the eigenvalues are sufficiently separated, it is possible to determine
an appropriate initial estimate for the stochastic eigenvalue. In the case that eigenvalues are not
sufficiently separated, convergence problems appear. This drawback of the deterministic inverse
power method is also present in the stochastic case.
The initial estimate ũ⁰ of the eigenvector can be taken as the deterministic eigenvector. It
turns out, however, that convergence of the algorithm is improved if the starting vector also has
components in the direction of the other deterministic eigenvectors.

4.3.2. Eigenvector update. In the algorithm described above, the updated eigenvector is used
as the initial eigenvector for the next iteration. It turns out that, especially in the case of large
coefficients of variation (e.g. >10%), the error ε₁ oscillates. This is caused by the fact that
the original problem, with randomness in both the eigenvalue and eigenvector, is split into
a part in which the randomness is exclusively in the eigenvalue (Step 1) and a part in which
the randomness is exclusively in the eigenvector (Step 2). The oscillation of the norm of the
residual can be avoided by introducing numerical damping in the system. In analogy with the
semi-implicit schemes that are used for space–time integration, numerical damping is added
using

$$\tilde{u}^{k+1} \rightarrow \beta\, \tilde{u}^{k+1} + (1 - \beta)\, \tilde{u}^k \quad \text{with } \beta \in (0, 1] \tag{46}$$

That the method still converges to the correct solution can be demonstrated by substituting
Equation (37) into this equation to yield

$$\tilde{u}^{k+1} = \beta\, (\tilde{\lambda}^{k+1} - q)\,[\tilde{S} - qI]^{-1}\tilde{u}^k + (1 - \beta)\, \tilde{u}^k \tag{47}$$

In the case that a converged solution is reached (ũ^(k+1) = ũ^k = ũ*), this equation reduces to the
original problem (1). Hence the converged solution is a solution of the original problem.
The choice of the parameter β depends on the problem. An adequate strategy is to start
with the original algorithm (β = 1) and then gradually reduce β until the convergence of the
algorithm becomes stable.

4.3.3. Complex eigenvectors. In the case that complex eigenvectors are partially updated, con-
vergence problems occur. These problems are caused by the fact that the eigenvector rotates


in the complex plane. It turns out that this problem can be solved by fixing the argument of
the eigenvector, which can be done by multiplying the eigenvector by a complex number with
modulus one. If i is the index of the first non-zero entry of the normalized vector u₀^(k+1), the
argument can be fixed using

$$\check{u}^{k+1} \rightarrow \check{u}^{k+1}\, \frac{\bar{u}_0^{k+1}(i)}{|u_0^{k+1}(i)|} \tag{48}$$

In the case that the eigenvector is fully updated, multiplication by a common (complex) factor
has no influence on the convergence of the method, since this factor cancels out in the Rayleigh
quotient. In the case that the eigenvector is only partially updated, the (complex) factor does
not cancel out and therefore influences the convergence behaviour.
In the case that a system has a complex pair of eigenvalues and eigenvectors, the complex
conjugate of this pair is also an eigenvalue and eigenvector. The initial guess q for the
eigenvalue should then be complex, since otherwise the distance from q to the eigenvalue and to its
complex conjugate is the same, which would lead to problematic convergence of the method.
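A small sketch of the argument fix (48), applied to the column-wise spectral coefficients of the eigenvector (the tolerance is a choice made here):

```python
import numpy as np

def fix_argument(U, tol=1e-12):
    # Rotate the whole expansion by a unit-modulus complex number so that
    # the first non-negligible entry of the mean part u_0 becomes real
    # and positive (Eq. (48)).
    u0 = U[:, 0]
    i = int(np.argmax(np.abs(u0) > tol))   # first non-zero entry of u_0
    phase = u0[i] / abs(u0[i])
    return U * np.conj(phase)
```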

4.3.4. Computational effort. The computational effort is almost fully determined by the step
in which the eigenvector is updated (40), since this is the only step in
which a linear system is solved. In the case of a direct solution, this is an O(N̂³) operation, in
which N̂ is the dimension of the involved matrix, which equals

$$\hat{N} = (n + 1) \times N \tag{49}$$

In the case that the spatial dimension N is doubled and the dimension of the spectral basis
(n + 1) is kept constant, the computational effort will approximately increase by a factor of
eight. The same holds in the case that the dimension of the spectral basis is doubled and the
spatial dimension is kept constant.
The dimension of the spectral basis depends on two variables. The first is the number of
independent standard normal random variables in the expansion. The second variable is the
order of the Hermite polynomials. Since the dimension of the spectral basis is in general a
non-linear function of these two variables, doubling the size of either of them will dramatically
increase the dimension of the spectral basis.
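For a basis containing all generalized Hermite polynomials in m variables up to total order p, the dimension is the standard binomial count n + 1 = (m + p)!/(m! p!); the quick check below is consistent with the five and 15 basis functions reported for the symmetric testcase in Section 5.3:

```python
from math import comb

def chaos_dim(m, p):
    # Number of polynomial-chaos basis functions, n + 1 = C(m + p, p),
    # for m random variables and polynomials up to total order p.
    return comb(m + p, p)

# Symmetric testcase of Section 5 uses m = 4 K-L variables:
assert chaos_dim(4, 1) == 5     # first-order basis
assert chaos_dim(4, 2) == 15    # second-order basis
```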

5. APPLICATION TO SYMMETRIC MATRICES

To demonstrate the stochastic inverse power method for symmetric matrices, the lowest stochastic
eigenvalue of a freely vibrating, infinitely long plate that is fully clamped on two sides
(Figure 1) is computed. The plate has deterministic length L, thickness h and density ρ. The
modulus of elasticity of the plate is modelled as a random field of elastic properties Ẽ. The
plate as used in this section can be considered as a beam. For consistency with the fluid–
structure interaction (FSI) problem that is used to demonstrate the application of the stochastic
inverse power method to non-symmetric matrices in Section 6, a plate is considered instead of
a beam.


Figure 1. Schematic representation of a fully clamped plate.

5.1. Discretization of the problem


5.1.1. Random field discretization. Uncertain material properties such as the modulus of elastic-
ity are described using random fields. For the plate considered, the modulus of elasticity could
be defined by a random variable at infinitely many points. This would not be realistic, since
there is a spatial correlation of the randomness. Due to this spatial correlation, the random
modulus of elasticity of the panel can accurately be modelled using a finite set of random
variables.
To model this spatial correlation, the modulus of elasticity is discretized using the K–L
expansion [9]. The K–L expansion is a normal random field that approximates an exact (mean)
spatial correlation function. The random field for a panel with local mean modulus of elasticity
μ_E, local standard deviation σ_E and spatial correlation length L_c can be approximated using

$$E(x, \tilde{z}) \approx \mu_E + \sigma_E \sum_{i=1}^{m} \tilde{z}_i\, g_i(x, \sigma_E, L_c) \tag{50}$$

The K–L functions {gi} can be found by solving the eigenvalue problem

$$\int_{-L/2}^{L/2} \mathrm{Cov}[x_1, x_2]\, g_i(x_2)\, dx_2 = \lambda_i g_i(x_1) \quad \text{for } i = 1 \ldots m \tag{51}$$

For isotropic materials, the exact spatial covariance function (kernel) is often taken to be of the form

$$\mathrm{Cov}[x_1, x_2] = \sigma_E^2\, e^{-|x_1 - x_2|/L_c} \tag{52}$$

In Reference [21] it is argued that, when the eigenfunctions {gi} can be computed
exactly, the K–L expansion is the most efficient method to discretize a random field: for
a given accuracy, the random field is discretized using the minimum number of
random variables. Since an analytic solution of the eigenvalue problem (51) exists for the
considered case, the K–L discretization is the most efficient option. In the case that no exact solution
of the eigenvalue problem (51) exists, for example in most higher-dimensional problems, other
discretization methods such as the optimal linear estimation method (OLEM) [21] are more efficient.
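Where no analytic solution of (51) is available, the K–L modes can also be approximated numerically by eigendecomposition of the discretized covariance operator. The following Nyström-type sketch is a numerical alternative, not the analytic solution used in this paper; the scaling convention (σ_E absorbed into the modes) is a choice made here:

```python
import numpy as np

def kl_modes(x, sigma, Lc, m):
    # Discretize the exponential kernel (52) on a uniform grid x and
    # eigendecompose the resulting integral operator. Returns the m
    # dominant modes g_i(x), scaled so that E(x, z) ~ mu_E + sum_i z_i g_i(x);
    # Eq. (50) factors sigma_E out of g_i instead.
    dx = x[1] - x[0]
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / Lc)
    vals, vecs = np.linalg.eigh(cov * dx)           # symmetric eigenproblem
    idx = np.argsort(vals)[::-1][:m]                # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(vals[idx] / dx)   # sqrt(lambda_i) * phi_i(x)
```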

5.1.2. Spatial discretization. The spatial discretization of the plate is performed using a finite
element approach. Since the beam equation involves fourth-order derivatives with respect to
space, the basis functions {φi(x)} should satisfy

$$\varphi_i(x) \in H^2 \tag{53}$$


To satisfy this requirement, Hermite elements are used. These are elements for which the nodal
displacement as well as the nodal rotation is a degree of freedom. The discretized system can
then be written as

$$M\ddot{u} + Ku = 0 \tag{54}$$

with M, K and u being, respectively, the mass matrix, the stiffness matrix and the nodal
displacement vector. Both the mass and stiffness matrix are symmetric. Since the stiffness
matrix depends linearly on the modulus of elasticity, the matrix K(z̃) can be written as

$$K(\tilde{z}) = K_0 + \sum_{i=1}^{m} K_i \tilde{z}_i \tag{55}$$

5.1.3. Modal analysis. The computation of the lowest stochastic eigenvalue λ̃ of the vibrating
plate is considered. This eigenvalue can be found by assuming a harmonic motion

$$u(t, \tilde{z}) = \hat{u}(\tilde{z})\, e^{\omega(\tilde{z})\, t\, \hat{\imath}} \tag{56}$$

with û(z̃) being a random eigenvector and λ(z̃) = ω(z̃)² the corresponding random eigenvalue.
Substitution of this expression in Equation (54) yields

$$[-\lambda(\tilde{z})\, M + K(\tilde{z})]\, \hat{u}(\tilde{z}) = 0 \tag{57}$$

which can be rewritten in the form of the random eigenvalue problem (1)

$$S(\tilde{z})\, u(\tilde{z}) = \lambda(\tilde{z})\, u(\tilde{z}) \tag{58}$$

with

$$S(\tilde{z}) = M^{-1} K(\tilde{z}) \tag{59}$$

In accordance with Equation (55), the matrix S(z̃) is of the form

$$S(\tilde{z}) = S_0 + \sum_{i=1}^{m} \tilde{z}_i S_i \tag{60}$$

5.2. Convergence study


The convergence of the stochastic inverse power method is studied using the settings assembled
in Table I. The correlation length is fixed to 20% of the length of the plate. In this case,
a K–L expansion with four random variables is used.

Table I. Settings as used for the symmetric testcase.

L     0.5 m
h     4 mm
ρ     2700 kg/m³
μ_E   72 GPa
ν     0.33
L_c   0.1 m


Figure 2. Error of the mean (μ) and standard deviation (σ) of the eigenvalue λ̃ as a function of the
number of spatial elements N, with respect to the 64-element solution, for V_E = 10%.

The exact mean correlation function is in
that case approximated using functions with frequencies up to 10 times the lowest frequency.
For the purposes of this paper, this approximation is sufficiently accurate.
Although the K–L functions are computed analytically, the randomness in the response is
still influenced by the number of spatial elements that is used. When the correlation length is
small, local perturbations in the response can be expected. Using a very coarse finite element
discretization could smooth out these local perturbations, leading to an underestimation of the
randomness in the problem.
For a correlation length of 20%, however, relatively few elements are required. As can be
seen in Figure 2, the errors of the first and second moments are already below half a percent
when only four elements are used.

5.2.1. Influence of the eigenvalue approximation q. In the deterministic case, the quality of q
determines the rate of convergence of the inverse power method. The closer q is to the desired
eigenvalue λ, the faster the rate of convergence and the lower the number of required iterations.
To investigate the dependence of the convergence on the initial guess q in the stochastic case,
the update parameter is taken to be β = 0.8 and the variation coefficient of the local stiffness is
V_E = 10%. As can be seen in Figure 3, the number of iterations until convergence decreases as
q comes closer to the deterministic eigenvalue. When looking at the norm
of the converged residual, however, it becomes clear that the accuracy of the converged solution
also decreases. Since only a couple more iterations are required when q is not close
to the deterministic eigenvalue, the initial guess q should be selected such that the converged
value of the residual is as small as possible, since this indicates that the solution found is the
most accurate. As can be seen in Figure 3, an appropriate choice of q for approximating the
lowest eigenvalue is to take 10% of the deterministic eigenvalue. It is also possible to take
a q that is considerably larger than the deterministic eigenvalue. One should in that case be
careful that q remains sufficiently far from the next largest eigenvalue.


Figure 3. Number of required iterations for convergence and norm of the residual of the converged
solution versus the initial eigenvalue approximation q for the symmetric testcase with V_E = 10%.

5.2.2. Influence of the update parameter β. The influence of β on the convergence behaviour is
examined for q equal to 10% of the deterministic eigenvalue. The variation coefficient of the
modulus of elasticity is again taken to be 10%. When a small β is chosen, the eigenvector
is only partially updated, and as a consequence the converged solution is reached later. From
an accuracy point of view, the update parameter β should be close to one, while still preventing
oscillations in the convergence behaviour. For the considered testcases, β is typically in the range
from 0.6 to 0.9.

5.2.3. Convergence plots. The convergence of the stochastic inverse power method is studied
using linear basis functions. The convergence is checked based on two error measures. The
first is the relative error of the variation coefficient with respect to the Monte-Carlo solution,
which is considered to be the exact solution,

$$\epsilon_1^k = \frac{|V^k - V^{\mathrm{ex}}|}{|V^{\mathrm{ex}}|} \tag{61}$$

The second error definition, ε₂^k, is the stochastic norm of the residual, normalized with respect to
the norm of the deterministic matrix S₀. The convergence of both errors is shown in Figure 4.
As can be seen in Figure 4, the second error is gradually reduced. The norm of the residual
does not become equal to zero, which indicates that the solution of the random eigenvalue
problem is not exact. This is caused by the fact that the solution is not spanned by the basis
functions. As can be seen in Figure 4, the quality of the approximation becomes worse as
the variation in the problem increases. This is caused by the fact that the dependence of the
eigenvalue on the standard normal random variables becomes more non-linear.
From the first error definition, it is observed that the convergence of the variation coefficient
can be separated into two stages. In the first stage, the distribution of the randomness between
the eigenvector and eigenvalue is determined (Figure 5). In that stage, the quality of the
approximation is not increased. Once the correct balance between the two is determined (in
the considered case after six iterations), the components of the spectral decomposition are
accurately determined. In this second stage, the accuracy of the approximation is gradually
increased.

Figure 4. Convergence plots of the stochastic inverse power method for the symmetric testcase.

Figure 5. Ratio between the coefficient of variation of the eigenvalue and the coefficient
of variation of the eigenvector for V_E = 10%.


5.3. Accuracy of the stochastic inverse power method


The accuracy of the method is studied using a Monte-Carlo simulation with a 20K sample
size as a benchmark. The accuracy is studied for the first-order spectral method (FOSM) and
the second-order spectral method (SOSM). The results are shown in Figure 6. As can be seen,
the SOSM performs better than the FOSM in all cases. In the case of coefficients of variation
of the modulus of elasticity up to 10%, both methods produce results with errors smaller
than 1%.

Figure 6. Errors of the mean (μ) and standard deviation (σ) of the first eigenvalue with respect to
a 20K-sample Monte-Carlo simulation for the symmetric testcase.

Figure 7. Dependence of the mean and standard deviation of the eigenvalue λ on the coefficient of
variation of the modulus of elasticity V_E for the symmetric testcase.

In the case of larger variation coefficients, both methods become relatively
inaccurate.
Although the SOSM produces slightly more accurate results than the FOSM, the computa-
tional effort is approximately 27 times larger. This is caused by the fact that the dimension of
the Hermite basis is increased by a factor of three, from five to 15.
The relative success of the FOSM can be explained by considering the dependence of the
mean and standard deviation of the eigenvalue on the coefficient of variation of the modulus of
elasticity. In Figure 7, the mean and standard deviation of the eigenvalue as obtained by a 20K
Monte-Carlo method are plotted versus the variation coefficient of the modulus of elasticity.
In the case that the random eigenvalue would be approximated using the FOSM, the mean

Copyright 䉷 2006 John Wiley & Sons, Ltd. Int. J. Numer. Meth. Engng 2006; 68:401–424
ITERATIVE SOLUTION OF THE RANDOM EIGENVALUE PROBLEM 419

k m

Figure 8. Simple mass–spring system and simple static spring with load.

eigenvalue would be constant and the standard deviation would be a linear function of the
variation coefficient of the modulus of elesticity. In Figure 7 it can be seen that the simulated
random eigenvalue exhibits the mentioned behaviour. Although the mean of the eigenvalue
shows a quadratic dependence on the coefficient of variation of the modulus of elesticity,
closer inspection of the scale shows that changing the variation coefficient of the modulus of
elesticity from 0 to 20% only changes the mean by less than 1%. The FOSM is therefore
capable of efficiently predicting the moments of the random eigenvalue.
It can be concluded that the eigenvalue problem (1) for a matrix S̃ of form (2) with moderate
variations VE can closely be approximated using linear basis functions. This observation differs
from what is expected in the case of linear systems. In that case it turns out that increasing
the order of the polynomials in the spectral expansion is an effective way of improving the
approximation [3].
This difference can be explained by considering the simplest example of both applications,
shown in Figure 8. In the case of a statically loaded spring, the deflection is obtained as

$$x = \frac{f}{k} \tag{62}$$

In the case that the stiffness k is a random variable, the random displacement x is inversely
proportional to the stiffness. In contrast, the eigenvalue λ of the mass–spring system is proportional
to the spring stiffness

$$\lambda = \frac{k}{m} \tag{63}$$
In the case that the stiffness is a random variable and the mass is deterministic, the eigenvalue
will depend linearly on the random variables.
In Reference [3] it is mentioned that in the case of solving the linear problem, a critical value
for the variation is present for which the spectral method fails to approximate the solution. This
is caused by the fact that the denominator in expression (62) gives singularity problems. Since
this inverse proportionality does not occur in the case of the random eigenvalue problem (63),
this critical boundary is not present. It should be mentioned that the singularity can reappear


in the case that eigenvalues are used for the computation of a dynamic response. A study of
this is, however, beyond the scope of this paper.

6. APPLICATION TO NON-SYMMETRIC MATRICES

To study the stochastic inverse power method for non-symmetric matrices, the clamped plate
is again considered. To make the problem non-symmetric, work is performed on the plate by
a fluid flowing over it. The pressure difference between the upper and lower side of the panel
is assumed to be of the form

$$\Delta p = \gamma_1 \frac{\partial w}{\partial t} + \gamma_2 \frac{\partial w}{\partial x} \tag{64}$$

The coefficients γ₁ and γ₂ depend on the choice of the aerodynamic operator. In the case that
the aerodynamic forces are approximated using linear piston theory [22], the coefficients can
be written as

$$\gamma_1 = \rho_\infty a_\infty \tag{65}$$

$$\gamma_2 = \rho_\infty a_\infty v_\infty \tag{66}$$

with ρ∞, a∞ and v∞ being, respectively, the freestream fluid density, the freestream fluid
speed of sound and the freestream fluid velocity. Using Hermite elements for the discretization
of the problem yields a system of the form

$$M\ddot{u} + D\dot{u} + Ku = 0 \tag{67}$$

where M and D are, respectively, the symmetric mass and damping matrices and K is the
non-symmetric random stiffness matrix, of the form

$$K(\tilde{z}) = K_0 + \sum_{i=1}^{m} K_i \tilde{z}_i \tag{68}$$

In this expression the non-symmetry is incorporated in the K₀ matrix; the other matrices
{Ki}, i = 1 … m, are symmetric. This is because the aerodynamic forces that cause the
non-symmetry are deterministic and therefore do not contribute to the randomness in the
problem. Since the damping matrix is proportional to the mass matrix,

$$D \propto M \tag{69}$$

this problem can be written in the form of the random eigenvalue problem (1). The random
matrix S(z̃) consists of a non-symmetric mean matrix S₀ and symmetric disturbance matrices
{Si}, i = 1 … m.

6.1. Convergence study


Although the matrix S̃ is non-symmetric, its eigenvalues and eigenvectors are not necessarily
complex. To make the convergence study of the stochastic inverse power method for non-
symmetric matrices as general as possible, the coefficients γ₁ and γ₂ are selected such that the
lowest eigenvalue of the matrix S̃ is complex. The parameters that are used are assembled in
Table II.

Table II. Parameters used to model the fluid.

ρ∞   1.225 kg/m³
a∞   340 m/s
v∞   6800 m/s
γ₁   416.5 kg/(m² s)
γ₂   2 832 200 kg/(m s²)

Figure 9. Convergence plots of the stochastic inverse power method for the non-symmetric testcase.

In the case of complex eigenvalues and eigenvectors, the selection of q and β is similar to
the symmetric case. For the selection of q, it is advised to use the full imaginary part of the
deterministic eigenvalue. The reason for this is that the complex conjugate of an eigenvalue is
also an eigenvalue of the system. If the imaginary part is left out, convergence problems will
occur due to the fact that the initial estimate q is as close to the desired eigenvalue as to its
complex conjugate.
The convergence plots of the stochastic inverse power method for non-symmetric matrices are
shown in Figure 9. These convergence plots are obtained using linear basis functions. As can
be seen, the norm of the residual converges in a similar way compared to the symmetric case.
The norm of the residual of the converged solution is larger than in the case of a symmetric
matrix. The rate of convergence is also smaller, which is in agreement with the deterministic
case.

6.2. Accuracy

When considering the accuracy of the stochastic inverse power method (Figure 10), it can be seen
that the errors are of the same order as for the symmetric method. Again, both the
FOSM and SOSM are able to accurately approximate the mean and standard deviation of
the first eigenvalue for coefficients of variation up to 10%. For larger variation coefficients,
the FOSM becomes inaccurate. The inaccurate results for the FOSM can be explained by the
fact that, for large coefficients of variation of the modulus of elasticity, the dependence of the
standard deviation of the eigenvalue on V_E is no longer close to linear (Figure 11).

Figure 10. Errors of the mean (μ) and standard deviation (σ) of the first eigenvalue with respect to a
20K-sample Monte-Carlo simulation for the non-symmetric testcase. No converged results have been
obtained using the SOSM for V_E = 15 and 20%.

Figure 11. Dependence of the mean and standard deviation of the eigenvalue λ on the variation
coefficient of the modulus of elasticity V_E for the non-symmetric testcase.


Figure 12. Example of the approximation of the non-differentiable eigenvalues
using FOSM (left) and SOSM (right).

It could be expected that increasing the order of the basis functions would be an effective
way of increasing the accuracy. It turns out, however, that this is not the case. The SOSM
fails to converge for high coefficients of variation. This can be explained by the fact that
a square root is probably involved, which makes the eigenvalue, as a function of the random variables,
non-differentiable (Figure 12). As can be seen in Figure 12, the FOSM suffers less from this
problem than the SOSM.

7. CONCLUSIONS

A new algorithm for the computation of the spectral expansions of the eigenvalues and eigen-
vectors of the random general eigenvalue problem is proposed. The deterministic inverse power
method, which is an intrinsic part of modern eigenvalue solvers, is extended to find the spectral
expansion of the eigenvalues.
The stochastic inverse power method was tested for a symmetric and a non-symmetric
matrix (with complex eigenvalues and eigenvectors). It turns out that for a proper choice of
the iteration parameters, accurate solutions can be obtained for variation coefficients up to
10%. For higher values of the coefficient of variation, the approximation becomes relatively
inaccurate (error larger than 5%). A good indication of the randomness can, however, still be
obtained.
Compared to currently available methods for the computation of the spectral expansion of
eigenvalues and eigenvectors, the stochastic inverse power method is efficient and robust. The
better efficiency is primarily a consequence of the fact that, in contrast to currently available
methods, the use of sampling is not required. The robustness of the proposed method is inherited
from the underlying deterministic inverse power method.


REFERENCES
1. Golub GH, van Loan CF. Matrix Computations (3rd edn). Johns Hopkins University Press: Baltimore, 1996.
2. van der Vorst HA. Fundamentals. Linear Algebraic Solvers and Eigenvalue Analysis, vol 1. Encyclopedia of
Computational Mechanics. Wiley: New York, 2004; 552–576, Chapter 19.
3. Gutiérrez MA, Krenk S. Solids, structures and coupled problems. Stochastic Finite Element Methods,
vol. 2. Encyclopedia of Computational Mechanics. Wiley: New York, 2004; 657–681, Chapter 20.
4. Schuëller GI (ed.). IASSAR (Special Issue). A state-of-the-art report on computational stochastic mechanics.
Probabilistic Engineering Mechanics 1997; 12(4):199–321.
5. Ditlevsen O, Madsen HO. Structural Reliability Methods. Wiley: Chichester, 1996.
6. Der Kiureghian A, Ke JB. The stochastic finite element method in structural reliability. Probabilistic
Engineering Mechanics 1988; 3:83–91.
7. Madsen HO, Krenk S, Lind NC. Methods of Structural Safety. Prentice-Hall: Englewood Cliffs, NJ, 1986.
8. Kleiber M, Hien TD. The Stochastic Finite Element Method. Wiley: Chichester, 1992.
9. Ghanem RG, Spanos PD. Stochastic Finite Elements: A Spectral Approach. Springer-Verlag: New York, 1991.
10. Adhikari S, Langley RS. Distribution of eigenvalues of linear stochastic systems. In Proceedings of
the Ninth International Conference on Applications of Statistics and Probability in Civil Engineering
(ICASP 9), Rotterdam, July 2003; Applications of Statistics and Probability in Civil Engineering, der
Kiureghian A, Madanat S, Pestana JM (eds), vol. 1. Millpress: 2003; 201–207.
11. Székely GS. An efficient computational method for the calculation of eigenvectors and eigenvalues within
the Monte-Carlo simulation. Technical Report, Universität Innsbruck, 1999.
12. vom Scheidt J, Purkert W. Random Eigenvalue Problems. North Holland: New York, 1983.
13. Pradlwarter HJ, Schuëller GI, Székely GS. Random eigenvalue problems for large systems. Computers and
Structures 2002; 80(27–30):2415–2424.
14. Székely GS, Schuëller GI. Computational procedure for a fast calculation of eigenvectors and eigenvalues
of structures with random properties. Computer Methods in Applied Mechanics and Engineering 2001;
191(8–10):799–816.
15. Ghosh D, Ghanem R. Random eigenvalue analysis of an airframe. 45th AIAA/ASME/ASCE/AHS/ASC
Structures, Structural Dynamics and Material Conference, Palm Springs, California, April 2004.
16. Ghosh D, Ghanem R. A new algorithm for solving the random eigenvalue problem using polynomial chaos
expansion. 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Material Conference,
Austin, Texas, April 2005.
17. Chung DB, Gutiérrez MA, Graham-Brady LL, Lingen F. Efficient numerical strategies for spectral stochastic
finite element models. International Journal for Numerical Methods in Engineering 2005; 64(10):1334–1349.
18. Ghanem RG, Kruger RM. Numerical solution of spectral stochastic finite element systems. Computer Methods
in Applied Mechanics and Engineering 1996; 129(3):289–303.
19. Lawrence MA. Basic random variables in finite element analysis. International Journal for Numerical Methods
in Engineering 1987; 24(10):1849–1863.
20. Pettit CL, Canfield RA, Ghanem R. Stochastic analysis of an aeroelastic system. 15th ASCE Engineering
Mechanics Conference, Columbia University, New York, June 2002.
21. Li C, der Kiureghian A. Optimal discretization of random fields. Journal of Engineering Mechanics 1993;
119(6):1136–1154.
22. Bisplinghoff RL, Ashley H, Halfman RL. Aeroelasticity. Addison-Wesley: Cambridge, 1955.

