

**Katholieke Universiteit Leuven**

Department of Computer Science

Celestijnenlaan 200A – B-3001 Heverlee (Belgium)

**The Quadratic Arnoldi method for the solution of the quadratic eigenvalue problem**

Karl Meerbergen Report TW 490, May 2007

Department of Computer Science, K.U.Leuven

Abstract. The Quadratic Arnoldi algorithm is an Arnoldi algorithm for the solution of the quadratic eigenvalue problem that exploits the structure of the Krylov vectors. This allows us to reduce the memory requirements by about a half. The method is an alternative to the Second Order Arnoldi method (SOAR), for which it is not clear how to perform an implicit restart. We discuss various choices of linearizations in L1 and DL. We also explain how to compute a partial Schur form of the underlying linearization that respects the structure of the Schur vectors, and we formulate some open problems.

Keywords: Quadratic eigenvalue problem, Arnoldi method, SOAR, Schur decomposition
AMS(MOS) Classification: Primary: 15A18, 65F15; Secondary: 65G50

THE QUADRATIC ARNOLDI METHOD FOR THE SOLUTION OF THE QUADRATIC EIGENVALUE PROBLEM

KARL MEERBERGEN*

Key words. quadratic eigenvalue problem, Arnoldi method, SOAR, Schur decomposition

AMS subject classification. 15A18, 65F15

* K.U.Leuven, Department of Computer Science, Celestijnenlaan 200A, 3001 Heverlee (Leuven), Belgium (karl.meerbergen@cs.kuleuven.be).

1. Introduction. The goal is to solve the quadratic eigenvalue problem Q(λ)u = 0 with

    Q(λ) = K + λC + λ²M.    (1.1)

The matrices K, −iC, and −M are the stiffness, damping, and mass matrices respectively, and arise from the Fourier transformation of the spatial discretization by finite elements of the mechanical equation. They are large n × n matrices and usually sparse. Equation (1.1) is solved when the engineer is interested in the eigenfrequencies (resonance frequencies) and the damping properties of the mode shapes (i.e. the eigenvectors). The quadratic eigenvalue problem and solution methods are reviewed by Tisseur and Meerbergen [23].

The applications we have in mind arise from vibration problems. Such problems often have eigenvalues spread out along the imaginary axis, usually with the real parts on one side of the imaginary axis and often relatively small compared to the imaginary part. The eigenvalues with small imaginary part are desired. Many applications are not extremely large, i.e. smaller than 100,000 degrees of freedom, which allows the use of a direct linear system solver for applying A⁻¹.

Standard methods cannot be used directly to efficiently solve (1.1) because of the quadratic term in λ. Instead, (1.1) can be 'linearized' into a problem of the form

    (A − λB) [λu; u] = 0.    (1.2)

Since we are interested in the eigenvalues near zero, we use the shift-and-invert transformation with a pole near zero, i.e. we usually solve the inverted problem

    S [u; θu] = θ [u; θu]    (1.3)

with S = A⁻¹B and θ = λ⁻¹. This transformation has a cluster of eigenvalues near zero and a few out-liers that may have large modulus. Note that B⁻¹A can be used as an alternative to A⁻¹B; in applications this is usually less effective than using A⁻¹B when the eigenvalues near zero are wanted.

Krylov methods for the solution of quadratic eigenvalue problems have been studied by Parlett and Chen [19] and by Mehrmann and Watkins [17]. Since S is non-Hermitian, the Arnoldi method [1, 2] is usually preferred for eigenvalue computations; see Saad [20]. There are three disadvantages with the Arnoldi method applied to (1.3): firstly, the doubling of the size of the problem increases the memory cost by a factor two; secondly, the typical structure of the eigenvectors of the linearization is lost; and finally, the Ritz values are computed from a small Hessenberg matrix and not from a small quadratic eigenvalue problem. The fact that the Krylov vectors have length 2n instead of n may limit their practical use, especially when a large number of vectors needs to be stored: k iterations of the Arnoldi method require the storage of the order of (2k + 2)n floating point numbers (real or complex). The Lanczos method [8, 11, 9] can be used to keep the storage requirements low, but the Arnoldi method remains the method of choice for non-Hermitian problems.

All these disadvantages can be solved by the SOAR method [3]. The problem is that the Schur vectors computed by the SOAR method cannot, in general, be mapped into a Krylov recurrence relation. As a consequence, classical implicit restarting [21] [18] is no longer possible, so the SOAR method is not the preferred method for computing a Schur form of the linearization. Exploiting the Schur form of the linearization in the SOAR method for restarting purposes is not trivial and is, to date, an open question.

Instead, we propose a method that is close to SOAR, which we call the Q-Arnoldi algorithm, where Q stands for 'quadratic'. Both the SOAR method [3] and the Q-Arnoldi method compute the same subspace of the linearized problem; therefore, some results in this paper also have consequences for the SOAR method. Q-Arnoldi performs a projection that does not produce a quadratic eigenvalue problem: the Ritz pairs are computed from the Arnoldi recurrence relation, which allows for the computation of a Schur form. The Q-Arnoldi scheme exploits the structure of S to reduce the memory requirements to (k + 2)n floating point numbers, i.e. we exploit the structure of the linear problem to divide the storage cost of the Arnoldi method roughly by a factor two. This memory reduction trick is also used in the SOAR method. Similar tricks can be used for reducing the storage cost of partial reorthogonalization in the Lanczos method.

The eigenvectors of (1.1) appear twice in the eigenvectors of (1.2). When the linearized problems are solved in a Krylov subspace, the two computed solutions are, in general, different; we address which one of the two should be returned as an approximate solution vector of (1.1). We devote some time to the choice of linearization in L1 and DL [12], which has consequences for the use of the Q-Arnoldi algorithm. For the computation of a partial Schur form, we propose a locking procedure for converged Schur vectors that does keep the structure of the exact Schur vectors; see [14] for exploiting the Schur form in the inverse residual iteration and Jacobi-Davidson methods.

The paper is organized as follows. In §2, we introduce linearizations for (1.1) and the A − λB pencil [22] [13] [12]. In §3, we review the Arnoldi method for (1.2) and present a modification of Arnoldi's method that saves memory; we discuss various issues including the choice of linearization, some thoughts on computations in finite precision arithmetic, and the choice of component of the Ritz vectors. Section 4 shows how to exploit the structure of (1.2) in the computation of Schur vectors, which is also useful for the SOAR method. In §5 we show a numerical example from applications. We conclude with some remarks in §6.

2. Linearization via the first companion form. Throughout the paper, we use x* to denote the Hermitian transpose, x*y for the inner product of two vectors and ||x|| = (x*x)^{1/2} for the induced two-norm. We also use ||·|| to denote the two-norm of a matrix. The matrix Frobenius norm is denoted by ||·||_F. Let N = 2n.

By linearization, we mean the transformation of (1.1) into (1.2) by a suitable choice of A and B so that there is a one-to-one correspondence between the eigenpairs of (1.1) and (1.2). The eigenvectors of (1.2) have the structure y = [λx; x]. Various choices of linearization can be used. A straightforward choice is the first companion form

    A = [C K; D 0],  B = [−M 0; 0 D],    (2.1)

where D can be any nonsingular matrix: with y = [λx; x] we have (A − λB)y = [Q(λ)x; 0].

Lemma 2.1. The pencil A − λB with (2.1) is a linearization iff D is non-singular.

Proof. If D is non-singular, the second block row of (A − λB)y = 0 forces the eigenvector structure y = [λx; x], and the first block row then reads Q(λ)x = 0, so there is a one-to-one correspondence between the eigenpairs of (1.1) and (1.2). If D were singular, it is no longer guaranteed that all eigenvectors have this form, so A − λB is not a linearization.

It is easy to see that

    S = A⁻¹B = [0 I; −K⁻¹M −K⁻¹C],    (2.2)

so A⁻¹B is independent of D. An alternative to (2.1) is

    A = [D 0; C K],  B = [0 D; −M 0],    (2.3)

which also produces (2.2), from which D disappears.

The linearization should be chosen so that the application of S = A⁻¹B is efficient and accurate; a good choice may also help building the Krylov subspace more efficiently. In addition, the linearization can be chosen so that A and B respect special structure of K, C and M. For example, if all matrices are symmetric and M is nonsingular, one could use D = −M. Since K has no factor λ in (1.1), this suggests that K should appear in A. Note that λ = 0 is an eigenvalue of A − λB iff K is singular, and λ = ∞ is an eigenvalue of the pencil even if it is not an eigenvalue of (1.1). We therefore assume that A, and so K, are invertible. The matrix used by [17] for the skew-Hamiltonian/Hamiltonian eigenvalue problem does not have the form (2.1).

3. The Arnoldi method. The Arnoldi method applied to S ∈ C^{N×N} and b ∈ C^N produces the Krylov subspace

    K_k(S, b) = span{b, Sb, S²b, ..., S^{k−1}b}.

It computes the N × k matrix V_k = [v_1, ..., v_k] of iteration vectors, the (k+1) × k upper Hessenberg matrix H̄_k and the residual term v_{k+1}β_k so that

    S V_k − V_{k+1} H̄_k = 0, or equivalently S V_k − V_k H_k = v_{k+1} β_k e_k*,    (3.1)

where V*_{k+1} V_{k+1} = I and H_k is the leading k × k block of H̄_k. Equation (3.1) is called the Arnoldi recurrence relation. The coefficients h_j form the jth column of H̄_j and β_j is the (j+1, j) element of H̄_k. An algorithm for computing V_k and H̄_k is now given.
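As a small illustration of the first companion form, the following numpy sketch (with randomly generated, well-conditioned stand-ins for the stiffness, damping and mass matrices; it is not part of the original report) builds S = A⁻¹B from (2.2) and checks that its eigenvalues θ are the reciprocals λ⁻¹ of the eigenvalues of Q(λ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Small dense test matrices (stand-ins for large sparse FE matrices).
K = rng.standard_normal((n, n)) + n * np.eye(n)   # keep K well conditioned
C = rng.standard_normal((n, n))
M = rng.standard_normal((n, n)) + n * np.eye(n)

# S = A^{-1} B for the first companion linearization (2.2);
# note that D has dropped out of the formula.
S = np.block([
    [np.zeros((n, n)), np.eye(n)],
    [-np.linalg.solve(K, M), -np.linalg.solve(K, C)],
])

# Eigenvalues theta of S satisfy theta = 1/lambda with Q(lambda) singular.
theta = np.linalg.eigvals(S)
lam = 1.0 / theta
# Each lambda should make Q(lambda) singular (smallest singular value ~ 0).
resid = [np.linalg.svd(K + l * C + l * l * M, compute_uv=False)[-1] for l in lam]
print(max(resid))
```

The shift-and-invert structure of (1.3) is visible here: eigenvalues of Q near zero become the outlying, easily found eigenvalues of S.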

Algorithm 3.1 (Arnoldi method).
1. Set the initial vector v_1 so that ||v_1||_2 = 1.
2. For j = 1, ..., k do
   2.1. Compute v̂_j = S v_j.
   2.2. Compute the Arnoldi coefficients h_j = V_j* v̂_j.
   2.3. Update ṽ_j = v̂_j − V_j h_j.
   2.4. Get the scalar β_j = ||ṽ_j||_2 and compute v_{j+1} = ṽ_j / β_j.
   2.5. Set the jth column of H̄_j to [h_j^T β_j]^T.
End for

Steps 2.2–2.4 orthonormalize S v_j against v_1, ..., v_j into v_{j+1}. Roughly speaking, k iterations cost about 2N(k + 3)k flops, excluding the cost of Step 2.1, where 1 flop is the cost of an addition or a multiplication.

3.1. The Q-Arnoldi algorithm. We now discuss how we can make Algorithm 3.1 more efficient for S from (2.2). We decompose the jth Arnoldi vector into [v_j; w_j] with v_j, w_j ∈ C^n. The Arnoldi recurrence relation (3.1) for the linearization (2.1) can now be written as

    [0 I; −K⁻¹M −K⁻¹C] [V_k; W_k] − [V_k; W_k] H_k = β_k [v_{k+1}; w_{k+1}] e_k*,    (3.2)

from which we deduce that

    W_k = V_{k+1} H̄_k.    (3.3)

This implies that we only have to store the vectors V_k, v_{k+1} and w_{k+1} to evaluate the recurrence relation, which are (2 + k)n floating point numbers. The following algorithm implements this idea. It results in an important reduction of the memory cost compared to Algorithm 3.1.

Algorithm 3.2 (Q-Arnoldi).
1. Let v_1 and w_1 be chosen so that ||v_1||² + ||w_1||² = 1.
2. For j = 1, ..., k do
   2.1. Compute ŵ_j = −K⁻¹(M v_j + C w_j) and v̂_j = w_j.
   2.2. Compute the Arnoldi coefficients h_j = V_j* v̂_j + [H̄*_{j−1}(V_j* ŵ_j); w_j* ŵ_j].
   2.3. Update ṽ_j = v̂_j − V_j h_j and w̃_j = ŵ_j − [V_j H̄_{j−1}  w_j] h_j.
   2.4. Normalize v_{j+1} = ṽ_j / β_j and w_{j+1} = w̃_j / β_j with β_j = (||ṽ_j||² + ||w̃_j||²)^{1/2}.
   2.5. Set the jth column of H̄_j to [h_j^T β_j]^T.
End for

The difference with Algorithm 3.1 lies only in Steps 2.2–2.4, where W_{j−1} is replaced by V_j H̄_{j−1}.
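The bookkeeping of Algorithm 3.2 can be sketched in a few lines of numpy. The sketch below (with randomly generated stand-in matrices, not taken from the report) stores only the upper halves V_{k+1}, the current lower half w_j and H̄_k, and afterwards verifies the structural identity (3.3) and the recurrence (3.2):

```python
import numpy as np

def q_arnoldi(K, C, M, v1, w1, k):
    """Sketch of Algorithm 3.2: Arnoldi on S = [[0, I], [-K^{-1}M, -K^{-1}C]]
    storing only the upper halves V_{k+1}, the newest lower half w_j and
    Hbar_k; earlier lower halves satisfy W_j = V_{j+1} Hbar_j (eq. (3.3))."""
    n = K.shape[0]
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    nrm = np.sqrt(v1 @ v1 + w1 @ w1)
    V[:, 0] = v1 / nrm
    w = w1 / nrm                       # lower half of the newest basis vector
    for j in range(k):
        # Step 2.1: apply S to [v_j; w_j] without forming the 2n-vector.
        vhat = w.copy()
        what = -np.linalg.solve(K, M @ V[:, j] + C @ w)
        # Step 2.2: Gram-Schmidt coefficients; old lower halves enter only
        # through Hbar_{j-1}^T (V_j^T what).
        h = V[:, :j + 1].T @ vhat
        t = V[:, :j + 1].T @ what
        h[:j] += H[:j + 1, :j].T @ t
        h[j] += w @ what
        # Step 2.3: orthogonalize both halves.
        vt = vhat - V[:, :j + 1] @ h
        wt = what - V[:, :j + 1] @ (H[:j + 1, :j] @ h[:j]) - w * h[j]
        # Step 2.4: normalize the length-2n vector [vt; wt].
        beta = np.sqrt(vt @ vt + wt @ wt)
        V[:, j + 1] = vt / beta
        w = wt / beta
        H[:j + 1, j] = h
        H[j + 1, j] = beta
    return V, H, w

rng = np.random.default_rng(1)
n, k = 8, 5
K = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n))
M = rng.standard_normal((n, n)) + n * np.eye(n)
V, H, w_last = q_arnoldi(K, C, M, rng.standard_normal(n), rng.standard_normal(n), k)

# Reconstruct the full 2n-vectors and check (3.2)/(3.3).
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(K, M), -np.linalg.solve(K, C)]])
W = V @ H                                # W_k = V_{k+1} Hbar_k
Vfull = np.vstack([V[:, :k], W])         # 2n-by-k Krylov basis
Vfull1 = np.vstack([V, np.hstack([W, w_last[:, None]])])
res = np.linalg.norm(S @ Vfull - Vfull1 @ H)
orth = np.linalg.norm(Vfull1.T @ Vfull1 - np.eye(k + 1))
print(res, orth)
```

Only the n-by-(k+1) array V, one extra n-vector and H̄_k are kept, which is the (k + 2)n storage claimed for the method.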

Note that only v_1, ..., v_{j+1} and w_{j+1} need to be stored on iteration j, so that the memory requirement for the storage of the Arnoldi vectors over k iterations is limited to n(k + 2) with the Q-Arnoldi method, whereas the storage for the Arnoldi method is of the order of 2n(k + 1) floating point numbers.

We now compare the computational cost of Steps 2.2–2.4 of Algorithms 3.1 and 3.2. For Algorithm 3.1, these steps require 4n(k + 1)k flops over k iterations. For Algorithm 3.2, the cost of Step 2.2 is 2(2nj + n + j(j − 1)) flops, of which 2(nj + j(j − 1)) flops are for the term H̄*_{j−1}(V_j* ŵ_j). The cost of computing w̃_j in Step 2.3, where W_{j−1} is replaced by V_j H̄_{j−1}, is 2((j − 1)j + n(j + 1)) flops, and the computation of ṽ_j requires 2nj flops. Step 2.4 costs 8n flops, as for Algorithm 3.1. The total cost of Steps 2.2–2.4 on iteration j is thus 8nj + 12n + 4j(j − 1) flops; summed over k iterations this is of the order of 4nk² + 16nk plus a term of the order of (4/3)(k − 1)k(k + 1) flops for the products with H̄_{j−1}. When k is significantly smaller than n (which is usually the case, otherwise the Arnoldi method is not suitable anyway), Steps 2.2–2.4 of Algorithms 3.1 and 3.2 both cost approximately 4nk² flops. So, although Q-Arnoldi is slightly more expensive in computation time, the extra cost is usually small compared to the computation of A⁻¹B v_j.

3.2. Other linearizations. In this section, we study the use of linearizations other than the companion form in (2.1). Consider the vector space of linearizations [13] of the form

    L1(Q) = {A − λB : (A − λB) [λ; 1] = [η1 Q(λ); η2 Q(λ)], η1, η2 ∈ C}.

From this definition, A and B take the form

    A = [A11 η1K; A21 η2K],  B = [−η1M  A11 − η1C; −η2M  A21 − η2C],    (3.4)

where A11, A21, η1 and η2 can be freely chosen. It is easy to verify that AS = B with S from (2.2); note that A does not need to be invertible for AS = B to be true. When A is invertible, S = A⁻¹B is the same matrix for all linearizations of this form, so (2.2) holds and applying the Arnoldi method to A⁻¹B produces Arnoldi vectors with the structure (3.3).

Theorem 3.1. Let A and B be defined following (3.4) and let K be invertible. Then A − λB is a linearization of (1.1) iff A is invertible.

Proof. If both η1 = η2 = 0, then det(A − λB) = 0 for all λ's, so A − λB is not a linearization. Suppose η1 ≠ 0. If we replace the second block row of A and B by η2 times the first block row minus η1 times the second, we find the pencil Ã − λB̃ with the same eigenvalues and eigenvectors, where

    Ã = [A11  η1K; η2A11 − η1A21  0],  B̃ = [−η1M  A11 − η1C; 0  η2A11 − η1A21].

Since K is nonsingular, A can only be singular when η2A11 − η1A21 is singular. If η2A11 − η1A21 is non-singular, the pencil Ã − λB̃ has the form (2.1) with D = η2A11 − η1A21, so it is a linearization by Lemma 2.1; if η2A11 − η1A21 is singular, it is not, which finishes the proof.
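The defining property of the family (3.4) is easy to verify numerically. The sketch below (random stand-in matrices and arbitrarily chosen η1, η2, A11, A21; not from the report) checks that (A − λB)[λx; x] = [η1 Q(λ)x; η2 Q(λ)x] and that B = AS with S from (2.2):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
K = rng.standard_normal((n, n)) + n * np.eye(n)   # K invertible
C = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
A11 = rng.standard_normal((n, n))
A21 = rng.standard_normal((n, n))
eta1, eta2 = 0.7, -1.3

A = np.block([[A11, eta1 * K], [A21, eta2 * K]])
B = np.block([[-eta1 * M, A11 - eta1 * C], [-eta2 * M, A21 - eta2 * C]])

# (A - lam*B) [lam*x; x] = [eta1*Q(lam)x; eta2*Q(lam)x]
lam = 0.9
x = rng.standard_normal(n)
y = np.concatenate([lam * x, x])
Qx = (K + lam * C + lam**2 * M) @ x
err = np.linalg.norm((A - lam * B) @ y - np.concatenate([eta1 * Qx, eta2 * Qx]))

# B = A S with S from (2.2), independently of A11 and A21.
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(K, M), -np.linalg.solve(K, C)]])
err2 = np.linalg.norm(A @ S - B)
print(err, err2)
```

Both errors are at rounding level, illustrating that every member of L1 leads to the same operator S in the Arnoldi iteration.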

The very common cases for symmetric K, C, and M are η1,2 = {1, 0} and η1,2 = {0, 1}. The generalization of the second companion form is given by

    L2(Q) = {A − λB : (λI  I)(A − λB) = (η̃1 Q(λ)  η̃2 Q(λ)), η̃1, η̃2 ∈ C},

with

    A = [A11 A12; η̃1K η̃2K],  B = [−η̃1M −η̃2M; A11 − η̃1C  A12 − η̃2C].

Note that A⁻¹B now depends on A11 and A12; instead, we can show that

    BA⁻¹ = [0  −MK⁻¹; I  −CK⁻¹]

is the same matrix for all linearizations of this form.

The intersection of L1 and L2 is denoted by DL(Q) [13]. Its general form is

    A = [η1C − η2M  η1K; η1K  η2K],  B = [−η1M  −η2M; −η2M  −η2C + η1K].

Note that A is invertible iff η1η2C − η2²M − η1²K is invertible, i.e. iff −η2/η1 is not an eigenvalue of (1.1). Although working with pencils in DL does not look very important for the Arnoldi method, since A⁻¹B is the same for all linearizations in L1 and the Arnoldi method therefore produces the same recurrence relation, it might be of interest when using the Lanczos method. The pseudo-Lanczos method [19] is the Lanczos method applied to A⁻¹B using the B pseudo-inner product, where A and B are chosen in DL. Using the B inner product x^T B y rather than the Euclidean inner product x*y produces a tridiagonal H_k, i.e. the Lanczos vectors are orthogonal with respect to B; see [19] [24].

One example that fits in this framework is the solution of palindromic eigenvalue problems [12]. Let K = M^T and C = C^T; then (1.1) is called T-palindromic. If (λ, x) is an eigenpair of (1.1) and the associated left eigenvector is y, so that y*Q(λ) = 0, then λ⁻¹ is also an eigenvalue, with (right) eigenvector ȳ and left eigenvector x̄. In [12], methods are advocated that produce eigenvalue approximations that respect this spectral structure. They therefore introduce the linearization with B = −A^T in (3.4), of the form

    A = [K  K; C − M  K],  B = [−M  K − C; −M  −M],

i.e. A11 = K and A21 = C − M. This corresponds to η1 = η2 = 1. The linearized pencil A + λ(−B) is T-palindromic, since (−B) = A^T. Following Lemma 2.1 and Theorem 3.1, this requires the inverse of K, and A is invertible when K and C − M − K = C − K^T − K are invertible, i.e. when 0 and −1 are no eigenvalues of (1.1). When using A⁻¹B in the Arnoldi method, note, however, that the Ritz values from Arnoldi's method do not necessarily come in pairs of the form λ, λ⁻¹.

3.3. Applying a shift. The convergence of eigenvalues near a pole σ can be improved [20, 2] by shifting the eigenvalue problem.

3.3.1. Shifting the quadratic equation. We can transform (1.1) into

    (K̃ + (λ − σ)C̃ + (λ − σ)²M̃) u = 0,    (3.5)

with

    K̃ = K + σC + σ²M,  C̃ = C + 2σM,  M̃ = M,

i.e. K, C, and M are replaced by K̃, C̃, and M̃ respectively, and λ − σ by λ. Without loss of generality we can thus assume that σ = 0.

3.3.2. Shifting the linearization. We can also shift (2.1) into

    (A − σB)⁻¹(A − λB) y(λ) = 0.

The recurrence relation for (A − σB)⁻¹B becomes

    [A11 + ση1M  η1K − σA11 + ση1C; A21 + ση2M  η2K − σA21 + ση2C]⁻¹ [−η1M  A11 − η1C; −η2M  A21 − η2C] [V_k; W_k] = [V_{k+1}; W_{k+1}] H̄_k.

With Ã_{j1} = A_{j1} + ση_j M this becomes

    [Ã11  η1K̃ − σÃ11; Ã21  η2K̃ − σÃ21]⁻¹ [−η1M  Ã11 − η1C̃ + η1σM; −η2M  Ã21 − η2C̃ + η2σM] [V_k; W_k] = [V_{k+1}; W_{k+1}] H̄_k.

When we introduce Z_{k+1} = V_{k+1} − σW_{k+1}, then we have

    [Ã11  η1K̃; Ã21  η2K̃]⁻¹ [−η1M̃  Ã11 − η1C̃; −η2M̃  Ã21 − η2C̃] [Z_k; W_k] = [Z_{k+1}; W_{k+1}] H̄_k,

which is exactly the same as (3.2) with V_{k+1} = Z_{k+1}, where K̃, C̃ and M̃ belong to the shifted problem (3.5). We compute W_k = Z_{k+1} H̄_k and V_k = Z_k + σW_k. Note that the pencil (A − σB) − µB does not lie in L1(Q(λ − σ)).

3.4. Numerical stability. A few words about numerical stability are in order. In this section, we perform a traditional rounding error analysis of Algorithms 3.1 and 3.2. We show that, under certain conditions, the Q-Arnoldi method is as backward stable for the recurrence relation as the Arnoldi method. The fact that we compute W_k from (3.3) changes the Arnoldi method in finite precision arithmetic. More specifically, we shall give bounds on

    τ_R = ||W_k − V_{k+1} H̄_k||_F,    (3.6)
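The shifted coefficients in (3.5) satisfy K̃ + µC̃ + µ²M̃ = K + (σ + µ)C + (σ + µ)²M identically in µ, which is easy to confirm numerically. A minimal check with random stand-in matrices (not from the report):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 5, 0.4
K = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))

# Shifted coefficients from (3.5).
Kt = K + sigma * C + sigma**2 * M
Ct = C + 2 * sigma * M
Mt = M

# Q(lam) should equal the shifted polynomial evaluated at mu = lam - sigma.
lam = rng.standard_normal()
mu = lam - sigma
err = np.linalg.norm((K + lam * C + lam**2 * M)
                     - (Kt + mu * Ct + mu**2 * Mt))
print(err)
```

The difference is zero up to rounding, so eigenvalues of the shifted problem are exactly the shifted eigenvalues λ − σ.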

where V_{k+1} and H̄_k are computed by either the Arnoldi method or the Q-Arnoldi method.

3.4.1. Analysis for the Arnoldi algorithm. We first show two lemmas that will help us to make a statement on the backward stability of the Arnoldi method. We denote by u the machine precision; the reader is referred to Higham [7] for more details on computations in finite precision arithmetic. We introduce the symbol ≲ to denote α ≲ β ⇔ α ≤ cβ + O(u²), where c is a u-independent constant, so that constant factors can be omitted from error bounds. We also use the bound || |X|·|Y| ||_F ≤ ||X||_F ||Y||_F.

Lemma 3.2. For all j = 1, ..., k we have

    ||v_{j+1}β_j − ṽ_j|| ≲ u||ṽ_j|| ≲ u||v_{j+1}||β_j ≲ uβ_j,    (3.8a)
    ||w_{j+1}β_j − w̃_j|| ≲ u||w̃_j|| ≲ u||w_{j+1}||β_j,    (3.8b)
    |β_j − ||[ṽ_j; w̃_j]|| | ≲ u||[ṽ_j; w̃_j]||,    (3.8c)

where ṽ_j and w̃_j are computed in Step 2.3. We also have

    ||V_j||²_F + ||W_j||²_F = j + O(u).    (3.9)

Proof. The bounds (3.8a)–(3.8c) follow from the fact that v_{j+1} and w_{j+1} are computed as v_{j+1} = ṽ_j/β_j and w_{j+1} = w̃_j/β_j. The proof of (3.9) follows from Theorem 18.12 with m = 1 in [7] and ||v_j||² + ||w_j||² = 1 + O(u).

We define the error matrices F_j and G_j on the recurrence relation (3.1):

    V̂_j − V_{j+1}H̄_j = F_j,  Ŵ_j − W_{j+1}H̄_j = G_j,    (3.7)

where the columns of V̂_j and Ŵ_j are the computed results v̂_1, ..., v̂_j and ŵ_1, ..., ŵ_j of Step 2.1.

Lemma 3.3. In finite precision arithmetic, Algorithm 3.1 produces V_{j+1}, W_{j+1} and H̄_j so that (3.7) holds with

    ||F_j||_F ≲ u|| |V_{j+1}|·|H̄_j| ||_F ≲ u||V_{j+1}||_F ||H̄_j||_F,    (3.10a)
    ||G_j||_F ≲ u|| |W_{j+1}|·|H̄_j| ||_F ≲ u||W_{j+1}||_F ||H̄_j||_F.    (3.10b)

In addition, we have

    ||h_j − [V_j; W_j]* [v̂_j; ŵ_j]|| ≲ u||[v̂_j; ŵ_j]||.    (3.10c)

Proof. We first prove (3.10a). Define f̃_j = v̂_j − V_j h_j − ṽ_j, the rounding error of Steps 2.2–2.3, for which standard rounding error analysis gives ||f̃_j|| ≲ u(||v̂_j|| + || |V_j|·|h_j| ||). Denote by f_j the last column of F_j:

    f_j = v̂_j − V_j h_j − v_{j+1}β_j = f̃_j + (ṽ_j − v_{j+1}β_j).

The application of Lemma 3.2, using |v̂_j| ≤ |V_j||h_j| + |v_{j+1}|β_j + O(u), readily produces

    ||f_j|| ≲ u(||v̂_j|| + || |V_j|·|h_j| || + ||v_{j+1}||β_j) ≲ u|| |V_{j+1}|·[h_j^T β_j]^T ||.

Accumulating f_j in F_j = [f_1, ..., f_j], we find for the componentwise analysis that |F_j| = |V̂_j − V_{j+1}H̄_j| ≲ u|V_{j+1}|·|H̄_j|, which proves (3.10a). The proofs of (3.10b) and (3.10c) are similar.

3.4.2. Analysis for the Q-Arnoldi algorithm. We extend Lemma 3.3 to the Q-Arnoldi algorithm. For the Q-Arnoldi algorithm, products with W_j are computed with W_j represented as [V_j H̄_{j−1}  w_j], and h_j is computed as V_j* v̂_j + [H̄*_{j−1}(V_j* ŵ_j); w_j* ŵ_j]. The componentwise errors on these computations are of the order of u[|V_j|·|H̄_{j−1}|  |w_j|]|z| for a product with a vector z, and of the corresponding accumulated size for h_j. We define

    δ_j = ||V_j||_F,  γ_j = ||[ |V_j|·|H̄_{j−1}|  w_j ]||_F,

so that the normwise error bounds are uγ_j||z|| and u(γ_j + δ_j)||[v̂_j; ŵ_j]||, respectively.

Lemma 3.4. In finite precision arithmetic, Algorithm 3.2 produces V_{j+1}, W_{j+1} and H̄_j so that (3.7) holds with

    ||F_j||_F ≲ u δ_{j+1} ||H̄_j||_F,    (3.20a)
    ||G_j||_F ≲ u γ_{j+1} ||H̄_j||_F,    (3.20b)
    ||h_j − [V_j; W_j]*[v̂_j; ŵ_j]|| ≲ u (γ_{j+1}² + δ_{j+1}²)^{1/2}.    (3.20c)

Proof. We prove (3.20b). Define g̃_j = ŵ_j − [V_j H̄_{j−1}  w_j]h_j − w̃_j, for which

    ||g̃_j|| ≲ u(||ŵ_j|| + ||[ |V_j|·|H̄_{j−1}|  |w_j| ]·|h_j| ||).

Let g_j = ŵ_j − W_j h_j − w_{j+1}β_j, i.e.

    g_j = g̃_j + (w̃_j − w_{j+1}β_j) + ([V_j H̄_{j−1}  w_j] − W_j)h_j.

Using Lemma 3.2, we find that

    ||g_j|| ≲ u(||ŵ_j|| + ||[ |V_j|·|H̄_{j−1}|  |w_j| ]·|h_j| || + ||w_{j+1}||β_j) ≲ u γ̃_{j+1} ||[h_j^T β_j]^T||,

where γ̃_{j+1} = ||[ |V_j|·|H̄_{j−1}|  w_j  w_{j+1} ]||_F. Accumulating g_j in G_j = [g_1, ..., g_j] and noting that γ̃_{j+1} ≤ γ_{j+1} proves (3.20b). The proofs of (3.20a) and (3.20c) are similar.

Theorem 3.5. Define

    ξ_{j,min} = (σ_min(H̄_{j−1})² + 1)^{1/2},  ξ_{j,max} = (σ_max(H̄_{j−1})² + 1)^{1/2}.

Then, up to terms of order u,

    ξ_{j,max}⁻¹ ≤ ||V_j||_2 ≤ 1,  δ_j ≤ √j,  γ_j² + δ_j² ≤ j ξ_{j,max}².

Proof. The singular values of [I_{j−1}; H̄_{j−1}] are the square roots of the eigenvalues of I + H̄*_{j−1}H̄_{j−1}, i.e. they lie between ξ_{j,min} and ξ_{j,max}; in particular λ_min(I + H̄*_{j−1}H̄_{j−1})^{1/2} = ξ_{j,min}. Since the structured Arnoldi basis satisfies

    [V_{j−1}; W_{j−1}] = [V_{j−1} 0; 0 V_j] [I_{j−1}; H̄_{j−1}]

and has orthonormal columns up to O(u), the stated bounds on ||V_j||_2, δ_j and γ_j follow.

Note that ξ_{j,min} ≥ 1 with equality iff σ_min(H̄_{j−1}) = 0, and that ξ_{j,max} is large only when H̄_{j−1} has large singular values. The choice of pole σ (see §3.3) may influence the large singular values of S. For the vibration problems we have in mind, S usually has very small eigenvalues, and by selecting the pole not too close to an eigenvalue (which is common practice when several eigenvalues are wanted [6] [15]), σ_min(H̄_{j−1}) is usually small, usually leading to ξ_{j,min} ≈ 1.
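The identity behind ξ_{j,min} and ξ_{j,max}, namely that the singular values of [I; H̄] are (1 + σ_i(H̄)²)^{1/2}, can be checked directly. A small numpy sketch with a random Hessenberg stand-in (not from the report):

```python
import numpy as np

rng = np.random.default_rng(4)
j = 6
# A random (j+1) x j upper Hessenberg matrix as stand-in for Hbar_{j-1}.
Hbar = np.triu(rng.standard_normal((j + 1, j)), k=-1)

# Singular values of [I_j; Hbar] equal sqrt(1 + sigma_i(Hbar)^2), since
# [I; Hbar]^T [I; Hbar] = I + Hbar^T Hbar.
stack = np.vstack([np.eye(j), Hbar])
s_stack = np.linalg.svd(stack, compute_uv=False)
s_H = np.linalg.svd(Hbar, compute_uv=False)
expect = np.sqrt(1.0 + s_H**2)
gap = np.max(np.abs(np.sort(s_stack) - np.sort(expect)))
print(gap)
```

In particular the smallest value is always at least 1, matching ξ_{j,min} ≥ 1.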

The conclusion from this section is that a loss of precision is possible in the computation of H̄_k (see (3.10c) and (3.20c)) and in the recurrence relation of the Q-Arnoldi process when ξ_{k,max}/ξ_{k,min} is large.

4. The solution of the quadratic eigenvalue problem. The solution of the quadratic eigenvalue problem by the shift-and-invert Arnoldi method is the objective of this section. For the computation of a number of eigenvalues of a non-Hermitian linear problem, we usually compute a partial Schur form. The idea is that we want the computed Schur vectors to have the structure of the exact Schur vectors. It turns out that when we force the Schur vectors to satisfy this structure, we can also keep the structure of the Krylov vectors. In addition, implicit restarting also keeps the structure of the Krylov vectors. To simplify the notation, we write S in the form

    S = [0 I; S1 S2].

4.1. Definition and properties of Q-Arnoldi triples. We first introduce the notion of a Q-Arnoldi triple.

Definition 4.1. Q = {V_{k+1}, H̄_k, w_{k+1}} is a Q-Arnoldi triple associated with S iff V_{k+1} ∈ C^{n×(k+1)}, H̄_k ∈ C^{(k+1)×k} and w_{k+1} ∈ C^n, and, for

    𝒱_k = [V_k; V_{k+1}H̄_k] and 𝒱_{k+1} = [V_k  v_{k+1}; V_{k+1}H̄_k  w_{k+1}],    (4.1)

1. the Arnoldi recurrence relation (3.1) holds, S𝒱_k − 𝒱_{k+1}H̄_k = 0, and
2. the Arnoldi vectors are orthogonal: I − 𝒱*_{k+1}𝒱_{k+1} = 0.    (4.2)

We denote by Q_k(S) the set of all Q-Arnoldi triples associated with S. In exact arithmetic, the Q-Arnoldi algorithm produces a Q-Arnoldi triple; in practice, we may allow a small error on (4.2).

Definition 4.2 (Inexact Q-Arnoldi triple). The set of inexact Q-Arnoldi triples Q_k(S, η, ρ) consists of the triples {V_{k+1}, H̄_k, w_{k+1}} that satisfy (4.1) and

    S𝒱_k − 𝒱_{k+1}H̄_k = F_k with ||F_k|| ≤ η,  ||I − 𝒱*_{k+1}𝒱_{k+1}|| = γ_k ≤ ρ,    (4.3)

where F_k can be considered a backward error on S.

4.1.1. Transformations of (inexact) Q-Arnoldi triples.

Definition 4.3. Let Z be a full column rank (k+1) × (p+1) matrix of the form Z = [Z1 z; 0 ζ] with Z1 a k × p matrix and k ≥ p. We define the transformation

    T_Z {V_{k+1}, H̄_k, w_{k+1}} = {V_{k+1}Z, Z†H̄_kZ1, V_{k+1}H̄_k z + w_{k+1}ζ},

where Z† is the generalized inverse, Z†Z = I.

The following theorem characterizes the transformation of an (inexact) Q-Arnoldi triple.

Theorem 4.4. Let T_Z be a transformation as defined by Definition 4.3 with Z*Z = I, and let Q = {V_{k+1}, H̄_k, w_{k+1}} ∈ Q_k(S). If ZZ†H̄_kZ1 = H̄_kZ1, then T_Z Q ∈ Q_p(S), i.e. T_Z Q_k(S) ⊂ Q_p(S).

Proof. Multiplication of (3.1) on the right by Z1 shows that the recurrence relation holds for the transformed triple; not surprisingly, the elements of T_Z Q_k(S) respect the structure (4.1). When Z is square, the condition ZZ†H̄_kZ1 = H̄_kZ1 always holds.

Theorem 4.5. Let Q ∈ Q_k(S, η, ρ) and let T_Z satisfy the conditions of Theorem 4.4. Then

    T_Z Q ∈ Q_p(S, ||Z1|| η, ||Z||² ρ + ||I − Z*Z||).

Proof. Multiplication of (4.3) on the right by Z1 proves that the error on the recurrence relation of the transformed triple is bounded by ||Z1|| η. For the orthogonality, we have

    I − Z*𝒱*_{k+1}𝒱_{k+1}Z = Z*(I − 𝒱*_{k+1}𝒱_{k+1})Z + (I − Z*Z),

which shows the theorem.

4.1.2. Modification of the vectors of a Q-Arnoldi triple. Suppose we modify the second block of the Arnoldi vectors as follows:

    W̃_k = W_k + v_{k+1} g*.    (4.4)

With 𝒱̃_k = [V_k; W̃_k], the recurrence relation becomes

    S𝒱̃_k − 𝒱̃_{k+1}H̄_k = G̃_k := [v_{k+1}g*; S2 v_{k+1}g* − v_{k+1}g*H_k],

where

    ||G̃_k|| ≤ (||S2 v_{k+1}|| + ||v_{k+1}|| ||H_k|| + 1) ||g||.    (4.5)

The structure (4.1) is restored by modifying H̃̄_k = H̄_k + e_{k+1}g*. The recurrence relation for the new H̄_k becomes

    S𝒱̃_k − 𝒱̃_{k+1}H̃̄_k = G̃_k := [0; S2 v_{k+1}g* − v_{k+1}g*H_k − w_{k+1}g*].    (4.6)

The upper bound (4.5) also holds for this G̃_k. The orthogonality of 𝒱̃_k is restored by applying T_{Z⁻¹} to the Q-Arnoldi triple, where Z is upper triangular and such that 𝒱̃*_k𝒱̃_k = Z*Z, which can be computed by a Cholesky factorization.

Suppose, instead, that we modify the first block of the Arnoldi vectors as follows: V̂_k = V_k + v_{k+1}g*. The structure (4.1) is restored by modifying H̄̂_k = H̄_k − e_{k+1}g*H_k, and the recurrence relation for the new vectors becomes

    S𝒱̂_k − 𝒱̂_{k+1}H̄̂_k = [0; S1 v_{k+1}g* + w_{k+1}g*H_k].    (4.7)

The orthogonality can be restored in the same way as for 𝒱̃_k.

4.2. Computing Ritz vectors and Schur vectors.

4.2.1. Ritz vectors. The Ritz vectors corresponding to the Ritz value θ have the form

    x = [x1; x2] = [V_k z; V_{k+1}H̄_k z], where H_k z = θz.

When the Ritz value is an eigenvalue, x2 = θx1. As a Ritz vector of (1.1), we can return x2/θ or x1. In this section, we study which of these is the best choice. Let the residual of the Ritz pair computed by the Arnoldi method be

    [r1; r2] = [0 I; S1 S2][x1; x2] − θ[x1; x2] = [x2 − θx1; S1 x1 + S2 x2 − θx2].    (4.8)

If we use x1 as a Ritz vector for (1.1), the Ritz vector for the linearization is x̃ = [x1; θx1]. The residual is

    [0 I; S1 S2] x̃ − θ x̃ = [0; r2 − (S2 − θI) r1].

If we use x2/θ as a Ritz vector for (1.1), the Ritz vector for the linearization is x̂ = [θ⁻¹x2; x2]. The residual is

    [0 I; S1 S2] x̂ − θ x̂ = [0; θ⁻¹ S1 r1 + r2].

The conclusion from this analysis is that for large θ there may be an advantage in using x2/θ and for small θ in using x1. However, the norms ||S1|| and ||S2|| may also play a role in the decision. Note that if x is close enough to an eigenvector, then x̃ ≈ x̂.
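The two candidate Ritz vectors can be compared directly on the quadratic residual ||Q(λ)x||. The sketch below (random stand-in matrices, plain Arnoldi run to full dimension so that the Ritz pairs are essentially exact; not from the report) extracts x1 and x2/θ from a Ritz vector of the linearization and prints both residuals:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
K = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n))
M = rng.standard_normal((n, n)) + n * np.eye(n)
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(K, M), -np.linalg.solve(K, C)]])

# Plain Arnoldi on S for k = 2n steps (full space, so Ritz pairs are exact).
N = 2 * n
k = N
V = np.zeros((N, k))
H = np.zeros((k, k))
v = rng.standard_normal(N)
V[:, 0] = v / np.linalg.norm(v)
for j in range(k - 1):
    t = S @ V[:, j]
    H[:j + 1, j] = V[:, :j + 1].T @ t
    t = t - V[:, :j + 1] @ H[:j + 1, j]
    H[j + 1, j] = np.linalg.norm(t)
    V[:, j + 1] = t / H[j + 1, j]
H[:, k - 1] = V.T @ (S @ V[:, k - 1])

theta, Z = np.linalg.eig(H)
i = np.argmax(np.abs(theta))        # Ritz value of largest modulus
lam = 1.0 / theta[i]                # corresponding eigenvalue of Q
y = V @ Z[:, i]
x1, x2 = y[:n], y[n:]

def qres(x):
    x = x / np.linalg.norm(x)
    return np.linalg.norm((K + lam * C + lam**2 * M) @ x)

r_upper = qres(x1)
r_lower = qres(x2 / theta[i])
print(r_upper, r_lower)
```

At convergence both components give a small quadratic residual; the analysis above explains which to prefer before convergence.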

Related to the choice of the first or second component is the difference x2 − θx1: if this difference is small, it probably does not make much difference which component we take.

Theorem 4.6. Let x1 = V_k z and x2 = W_k z with H_k z = θz and ||z|| = 1, and define ρ = β_k |e_k* z|. Then x2 − θx1 = β_k (e_k* z) v_{k+1}, and

    ξ_{k,min}⁻¹ ρ ≤ ||x2 − θx1||_2 ≤ β_k ||v_{k+1}||_2 |e_k* z| ≤ ρ.

Proof. From (3.3), W_k z = V_k H_k z + v_{k+1} β_k e_k* z = θ V_k z + β_k (e_k* z) v_{k+1}, which gives the identity; the upper bound follows from ||v_{k+1}||_2 ≤ 1 and the lower bound from ||v_{k+1}||_2 ≥ ξ_{k,min}⁻¹.

Recall that ξ_{k,min} = (1 + σ_min(H̄_k)²)^{1/2} > 1, which is large when the singular values of H̄_k are large. We conclude that ||x2 − θx1||_2 is at most ρ, but can be smaller.

4.2.2. Schur decomposition. The eigenvectors usually do not form an orthogonal set of vectors and are not even guaranteed to exist (in the defective case). The Schur decomposition, in contrast, always produces orthogonal vectors and always exists. Let H_k Z_k = Z_k T_k be the Schur decomposition of H_k. (In the case of real matrices, T_k is in pseudo-upper triangular form when a Ritz value is complex.) The diagonal elements of T_k are the Ritz values. Define the residual r_k* = β_k e_k* Z_k. By multiplying (3.1) on the right by Z_k, the Arnoldi recurrence relation can be written in terms of Schur vectors as follows:

    S 𝒱_k Z_k − 𝒱_k Z_k T_k = v_{k+1} r_k*.

For the linearized quadratic eigenvalue problem, the Schur decomposition of S is S [U; UT] = [U; UT] T. The structure in the Schur vectors is lost in the Krylov subspace for the same reasons as for the Ritz vectors. The Schur vectors computed from a Q-Arnoldi triple have the form

    U_k = 𝒱_k Z_k = [V_k Z_k; V_k Z_k T_k + v_{k+1} r_k*].    (4.9)

We study this problem in detail in §4.4.

4.3. Implicit restarting. When k gets large, the storage and computational costs of the Arnoldi method can become unacceptably high. Therefore, some form of restarting or reduction of the basis is desirable. The idea is to reduce the Krylov subspace by throwing away that part that is very unlikely to significantly contribute to the convergence of the wanted eigenvalues. When we do this, we do not only add an error to the recurrence relation but also perturb the orthogonality of the basis vectors. For the details see [5].
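The structured form (4.9) of the Schur vectors is a direct consequence of (3.3), and can be checked on synthetic data. The sketch below uses a random orthonormal V_{k+1} and a random Hessenberg H̄_k as hypothetical Q-Arnoldi output (not data from the report) and verifies that the lower half of U_k equals V_k Z_k T_k + v_{k+1} r_k*:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(6)
n, k = 10, 5
# Hypothetical structured basis: orthonormal upper halves V_{k+1} and a
# Hessenberg Hbar_k, with lower halves W_k = V_{k+1} Hbar_k as in (3.3).
V, _ = np.linalg.qr(rng.standard_normal((n, k + 1)))
Hbar = np.triu(rng.standard_normal((k + 1, k)), k=-1)
Hk, beta = Hbar[:k, :], Hbar[k, k - 1]
W = V @ Hbar

T, Z = schur(Hk, output='complex')     # Hk Z = Z T, T upper triangular
r = beta * Z[k - 1, :]                 # r_k^* = beta_k e_k^T Z_k
# Lower half of U_k = W_k Z_k should equal V_k Z_k T_k + v_{k+1} r_k^*.
err = np.linalg.norm(W @ Z - (V[:, :k] @ Z @ T + np.outer(V[:, k], r)))
print(err)
```

The identity holds exactly up to rounding, so locking and purging can work on Z_k and T_k alone.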

4.3.1. Implicit QR step. One way to perform such a reduction is called implicit restarting and was proposed by Sorensen [21]; see also [18] [10] [11] [16]. The idea is to apply an orthogonal transformation that reduces the subspace of dimension k + 1 to p + 1 while keeping the p desired Ritz values. When Z_k results from the QR factorization of H̄_k − µĪ, where Ī denotes the leading (k+1) × k block of the identity, we have that

    Z_k* H̄_k = R_k + µ Z_k* Ī,    (4.10)

and so Z_k Z_k* H̄_k Z_{k−1} = Z_k R_k Z_{k−1} + µ Ī Z_{k−1} = H̄_k Z_{k−1}, where Z_{k−1} holds the first k − 1 columns of Z_k. The condition of Theorem 4.4 is thus satisfied: if Q = {V_{k+1}, H̄_k, w_{k+1}} ∈ Q_k(S), then T_Z Q ∈ Q_{p=k−1}(S).

4.3.2. Purging. Another way to reduce the subspace dimension is purging [10]. The idea here is to purge the undesired part of the Schur factorization of H_k, i.e. to keep only the first p Schur vectors in the basis. Recall the definitions of T_k, Z_k and r_k from §4.2.2, and let the Schur form be ordered so that the last k − p diagonal elements of T_k are unwanted Ritz values. Removing the last k − p columns from the Schur-vector recurrence produces

    S 𝒱_k Z_p − [𝒱_k Z_p  v_{k+1}] [T_p; r_p*] = 0,    (4.11)

where Z_p holds the first p columns of Z_k, T_p is the leading p × p block of T_k, and r_p holds the first p elements of r_k. There exists a unitary U_p so that the transformed coefficient matrix [U_p* T_p U_p; r_p* U_p] is (p+1) × p upper Hessenberg [5]. From

    H̄_k Z_p = [Z_p 0; 0 1] [T_p; r_p*]

we derive that

    H̄_k Z_p U_p = [Z_p U_p 0; 0 1] [U_p* T_p U_p; r_p* U_p],

so the condition of Theorem 4.4 holds and T_Z {V_{k+1}, H̄_k, w_{k+1}} ∈ Q_p(S) for Z = [Z_p U_p 0; 0 1].
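Ordering the Schur form so that the wanted Ritz values occupy the leading block is exactly what a sorted Schur decomposition provides. A small scipy sketch (a synthetic matrix with known eigenvalues, not from the report) keeps the Ritz values outside the unit circle, i.e. the large |θ| that correspond to eigenvalues near the pole:

```python
import numpy as np
from scipy.linalg import hessenberg, schur

rng = np.random.default_rng(7)
# Synthetic matrix with known eigenvalues: three inside, three outside
# the unit circle.
lams = np.array([0.2, 0.4, 0.6, 1.5, 2.5, 3.5])
X = rng.standard_normal((6, 6))
A = np.linalg.solve(X, lams[:, None] * X)   # X^{-1} diag(lams) X
Hk = hessenberg(A)                          # Hessenberg stand-in for H_k

# Sorted Schur form: wanted Ritz values (|theta| > 1) moved to the front.
T, Z, sdim = schur(Hk, output='real', sort='ouc')
Zp, Tp = Z[:, :sdim], T[:sdim, :sdim]
err = np.linalg.norm(Hk @ Zp - Zp @ Tp)     # Zp spans an invariant subspace
kept = np.sort(np.diag(Tp))
print(sdim, err, kept)
```

The leading sdim columns Z_p then play the role of the retained Schur basis in (4.11).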

4.4. Locking. Suppose that the first l elements of rk are smaller than a convergence tolerance. The idea of locking is to set these small elements explicitly to zero, assuming that the corresponding Schur vectors are exact. The reasons for the use of locking in eigenvalue codes are the reduction of the dimension of the Krylov subspace and the computation of multiple eigenvalues without the need for block methods. In addition, in combination with restarting, locking ensures that the residual norm of a Ritz pair of interest remains small: with restarting alone, it is possible that a direction that is in favour of improving a Ritz vector is removed.

Decompose Zk = [Zl  Zk−l], rk^* = [rl^*  rk−l^*] and

   Tk = [Tl  Tl,k−l ; 0  Tk−l].

Locking introduces an error in the recurrence relation:

   S Uk − [Uk  vk+1] [Tk ; (0  rk−l^*)] = vk+1 (rl^*  0).    (4.12)

For linear problems it is accepted that the recurrence relation of Arnoldi's method is violated by such a small term: the residual term in the right-hand side of (4.12) is usually considered as a backward error on S. The Bauer-Fike theorem [20, Theorem 3.6] shows an upper bound on the perturbation of the eigenvalues that is proportional to ||rl|| and that is small when the eigenvectors associated with different eigenvalues are almost orthogonal.

For the quadratic eigenvalue problem, however, locking perturbs the structure of the Schur vectors (4.9) and destroys the orthogonality. The goal of this section is to restore the orthogonality and analyse the impact on the recurrence relation, using the results from §4.

When we use the upper components as Schur vectors of the quadratic eigenvalue problem, the Schur vectors obtain the form

   Ũk = [Vk Zl  U1,k−l ; Vk Zl Tl  U2,k−l] = Uk − [0  0 ; vk+1 rl^*  0].    (4.13)

This is precisely the matrix we want to have with locking. Following (4.6) with g^* = −[rl^*  0], this modifies the error on the recurrence relation: we find

   H̃ = [Tk ; (0  rk−l^*)].

Similarly, we can use the lower part of Uk as Schur vectors. Now define the Schur vectors using the lower part of Ul:

   Ûk = [(Vk Zl Tl + vk+1 rl^*) Tl^(−1)  U1,k−l ; Vk Zl Tl + vk+1 rl^*  U2,k−l] = Uk + [vk+1 rl^* Tl^(−1)  0 ; 0  0].    (4.14)

This assumes that Tl is nonsingular. With g^* = −[rl^* Tl^(−1)  0], the structure of the vectors is restored by using

   Ĥ = [Tk ; (0  r̂k−l^*)],   where   r̂k−l^* = rk−l^* − rl^* Tl^(−1) Tl,k−l.

The Schur vectors form an orthogonal set, so the orthogonality is preserved; the new basis can be orthogonalized by applying an appropriate transformation. The error on the recurrence relation is again proportional to ||rl||. The corresponding Q-Arnoldi triple is in Q(S, 0); recall that ||Gk|| ∼ ||rl|| is small.
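The locking operation itself only manipulates the small residual row. A sketch, assuming the Schur form has already been ordered so that the converged Ritz values come first; the norm of the zeroed part is exactly the size of the backward error term in (4.12). The function name is hypothetical:

```python
import numpy as np

def lock_residual(rk, tol):
    """Zero the residual entries below tol (assumed to be the leading
    entries after reordering the Schur form). The norm of the zeroed
    part, ||r_l||, is the backward error introduced in the recurrence
    relation by locking."""
    rk = np.asarray(rk, dtype=complex)
    small = np.abs(rk) < tol
    backward_error = np.linalg.norm(rk[small])
    r_locked = np.where(small, 0.0, rk)
    return r_locked, backward_error

r, err = lock_residual([1e-12, 2e-12, 0.5, 0.1], tol=1e-8)
```

In the full method the same zeroing is applied to the residual row of the recurrence, after which only the k − l unconverged columns take part in further restarts.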

The most important point is that the residual term corresponding to the first l Ritz values is set equal to zero; the residual terms of the remaining Ritz values are modified. This is not the case when Ũk is used as Schur basis.

5. Numerical examples.

5.1. Illustration of instabilities in the Q-Arnoldi method. Consider the quadratic eigenvalue problem (1.1) with K = I, C = τI and M = τ diag(µ1, ..., µn), where µj = 1/j, for three values of τ. We have run k = 10 iterations of the Arnoldi method for a problem of dimension n = 10,000, with an initial vector with equal components. For this example, the eigenvectors are the same for all cases, while the eigenvalues of (1.1) are divided by τ.

Table 5.1 shows κ(H̄k) and ||H̄k||2, together with the deviation of orthogonality γ⊥ = ||I − V*k+1 Vk+1||F and the error γR on the recurrence relation, for both methods:

   Table 5.1: Illustration of instabilities in the Q-Arnoldi method.

   τ       κ(H̄k)      ||H̄k||2      γ⊥ (Arnoldi)  γR (Arnoldi)  γ⊥ (Q-Arnoldi)  γR (Q-Arnoldi)
   1       30.6        1.03         1·10^-13      1·10^-16      3·10^-13        1·10^-16
   10^4    1·10^8      5.66·10^6    5·10^-14      1·10^-16      4·10^-10        6·10^-12
   10^6    1·10^12     5.66·10^10   2·10^-13      2·10^-16      3.…             4·10^-6

The (traditional) Arnoldi algorithm always produces accurate results, independent of τ; the Q-Arnoldi algorithm is sensitive to large τ. From §3 we can see that the Q-Arnoldi method may lose stability when γk is large, which is only possible when ||H̄k||2 and κ(H̄k) are large. The example illustrates that scaling the matrices may help improve the numerical stability of the Q-Arnoldi algorithm.

5.2. Selection of component of Ritz vectors. Consider the quadratic eigenvalue problem (1.1) with K = I, M = diag(µ1, ..., µn) and C = 0.01M, where µj = 1/j and n = 10,000. We have run 10 steps of the Arnoldi method with an initial vector with equal components. Let (θ, x = [x1 ; x2]) be a Ritz pair returned by the Arnoldi method, and define

   ρ = ||A^(−1)(θA − B)x||,   ρ1 = ||K^(−1)(θ^2 K + θC + M)x1||,   ρ2 = ||K^(−1)(θ^2 K + θC + M)x2||.

Table 5.2 shows the Ritz values and the corresponding residual norms; x2 shows to be a better Ritz vector than x1 for the large Ritz values.

5.3. Quadratic eigenvalue problem for an acoustic box. In this section we study the problem of an acoustic box with walls covered with carpet, with dimensions 0.54m × 0.54m × 0.55m. The material has a complex speed of sound 340 + i3.4 and the density is 1.225 kg/m3. The box is discretized with 64710 hexahedral elements; the matrices are produced by ACTRAN [4]. The problem to be solved has the form (1.1) with n = 13,623. We want to compute the 50 eigenvalues with imaginary part larger than 600.
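The toy problem of §5.1 is easy to reproduce. The sketch below builds the inverted companion linearization S = [[S1, S2], [I, 0]] with S1 = -K^(-1)C and S2 = -K^(-1)M (the factored operators mentioned in the conclusions), whose eigenvalues are θ = 1/λ; a dense eigensolver stands in for the (Q-)Arnoldi iteration, purely for illustration, and the function name is an assumption:

```python
import numpy as np

def companion_S(K, C, M):
    """Inverted companion linearization: theta = 1/lambda are the
    eigenvalues of S = [[S1, S2], [I, 0]] with S1 = -K^{-1}C and
    S2 = -K^{-1}M, since S [theta*u; u] = theta [theta*u; u] iff
    (theta^2 K + theta C + M) u = 0, i.e. Q(1/theta) u = 0."""
    n = K.shape[0]
    S1 = -np.linalg.solve(K, C)
    S2 = -np.linalg.solve(K, M)
    return np.block([[S1, S2], [np.eye(n), np.zeros((n, n))]])

# the test problem of Sec. 5.1: K = I, C = tau*I, M = tau*diag(1/j)
n, tau = 10, 1.0
K = np.eye(n)
C = tau * np.eye(n)
M = tau * np.diag(1.0 / np.arange(1, n + 1))
theta = np.linalg.eigvals(companion_S(K, C, M))
lam = 1.0 / theta
```

Each computed λ makes K + λC + λ^2 M numerically singular (smallest singular value at rounding level). Increasing τ scales ||S|| up accordingly, which is the regime where Table 5.1 shows the Q-Arnoldi recurrence losing accuracy.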

   Table 5.2: Comparison of Ritz vectors for the problem of §5.2.

   Ritz value θ          ρ          ρ2         ρ1
   1.93901               2·10^-7    3·10^-8    2·10^-7
   1.73840               1·10^-4    2·10^-5    7·10^-5
   1.14777               3·10^-3    6·10^-4    1·10^-3
   0.509808              2·10^-2    3·10^-3    5·10^-3
   0.342946              3·10^-2    5·10^-3    7·10^-3
   0.256429              3·10^-2    5·10^-3    4·10^-3
   0.183618              7·10^-5    5·10^-4    1·10^-5
   0.117528              5·10^-3    6·10^-5    2·10^-4
   0.0381711 ± i0.0099   3·10^-2    2·10^-3    2·10^-3

We solved the acoustic box problem using the Arnoldi method with k = 100 and p = 50 and the shift σ = 600i, with the following algorithm.

Algorithm 5.1 (Arnoldi for eigenvalues).
   1. Choose v1 randomly and normalize.
   2. Until the wanted eigenvalues have converged, do:
      2.1. Build a Krylov subspace of dimension k (see §3).
      2.2. Compute Ritz values, Ritz vectors and residual norms; order the Ritz values in increasing distance to σ.
      2.3. Purge the last m − p columns of the recurrence relation.

The purging operation keeps the p iteration vectors corresponding to the Ritz values nearest σ; the goal of the next iterations is to improve these values. The first iteration costs k = 100 products with S; each next call to Step 2.1 requires only k − p additional iterations to obtain a subspace of dimension k. The computations were carried out on a Linux PC.

After 3 restarts, 50 Ritz values were computed with residual norms smaller than 10^-8. Table 5.3 shows √(γk^2 + δ^2) for the different restarts in the solution of the quadratic eigenvalue problem of the box; the computed values, e.g. 0.00250052, 0.00203237 and 0.00132763, are small for all restarts, so we do not expect numerical difficulties. This is no surprise: the shift σ = 600i is not close to an eigenvalue of (1.1), K, C and M have norms around one, and so ||H̄k||2 ≤ ||S||2 is small. The final loss of orthogonality in the Q-Arnoldi algorithm was ||I − V*k+1 Vk+1||F ≈ 3.1·10^-13; for the Arnoldi algorithm we also had ||I − V*k+1 Vk+1||F ≈ 3.1·10^-13.
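Algorithm 5.1 can be imitated in a few lines on a small dense stand-in problem. The sketch below uses a Krylov-Schur style restart (reordering the Schur form of Hk and truncating), which has the same effect as the implicit restarting and purging of §4.3; S is a shift-and-invert operator for a generic matrix B rather than the quadratic linearization, and all names and parameters are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import schur

def expand(mulS, V, H, k):
    """Extend the factorization mulS(V[:, :j]) = V[:, :j+1] @ H to j = k,
    using classical Gram-Schmidt with one reorthogonalization pass."""
    j = H.shape[1]
    while j < k:
        w = mulS(V[:, j]).astype(complex)
        h = V.conj().T @ w
        w = w - V @ h
        g = V.conj().T @ w                  # reorthogonalize for a stable basis
        w = w - V @ g
        h = h + g
        beta = np.linalg.norm(w)
        Hn = np.zeros((j + 2, j + 1), dtype=complex)
        Hn[: j + 1, : j] = H
        Hn[: j + 1, j] = h
        Hn[j + 1, j] = beta
        V, H, j = np.hstack([V, (w / beta)[:, None]]), Hn, j + 1
    return V, H

def restart(V, H, p):
    """Keep the p largest Ritz values (the wanted ones after the
    shift-and-invert transformation) via an ordered Schur form."""
    k = H.shape[1]
    Hk, beta = H[:k, :], H[k, k - 1]
    cutoff = np.sort(np.abs(np.linalg.eigvals(Hk)))[k - p]
    T, Z, _ = schur(Hk, output="complex", sort=lambda x: abs(x) >= cutoff)
    r = beta * Z[-1, :p]                    # residual row: beta_k * e_k^* Z_p
    Vp = np.hstack([V[:, :k] @ Z[:, :p], V[:, k:]])
    return Vp, np.vstack([T[:p, :p], r[None, :]])

rng = np.random.default_rng(2)
n, k, p = 200, 30, 10
B = rng.standard_normal((n, n))
sigma = 0.0                                  # eigenvalues of B near sigma wanted
mulS = lambda v: np.linalg.solve(B - sigma * np.eye(n), v)
v0 = np.ones(n, dtype=complex)
V, H = (v0 / np.linalg.norm(v0))[:, None], np.zeros((1, 0), dtype=complex)
V, H = expand(mulS, V, H, k)
for _ in range(3):                           # three restarts, as in the run above
    V, H = restart(V, H, p)
    V, H = expand(mulS, V, H, k)
```

After each restart the relation mulS(V[:, :j]) = V[:, :j+1] @ H still holds exactly, with H no longer Hessenberg but triangular-plus-residual-row in its leading block; the largest Ritz values of H[:k, :k] converge as the loop proceeds, exactly the role of Step 2.3 above.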

6. Conclusions. The Q-Arnoldi algorithm is a memory efficient implementation of the Arnoldi method for specific choices of linearization of the quadratic eigenvalue problem. The Q-Arnoldi algorithm produces a Krylov subspace, whereas the SOAR method projects K, C and M on Vk+1; the latter is still possible in a post-processing step of the Q-Arnoldi algorithm, in order to improve the Ritz values or impose spectral structure.

An important conclusion lies in the influence of ξmin and ξmax: the components of the Ritz vectors lie in the same direction when ξmin is large, and ξmin usually lies close to one. The ratio should not be far away from one in order to reduce the chance of cancellation in the numerical computations. We have some freedom in choosing the pole σ to keep ξmax low.

As for the choice of linearization, the Arnoldi method produces the same results for any linearization in L1, e.g. for the shift-and-invert transformation of I − λS:

   (I − λS) [α0 + λα1 ; β0 + λβ1] = [η1 Q(λ) ; η2 Q(λ)],   α0, α1, β0, β1, η1, η2 ∈ C.

This also implies (as we already knew) that the Arnoldi method in its standard form is not able to preserve structure. We have proposed an algorithm that preserves the structure of the Schur vectors; implicit restarting, purging and locking similarly preserve the structure of the Arnoldi vectors. The conclusion is not that the SOAR method is useless, but that it is not the preferred method when more than one eigenvalue needs to be computed or when restarting the Arnoldi process is required. Note that S1 and S2 are only known in factored form, −K^(−1)C and −K^(−1)M respectively; the derivation of scalings of S1 and S2 is still an open problem. Other open problems are restarting the SOAR method and linearizations other than those in L1, e.g. in DL. The extension to higher order polynomials, (A0 + λA1 + · · · + λ^p Ap)u = 0, is straightforward and leads to a similar algorithm with similar conclusions.

Acknowledgement. This paper presents research results of the Belgian Network DYSCO (Dynamical Systems, Control, and Optimization), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office. The scientific responsibility rests with its author(s). The author is also grateful to Mickaël Robbé, who helped with an early draft of the paper.

REFERENCES

[1] W. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math., 9 (1951), pp. 17–29.
[2] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia, PA, USA, 2000.
[3] Z. Bai and Y. Su, SOAR: A second-order Arnoldi method for the solution of the quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 640–659.
[4] Free Field Technologies, MSC.Actran 2006, User's Manual, 2006.

[5] G. Golub and C. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, MD, USA, 1996.
[6] R. Grimes, J. Lewis, and H. Simon, A shifted block Lanczos algorithm for solving sparse symmetric generalized eigenproblems, SIAM J. Matrix Anal. Appl., 15 (1994), pp. 228–272.
[7] N. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, PA, USA, 1996.
[8] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Stand., 45 (1950), pp. 255–282.
[9] C. Lanczos, Solution of systems of linear equations by minimized iterations, J. Res. Nat. Bur. Stand., 49 (1952), pp. 33–53.
[10] R. Lehoucq and D. Sorensen, Deflation techniques within an implicitly restarted Arnoldi iteration, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 789–821.
[11] R. Lehoucq, D. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods, SIAM, Philadelphia, PA, USA, 1998.
[12] D. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Structured polynomial eigenvalue problems: Good vibrations from good linearizations, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 1029–1051.
[13] D. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 971–1004.
[14] K. Meerbergen, Locking and restarting quadratic eigenvalue solvers, SIAM J. Sci. Comput., 22 (2001), pp. 1814–1839.
[15] K. Meerbergen, The rational Lanczos method for Hermitian eigenvalue problems, Numer. Linear Algebra Appl., 8 (2001), pp. 33–52.
[16] K. Meerbergen and A. Spence, Implicitly restarted Arnoldi with purification for the shift-invert transformation, Math. Comp., 66 (1997), pp. 667–689.
[17] V. Mehrmann and D. Watkins, Structure-preserving methods for computing eigenpairs of large sparse skew-Hamiltonian/Hamiltonian pencils, SIAM J. Sci. Comput., 22 (2001), pp. 1905–1925.
[18] R. Morgan, On restarting the Arnoldi method for large nonsymmetric eigenvalue problems, Math. Comp., 65 (1996), pp. 1213–1230.
[19] B. Parlett and H. Chen, Use of indefinite pencils for computing damped natural modes, Linear Algebra Appl., 140 (1990), pp. 53–88.
[20] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Algorithms and Architectures for Advanced Scientific Computing, Manchester University Press, Manchester, UK, 1992.
[21] D. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl., 13 (1992), pp. 357–385.
[22] F. Tisseur, Backward error and condition of polynomial eigenvalue problems, Linear Algebra Appl., 309 (2000), pp. 339–361.
[23] F. Tisseur and K. Meerbergen, The quadratic eigenvalue problem, SIAM Rev., 43 (2001), pp. 235–286.
[24] Z. Bai, D. Day, and Q. Ye, ABLE: an adaptive block Lanczos method for non-Hermitian eigenvalue problems, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 1060–1082.
