CHAPTER 7: PROJECTING NONLINEAR PROBLEMS

Heinrich Voss
voss@tu-harburg.de

Hamburg University of Technology
Institute of Mathematics


Iterative projection methods

Sparse nonlinear eigenproblems
Let T : D → C^{n×n} be a family of matrices. Find λ ∈ D and x ∈ C^n, x ≠ 0, and/or y ∈ C^n, y ≠ 0, such that

T(λ)x = 0 and/or y^H T(λ) = 0.

For linear eigenvalue problems there are two types of iterative projection methods:

Krylov subspace methods, which take advantage of normal forms of matrices (Schur form), the validity of some (Lanczos, Arnoldi) recurrence, and the fact that elements of K_k(u, A) can be represented as x = p(A)u for some polynomial p ∈ Π_{k−1}.

Methods where the search space is expanded by directions which have a high approximation potential for the eigenvector wanted next (like the Davidson or Jacobi–Davidson method).

Since normal forms are not known for general nonlinear eigenproblems, generalizations of iterative projection methods to this type of problems always have to be of the second type.

Iterative projection methods

Iterative projection methods for nonlinear problems

nonlinear rational Krylov: Ruhe (2000, 2005), Jarlebring, V. (2005)

nonlinear Arnoldi method: quadratic probl.: Meerbergen (2001); general problems: V. (2003, 2004), Liao, Bai, Lee, Ko (2006), Liao (2007)

Jacobi–Davidson: polynomial problems: Sleijpen, Booten, Fokkema, van der Vorst (1996), Hwang, Lin, Wang (2004, 2005); general problems: T. Betcke, V. (2004), V. (2007), Schwetlick, Schreiber (2006), Schreiber (2008)

Dense nonlinear eigenvalue problems

Dense nonlinear eigenproblems
In each step of an iterative projection method one has to solve a dense nonlinear eigenproblem of small dimension. We briefly survey methods for this type of problems.

For polynomial or rational eigenproblems the simplest approach is to use linearization and to apply standard methods for linear eigenproblems.

For general nonlinear eigenvalue problems, the classical approach is to formulate the eigenvalue problem as a system of nonlinear equations and to use variations of Newton's method or the inverse iteration method.
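As an illustration of the linearization approach, the following minimal sketch (mine, not from the slides; the function name quadeig_linearize is hypothetical) linearizes a quadratic eigenproblem (λ²M + λC + K)x = 0 by the standard first companion form and solves it with a dense generalized eigensolver.

    import numpy as np
    from scipy.linalg import eig

    def quadeig_linearize(M, C, K):
        """Solve (lam^2 M + lam C + K) x = 0 via first companion linearization:
        [0 I; -K -C] z = lam [I 0; 0 M] z with z = [x; lam*x]."""
        n = K.shape[0]
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-K, -C]])
        B = np.block([[np.eye(n), np.zeros((n, n))],
                      [np.zeros((n, n)), M]])
        lam, Z = eig(A, B)      # generalized eigenproblem A z = lam B z
        X = Z[:n, :]            # upper block carries the eigenvectors x
        return lam, X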

Dense nonlinear eigenvalue problems

Characteristic equation
For the characteristic equation det T(λ) = 0, it was suggested by Kublanovskaya (1969, 1970) to use a QR decomposition with column pivoting

T(λ)P(λ) = Q(λ)R(λ),

where P(λ) is a permutation matrix which is chosen such that the diagonal elements r_jj(λ) of R(λ) are decreasing in magnitude, i.e. |r_11(λ)| ≥ |r_22(λ)| ≥ ··· ≥ |r_nn(λ)|. Then λ is an eigenvalue if and only if r_nn(λ) = 0.

Applying Newton's method to this equation, one obtains the iteration

λ_{k+1} = λ_k − 1 / (e_n^H Q(λ_k)^H T′(λ_k) P(λ_k) R(λ_k)^{-1} e_n)

for approximations to an eigenvalue. Approximations to left and right eigenvectors can be obtained from

y_k = Q(λ_k) e_n and x_k = P(λ_k) R(λ_k)^{-1} e_n.
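A minimal sketch of one Newton step of Kublanovskaya's iteration, assuming callables T and Tprime (my names) that return T(λ) and T′(λ); scipy.linalg.qr with pivoting=True supplies the column-pivoted factorization.

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    def kublanovskaya_step(T, Tprime, lam):
        """One Newton step for r_nn(lam) = 0 based on T(lam) P = Q R."""
        Q, R, perm = qr(T(lam), pivoting=True)   # T(lam)[:, perm] = Q @ R
        n = R.shape[0]
        e_n = np.zeros(n); e_n[-1] = 1.0
        z = solve_triangular(R, e_n)             # z = R^{-1} e_n
        x = np.zeros(n, dtype=complex); x[perm] = z   # x = P R^{-1} e_n
        y = Q[:, -1]                             # y = Q e_n (left eigvec approx.)
        denom = y.conj() @ (Tprime(lam) @ x)     # e_n^H Q^H T'(lam) P R^{-1} e_n
        return lam - 1.0 / denom, x, y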

Characteristic equation ct.
An improved version of Kublanovskaya's method was suggested by Jain, Singhal & Huseyin (1983), and also quadratic convergence was shown.

A similar approach was presented by Yang (1983), via a representation of Newton's method using the LU factorization of T(λ).

Other variations of this method can be found in the books of Zurmühl & Falk (1984, 1985).

However, this relatively simple idea is not efficient, since it computes eigenvalues one at a time and needs several O(n³) factorizations per eigenvalue. It is, however, useful in the context of iterative refinement of computed eigenvalues and eigenvectors.

Dense nonlinear eigenvalue problems

Successive linear approximations
Let λ_k be an approximation to an eigenvalue of T(λ)x = 0. Linearizing T(λ_k − θ)x = 0 yields T(λ_k)x = θT′(λ_k)x. This suggests the method

1: start with an approximation λ_1 ∈ D to an eigenvalue of T(λ)x = 0
2: for k = 1, 2, . . . until convergence do
3: solve the linear eigenproblem T(λ_k)x = θT′(λ_k)x
4: choose a suitable eigenvalue θ (usually smallest in modulus)
5: set λ_{k+1} = λ_k − θ
6: end for
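A minimal dense-solver sketch of this method, assuming callables T and Tprime (my names); the stopping test is a simplification.

    import numpy as np
    from scipy.linalg import eig

    def successive_linear_problems(T, Tprime, lam0, tol=1e-12, maxit=50):
        """Method of successive linear problems: in each step solve
        T(lam) x = theta * T'(lam) x and update lam <- lam - theta."""
        lam = lam0
        for _ in range(maxit):
            theta, X = eig(T(lam), Tprime(lam))
            j = np.argmin(np.abs(theta))    # eigenvalue smallest in modulus
            lam = lam - theta[j]
            if abs(theta[j]) < tol * max(1.0, abs(lam)):
                break
        return lam, X[:, j]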

Successive linear approximations ct.

THEOREM
Let T(λ) be twice continuously differentiable, and let λ̂ be an eigenvalue of T(λ)x = 0 such that T′(λ̂) is nonsingular and 0 is an algebraically simple eigenvalue of T′(λ̂)^{-1} T(λ̂). Then the method of successive linear problems converges quadratically to λ̂.

Sketch of proof: For

Φ(x, θ, λ) := [ T(λ)x − θT′(λ)x ; ℓ^H x − 1 ]

it holds Φ(x̂, 0, λ̂) = 0, and by the implicit function theorem Φ(x, θ, λ) = 0 defines differentiable functions x : U(λ̂) → C^n and θ : U(λ̂) → C on a neighborhood U(λ̂) of λ̂ such that Φ(x(λ), θ(λ), λ) = 0.

θ′(λ̂) = 1 yields the quadratic convergence of the method λ_{k+1} = λ_k − θ(λ_k).

Dense nonlinear eigenvalue problems

Inverse iteration
Applying Newton's method to the nonlinear system

F(x, λ) = [ T(λ)x ; ℓ^H x − 1 ] = 0

(ℓ a suitable scaling vector) yields

[ T(λ_k)  T′(λ_k)x_k ; ℓ^H  0 ] [ x_{k+1} − x_k ; λ_{k+1} − λ_k ] = −[ T(λ_k)x_k ; ℓ^H x_k − 1 ].

Assuming that x_k is already scaled such that ℓ^H x_k = 1, the second equation reads ℓ^H x_{k+1} = ℓ^H x_k = 1, and the first one

x_{k+1} = −(λ_{k+1} − λ_k) T(λ_k)^{-1} T′(λ_k) x_k.

Multiplying by ℓ^H yields

λ_{k+1} = λ_k − ℓ^H x_k / (ℓ^H T(λ_k)^{-1} T′(λ_k) x_k).

Inverse iteration ct.

1: start with an approximation λ_1 ∈ D to an eigenvalue of T(λ)x = 0 and an approximation x_1 to an appropriate eigenvector
2: for k = 1, 2, . . . until convergence do
3: solve T(λ_k)x_{k+1} = T′(λ_k)x_k for x_{k+1}
4: set λ_{k+1} = λ_k − ℓ^H x_k / ℓ^H x_{k+1}
5: normalize x_{k+1} ← x_{k+1}/‖x_{k+1}‖
6: end for
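A minimal sketch of this algorithm, assuming callables T and Tprime (my names); the default scaling vector ℓ is an assumption on my part.

    import numpy as np
    from scipy.linalg import solve

    def nonlinear_inverse_iteration(T, Tprime, lam, x, ell=None,
                                    tol=1e-12, maxit=50):
        """Inverse iteration for T(lam) x = 0 (one factorization per step)."""
        x = x / np.linalg.norm(x)
        ell = x.copy() if ell is None else ell        # scaling vector
        for _ in range(maxit):
            x_new = solve(T(lam), Tprime(lam) @ x)    # step 3
            lam = lam - (ell.conj() @ x) / (ell.conj() @ x_new)   # step 4
            x = x_new / np.linalg.norm(x_new)         # step 5
            if np.linalg.norm(T(lam) @ x) < tol:
                break
        return lam, x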

THEOREM
Let λ̂ be an eigenvalue of T(·) such that μ = 0 is an algebraically simple eigenvalue of T′(λ̂)^{-1} T(λ̂)y = μy, and let x̂ be a corresponding eigenvector with ℓ^H x̂ = 1. Then the inverse iteration converges locally and quadratically to (x̂, λ̂).

Sketch of proof: One only has to show that the only solution of

F′(x̂, λ̂) [y; μ] = [ T(λ̂)  T′(λ̂)x̂ ; ℓ^H  0 ] [y; μ] = 0

is the trivial one.

Inverse iteration ct.
The normalization condition can be modified in each step of inverse iteration. It was suggested in Ruhe (1973) to use ℓ_k = T(λ_k)^H y_k for the normalization, where y_k is an approximation to a left eigenvector.

Then the update for λ becomes

λ_{k+1} = λ_k − (y_k^H T(λ_k) x_k) / (y_k^H T′(λ_k) x_k),

which is the Rayleigh functional for general nonlinear eigenproblems proposed in Lancaster (2002), and which can be interpreted as one Newton step for solving the equation f_k(λ) := y_k^H T(λ) x_k = 0.

For linear Hermitean eigenproblems this gives cubic convergence if λ_k is updated by the Rayleigh quotient. The same is true (cf. Rothe 1989) for symmetric nonlinear eigenproblems having a Rayleigh functional p if we replace statement 4 by λ_{k+1} = p(x_{k+1}).
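As a sketch, the two-sided update from this slide (assuming approximate right/left eigenvectors x and y and callables T, Tprime, which are my names):

    import numpy as np

    def rayleigh_functional_update(T, Tprime, lam, x, y):
        """One Newton step for f(lam) = y^H T(lam) x = 0, i.e. the
        generalized Rayleigh functional update of this slide."""
        return lam - (y.conj() @ (T(lam) @ x)) / (y.conj() @ (Tprime(lam) @ x))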

Inverse iteration ct.
Osborne (1978) considers Newton's method for the complex function β(λ) defined by

T(λ)u = β(λ)x,  v^H u = κ,

where κ is a given constant, and x and v are given vectors. This approach generalizes the method of Kublanovskaya, inverse iteration, and a method proposed in Osborne & Michelson (1964).

It was proved that the rate of convergence is quadratic, and that cubic convergence can be obtained if not only λ but also x and/or v are updated appropriately, thus unifying the results in Anselone & Rall (1968), Lancaster (2002), Kublanovskaya (1970), Osborne (1964), and Osborne & Michelson (1964).

Inverse iteration ct.
The disadvantage of inverse iteration with respect to efficiency is the large number of factorizations that are needed for each of the eigenvalues.

The obvious idea then is to use a version of a simplified Newton method, i.e. to use

x_{k+1} = T(σ)^{-1} T′(λ_k) x_k

for some shift σ which is kept fixed during the iteration. However, in general this method does not converge in the nonlinear case: the iteration converges to an eigenpair of a linear problem T(σ)x = γT′(λ̃)x, from which one cannot recover an eigenpair of the nonlinear problem T(λ)x = 0.

Dense nonlinear eigenvalue problems

Residual inverse iteration

If T(λ) is twice continuously differentiable,

x_k − x_{k+1} = x_k + (λ_{k+1} − λ_k) T(λ_k)^{-1} T′(λ_k) x_k
             = T(λ_k)^{-1} (T(λ_k) + (λ_{k+1} − λ_k) T′(λ_k)) x_k
             = T(λ_k)^{-1} T(λ_{k+1}) x_k + O((λ_{k+1} − λ_k)²).

Neglecting second order terms yields the update

x_{k+1} = x_k − T(λ_k)^{-1} T(λ_{k+1}) x_k,

and replacing λ_k by a fixed shift σ yields an update

x_{k+1} = x_k − T(σ)^{-1} T(λ_{k+1}) x_k

without misconvergence.

Dense nonlinear eigenvalue problems

Residual inverse iteration ct.
1: start with an approximation x_1 ∈ D to an eigenvector of T(λ)x = 0
2: for k = 1, 2, . . . until convergence do
3: solve ℓ^H T(σ)^{-1} T(λ_{k+1}) x_k = 0 for λ_{k+1}
   (or x_k^H T(λ_{k+1}) x_k = 0 if T(·) is Hermitean)
4: compute the residual r_k = T(λ_{k+1}) x_k
5: solve T(σ) d_k = r_k
6: set x_{k+1} = x_k − d_k, x_{k+1} = x_{k+1}/‖x_{k+1}‖
7: end for

THEOREM (Neumaier 1985)
Let T(λ) be twice continuously differentiable. Assume that λ̂ is a simple eigenvalue of T(λ)x = 0, and let x̂ be a corresponding eigenvector normalized by ‖x̂‖ = 1. Then the residual inverse iteration converges for all σ sufficiently close to λ̂, and it holds

‖x_{k+1} − x̂‖ / ‖x_k − x̂‖ = O(|σ − λ̂|), and |λ_{k+1} − λ̂| = O(‖x_k − x̂‖^t).

Here, t = 1 in the general case, and t = 2 if T(·) is Hermitean.
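A minimal sketch of the Hermitean variant of residual inverse iteration (real eigenvalues assumed), with a single LU factorization of T(σ) reused in every step; T is an assumed callable and the scalar equation in step 3 is solved by a secant-type Newton iteration.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve
    from scipy.optimize import newton

    def residual_inverse_iteration(T, sigma, x, tol=1e-10, maxit=100):
        """Residual inverse iteration; Hermitean variant of step 3:
        lam_{k+1} solves x^H T(lam) x = 0 (real lam)."""
        lu = lu_factor(T(sigma))          # factorize T(sigma) once
        lam = sigma
        x = x / np.linalg.norm(x)
        for _ in range(maxit):
            f = lambda mu: np.real(x.conj() @ (T(mu) @ x))
            lam = newton(f, lam)          # step 3: x^H T(lam) x = 0
            r = T(lam) @ x                # step 4: residual
            d = lu_solve(lu, r)           # step 5: d = T(sigma)^{-1} r
            x = x - d                     # step 6
            x = x / np.linalg.norm(x)
            if np.linalg.norm(r) < tol:
                break
        return lam, x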

Dense nonlinear eigenvalue problems

Safeguarded iteration
Let T(λ) ∈ C^{n×n}, λ ∈ J ⊂ R, be a family of Hermitean matrices such that the assumptions of the minmax characterization hold, and let p : D → J denote the corresponding Rayleigh functional.

The minimum in

λ_j = min_{dim V = j, V ∩ D ≠ ∅} max_{v ∈ V ∩ D, v ≠ 0} p(v)

is attained by the invariant subspace of T(λ_j) corresponding to the j largest eigenvalues, and the maximum by every eigenvector corresponding to the eigenvalue 0. This suggests

Safeguarded iteration
1: start with an approximation μ_1 to the ℓ-th eigenvalue of T(λ)x = 0
2: for k = 1, 2, . . . until convergence do
3: determine an eigenvector u corresponding to the ℓ-largest eigenvalue of T(μ_k)
4: evaluate μ_{k+1} = p(u)
5: end for
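A minimal sketch of the safeguarded iteration for a Hermitean family T(λ), λ real (T is an assumed callable); p(u) is evaluated as the root of the scalar equation u^H T(λ)u = 0.

    import numpy as np
    from scipy.linalg import eigh
    from scipy.optimize import newton

    def safeguarded_iteration(T, ell, mu, tol=1e-10, maxit=50):
        """Safeguarded iteration for the ell-th eigenvalue of T(lam)x = 0."""
        for _ in range(maxit):
            w, U = eigh(T(mu))            # ascending eigenvalues of T(mu)
            u = U[:, -ell]                # step 3: ell-largest eigenvalue's vector
            f = lambda lam: np.real(u.conj() @ (T(lam) @ u))
            mu_new = newton(f, mu)        # step 4: mu <- p(u)
            if abs(mu_new - mu) < tol * max(1.0, abs(mu_new)):
                return mu_new, u
            mu = mu_new
        return mu, u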

Dense nonlinear eigenvalue problems

Convergence of safeguarded iteration (V., Werner 1982)

If λ_1 ∈ J, then for ℓ = 1 the safeguarded iteration converges globally to λ_1.

If λ_ℓ ∈ J is a simple eigenvalue, then the safeguarded iteration converges locally and quadratically.

If T′(λ) is positive definite and u in step 3 is replaced by an eigenvector of T(μ_k)x = μT′(μ_k)x corresponding to the ℓ-th largest eigenvalue, then the convergence is even cubic.

A variant exists which is globally convergent also for higher eigenvalues.

Iterative projection methods

Iterative projection method
Require: initial basis V with V^H V = I; set m = 1
1: while m ≤ number of wanted eigenvalues do
2: compute an eigenpair (μ, y) of the projected problem V^H T(λ)Vy = 0
3: determine the Ritz vector u = Vy, ‖u‖ = 1, and the residual r = T(μ)u
4: if ‖r‖ < ε then
5: accept approximate eigenpair λ_m = μ, x_m = u; increase m ← m + 1
6: reduce search space V if necessary
7: choose approximation (λ_m, u) to the next eigenpair, and compute r = T(λ_m)u
8: end if
9: expand search space V = [V, v_new]
10: update projected problem
11: end while

Main tasks
— expand the search space
— choose an eigenpair of the projected problem (locking, purging)

Iterative projection methods

Expansion of subspace
Given a search space V ⊂ C^n, expand V by a direction with high approximation potential for the next wanted eigenvector.

Let θ be an eigenvalue of the projected problem V^H T(λ)Vy = 0 and x = Vy a corresponding Ritz vector. Then inverse iteration yields a suitable candidate

v := T(θ)^{-1} T′(θ)x.

BUT: in each step one has to solve a large linear system with a varying matrix.

Iterative projection methods

Arnoldi method
The residual inverse iteration requires to solve several linear systems with the same system matrix. If θ is an eigenvalue of the projected problem V^H T(λ)Vy = 0 and x = Vy is a corresponding Ritz vector, then expand V by the new direction

v = x − T(σ)^{-1} T(θ)x

for some fixed shift σ. In projection methods the new direction is orthonormalized against the previous ansatz vectors. Since the Ritz vector x is contained in span V, we expand V by v = T(σ)^{-1} T(θ)x.

For the linear problem T(λ) = λI − A this is exactly the Cayley transformation, and the corresponding iterative projection method is the shift-and-invert Arnoldi method. Therefore the resulting iterative projection method is called the nonlinear Arnoldi method, although no Krylov space is constructed and no Arnoldi recursion holds true.

Iterative projection methods

Nonlinear Arnoldi method
1: start with initial basis V, V^H V = I; set k = m = 1
2: determine preconditioner M ≈ T(σ)^{-1}, σ close to first wanted eigenvalue
3: while m ≤ number of wanted eigenvalues do
4: solve V^H T(μ)Vy = 0 for (μ, y) and set u = Vy, r_k = T(μ)u
5: if ‖r_k‖/‖u‖ < ε then
6: accept eigenpair λ_m = μ, x_m = u
7: if m == number of wanted eigenvalues then STOP end if
8: m = m + 1
9: if ‖r_k‖/‖r_{k−1}‖ > tol then
10: choose new pole σ, determine preconditioner M ≈ T(σ)^{-1}
11: end if
12: restart if necessary
13: choose approximations μ and u to the next eigenvalue and eigenvector
14: determine r = T(μ)u and set k = 0
15: end if
16: v = Mr, k = k + 1
17: v = v − VV^H v, ṽ = v/‖v‖, V = [V, ṽ], and reorthogonalize if necessary
18: update projected problem
19: end while
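A compact sketch of the main loop for a single eigenpair, under simplifying assumptions: the preconditioner is frozen as one LU factorization of T(σ), and solve_projected is a hypothetical dense inner solver for the projected problem (e.g. one of the dense methods above).

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def nonlinear_arnoldi(T, solve_projected, sigma, v0, eps=1e-8, maxit=100):
        """Nonlinear Arnoldi iteration for one eigenpair of T(lam)x = 0.
        solve_projected(TV) must return (mu, y) with TV(mu) y = 0, where
        TV(lam) = V^H T(lam) V."""
        n = v0.shape[0]
        V = (v0 / np.linalg.norm(v0)).reshape(n, 1)
        M = lu_factor(T(sigma))                  # preconditioner M ~ T(sigma)^{-1}
        for _ in range(maxit):
            TV = lambda lam: V.conj().T @ (T(lam) @ V)    # projected problem
            mu, y = solve_projected(TV)
            u = V @ y
            r = T(mu) @ u
            if np.linalg.norm(r) < eps * np.linalg.norm(u):
                break
            v = lu_solve(M, r)                   # line 16: v = M r
            v = v - V @ (V.conj().T @ v)         # line 17: orthogonalize
            V = np.hstack([V, (v / np.linalg.norm(v)).reshape(n, 1)])
        return mu, u / np.linalg.norm(u)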


Iterative projection methods

Comments on line 1
Here preinformation on eigenvectors can be introduced into the algorithm (e.g. approximate eigenvectors of contiguous problems in reanalysis).

If no information on eigenvectors is at hand, and if we are interested in eigenvalues close to the parameter σ ∈ D: choose an initial vector at random, execute a few Arnoldi steps for the linear eigenproblem T(σ)u = θu or T(σ)u = θT′(σ)u, and choose the eigenvector corresponding to the smallest eigenvalue in modulus or a small number of Schur vectors as initial basis of the search space.

Starting with a random vector without this preprocessing usually will yield a value μ in line 4 which is far away from σ and will avert convergence.


Iterative projection methods

Comments on line 4
Since the dimension of the projected problem is small, it can be solved by any dense solver like
— methods based on the characteristic function det T(λ) = 0
— inverse iteration
— residual inverse iteration
— successive linear problems

For conservative gyroscopic systems: determine Schur vectors of a (particular) linearization (Meerbergen (2001)).

For symmetric problems allowing a minmax characterization of their eigenvalues this property is inherited by the projected problems: therefore, the eigenvalues can be determined one after the other by safeguarded iteration (Betcke, V. (2004)).

Comments on line 4 ct.
For general problems, if one wants the eigenvalues in the vicinity of a fixed σ_0, determine the eigenvalues one after the other by successive linear approximations: if j − 1 eigenvalues have been found and λ is an approximation to the next eigenvalue, then iterate (V. (2003)):

while abs(incr) > tol do
    [v, θ] = eig(T(λ), T′(λ));                 % solve T(λ)x = θ T′(λ)x
    [μ, i] = sort(abs(diag(σ_0 + θ − λ)));     % sort approximations λ − θ_ii by distance to σ_0
    incr = θ(i(j), i(j));                      % take the j-th closest one
    vv = v(:, i(j));
    λ = λ − incr;
end while

Comments on line 4 ct.
For a (small) perturbation T(λ)x = 0 of a linear problem T̃(λ)x = Kx − λMx = 0 (for instance, for a finite element model of a vibrating structure with nonproportional damping), often the eigenmodes of both problems do not differ very much, whereas the eigenvalues do.

Hence, it is at hand to determine an eigenvector y of the projected linear problem V^H(K − λM)Vy = 0 corresponding to a particular eigenvalue (e.g. the m-smallest), to determine an approximate eigenvalue λ̃ of the nonlinear problem from one of the complex equations

ℓ^H V^H T(σ)^{-1} T(λ)Vy = 0 or y^H V^H T(λ)Vy = 0,

and to correct (λ̃, y) by (residual) inverse iteration.


Iterative projection methods

Comments on lines 9 – 11
Residual inverse iteration converges (at least) linearly, where the contraction rate satisfies O(|σ − λ̂|). Therefore, the preconditioner is updated if the convergence (measured by the reduction of the residual norm in the final step before convergence) has become too slow.


Iterative projection methods

Comments on line 12
As the subspaces expand in the course of the algorithm, the increasing storage or the computational cost for solving the projected eigenvalue problems may make it necessary to restart the algorithm and purge some of the basis vectors.

Since a restart destroys information on the eigenvectors, and particularly on the one the method is just aiming at, we restart only if an eigenvector has just converged.

Reasonable search spaces after a restart are
— the space spanned by the already converged eigenvectors (or a space slightly larger)
— an invariant subspace of T(σ) or an eigenspace of T(σ)y = μT′(σ)y corresponding to eigenvalues small in modulus.

Iterative projection methods

Restarts

[Figure: CPU time (time [s], up to about 500 s) of the nonlinear eigensolver versus iteration number (0–120), without restarts.]

[Figure: the same run with restarts and LU updates; the total CPU time stays below about 250 s (labels: restart, LU update, CPU time, nonlin. solver).]


Iterative projection methods

Comment on line 13
If the projected problem in line 4 is solved by linearization, by the method of successive linear problems, or by safeguarded iteration, which solve a linear eigenproblem in each iteration step, then at the same time one obtains approximations to further eigenpairs of the nonlinear problem, which can be exploited to get a good initial approximation to the next wanted eigenpair.


Iterative projection methods

Comment on line 18
Often the family of matrices T(λ) has the form

T(λ) = Σ_{j=1}^{p} f_j(λ) C_j

with differentiable complex functions f_j and fixed matrices C_j ∈ C^{n×n}. Then the projected problem has the form

T_{V_k}(λ) = Σ_{j=1}^{p} f_j(λ) V_k^H C_j V_k =: Σ_{j=1}^{p} f_j(λ) C_{j,k},

and when the search space is expanded by a new vector v, the matrices C_{j,k} can be updated according to

C_{j,k+1} = [ C_{j,k}  V_k^H C_j v ; v^H C_j V_k  v^H C_j v ].
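A small sketch of this bordered update in NumPy (names are mine): given the list Cs of the matrices C_j, the current basis Vk, the current blocks C_{j,k}, and a new unit vector v.

    import numpy as np

    def update_projected(Cjk_list, Cs, Vk, v):
        """Expand each projected matrix C_{j,k} = V_k^H C_j V_k by one new
        basis vector v, reusing the already computed blocks."""
        new_list = []
        for Cjk, Cj in zip(Cjk_list, Cs):
            Cv = Cj @ v
            top = np.hstack([Cjk, (Vk.conj().T @ Cv).reshape(-1, 1)])
            bottom = np.hstack([(v.conj() @ (Cj @ Vk)).reshape(1, -1),
                                np.array([[v.conj() @ Cv]])])
            new_list.append(np.vstack([top, bottom]))
        return new_list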

Iterative projection methods

Locking of converged eigenvectors
A major problem with iterative projection methods for nonlinear eigenproblems when approximating more than one eigenvalue is to inhibit the method from converging to an eigenpair which was already detected previously.

— linear problems: (incomplete) Schur decomposition
— quadratic problems: Meerbergen (2001), based on linearization and the Schur form of the linearized problem (lock 2 vectors in each step)
— cubic problems: Hwang, Lin, Liu, Wang (2005), based on linearization and knowledge of all eigenvalues of the linearized problem
— the approach of Hwang et al. can be generalized directly to general problems if all eigenvalues of the projected problems are determined
— for Hermitean problems one can often take advantage of a variational characterization of the eigenvalues

Iterative projection methods

Alternative expansion of the subspace
Given a subspace V ⊂ C^n, expand V by a direction with high approximation potential for the next wanted eigenvector.

Let θ be an eigenvalue of the projected problem V^H T(λ)Vy = 0 and x = Vy a corresponding Ritz vector. Then inverse iteration yields the suitable candidate

v := T(θ)^{-1} T′(θ)x.

BUT: in each step one has to solve a large linear system with a varying matrix, and in a truly large problem the vector v will not be accessible; only an inexact solution ṽ := v + e of T(θ)v = T′(θ)x will be, and the next iterate will be a solution of the projection of T(λ)x = 0 onto the expanded space Ṽ := span{V, ṽ}.

Alternative expansion of the search space ct.
We assume that x is already a good approximation to an eigenvector of T(·). Then v will be an even better approximation, and therefore the eigenvector we are looking for will be very close to the plane E := span{x, v}.

We therefore neglect the influence of the orthogonal complement of x in V on the next iterate and discuss the nearness of the planes E and Ẽ := span{x, ṽ}.

If the angle between these two planes is small, then the projection of T(λ) onto Ṽ should be similar to the one onto span{V, v}, and the approximation properties of inverse iteration should be maintained. If this angle can become large, then it is not surprising that the convergence properties of inverse iteration are not reflected by the projection method.

Iterative projection methods

Theorem
Let φ_0 = arccos(x^T v) denote the angle between x and v, and denote the relative error of ṽ by ε := ‖e‖. Then the maximal possible acute angle between the planes E and Ẽ is

β(ε) = arccos √(1 − ε² / sin² φ_0) if ε ≤ |sin φ_0|,  β(ε) = π/2 if ε ≥ |sin φ_0|.

“Proof”

[Figure: graphical proof sketch; no text content recoverable.]
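Since the proof slide is purely graphical, here is a short reconstruction of the geometric argument (my own, assuming ‖x‖ = ‖v‖ = 1): the planes E and Ẽ share the line span{x}, so their angle is the angle between the components of v and ṽ = v + e orthogonal to x; the worst case is e orthogonal to E.

\[
w := (I - xx^H)v, \qquad \|w\| = \sin\varphi_0,
\qquad
\sin\beta_{\max} = \frac{\varepsilon}{\|w\|} = \frac{\varepsilon}{\sin\varphi_0}
\quad (\varepsilon \le |\sin\varphi_0|),
\]
\[
\cos\beta_{\max} = \sqrt{1 - \varepsilon^2/\sin^2\varphi_0},
\]

which is exactly the expression for β(ε) in the theorem; for ε ≥ |sin φ_0| the perturbed plane can be tilted by a full right angle.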

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 43 / 109 . Iterative projection methods Expansion by inexact inverse iteration Obviously for every α ∈ R. α 6= 0 the plane E is also spanned by x and x + αv .

α 6= 0 the plane E is also spanned by x and x + αv . TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 43 / 109 . Iterative projection methods Expansion by inexact inverse iteration Obviously for every α ∈ R. ε) = arccos 1 − ε2 / sin2 φ(α) if ε ≤ | sin φ(α)| π 2 if ε ≥ | sin φ(α)| where φ(α) denotes the angle between x and x + αv . If Ẽ(α) is the plane which is spanned by x and a perturbed realization x + αv + e of x + αv then by the same arguments as in the proof of the Theorem the maximum angle between E and Ẽ(α) is ( q γ(α.

Iterative projection methods Expansion by inexact inverse iteration Obviously for every α ∈ R. α 6= 0 the plane E is also spanned by x and x + αv . ε) = arccos 1 − ε2 / sin2 φ(α) if ε ≤ | sin φ(α)| π 2 if ε ≥ | sin φ(α)| where φ(α) denotes the angle between x and x + αv . if α is chosen such that x and x + αv are orthogonal TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 43 / 109 . If Ẽ(α) is the plane which is spanned by x and a perturbed realization x + αv + e of x + αv then by the same arguments as in the proof of the Theorem the maximum angle between E and Ẽ(α) is ( q γ(α. Since the mapping q φ 7→ arccos 1 − ε2 / sin2 φ decreases monotonically the expansion of the search space by an inexact realization of t := x + αv is most robust with respect to small perturbations.

Iterative projection methods

Expansion by inexact inverse iteration ct.

t := x + αv is orthogonal to x iff

    t = x − (x^H x / x^H T(θ)^{-1}T'(θ)x) T(θ)^{-1}T'(θ)x,   (∗)

which yields a maximum acute angle between E and Ẽ(α) of

    γ(α, ε) = arccos √(1 − ε²)   if ε ≤ 1,
    γ(α, ε) = π/2                if ε ≥ 1.
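In matrix terms (∗) is only two statements; a minimal dense sketch (assumed names, with a dense solve standing in for a sparse factorization):

    import numpy as np

    def orthogonal_expansion(T_theta, Tp_theta, x):
        """Expansion t = x + alpha*v of (*) with t orthogonal to x."""
        v = np.linalg.solve(T_theta, Tp_theta @ x)   # inverse iteration direction T(th)^{-1}T'(th)x
        t = x - (x @ x) / (x @ v) * v                # alpha = -x^H x / (x^H v)
        return t                                     # x^H t = 0 by construction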

Iterative projection methods

Expansion by inexact inverse iteration ct. [figures]

Iterative projection methods

Jacobi–Davidson method

The expansion

    t = x − (x^H x / x^H T(θ)^{-1}T'(θ)x) T(θ)^{-1}T'(θ)x   (∗)

of the current search space V is the solution of the equation

    (I − T'(θ)xx^H / x^H T'(θ)x) T(θ) (I − xx^H / x^H x) t = −r,   t ⊥ x.

This is the so-called correction equation of the Jacobi–Davidson method, which was derived in T. Betcke & Voss (2004), generalizing the approach of Sleijpen, Booten, Fokkema and van der Vorst (1996) for linear and polynomial eigenvalue problems.

Hence, the Jacobi–Davidson method is the most robust realization of an expansion of a search space such that the direction of inverse iteration is contained in the expanded space, in the sense that it is least sensitive to inexact solves of linear systems T(θ)v = T'(θ)x.

Iterative projection methods

Nonlinear Jacobi–Davidson

1: Start with an initial basis V, V^H V = I; m = 1
2: determine preconditioner K ≈ T(σ)^{-1}, σ close to first wanted eigenvalue
3: while m ≤ number of wanted eigenvalues do
4:   compute approximation to m-th wanted eigenvalue λ_m and corresponding
     eigenvector x_m of the projected problem V^H T(λ)Vx = 0
5:   determine Ritz vector u = Vx_m and residual r = T(λ_m)u
6:   if ‖r‖/‖u‖ < ε then
7:     accept approximate eigenpair (λ_m, u); increase m = m + 1
8:     reduce search space V if necessary
9:     choose approximation (λ_m, u) to next eigenpair
10:    compute residual r = T(λ_m)u
     end if
11:  solve approximately (by preconditioned GMRES or BiCGStab, e.g.)
       (I − T'(λ_m)uu^H / u^H T'(λ_m)u) T(λ_m) (I − uu^H / u^H u) t = −r,   t ⊥ u
12:  orthogonalize t = t − VV^H t, v = t/‖t‖, and expand subspace V = [V, v]
13:  determine new preconditioner K ≈ T(λ_m)^{-1} if necessary
14:  update projected problem
15: end while
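The following self-contained sketch mimics the loop above for a small rational test problem T(λ) = −K + λM + (λ/(1−λ))B with random symmetric positive definite K, M, B. Everything here (the safeguarded inner solver, the dense stand-in for preconditioned GMRES, all names) is an illustrative assumption, not the lecture's implementation.

    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(1)
    n = 200

    def rand_spd(n):
        A = rng.standard_normal((n, n))
        return A @ A.T / n + np.eye(n)

    K, M, B = rand_spd(n), rand_spd(n), rand_spd(n)
    T  = lambda lam: -K + lam * M + lam / (1.0 - lam) * B
    Tp = lambda lam: M + 1.0 / (1.0 - lam)**2 * B         # T'(lambda)

    def solve_projected(V, lam):
        # safeguarded iteration for the smallest eigenvalue of V^T T(.) V y = 0 in (0,1):
        # y is the eigenvector of the largest eigenvalue of the projected matrix, and
        # the new lambda is the root of the increasing function y^T V^T T(.) V y
        for _ in range(100):
            _, U = np.linalg.eigh(V.T @ T(lam) @ V)
            y = U[:, -1]
            g = lambda t: y @ (V.T @ T(t) @ V) @ y
            new = brentq(g, 1e-12, 1.0 - 1e-9)
            if abs(new - lam) < 1e-13:
                break
            lam = new
        return lam, V @ y

    V = np.linalg.qr(rng.standard_normal((n, 4)))[0]      # initial orthonormal basis (line 1)
    lam = 0.1
    for it in range(30):
        lam, u = solve_projected(V, lam)                  # lines 4-5
        u /= np.linalg.norm(u)
        r = T(lam) @ u
        if np.linalg.norm(r) < 1e-10:                     # line 6
            break
        p = Tp(lam) @ u                                   # correction equation (line 11),
        Pl = np.eye(n) - np.outer(p, u) / (u @ p)         # solved by a dense solve in place
        A = Pl @ T(lam) @ (np.eye(n) - np.outer(u, u)) + np.outer(u, u)   # of GMRES
        t = np.linalg.solve(A, -Pl @ r)                   # solution is orthogonal to u
        t -= V @ (V.T @ t)                                # line 12
        V = np.column_stack([V, t / np.linalg.norm(t)])
    print(it, lam)

The rank-one term np.outer(u, u) only removes the singularity of the projected operator on span{u}; since u^H Pl = 0, the computed t is automatically orthogonal to u.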


Iterative projection methods

Comment on step 11

The correction equation

    (I − T'(θ)xx^H / x^H T'(θ)x) T(θ) (I − xx^H) t = −r,   t ⊥ x

is solved approximately by a few steps of an iterative solver (GMRES or BiCGStab).

Similarly as for the linear eigenproblem, preconditioning by

    K̃ := (I − px^H / x^H p) K (I − xx^H / x^H x),

where K ≈ T(σ) is a preconditioner of T(σ) and p := T'(θ)x, can be implemented easily.

Taking the projectors in the preconditioner into account, i.e. using K̃ instead of K, raises the cost of the preconditioned Krylov solver only slightly: to initialize one has to solve the linear system K p̃ = p and to determine the scalar product α := x^H p̃ = x^H K^{-1} p. These computations have to be executed just once.
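A sketch of how K̃ can be realized as a linear operator: the constructor performs the one-time solve p̃ = K^{-1}p and the one-time scalar product α = x^H p̃; afterwards each application costs a single solve with K. The SciPy wrapper and all names are our assumptions.

    import numpy as np
    from scipy.sparse.linalg import splu, LinearOperator   # splu(K_csc) is one way to get Kfac

    def projected_preconditioner(Kfac, x, p):
        """Apply the inverse of (I - p x^H/(x^H p)) K (I - x x^H/(x^H x)) to x-orthogonal vectors.

        Kfac is any object with a .solve method, e.g. a sparse LU factorization of K."""
        ptilde = Kfac.solve(p)                     # K^{-1} p: one extra solve, done once
        alpha = np.vdot(x, ptilde)                 # x^H K^{-1} p, done once
        def apply(y):
            z = Kfac.solve(y)                      # one solve with K per application
            return z - ptilde * (np.vdot(x, z) / alpha)   # result satisfies x^H z = 0
        n = x.shape[0]
        return LinearOperator((n, n), matvec=apply, dtype=ptilde.dtype)

A usage sketch would pass this operator as the M= argument of scipy.sparse.linalg.gmres applied to the projected correction equation.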

Iterative projection methods

Comment on step 11

Hwang, Lin, Wang, Wang (2004) considered a non-iterative solution (which was already contained in the paper of Sleijpen et al. (1996), but was not considered in subsequent papers): replace T(θ) in the correction equation by a preconditioner M ≈ T(θ) and solve

    (I − T'(θ)uu^H / u^H T'(θ)u) M (I − uu^H / u^H u) t = −r,   t ⊥ u,

by computing

    t = M^{-1}r − α M^{-1}T'(θ)u   with   α := (u^H M^{-1} r) / (u^H M^{-1} T'(θ)u),

where M is a preconditioner of T(θ); the choice of α enforces t ⊥ u.

The method combines the preconditioned Arnoldi method (M^{-1}r) and simplified inverse iteration (M^{-1}T'(θ)u).
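In code the non-iterative solve is two preconditioner applications and one scalar; a minimal sketch with assumed names (as an expansion direction the overall sign of t is irrelevant):

    import numpy as np

    def hlww_direction(Msolve, Tp_u, r, u):
        """One-shot correction direction; Msolve(y) applies M^{-1}, Tp_u = T'(theta) u."""
        a = Msolve(r)                              # preconditioned residual  M^{-1} r
        b = Msolve(Tp_u)                           # simplified inverse iteration  M^{-1} T'(theta) u
        alpha = np.vdot(u, a) / np.vdot(u, b)
        return a - alpha * b                       # orthogonal to u by construction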


Iterative projection methods

Comment on step 13

We preconditioned by an LU factorization of T(σ) for a shift σ close to the wanted eigenvalue. Any other preconditioner is fine, e.g. an incomplete LU factorization, but the speed of convergence is reduced considerably with incomplete LU.

In the k-th outer iteration step for the m-th eigenvalue we iterated by GMRES until the residual was reduced by 2^{-k}, but we allowed at most 10 GMRES steps. If GMRES needed more than 5 iteration steps to satisfy these accuracy requirements we updated the preconditioner, but we allowed at most one update for every eigenvalue.

Iterative projection methods

Rational Krylov method

Ruhe (2000, 2006) generalized the rational Krylov approach for linear eigenproblems to sparse nonlinear eigenvalue problems by nesting the linearization of T(λ)x = 0 by Regula falsi and the solution of the resulting linear eigenproblem by Arnoldi's method, where the Regula falsi iteration and the Arnoldi recursion are knit together.

Similarly as in the rational Krylov process, a sequence V_k of subspaces of C^n is constructed, and at the same time Hessenberg matrices H_k are updated which approximate the projection of T(σ)^{-1}T(λ_k) to V_k. Here σ denotes a shift and λ_k an approximation to the wanted eigenvalue of T(·).

Then a Ritz vector of H_k corresponding to an eigenvalue of small modulus approximates an eigenvector of the nonlinear problem, from which a (hopefully) improved eigenvalue approximation of T(λ)x = 0 is obtained.

Hence, in this approach the two numerical subtasks, reducing the large dimension to a much smaller one and solving a nonlinear eigenproblem, which are solved separately in the Arnoldi and Jacobi–Davidson methods, are attacked simultaneously.

Iterative projection methods

Rational Krylov method ct.

Linearizing the nonlinear family T(λ) by Lagrange interpolation between two points µ and σ one gets

    T(λ) = ((λ − µ)/(σ − µ)) T(σ) + ((λ − σ)/(µ − σ)) T(µ) + higher order terms.   (1)

Keeping σ fixed for several steps, iterating on µ, neglecting the remainder in the Lagrange interpolation, and multiplying by T(σ)^{-1} from the left one obtains

    T(σ)^{-1}T(λ_{j−1})w = θw   with   θ = (λ_j − λ_{j−1})/(λ_j − σ),   (2)

predicting a singularity at

    λ_j = λ_{j−1} + (θ/(1 − θ))(λ_{j−1} − σ).   (3)

If the dimension n is small, then this linear eigenproblem can be used to approximate an eigenvalue of the nonlinear problem, and choosing the smallest eigenvalue of (2) in modulus for every j one can expect convergence to an eigenvalue close to the initial approximation λ_1.
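A toy realization of the update (2)-(3) for dense matrices (the function names are assumptions; for a linear family a single step already locates the eigenvalue):

    import numpy as np

    def regula_falsi_eigenvalue(T, sigma, lam, steps=30, tol=1e-12):
        """Update lam via the smallest-in-modulus eigenvalue theta of T(sigma)^{-1} T(lam)."""
        Tsig_inv = np.linalg.inv(T(sigma))
        for _ in range(steps):
            theta = min(np.linalg.eigvals(Tsig_inv @ T(lam)), key=abs)
            step = (theta / (1.0 - theta)) * (lam - sigma)    # formula (3)
            lam += step.real                                  # real problems assumed
            if abs(step) < tol * abs(lam - sigma):
                break
        return lam

As T one could pass, e.g., the rational test problem used in the Jacobi-Davidson sketch earlier.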

Iterative projection methods

Rational Krylov method ct.

For large and sparse matrices Ruhe suggested to combine the linearization (1) with a linear Arnoldi process. Assume that the method has performed j steps, yielding approximations λ_1, ..., λ_j to an eigenvalue, orthonormal vectors v_1, ..., v_j, and an upper Hessenberg matrix H_{j,j−1} ∈ C^{j×(j−1)} such that the Arnoldi recursion

    T(σ)^{-1}T(λ_{j−1})V_{j−1} = V_j H_{j,j−1}   (4)

is fulfilled (at least approximately), where V_j = [v_1, ..., v_j].

Updating the matrix H_{j,j−1} according to the linear theory yields

    H̃_{j+1,j} = [ H_{j,j−1}   k_j  ]
                [     0      ‖r⊥‖ ]   (5)

where k_j = V_j^H r_j, r_j = T(σ)^{-1}T(λ_j)v_j, r⊥ = r_j − V_j V_j^H r_j, and v_{j+1} = r⊥/‖r⊥‖, which due to the nonlinearity of T(·) violates the next Arnoldi relation T(σ)^{-1}T(λ_j)V_j = V_{j+1}H̃_{j+1,j}.

Iterative projection methods

Rational Krylov method ct.

To satisfy it at least approximately, Ruhe took advantage of Lagrangean interpolation

    A(λ_j) ≈ ((λ_j − σ)/(λ_{j−1} − σ)) A(λ_{j−1}) − ((λ_j − λ_{j−1})/(λ_{j−1} − σ)) I
           = (1/(1 − θ)) A(λ_{j−1}) − (θ/(1 − θ)) I,

where A(λ) := T(σ)^{-1}T(λ), and updated H according to

    H_{j+1,j} = (1/(1 − θ)) H̃_{j+1,j} − (θ/(1 − θ)) I_{j+1,j},   (6)

arriving at a first version of the rational Krylov method:

1: Start with initial vector v_1 with ‖v_1‖ = 1, and initial λ_1 and σ
2: r = T(σ)^{-1}T(λ_1)v_1
3: for j = 1, 2, ... until convergence do
4:   orthogonalize h_j = V^H r, r⊥ = r − Vh_j, h_{j+1,j} = ‖r⊥‖
5:   θ = eigenvalue of H_{j,j} of smallest modulus, s corresponding eigenvector
6:   λ_{j+1} = λ_j + (θ/(1 − θ))(λ_j − σ)
7:   H_{j+1,j} = (1/(1 − θ)) H̃_{j+1,j} − (θ/(1 − θ)) I_{j+1,j}
8:   v_{j+1} = r⊥/‖r⊥‖
9:   r = T(σ)^{-1}T(λ_{j+1})v_{j+1}
10: end for

Iterative projection methods

Rational Krylov method ct.

Since the method turned out to be inefficient, Ruhe suggested to modify λ, H and s in an inner iteration until the residual r = T(σ)^{-1}T(λ)V_j s is enforced to be orthogonal to V_j, and to expand the search space only after the inner iteration has converged.

With k_j = V_j^H T(σ)^{-1}T(λ)V_j s = V_j^H r and r⊥ = r − V_j k_j we have approximately

    T(σ)^{-1}T(λ)V_j [ I_{j−1}  s̃  ]  =  V_j [H_{j,j−1}, k_j] + r⊥ e_j^T,
                     [    0    s_j ]

where s̃ is the leading j − 1 vector of s. Multiplying by the inverse of the matrix in brackets from the right and by V_j^H from the left (which annihilates r⊥) one gets the new Hessenberg matrix

    Ĥ_{j,j} = [H_{j,j−1}, k_j] [ I_{j−1}  −s_j^{-1} s̃ ]  =  [H_{j,j−1}, −s_j^{-1} H_{j,j−1} s̃ + s_j^{-1} k_j].
                               [    0        s_j^{-1} ]

If H_{j,j} has already been updated according to step 7:, then H_{j,j}s = 0, i.e. H_{j,j−1}s̃ + s_j h_j = 0 for the last column h_j of H_{j,j}; hence the last column of H_{j,j} has to be replaced by h_j + s_j^{-1} k_j.

Rational Krylov algorithm

1: start with initial vector V = [v_1] with ‖v_1‖ = 1, initial λ and σ; set j = 1
2: set h_j = 0_j, s = e_j, x = v_j
3: compute r = T(σ)^{-1}T(λ)x and k_j = V_j^H r
4: while ‖k_j‖ > ResTol do
5:   orthogonalize r = r − V_j k_j
6:   set h_j = h_j + k_j s_j^{-1}
7:   θ = eigenvalue of H_{j,j} of smallest modulus, s corresponding eigenvector
8:   x = V_j s
9:   update λ = λ + (θ/(1 − θ))(λ − σ)
10:  update H_{j,j} = (1/(1 − θ))H_{j,j} − (θ/(1 − θ))I
11:  compute r = T(σ)^{-1}T(λ)x and k_j = V_j^H r
12: end while
13: compute h_{j+1,j} = ‖r‖
14: if |h_{j+1,j} s_j| > EigTol then
15:   v_{j+1} = r/h_{j+1,j}; j = j + 1; GOTO 2:
16: end if
17: accept eigenvalue λ_i = λ and eigenvector x_i = x
18: if more eigenvalues are wanted, choose next θ and s, and GOTO 8:

Iterative projection methods

Rational Krylov method ct.

Ruhe motivated the inner iteration and the requirement to make sure that the residual is orthogonal to the search space only by analogy to the linear case, where it is satisfied automatically, not being aware that the inner iteration is nothing else but a solver of the projected problem

    V_j^H T(σ)^{-1}T(λ)V_j s = 0.

Hence, although motivated in a completely different way, the rational Krylov method is an iterative projection method where the nonlinear eigenproblem T(σ)^{-1}T(λ)x = 0 is projected to a search space V, and V is expanded by (the orthogonal complement of) the residual r = T(σ)^{-1}T(λ)Vs of the Ritz pair (with respect to V).

1: start with initial vector V = [v_1] with ‖v_1‖ = 1, initial λ and σ
2: for j = 1, 2, ... until convergence do
3:   solve projected eigenproblem V^H T(σ)^{-1}T(λ)Vs = 0 for (λ, s) by inner iteration
4:   compute Ritz vector x = Vs and residual r = T(σ)^{-1}T(λ)x
5:   orthogonalize r = r − VV^H r
6:   expand search space V = [V, r/‖r‖]
7: end for
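A dense sketch of this projection view; the inner loop is exactly the θ-update of steps 5: to 10: applied to the projected matrix, and the test problem is assumed as in the earlier sketches (real symmetric data):

    import numpy as np

    def nonlinear_rational_krylov(T, sigma, lam, v1, outer=25, tol=1e-10):
        Tsig_inv = np.linalg.inv(T(sigma))
        A = lambda mu: Tsig_inv @ T(mu)                  # A(lam) = T(sigma)^{-1} T(lam)
        V = (v1 / np.linalg.norm(v1))[:, None]
        for _ in range(outer):
            for _ in range(50):                          # inner iteration: V^H A(lam) V s = 0
                evals, S = np.linalg.eig(V.T @ A(lam) @ V)
                k = np.argmin(np.abs(evals))
                theta, s = evals[k], S[:, k].real
                if abs(theta) < 1e-14:
                    break
                lam += (theta / (1.0 - theta)).real * (lam - sigma)
            x = V @ s
            r = A(lam) @ x                               # residual of the Ritz pair
            r_perp = r - V @ (V.T @ r)                   # orthogonal complement w.r.t. V
            if np.linalg.norm(r_perp) < tol:
                break
            V = np.column_stack([V, r_perp / np.linalg.norm(r_perp)])
        return lam, x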

Iterative projection methods

Observations

The inner iteration in step 3: can be replaced by any dense solver for nonlinear eigenproblems.

Numerical examples demonstrate (cf. Jarlebring, V. 2005) that the method can be accelerated considerably this way. But on the other hand, the dense solvers need the explicit form of the projected problem, whereas the inner iteration of Ruhe only needs a procedure that yields the vector T(σ)^{-1}T(λ)x for a given x.

A disadvantage of the rational Krylov method is that symmetry properties which the original problem may have are destroyed if the projected problem V^H T(σ)^{-1}T(λ)Vs = 0 is considered instead of V_j^H T(λ)V_j s = 0 as in the Arnoldi method or the Jacobi–Davidson algorithm.

AMLS for nonlinear eigenproblems

AMLS for nonlinear eigenproblems

AMLS for linear eigenproblems is a one-shot projection method which constructs a suitable subspace by condensation and mode truncation on several levels. This suggests the following approach for the nonlinear problem T(λ)x = 0:

Identify an (essential) linear part, T(λ) := −K + λM + R(λ), where K ∈ C^{n×n} and M ∈ C^{n×n} are Hermitian and positive definite matrices, and the remainder R(λ) := T(λ) + K − λM is a perturbation of the linear eigenproblem Kx = λMx which is small in the eigenparameter set of interest. Then

- apply AMLS to the linear pencil (K, M),
- project T(λ)x = 0 to a search space suggested by the first step,
- solve the projected nonlinear eigenproblem.

AMLS for nonlinear eigenproblems

AMLS for nonlinear eigenproblems ct.

Once the multi-level substructuring transformation of the linear pencil (K, M) has been accomplished with a given cut-off frequency, we obtain a matrix Φ_AMLS of substructure modes and a projected eigenproblem K̄y = λM̄y of much smaller dimension, where K̄ = Φ_AMLS^H K Φ_AMLS and M̄ = Φ_AMLS^H M Φ_AMLS.

This information can be used in two ways to solve the nonlinear eigenvalue problem approximately:

(1) Project the nonlinear eigenproblem to the subspace of C^n spanned by the substructure modes which were kept in the AMLS reduction, i.e.

    S(λ)y := Φ_AMLS^H T(λ) Φ_AMLS y = −K̄y + λM̄y + Φ_AMLS^H R(λ) Φ_AMLS y = 0.

In particular this projection can be performed easily if the remainder R(λ) has the form

    R(λ) = Σ_{j=1}^{p} f_j(λ) C_j,

where the f_j(λ) are given complex functions and the C_j ∈ C^{n×n} are given matrices.
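For a remainder of this form the projection is assembled once and the λ-dependence stays in the scalar functions; a minimal sketch with assumed names:

    import numpy as np

    def project_rational(Phi, K, M, Cs, fs):
        """Projected S(lam) for T(lam) = -K + lam*M + sum_j f_j(lam) C_j."""
        Kr = Phi.T @ K @ Phi                       # projected once ...
        Mr = Phi.T @ M @ Phi
        Crs = [Phi.T @ C @ Phi for C in Cs]        # ... and reused for every lam
        def S(lam):
            return -Kr + lam * Mr + sum(f(lam) * Cr for f, Cr in zip(fs, Crs))
        return S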

AMLS for nonlinear eigenproblems

AMLS for nonlinear eigenproblems ct.

(2) Determine Ritz pairs (λ_j, x_j) := (λ_j, Φ_AMLS y_j), j = 1, ..., m, of the linear problem Kx = λMx corresponding to eigenvalues in the wanted region, and project the nonlinear problem to the subspace spanned by these Ritz vectors. Thus we get

    S(λ)z := X^H T(λ)X z = −Λz + λz + X^H R(λ)X z = 0,

where Λ = diag{λ_1, ..., λ_m} and X = (x_1, ..., x_m).

This projected problem is equivalent to the projection of the AMLS problem of (1) to the space spanned by the eigenvectors y_1, ..., y_m of K̄y = λM̄y corresponding to λ_1, ..., λ_m. Hence, we can expect that the first approach will yield better approximations. Examples demonstrate, however, that the loss of accuracy is often negligible.

AMLS for nonlinear eigenproblems

AMLS for nonlinear eigenproblems ct.

In either case we arrive at a projected nonlinear eigenvalue problem of much smaller dimension which preserves the structure of the original problem.

Depending on the dimension it can be solved by
- a dense solver (based on the characteristic function det S(λ) = 0, inverse iteration, residual inverse iteration, successive linear problems, e.g.),
- a dense method taking advantage of symmetry structure (safeguarded iteration, e.g.),
- iterative projection methods (Arnoldi, Jacobi–Davidson, structure preserving linearization, e.g.).

Numerical experiments

Fluid–solid structure

Consider a rational eigenvalue problem governing vibrations of a fluid–structure problem (elliptic cavity containing 9 structures)

    T(λ)x := −Kx + λMx + (λ/(1 − λ))Bx = 0   (1)

where K, M and B are symmetric matrices of dimension 36040, K and B are positive semidefinite, and M is positive definite.

Problem (1) has 28 eigenvalues 0 = λ_1 ≤ λ_2 ≤ ··· ≤ λ_28 < 1 in [0, 1) and a large number of eigenvalues 1 < λ̃_11 ≤ λ̃_12 ≤ ··· in (1, ∞), 20 of which are contained in the interval (1, 3).

Notice that the linear eigenvalue problem Kx = λMx has only 12 eigenvalues in (0, 1). Thus, the rational eigenvalue problem (1) is not just a small perturbation of the linear problem which is obtained by neglecting the rational term.


Numerical experiments

Theorem (Mazurenko, V. 2006)

Let n_k be the number of eigenvalues λ_j of the reduced problem

    Find λ ∈ R and x ∈ H_k := {x ∈ H : B_k x = 0} such that
    (K + Σ_{j=1}^{k−1} (σ_k/(σ_k − σ_j)) B_j) x = λ (M + Σ_{j=k+1}^{p} (1/(σ_j − σ_k)) B_j) x

satisfying λ_j ≤ σ_k, and let r_k be the codimension of H_k. Then the rational eigenproblem

    Kx = λ (M + Σ_{j=1}^{p} (1/(σ_j − λ)) B_j) x

has n_{k+1} + r_{k+1} − n_k eigenvalues in (σ_k, σ_{k+1}].

Numerical experiments

Sketch of proof

For µ ∈ (σ_k, σ_{k+1}) consider the linear eigenproblem

    (K + Σ_{j=1}^{k} (µ/(µ − σ_j)) B_j) x = λ (M + Σ_{j=k+1}^{p} (1/(σ_j − µ)) B_j) x

and denote by λ_m(µ) the m-smallest eigenvalue. Then λ_m(·) : (σ_k, σ_{k+1}) → R is a continuous and monotonically decreasing function, and λ̂ ∈ (σ_k, σ_{k+1}) is an eigenvalue of the nonlinear problem if and only if it is a fixed point of λ_m.

It can be shown that lim_{µ→σ_k+} λ_m(µ) is the m-smallest eigenvalue of the k-th restricted problem, that the n_{k+1} smallest eigenvalues λ_m(µ) converge to 0 for µ → σ_{k+1}−, and that the (m + n_{k+1})-smallest eigenvalue converges to the m-smallest eigenvalue of the (k+1)-st restricted problem.
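The eigencurves λ_m(µ) of the next slide can be sampled directly from this parametrized linear eigenproblem; a sketch assuming dense matrices B_j and poles σ_j:

    import numpy as np
    from scipy.linalg import eigh

    def eigencurves(K, M, Bs, sigmas, k, mus, nev=5):
        """Sampled curves lam_m(mu) for mu between the poles sigma_k and sigma_{k+1}.

        Bs, sigmas: matrices B_j and poles sigma_j (j = 1..p); k poles lie below mu."""
        curves = []
        for mu in mus:
            KK = K + sum(mu / (mu - s) * B for s, B in zip(sigmas[:k], Bs[:k]))
            MM = M + sum(1.0 / (s - mu) * B for s, B in zip(sigmas[k:], Bs[k:]))
            curves.append(eigh(KK, MM, eigvals_only=True)[:nev])
        return np.array(curves)     # fixed points lam_m(mu) = mu are the nonlinear eigenvalues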

Numerical experiments

Eigencurves

[Figure: eigencurves λ_m(µ), both axes 0 to 5000; intersections with the diagonal mark eigenvalues.]

Numerical experiments

Jacobi–Davidson method

The experiments were run under MATLAB 6.5 on a Pentium 4 processor with 2 GHz and 1 GB RAM.

The outer iteration is terminated if the norm of the residual is less than 10^{-8}. By the approximation properties of the Rayleigh functional the eigenvalues are then determined with full accuracy.

The correction equation is solved by preconditioned GMRES until the residual is reduced by 2^{-k} in the k-th outer iteration; at most 10 GMRES steps are allowed.

Preconditioning is by an LU factorization of T(σ); a new preconditioner is chosen if more than 5 GMRES steps were necessary to reduce the residual by 2^{-k}.

Numerical experiments

Approximation history

[Figure: approximation history over 100 iterations; x: eigenvalue found; pole σ marked.]

6 LU factorizations, 320 matrix-vector products in 209 GMRES steps.

Numerical experiments

Convergence history

[Figure: convergence history of the residual norms over 100 iterations; o: LU update.]

Numerical experiments

Time consumption

[Figure: development of CPU time consumption without restarts; total CPU time vs. time spent in the nonlinear eigensolver.]

Numerical experiments

Convergence history: restarted

[Figure: convergence history of the residual norms over 120 iterations; o: LU update; restart marked.]

Numerical experiments

Second interval

For eigenvalues greater than 1 we started the Jacobi–Davidson method with the subspace spanned by the eigenvectors of the linear problem

    (K + (σ/(σ − 1))B)x = λMx,   σ = 1.01,

corresponding to eigenvalues smaller than σ.

The method behaved similarly as for the interval (0, 1). It found all eigenvalues λ_11, ..., λ_30, requiring 99 iteration steps, 6 LU updates, 165 GMRES steps with 264 matrix-vector products, and a CPU time of 628 seconds.

Restarting if the dimension of the search space exceeded 40, it required 110 iteration steps, 8 LU updates, 211 GMRES steps with 321 matrix-vector products, and a CPU time of 759 seconds.

Numerical experiments

Approximation history

[Figure: approximation history, eigenvalues in (1, 3), over 100 iterations.]

Numerical experiments

Convergence history

[Figure: convergence history, eigenvalues in (1, 3).]

Numerical experiments

Convergence history, restarted

[Figure: convergence history, restarted, eigenvalues in (1, 3), over 110 iterations.]

Numerical experiments

Arnoldi method

Starting with the initial shift σ = 0.1 and a random vector, the algorithm without restarts needed 90 iteration steps and a CPU time of 87.7 seconds to approximate all 28 eigenvalues in the interval [0, 1), and with the tolerance tol = 10^{-1} only 3 updates of the preconditioner were necessary.

Numerical experiments

Approximation history

[Figure: approximation history in [0, 1) over 90 iterations; shift σ marked.]

Numerical experiments

Convergence history

[Figure: convergence history of the residual norms over 90 iterations.]

Numerical experiments

Convergence history, interval (1, 3)

[Figure: convergence history of the residual norms, eigenvalues in (1, 3).]

Numerical experiments

Incomplete LU factorization

Preconditioning by an incomplete LU factorization with drop tolerance 0.01, and restarting and updating the preconditioner whenever an eigenvalue has converged, the algorithm finds all 28 eigenvalues in [0, 1), requiring 1722 iterations and 1112 seconds, 4% of which are required to solve the projected nonlinear eigenproblems and 2.5% to update the preconditioner.

Numerical experiments

Quantum dot

pyramidal quantum dot: baselength: 12.4 nm, height: 6.2 nm
cubic matrix: 24.8 × 24.8 × 18.6 nm³
Parameters (Hwang, Lin, Wang, Wang 2004)
P1 = 0.8503, g1 = 0.42, δ1 = 0.48, V1 = 0.0
P2 = 0.8878, g2 = 1.52, δ2 = 0.34, V2 = 0.7

Discretization by FEM or FVM yields rational eigenproblem

    S(λ)x = λMx − (1/m_1(λ)) A_1 x − (1/m_2(λ)) A_2 x − Bx = 0

where S(λ) is symmetric and satisfies conditions of minmax characterization
for λ ≥ 0
All timings for MATLAB 7.0.4 on an AMD Opteron Processor 248 (x86_64) with
2.2 GHz and 4 GB RAM


Numerical experiments

Numerical example ct.

FVM: Hwang, Lin, Wang, Wang (2004)

    dim           λ1        λ2/3      λ4        λ5        CPU time
    20 475        0.41195   0.58350   0.67945   0.70478   0.67 s
    220 103       0.40166   0.57668   0.68418   0.69922   8.06 s
    1 860 543     0.39878   0.57477   0.68516   0.69767   150.68 s
    15 320 255    0.39804   0.57427   0.68539   0.69727   4017.92 s
    120 419 775   0.39785   0.57415                       overnight

FEM: cubic Lagrangean elements on a tetrahedral grid;
dimension 951 640, (DoF_QD, DoF_mat, DoF_interf) = (430 615, 430 897, 90 128)

    dim       λ1        λ2        λ3        λ4        λ5
    951 640   0.39779   0.57411   0.57411   0.68547   0.69714

    method    λ1        λ2        λ3        λ4        λ5        CPU time
    Arnoldi   44 it.    29 it.    21 it.    5 it.     1 it.     226.8 s
    JD        15 it.    7 it.     24 it.    7 it.     9 it.     204.4 s
    HLWW      45 it.    49 it.    24 it.    21 it.    29 it.    188.7 s

Numerical experiments

Convergence history

[Figure: convergence history of the quantum dot computations.]

Numerical experiments

Preconditioning

Incomplete LU with cut-off threshold τ (CPU times in seconds)

    τ        JD      Arnoldi   HLWW     precond.
    0.01     132.1   261.1     1212.4   0.6
    0.001    118.4   155.1     1084.1   3.7
    0.0001   155.0   246.0     665.7    46.6

Sparse approximate inverse

    τ        JD      Arnoldi   HLWW     precond.
    0.01     641.7   819.3     2744.6   314.4
    0.001    694.1   968.4     3105.6   1517.1
    0.0001   718.2   1461.9    4073.7   2027.9

Numerical experiments

Damped vibrations of a structure

Using a viscoelastic constitutive relation to describe the material behavior in the equations of motion yields a rational eigenvalue problem in the case of free vibrations. Discretizing by finite elements yields

    T(ω)x := (ω²M + K − Σ_{j=1}^{k} (1/(1 + b_j ω)) ΔK_j) x = 0,

where M is the consistent mass matrix, K is the stiffness matrix with the instantaneous elastic material parameters used in Hooke's law, and ΔK_j collects the contributions of damping from elements with relaxation parameter b_j.

Numerical experiments

Nonproportional damping ct.

FE model of a feeder clamp: linear Lagrangean elements on a tetrahedral grid;
dimension 1 930 617, nnz(K) = 70 670 053, nnz(M) = 20 557 085.

Determine the 30 eigenvalues with maximal negative imaginary part.

Numerical experiments

Nonproportional damping ct.

Iterative projection methods
    precond.            Arnoldi            Jacobi–Davidson    rational Krylov
    threshold   CPU     # iter.   CPU      # iter.   CPU      # iter.   CPU
    10^-2       41      601       4137     166       1736     412       9033
    10^-3       127     226       2004     129       1144     163       3053
    10^-4       348     109       847      105       1203     78        2268

AMLS
    cut-off freq.   CPU time    dim.    CPU solve    max. rel. error
    1.2e7           837 sec.    1262    7.1 sec.     2.1e−3
    2.4e7           996 sec.    2808    24.1 sec.    8.1e−4



Numerical experiments

Fluid–solid vibrations

Consider the fluid–solid vibration problem for an elliptic cavity with 9 tubes of 3
different types

    T(λ)x = (−K + λM + (λ/(1 − λ))C1C1^T + (λ/(2 − λ))C2C2^T + (λ/(3 − λ))C3C3^T)x = 0
of dimension n = 143063.

This problem has 18, 15, and 14 eigenvalues in the intervals J1 = (0, 1),
J2 = (1, 2), and J3 = (2, 3), respectively, and a large number of eigenvalues in
(3, ∞), 18 of which are contained in J4 := (3, 5).


Numerical experiments

Problem ct.

In each of the intervals J_j the eigenvalues can be characterized as minmax values of a Rayleigh functional, and they can be determined one after the other by iterative projection methods.

    method            CPU time
    Arnoldi           420 sec.
    Jacobi–Davidson   1033 sec.

Reducing the problem by AMLS with the base problem Kx = λMx and a cut-off frequency of 100 generates a rational eigenvalue problem of the same structure of dimension 888, which can be solved by the nonlinear Arnoldi method.

CPU time: Reduction (231 sec.) + Arnoldi (35 sec.) = 266 sec.

Numerical experiments

Relative errors

[Figure: relative errors of the computed eigenvalues in (0, 5).]

Numerical experiments

Typical eigenvector

[Figures: two typical eigenvectors of the fluid–solid problem.]

Numerical experiments

Problem ct.

Complement the 503 interface degrees of freedom on the coarsest level generated by METIS for the base problem Kx = λMx by the 1728 components corresponding to nonzero row entries of the matrix [C1, C2, C3].

Dimension of the AMLS projected problem: 902

CPU time: Reduction (341 sec.) + Arnoldi (35 sec.) = 376 sec.

Numerical experiments

Relative errors

[Figure: relative errors of AMLS with (o) and without (+) interface DoF (logarithmic scale, 10^−5 to 10^−1) plotted over the eigenvalues in (0, 5).]

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 101 / 109

Numerical experiments

Gyroscopic eigenproblems

Consider the gyroscopic eigenvalue problem

Q(ω)x := Kx + iωGx − ω²Mx = 0    (1)

governing eigenvibrations of rotating structures. Here K is the stiffness matrix modified by the presence of centripetal forces, M is the mass matrix, and G is the gyroscopic matrix stemming from the Coriolis force. Clearly, K and M are symmetric and positive definite, and G is skew-symmetric.

For example, this problem arises when modeling the noise of rolling tires, which is the major source of traffic noise for passenger cars at speeds exceeding 60 km/h. Due to the complicated interior structure of a belted tire the matrices K, M and G of a sufficiently accurate FE model are very large and sparse. Moreover, for the acoustic analysis many eigenpairs, not necessarily at the end of the spectrum, are needed. Therefore, well-established sparse eigensolvers of Arnoldi type with shift-and-invert techniques for a linearization, as well as iterative projection methods for nonlinear eigenproblems, are very costly, since LU factorizations of the complex-valued matrices Q(ωj) for several parameters ωj are required.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 102 / 109
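
A small Python sketch (random stand-in matrices, illustrative only) of the structure just described; it also checks the consequence that Q(ω) is Hermitian for real ω, which follows from K = Kᵀ, M = Mᵀ and G = −Gᵀ.

import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)   # symmetric pos. def.
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)   # symmetric pos. def.
S = rng.standard_normal((n, n)); G = S - S.T                   # skew-symmetric

def Q(omega):
    # Q(omega) = K + i*omega*G - omega^2 * M
    return K + 1j * omega * G - omega**2 * M

w = 3.7
print(np.allclose(Q(w), Q(w).conj().T))   # True: Q(omega) is Hermitian for real omega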

Numerical experiments

Gyroscopic eigenproblems ct.

Since the influence of the gyroscopic matrix G on the eigenvalues is usually small compared to that of the mass and stiffness matrices, it is reasonable to neglect the linear term iωG when defining the essential linear eigenproblem Kx = ω²Mx.

Since the sparsity pattern of G matches the ones of K and M, one gets the reduced model

Ky + iωGy − ω²My = 0

(the projected matrices again denoted by K, G and M) when applying the AMLS reduction to Kx = ω²Mx and projecting the matrix G simultaneously.

Here the stiffness and mass matrices have the same structure as in the linear case, and the gyroscopic matrix G is a skew-symmetric block matrix containing diagonal blocks corresponding to the (reduced) substructures and interfaces; only the off-diagonal blocks describing the coupling of a substructure and its interface contain non-zero elements.

Notice that all projectors are real, and therefore the reduction can be performed in real arithmetic.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 103 / 109
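
A minimal sketch of the simultaneous projection, assuming a real orthonormal stand-in Z for the AMLS basis; the real basis is the point: all three projected matrices stay real, and the skew-symmetry of G survives.

import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 20
Z, _ = np.linalg.qr(rng.standard_normal((n, m)))   # stand-in for the AMLS basis

A = rng.standard_normal((n, n)); K = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)
S = rng.standard_normal((n, n)); G = S - S.T

Kr, Mr, Gr = (Z.T @ X @ Z for X in (K, M, G))      # projections in real arithmetic
print(np.allclose(Gr, -Gr.T))                      # skew-symmetry is preserved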

Numerical experiments

Gyroscopic eigenproblems ct.

For very large gyroscopic problems (for instance a realistic model of a rolling tire) the dimension of the projected problem will still be quite large. In this case the reduced problem can be solved by an iterative projection method taking advantage of the minmax characterization of its positive eigenvalues, or by a sparse solver such as ARPACK applied to a linearization.

If the dimension of the reduced problem is very small, a method at hand is to consider the linearization

    [ iG  K ] [ωy]       [ M  O ] [ωy]
    [ K   O ] [ y ]  = ω [ O  K ] [ y ]

and to apply any dense solver. In both cases the solution requires complex arithmetic.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 104 / 109
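
For a small reduced problem the dense route can be written down directly. The following sketch (random stand-ins for the reduced matrices) builds the linearization above and calls a dense generalized eigensolver; since the pencil is Hermitian with positive definite right-hand side, the computed eigenvalues are real up to roundoff.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(3)
m = 20
A = rng.standard_normal((m, m)); Kr = A @ A.T + m * np.eye(m)
B = rng.standard_normal((m, m)); Mr = B @ B.T + m * np.eye(m)
S = rng.standard_normal((m, m)); Gr = S - S.T
Z = np.zeros((m, m))

Lhs = np.block([[1j * Gr, Kr], [Kr, Z]])     # [iG K; K O], complex arithmetic
Rhs = np.block([[Mr, Z], [Z, Kr]])           # [M O; O K], Hermitian pos. def.
omega, _ = eig(Lhs, Rhs)
print(np.sort(omega.real))                   # imaginary parts are roundoff only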

Numerical experiments

Numerical example

Consider a tire model with 39204 brick elements, 124992 degrees of freedom and 20 different material groups, rotating at 50 km/h. Our aim is to determine approximations to the smallest 180 eigenvalues with relative error less than 1% and the corresponding eigenvectors.

Linearizing in the usual way

    [ −iG  −K ] [ωx]       [ M  O ] [ωx]
    [  I    O ] [ x ]  = ω [ O  I ] [ x ]

or by the Hermitian problem

    [ iG  K ] [ωx]       [ M  O ] [ωx]
    [ K   O ] [ x ]  = ω [ O  K ] [ x ]

and applying the shift-and-invert Arnoldi method requires an LU factorization of Q(ω), which is a complex matrix, for every shift ω.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 105 / 109
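
The cost statement can be made tangible: one complex LU factorization per shift. A sparse Python sketch with random stand-ins (the real tire matrices have n = 124992):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(4)
n = 300
K = sp.random(n, n, density=0.02, random_state=0); K = (K + K.T + 20 * sp.eye(n)).tocsc()
M = sp.eye(n, format="csc")
G = sp.random(n, n, density=0.02, random_state=1); G = (G - G.T).tocsc()

sigma = 2.0
Q = (K + 1j * sigma * G - sigma**2 * M).tocsc()   # complex-valued matrix Q(sigma)
lu = splu(Q)                                      # the dominant cost per shift
y = lu.solve(np.ones(n, dtype=complex))
print(np.linalg.norm(Q @ y - np.ones(n)))         # residual of the solve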

Numerical experiments

Numerical example ct.

Determining the factorization by SuperLU requires a memory of 6.04 GByte and a CPU time of 3910 seconds on one PA-RISC (750 MHz) processor of an HP superdome. Since the LU factorization has to be updated several times, a total CPU time of more than 12 hours results on one processor of the superdome.

Applying the nonlinear Arnoldi method, the preconditioners can be chosen as the real matrices K − ω²M, the LU factorization of which requires 2.7 GByte storage and 1940 seconds with SuperLU, and 2.86 GByte storage and 1080 seconds with the multifrontal solver MA57 of HSL.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 106 / 109
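
How a real factorization can serve a complex residual is shown in the sketch below (stand-in matrices; splu plays the role of SuperLU/MA57, and only the preconditioned-residual solve of the expansion step is shown, not the full nonlinear Arnoldi method).

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(5)
n = 300
K = sp.random(n, n, density=0.02, random_state=2); K = (K + K.T + 20 * sp.eye(n)).tocsc()
M = sp.eye(n, format="csc")
G = sp.random(n, n, density=0.02, random_state=3); G = (G - G.T).tocsc()

omega = 1.5
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
r = (K + 1j * omega * G - omega**2 * M) @ x       # complex residual Q(omega)x

P = splu((K - omega**2 * M).tocsc())              # LU of the *real* preconditioner
v = P.solve(r.real) + 1j * P.solve(r.imag)        # real factors applied to complex r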

Numerical experiments

Numerical example ct.

AMLS demands much less storage, and the problem under consideration can be solved on a personal computer, namely one with a Pentium 4 processor with 3.0 GHz and 1 GByte storage.

With a cut-off frequency of ωc = 2 × 10^5 the problem is projected to a gyroscopic eigenproblem of dimension nc = 2697, requiring a CPU time of 1187 seconds. Solving the linearization of the projected problem by eigs (i.e. by ARPACK) under MATLAB 7.0 requires another 166.1 seconds.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 107 / 109
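
What such an eigs call on the reduced problem looks like, sketched with scipy's ARPACK wrapper instead of MATLAB's eigs (random stand-ins; the dimension is kept small, the true reduced problem has nc = 2697).

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

rng = np.random.default_rng(6)
m = 100
A = rng.standard_normal((m, m)); Kr = A @ A.T + m * np.eye(m)
B = rng.standard_normal((m, m)); Mr = B @ B.T + m * np.eye(m)
S = rng.standard_normal((m, m)); Gr = S - S.T
Z = np.zeros((m, m))

Lhs = sp.csc_matrix(np.block([[1j * Gr, Kr], [Kr, Z]]))   # linearized pencil
Rhs = sp.csc_matrix(np.block([[Mr, Z], [Z, Kr]]))
vals, _ = eigs(Lhs, k=12, M=Rhs, sigma=0.0)   # ARPACK, shift-and-invert around 0
print(np.sort(vals.real))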

Numerical experiments

Numerical example ct.

The relative errors of the 180 computed eigenvalues are all less than 0.67%.

[Figure: relative error (logarithmic scale, 10^−5 to 10^−2) plotted over the number of the eigenvalue, 1–180.]

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 108 / 109

Numerical experiments

Numerical example ct.

Projecting to the subspace spanned by the Ritz vectors corresponding to eigenvalues of the linear problem Ky = λMy not exceeding 1.5 ωmax², where ωmax = 12000 is the maximal wanted eigenvalue, one gets a gyroscopic eigenproblem of dimension 262. Thus, the solution time of the projected problem is reduced to 39.6 seconds.

The accuracy of the approximations to the 180 smallest eigenvalues deteriorates only slightly; the maximum relative error is raised only to 0.69%.

TUHH Heinrich Voss Projecting nonlinear problems Eigenvalue problem 2012 109 / 109
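
The final reduction step, sketched with dense stand-ins; the threshold is artificial so that the toy example keeps a nontrivial number of Ritz vectors, whereas in the tire computation ωmax = 12000 and 262 vectors remain.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
m = 400
A = rng.standard_normal((m, m)); K = A @ A.T + m * np.eye(m)
B = rng.standard_normal((m, m)); M = B @ B.T + m * np.eye(m)
S = rng.standard_normal((m, m)); G = S - S.T

# Artificial stand-in for omega_max, chosen so that some vectors survive.
omega_max = np.sqrt(np.median(eigh(K, M, eigvals_only=True)))
lam, Y = eigh(K, M)                         # generalized symmetric eigensolve
V = Y[:, lam <= 1.5 * omega_max**2]         # Ritz vectors kept
Kp, Mp, Gp = (V.T @ X @ V for X in (K, M, G))
print(V.shape[1], "Ritz vectors kept")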