Introduction to Krylov Subspace Methods in Model Order Reduction

Boris Lohmann and Behnam Salimbahrami
Institut für Automatisierungstechnik, Universität Bremen, 28359 Bremen

In recent years, Krylov subspace methods have become popular tools for computing reduced
order models of high order linear time invariant systems. The reduction can be done by
applying a projection from high order to lower order space using the bases of some
subspaces called input and output Krylov subspaces. The aim of this paper is to give an
introduction to the principles of Krylov subspace based model reduction and a rough
overview of the algorithms available for computation. A technical example illustrates the
procedures.

1 Introduction and Principles of Model Order Reduction
Krylov subspace methods play a key role in the approximate solution of many large-scale
problems in different scientific fields¹. In model order reduction, they can be used to find a
mapping from the high-dimensional space of a given state-space model to some lower
dimensional space and vice versa, thereby defining a model of reduced order. This procedure
of using mappings in order reduction is also referred to as reduction by projection. It turns out
that classical methods in linear order reduction, like modal methods [5] and balancing and
truncation [14], can also be understood as projection methods. Therefore, in this introductory
section, we briefly review some basic concepts and relations in model order reduction.
For simplicity, we consider linear time-invariant state-space models with only one input and
one output (although all methods discussed here can be extended to multi-input multi-output
systems):

ẋ = A x + b u,   (1)
y = c^T x,   (2)
with the input variable u, the output variable y, the (n,1)-vector x of state variables, a constant
non-singular matrix A, and vectors b, c of suitable dimensions. The number of state-variables,
n, is called the order of the system. The transfer function of the system in (1),(2) is
g(s) = c^T (sI - A)^-1 b,   (3)

relating the output to the input by y(s) = g(s) u(s).
If the number of state variables, n, is very high, model analysis, simulation and design are
difficult. Figures 1 and 2 illustrate two technical systems that can be modelled in state space
by equations of type (1) and (2), leading to system orders between 400 and several thousand
(depending on how detailed the modelling is). When dealing with such high-order models, it
is reasonable to look for models of lower order:

¹ Krylov subspace iteration methods were mentioned among the "top 10 algorithms of the 20th century" in IEEE
Computing in Science and Engineering, Vol. 2 (2000), Issue 1.


The task of model order reduction is finding a model (typically of type (1), (2)) with a significantly lower order q << n that approximates the behaviour of the original model. In what sense should this approximation be good? Several aspects play a role:

- In many applications, the most important requirement is that the time response y(t) to some given input signal u(t) is approximated by the reduced model.
- As an alternative to approximation in the time domain, we may require that the Bode plot of the original model is well approximated by the reduced model.
- Sometimes, not only the output signal y(t) but also the state trajectory x(t) is to be approximated by the reduced order model.

Figure 1: Block diagram of a distillation column (model by BASF AG). The BTX feed is a mixture of benzene, toluene and ortho-xylene, to be separated into three separate streams. A computer-aided modelling with linearisation at an operating point delivers n = 400 first-order ordinary differential equations.

Figure 2: Components of a micro-fluidic system (IMTEK, Univ. Freiburg [3]). The resistor is heated up by electrical current until the ignition fuel and the propulsive fuel explode. The heating-up process can be modelled by finite element methods and leads to 1000 to 10000 ordinary differential equations.

If we focus on the first aspect for a moment (namely the input-output approximation), then the question is, which "parts" of (1), (2) can be neglected without changing the transfer function g(s) significantly. Kalman's controllability and observability concepts give an answer: uncontrollable and unobservable parts of the system can be removed without affecting the transfer function. These parts are most easily identified from a diagonal representation² of the model (1), (2):

ż = T^-1 A T z + T^-1 b u = diag(λ_1, ..., λ_n) z + [b_1, ..., b_n]^T u,   (4)
y = c^T T z = [c_1, ..., c_n] z.   (5)

The eigenvalues of A are denoted by λ_1, ..., λ_n. Figure 3 illustrates that in this representation the system consists of n independent transfer paths. Obviously, the i-th path, i.e. the state-variable z_i, can be removed without affecting the input-output relation if either b_i is zero (then z_i is said to be uncontrollable), or c_i is zero (then z_i is said to be unobservable), or both. By removing unobservable and uncontrollable state-variables, the state-space model becomes a so-called minimal realization of the system.

Figure 3: Block diagram of model (4), (5).

Now, from this model, it is only a small step to model order reduction: the key idea is to remove not only uncontrollable/unobservable parts but also weakly controllable/observable parts. Starting from (4) and (5) this means removing a state-variable z_i if b_i is small or c_i is small or both. These simple steps in fact were the basis of the first reduction methods in state space, as introduced in the 60s [5]. The more famous method of balancing and truncation [14] works in a similar way by using a so-called balanced realization instead of the diagonal representation (4), (5). As a result, if we assume that the first q state-variables in (4) are to be preserved and the remaining ones are to be removed (this sequence can be achieved by renumbering and rearranging the state variables), the reduced order model is

ẋ_r = T^-1_upper A T_left x_r + T^-1_upper b u,   (6)
y = c^T T_left x_r.   (7)

² Equations (4), (5) result from (1), (2) by applying the state transformation x = Tz, where T is the matrix of eigenvectors of A.
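As a minimal numerical illustration of this truncation idea (all numbers invented for the example, not taken from the paper), consider a diagonal system in which the third state is only weakly coupled to input and output; removing it changes the transfer function only marginally:

```python
import numpy as np

# Invented 3rd-order system in diagonal form (4),(5): eigenvalues on the
# diagonal, couplings b_i (input) and c_i (output).
Lam = np.diag([-1.0, -5.0, -20.0])
b = np.array([1.0, 0.8, 1e-4])    # third state: weakly controllable ...
c = np.array([1.0, 0.5, 1e-4])    # ... and weakly observable

def g(s, A, b, c):
    """Transfer function g(s) = c^T (sI - A)^{-1} b."""
    return c @ np.linalg.solve(s * np.eye(A.shape[0]) - A, b)

# Truncate: keep the first q = 2 states, i.e. remove the weak third path.
q = 2
err = abs(g(1.0, Lam, b, c) - g(1.0, Lam[:q, :q], b[:q], c[:q]))
# The removed path contributes c_3 b_3 / (s - lambda_3) = 1e-8/21 at s = 1.
assert err < 1e-8
```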


In (6), (7), A_r = T^-1_upper A T_left, b_r = T^-1_upper b and c_r^T = c^T T_left, where T^-1_upper comprises the q upper rows of T^-1, T_left comprises the q left columns of T, and x_r is the state vector of the reduced model.

On the right hand side of (6), we now consider the term T^-1_upper A T_left x_r: the vector x_r of dimension q is first mapped by T_left into a space of dimension n, in which T_left x_r approximates the original state vector x. Multiplication by A approximates Ax by A T_left x_r. In the last step, the result is mapped back into a space of dimension q, with the result T^-1_upper A T_left x_r. Thereby, the original state-space of dimension n is projected³ onto the space of the reduced model with dimension q. We perform an order reduction by projection [22].

The advantage of the projection result (6), (7) compared to removing rows and columns from (4), (5) is that T and T^-1 need not be calculated, but only 2q columns/rows of them. More generally, instead of using the matrices T_left and T^-1_upper (which are biorthogonal, T^-1_upper T_left = I), other (n,q)-matrices V, W can be used for projection. Furthermore, a more general model of type

E ẋ = A x + b u,   (8)
y = c^T x,   (9)

can be considered instead of (1) and (2). Then, applying the projection matrices V and W to this model leads to the general reduced order model by projection:

W^T E V ẋ_r = W^T A V x_r + W^T b u,   (10)
y = c^T V x_r,   (11)

with E_r = W^T E V, A_r = W^T A V, b_r = W^T b and c_r^T = c^T V.
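The projection (10), (11) itself is just a few matrix products; a minimal sketch in Python (the test system and the matrices V, W are random placeholders invented here, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 50, 4

# Random placeholder data for E x' = A x + b u, y = c^T x, eqs. (8),(9).
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Any (n,q)-matrices V, W define a reduced model by projection, eqs. (10),(11):
V = rng.standard_normal((n, q))
W = rng.standard_normal((n, q))

Er = W.T @ E @ V     # (q,q)
Ar = W.T @ A @ V     # (q,q)
br = W.T @ b         # (q,)
cr = V.T @ c         # (q,), so that y = cr^T x_r

assert Er.shape == (q, q) and Ar.shape == (q, q)
assert br.shape == (q,) and cr.shape == (q,)
```

The quality of the reduced model depends entirely on the choice of V and W; the Krylov subspaces of section 2 provide one such choice.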


Besides delivering the two projection matrices V and W, an order reduction method should have the following properties:

- It should give some hint on suitable choices of q, the order of the reduced model.
- The reduction should be invariant to different representations of the original model (i.e. non-singular state transformations applied to (1), (2) or multiplication of the model with constant matrices should not influence the transfer function of the reduced model).

³ Mathematically more precisely: P = T_left (T^-1_upper T_left)^-1 T^-1_upper is a projection, since P² = P.

- Stable original models should lead to stable reduced order models.
- It should be clear in what sense the reduced order model is a "good" or optimal approximation. Error bounds can help to judge the suitability of reduced models.
- The method should be automatic, in the sense that there are not many design parameters to play with.
- The method should be able to handle general models like E ẋ = A x + b u or M ẍ + D ẋ + K x = b u.
- Numerical computations should be simple and numerically robust.

The Balancing and Truncation method [14] fulfils these requirements with some restrictions regarding the last two bullets. The Krylov subspace method to be presented in the next section fulfils the last two bullets but not all of the previous ones (see [16] for invariance properties and [8] for stability/passivity issues). Ongoing research may improve the Krylov subspace methods in the future and may even deliver combinations of Krylov subspace methods and Balancing/Truncation techniques [1].

In the next section, we introduce the Krylov subspace based reduction and its basic properties. In sections 3 and 4, we present the two most popular algorithms applied in numerical computations; section 5 discusses an interesting extension. Section 6 presents a simple stopping criterion for the iterative algorithms. Some numerical aspects and an application example can be found in sections 7 and 8. Section 9 gives a short outlook onto future work in linear and nonlinear model reduction.

2 Krylov subspaces in order reduction

The Krylov subspace is defined as

K_q(Ã, b̃) = span{b̃, Ãb̃, Ã²b̃, ..., Ã^(q-1) b̃},   (12)

where Ã is a constant (n,n)-matrix, b̃ is a constant (n,1)-vector (the so-called starting vector) and q is some given positive integer. The vectors b̃, Ãb̃, ..., constructing the subspace are called basic vectors.

With this definition, the two projection matrices V and W, determining the reduced order model (10), (11), are chosen as follows: V is any basis of the Krylov subspace

K_q1(A^-1 E, A^-1 b) = span{A^-1 b, (A^-1 E) A^-1 b, ..., (A^-1 E)^(q1-1) A^-1 b},

and W is any basis of the Krylov subspace

K_q2(A^-T E^T, A^-T c) = span{A^-T c, (A^-T E^T) A^-T c, ..., (A^-T E^T)^(q2-1) A^-T c},

with A^-T = (A^-1)^T, with A, E, b, c from (8), (9), and with q1, q2 such that V and W both have rank q⁴. With this choice (and provided that the resulting A_r is non-singular), the reduction method is called a two-sided method, since input as well as output relations of the system are involved. If only one of the two matrices V, W is chosen as specified and the other one is chosen arbitrarily but such that A_r is non-singular, then the method is called a one-sided method.

The suggested choices of V and W look very simple. However, the numerical calculation of the matrix-vector products involved in the Krylov subspaces tends to be unstable. Therefore, algorithms have been developed to reliably calculate V and W (see the next sections).

⁴ If some desired value of q can not be achieved, the reason is that the controllability or the observability subspace has a dimension smaller than q. Then, q must be reduced.
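The instability of the raw basic vectors is easy to demonstrate; the sketch below (random test matrices, invented for illustration) types definition (12) almost verbatim and shows that the basic vectors quickly become nearly parallel, so the matrix of basic vectors loses rank in floating point:

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 30, 10
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Raw basic vectors b, Ab, A^2 b, ..., A^{q-1} b of K_q(A, b), eq. (12).
K = np.empty((n, q))
K[:, 0] = b
for i in range(1, q):
    K[:, i] = A @ K[:, i - 1]

# The basic vectors align with the dominant eigenvector and their norms grow
# geometrically, so the matrix of basic vectors is extremely ill-conditioned:
cond = np.linalg.cond(K)
assert cond > 1e4

# An orthonormal basis of (numerically) the same space behaves perfectly,
# which is what the algorithms of sections 3-5 construct step by step:
Q, _ = np.linalg.qr(K)
assert np.linalg.cond(Q) < 1.01
```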

An important question now is: in what sense is the above choice of V and W "good"? In fact, certain parameters of the input-output relation of the original and of the reduced model equal each other, the so-called moments. The moments m_i are defined as the (negative) coefficients of the Taylor series of the transfer function of (8), (9) around s_0 = 0:

g(s) = c^T (sE - A)^-1 b = -c^T A^-1 b - c^T (A^-1 E) A^-1 b s - ... - c^T (A^-1 E)^i A^-1 b s^i - ...,   (13)

i.e. m_0 = c^T A^-1 b, m_1 = c^T (A^-1 E) A^-1 b, and in general m_i = c^T (A^-1 E)^i A^-1 b.
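The moments can be computed directly from this definition; a small sketch (random invented test system, helper name `moments` is ours) that also checks the zeroth moment against g(0) = -m_0:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = rng.standard_normal((n, n)) - 6.0 * np.eye(n)   # invertible test matrix
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def moments(A, E, b, c, k):
    """First k moments m_i = c^T (A^{-1} E)^i A^{-1} b of eq. (13)."""
    v = np.linalg.solve(A, b)              # A^{-1} b
    out = []
    for _ in range(k):
        out.append(c @ v)
        v = np.linalg.solve(A, E @ v)      # next power of (A^{-1} E)
    return np.array(out)

m = moments(A, E, b, c, 4)

# Consistency check at s = 0: g(0) = c^T (0*E - A)^{-1} b = -m_0.
g0 = c @ np.linalg.solve(-A, b)
assert abs(g0 + m[0]) < 1e-8
```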








In order to give an idea of how this can be proved, we consider the first moment, m_r0, of the reduced model (10), (11) and show that it equals m_0:

m_r0 = c_r^T A_r^-1 b_r = c^T V (W^T A V)^-1 W^T b = c^T V (W^T A V)^-1 W^T A V r_0 = c^T V r_0 = c^T A^-1 b = m_0.   (14)

The essential step in this proof is that A^-1 b belongs to the Krylov subspace K_q1(A^-1 E, A^-1 b) and therefore can be expressed as V r_0, i.e. A^-1 b = V r_0 and b = A V r_0. For the higher moments, the steps of proof are similar, using the facts that (A^-1 E)^i A^-1 b = V r_i and (A^-T E^T)^i A^-T c = W q_i [16]. In one-sided methods, q moments match; in two-sided methods, the first 2q moments of the original and the reduced model match. As a result, we can state that the reduced model approximates the original one in the sense that the first q or 2q coefficients of the Taylor series of g(s) around zero match.

Some additional properties of the reduction are:

- Instead of s_0 = 0, the Taylor series can be considered at other points s_0 ≠ 0; with slight modifications of the Krylov subspaces, the moments can be matched there, see [10, 16, 22]. It is even possible to match moments for different values of s_0 at the same time, and even for s_0 → ∞ (the moments are then called Markov parameters).
- In one-sided methods, typically V is chosen as a basis of K_q1(A^-1 E, A^-1 b) and W = V. Then, q moments match. Furthermore, for certain original models with passivity-related properties, it is possible to prove that the reduced model will be passive [8].
- Two-sided methods have the advantage of more matching moments, leading to better approximations of the output y in many applications.
- In general, there is no guarantee for stability of the reduced model. When reducing the micro-fluidic system from figure 2, all one-sided methods delivered stable models (because the preconditions for passivity of the reduced model [8] are fulfilled in this system), while two-sided methods for some q delivered unstable reduced models. When reducing the distillation column from figure 1, we experienced that one-sided methods did not deliver stable models at all, while some two-sided approaches worked.
- The input-output behaviour of a reduced model found by a two-sided method is independent of the model representation we start from. This is not true for one-sided methods, where non-singular state transformations or matrix multiplications applied to (8), (9) influence the reduction result.
- Another advantage of one-sided methods with W = V is that they often deliver better approximations of the original state trajectory by x̂(t) = V x_r(t), and thereby of any output y, than methods with W ≠ V do; this can be shown by a similar line as (14).
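The moment matching property can also be checked numerically. A sketch (random invented test system; the Krylov bases are computed here by plain QR rather than by the Arnoldi/Lanczos algorithms of the following sections, and the helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 40, 5
A = rng.standard_normal((n, n)) - 8.0 * np.eye(n)
E = np.eye(n)                       # a general E works the same way
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def krylov_basis(M, v, q):
    """Orthonormal basis of span{v, Mv, ..., M^{q-1} v} via QR."""
    K = np.empty((len(v), q))
    K[:, 0] = v
    for i in range(1, q):
        K[:, i] = M @ K[:, i - 1]
    return np.linalg.qr(K)[0]

Ainv = np.linalg.inv(A)
V = krylov_basis(Ainv @ E, Ainv @ b, q)         # input Krylov subspace
W = krylov_basis(Ainv.T @ E.T, Ainv.T @ c, q)   # output Krylov subspace

# Two-sided reduced model, eqs. (10),(11):
Er, Ar, br, cr = W.T @ E @ V, W.T @ A @ V, W.T @ b, V.T @ c

def moments(A, E, b, c, k):
    v = np.linalg.solve(A, b)
    out = []
    for _ in range(k):
        out.append(c @ v)
        v = np.linalg.solve(A, E @ v)
    return np.array(out)

m_full = moments(A, E, b, c, 2 * q)
m_red = moments(Ar, Er, br, cr, 2 * q)
err = np.max(np.abs(m_full - m_red)) / np.max(np.abs(m_full))
assert err < 1e-6    # the first 2q moments match up to roundoff
```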

3 Arnoldi algorithm

In order to avoid numerical problems when building up the Krylov subspace, it is an advantage to construct an orthogonal basis for the given subspace. The classical Arnoldi procedure [2] finds a set of orthonormal vectors as a basis for a given Krylov subspace. In the following, we briefly describe the Arnoldi algorithm. Consider the Krylov subspace K_q(A_1, b_1). In the following algorithm, in each step one more vector, orthogonal to all other previous vectors, is constructed and normalized. It is assumed that in each step such an orthogonal vector in fact exists (should this not be true, then the algorithm can not be continued and q must be reduced).

Algorithm 1: Arnoldi algorithm (using modified Gram-Schmidt orthogonalization) [8]:

0. Start: Set v_1 = b_1 / sqrt(b_1^T b_1), where b_1 is the starting vector, and set i = 2.
1. Calculating the next vector: The next vector is v̂_i = A_1 v_{i-1}.
2. Orthogonalization: For j = 1 to i-1 do: h = v̂_i^T v_j, v̂_i = v̂_i - h v_j.
3. Normalization: If the vector v̂_i is zero, break the loop. Else, the i-th column of the matrix V is v_i = v̂_i / sqrt(v̂_i^T v̂_i).
4. Check for break down: Stop if i = q, otherwise increase i and go to step 1.

Knowing that in each step of the Arnoldi algorithm the i-th column of V is a linear combination of the first i basic vectors, it is not difficult to show that the vectors {v_1, ..., v_i} span the same space as the first i basic vectors of the given Krylov subspace [16]. Thus, the Arnoldi algorithm finds a set of vectors with norm one that are orthogonal to each other:

V^T V = I,   V^T A_1 V = H,   (15)

where the columns of the matrix V are the basis for the given Krylov subspace and H ∈ R^{q×q} is Hessenberg⁵.

In one-sided Krylov subspace methods, the Arnoldi algorithm is the most common way to find a basis for the input or output Krylov subspace and compute the matrix V or W, respectively. For instance, by using the input Krylov subspace K_q(A^-1 E, A^-1 b) and applying algorithm 1, the matrix V can be found, and the matrix W can be chosen arbitrarily but such that A_r is nonsingular (a "good" choice often is W = V). The reduced order model then is given by (10) and (11), with q moments matching. Note, however, that the transfer function of the reduced model depends on the particular choice of W.

⁵ It means that h_ij = 0 for i > j+1 for all 1 ≤ i, j ≤ q.
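Algorithm 1 translates line by line into code; a sketch with numpy (random test data, function name `arnoldi` invented here):

```python
import numpy as np

def arnoldi(A1, b1, q, tol=1e-12):
    """Algorithm 1: orthonormal basis of K_q(A1, b1) by modified Gram-Schmidt."""
    n = len(b1)
    V = np.zeros((n, q))
    V[:, 0] = b1 / np.sqrt(b1 @ b1)          # step 0: start
    for i in range(1, q):
        v = A1 @ V[:, i - 1]                 # step 1: next vector
        for j in range(i):                   # step 2: orthogonalization (MGS)
            h = v @ V[:, j]
            v = v - h * V[:, j]
        nrm = np.sqrt(v @ v)
        if nrm < tol:                        # step 3: break down, basis complete
            return V[:, :i]
        V[:, i] = v / nrm                    # step 3: normalization
    return V                                 # step 4: stop at i = q

rng = np.random.default_rng(4)
n, q = 30, 6
A1 = rng.standard_normal((n, n))
b1 = rng.standard_normal(n)
V = arnoldi(A1, b1, q)

assert np.allclose(V.T @ V, np.eye(q), atol=1e-10)   # V^T V = I
H = V.T @ A1 @ V
assert np.allclose(np.tril(H, -2), 0, atol=1e-8)     # H is Hessenberg
```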

4 Modified Lanczos algorithm

A well known algorithm in two-sided Krylov methods is the Lanczos algorithm. The classical Lanczos algorithm [12] and its improved version in [8] generate two sequences of basis vectors which span the Krylov subspaces K_q(A_1, b_1) and K_q(A_1^T, c_1), respectively. This algorithm is limited to the special case that the matrix A is equal to the identity. Otherwise, it is necessary to multiply the state equation of system (8), (9) by A^-1 and then apply the Lanczos method to this new system. Furthermore, the Lanczos algorithm is numerically unstable, and after some iterations the biorthogonality of the vectors is lost. To avoid this problem, reorthogonalization must be used [4].

In this paper, we present a new generalized numerically stable algorithm (see also [17]) using modified Gram-Schmidt (MGS) for the general representation of systems. Consider the Krylov subspaces K_q(A_1, b_1) and K_q(A_2, c_1), both with the same rank. The algorithm computes biorthogonal bases V and W of these subspaces such that

W^T V = I,   (16)
W^T A_1 V = H,   (17)
V^T A_2 W = T,   (18)

where H, T ∈ R^{q×q} are Hessenberg matrices.

Algorithm 2: Lanczos algorithm using MGS:

0. Start: Set
   v_1 = sign(c_1^T b_1) b_1 / sqrt(|c_1^T b_1|),   w_1 = c_1 / sqrt(|c_1^T b_1|),
   where b_1 and c_1 are the starting vectors.
1. Calculating the next vectors: The next vectors are computed as v̂_i = A_1 v_{i-1} and ŵ_i = A_2 w_{i-1}.
2. Orthogonalization: For j = 1 to i-1 do:
   h_1 = v̂_i^T w_j, v̂_i = v̂_i - h_1 v_j,
   h_2 = ŵ_i^T v_j, ŵ_i = ŵ_i - h_2 w_j.
3. Normalization: The i-th columns of the matrices V and W are
   v_i = sign(v̂_i^T ŵ_i) v̂_i / sqrt(|v̂_i^T ŵ_i|),   w_i = ŵ_i / sqrt(|v̂_i^T ŵ_i|).
4. Check for break down: If i = q stop, otherwise increase i and go to step 1.

The property of algorithm 2 is that, when one of the matrices A or E is an identity matrix, A_1 = A_2^T, and then the matrices H and T are transposes of each other. By this assumption, in SISO systems the matrices H and T are tridiagonal, which is the famous result of the classical Lanczos algorithm. In order reduction, the algorithm can be directly applied to the system (8) and (9).
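A sketch of algorithm 2 in the same spirit (random invented test data; no reorthogonalization and no breakdown handling, so it is only illustrative; function name `lanczos_mgs` is ours):

```python
import numpy as np

def lanczos_mgs(A1, A2, b1, c1, q):
    """Algorithm 2 (sketch): biorthogonal bases V, W of K_q(A1,b1), K_q(A2,c1)."""
    n = len(b1)
    V = np.zeros((n, q))
    W = np.zeros((n, q))
    t = c1 @ b1                        # step 0: start
    V[:, 0] = np.sign(t) * b1 / np.sqrt(abs(t))
    W[:, 0] = c1 / np.sqrt(abs(t))
    for i in range(1, q):
        v = A1 @ V[:, i - 1]           # step 1: next vectors
        w = A2 @ W[:, i - 1]
        for j in range(i):             # step 2: biorthogonalization (MGS)
            v -= (v @ W[:, j]) * V[:, j]
            w -= (w @ V[:, j]) * W[:, j]
        t = v @ w                      # step 3 (breakdown if t is ~ 0)
        V[:, i] = np.sign(t) * v / np.sqrt(abs(t))
        W[:, i] = w / np.sqrt(abs(t))
    return V, W

rng = np.random.default_rng(7)
n, q = 30, 4
A1 = rng.standard_normal((n, n))
A2 = A1.T                  # the special case in which H and T are transposes
b1 = rng.standard_normal(n)
c1 = rng.standard_normal(n)

V, W = lanczos_mgs(A1, A2, b1, c1, q)
assert np.allclose(W.T @ V, np.eye(q), atol=1e-6)   # biorthogonality, eq. (16)
```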

5 Two-sided Arnoldi algorithm

In this section we introduce another method, called two-sided Arnoldi, to find the bases necessary for projection and for calculating the reduced order model, i.e. bases of K_q1(A^-1 E, A^-1 b) and K_q2(A^-T E^T, A^-T c). Our algorithm comprises the following steps:

Algorithm 3: Two-sided Arnoldi algorithm:

0. Choose the appropriate input and output Krylov subspaces for the given system.
1. Apply Arnoldi algorithm 1 to the input Krylov subspace to find the matrix V.
2. Apply Arnoldi algorithm 1 to the output Krylov subspace to find the matrix W.
3. Find the reduced order model from equations (10) and (11).

This new method is, in comparison to Lanczos, numerically stable and easier to implement, while leading to the same result as Lanczos [16, 10, 7]. For comparison: in the Lanczos algorithm, V^T W = I, i.e. the vectors of each basis are orthogonal to the vectors in the other basis except for one of them, whereas in the two-sided Arnoldi, each basis is orthonormal, V^T V = I and W^T W = I.

The main goal of order reduction can be deleting the almost non-controllable and non-observable parts. Now, consider that the original model is not controllable. If the Arnoldi algorithm is applied to such a model to find a basis for the input Krylov subspace, then, by continuing the iterations to find the zero vector in step 2 of algorithm 1, without breaking the loop, it is possible to delete the non-controllable modes in the reduced order model. The same result, deleting the non-observable modes, can be obtained by applying the Arnoldi algorithm to the output Krylov subspace.

6 Stopping criteria

One of the important parts in order reduction is to find a suitable order q for the reduced model, one which leads to a good approximation. In iterative methods the question is: when can we stop the iteration loop? One way to answer this question is to find a reduced order model and then compare it to the original system, for instance by computing the norm of the error between the two transfer functions. If the errors are acceptable we can stop; otherwise a higher order is necessary. However, calculating the 2-norm or infinity-norm of a large scale model using the current algorithms is very time consuming or even impossible, so finding an easy-to-calculate measure that can be examined in each iteration is essential.

In a Krylov subspace, if the i-th basic vector is a linear combination of the first i-1 basic vectors, then all the following basic vectors are also linear combinations of the first i-1 basic vectors. In that case, Arnoldi algorithm 1 finds the vector v̂_i = 0 and a basis for the given subspace is found: if one of the vectors before normalization in the basis of the input or output Krylov subspace becomes zero, we can stop, and a minimal realization can be found. Roughly speaking, if the i-th basic vector is almost a linear combination of the first i-1 basic vectors (i.e. there exists a linear combination of them that is very small), then Arnoldi algorithm 1 finds a vector v̂_i ≈ 0, and we can stop the iterations.
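Algorithm 3 reuses Arnoldi twice; a self-contained sketch (with a minimal Arnoldi routine and a random invented test system; the explicit inverse of A is for illustration only, cf. section 7):

```python
import numpy as np

def arnoldi(A1, b1, q):
    """Orthonormal basis of K_q(A1, b1) by modified Gram-Schmidt."""
    V = np.zeros((len(b1), q))
    V[:, 0] = b1 / np.linalg.norm(b1)
    for i in range(1, q):
        v = A1 @ V[:, i - 1]
        for j in range(i):
            v -= (v @ V[:, j]) * V[:, j]
        V[:, i] = v / np.linalg.norm(v)
    return V

rng = np.random.default_rng(5)
n, q = 40, 5
A = rng.standard_normal((n, n)) - 8.0 * np.eye(n)
E = np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Algorithm 3: Arnoldi on the input and on the output Krylov subspace.
Ainv = np.linalg.inv(A)
V = arnoldi(Ainv @ E, Ainv @ b, q)          # step 1
W = arnoldi(Ainv.T @ E.T, Ainv.T @ c, q)    # step 2
Er, Ar, br, cr = W.T @ E @ V, W.T @ A @ V, W.T @ b, V.T @ c   # step 3: (10),(11)

# Unlike Lanczos (W^T V = I), each basis is orthonormal on its own:
assert np.allclose(V.T @ V, np.eye(q), atol=1e-10)
assert np.allclose(W.T @ W, np.eye(q), atol=1e-10)
```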

So, two measures can be used as stopping criteria:

1. The norm of the new vector before normalization, d_2 = norm(v̂_i).
2. The sine of the angle between the vector A_1 v_{i-1} and the plane (it can be a space or hyperspace) constructed by all previous vectors; for the input Krylov subspace it is d_1 = norm(v̂_i) / norm(A^-1 E v_{i-1}).

Figure 4 gives a three-dimensional view of the Arnoldi algorithm up to finding the 3rd vector. Vector b_1 is the starting vector, but the algorithm chooses its normalized form as the first element of the basis. In the next step, b_2 = A_1 v_1, and the algorithm chooses the vector v_2. Now, if the vector b_3 lies in the plane of the vectors v_1 and v_2, then we can stop; but also if this vector has a very small angle with that plane (the sine of this angle can be calculated as norm(v̂_3)/norm(b_3)), or if the vector v̂_3 is very small, then we can consider the vectors v_1, v_2 and b_3 to be almost linearly dependent and stop. However, there is no guarantee that the method works well in all cases, or that the measure falls down after some iterations.

Figure 4: Arnoldi algorithm in 3 dimensional view.

In two-sided Arnoldi, the multiplication of the two measures for the input and output Krylov subspaces can be considered, for example norm(v̂_i) · norm(ŵ_i). When this measure is small, or falls down and remains constant after some iterations, we can stop and calculate the reduced order model.

7 Inverse of a large matrix

In order reduction of the system (8) and (9) based on moment matching, the matrices A^-1 E and A^-T E^T are very important; the inverse of the large matrix A seems to be necessary. Calculating this inverse and then using it in algorithm 1 is not recommended for numerical reasons. In order to find the vector x = A^-1 E v_{i-1}, it is better to solve the linear equation A x = E v_{i-1} for x in each iteration, to avoid numerical errors and to save time. (In this way, a total of q sets of linear equations are solved, whereas the calculation of the matrix A^-1 would require n sets of such equations.)

There exist many methods that can solve a linear equation and find an exact or approximate solution. One of the best methods to find an exact solution is LU factorization [9], followed by the solution of two triangular linear equations by Gaussian elimination. Using this method in each iteration leads to a slow algorithm, as the most time consuming part is finding the LU factorization of the large matrix. The remedy is finding the LU factorization of the matrix A at the beginning and then solving only triangular linear equations in each iteration⁶.
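The factor-once strategy with SciPy's dense LU routines (a sparse code would use `scipy.sparse.linalg.splu` in the same spirit; the test system is random and invented):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(6)
n, q = 200, 10
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted: safely invertible
E = np.eye(n) + 0.01 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

lu, piv = lu_factor(A)        # factor A = P L U once, before the iteration

# Inside the (Arnoldi) iteration, each x = A^{-1} E v then costs only two
# triangular solves instead of a fresh factorization:
v = b.copy()
basis = []
for _ in range(q):
    v = lu_solve((lu, piv), E @ v)
    basis.append(v)

# Same result as an explicit solve with A:
assert np.allclose(basis[0], np.linalg.solve(A, E @ b), atol=1e-8)
```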

In this way, time is saved and the result is obtained very fast.

8 Numerical example

Two methods have been applied to reduce the order of several systems:

1. The one-sided Krylov subspace method using the Arnoldi algorithm, applied to the input Krylov subspace and choosing W = V.
2. Two-sided Krylov subspace methods using the two-sided Arnoldi and Lanczos algorithms (they lead to the same results), to match more moments and improve the results.

As expected, the one-sided methods (with W = V) always lead to stable reduced models, whereas the two-sided methods can produce unstable models.

Different reduction methods have been applied to the distillation column (figure 1, a model with order 400). It turned out that balancing and truncation can be applied, reducing to order 30 with excellent results. Finally, Krylov subspace methods have been applied: one-sided input subspace approaches led to unstable reduced models, while the two-sided approach of section 5 led to good results with a reduced order of 46, using a selection procedure in the MIMO implementation of the Arnoldi algorithm [18].

The first example is a class of high energy Micro-Electro-Mechanical System (MEMS) actuators which integrates solid fuel with three silicon micromachined wafers [3], as shown in figure 2. A two-dimensional axi-symmetric model is used, which after finite element based spatial discretization of the governing equations results in a linear system of n = 1071 ordinary differential equations. In this case there is no need to define an output. It is to be mentioned that, for the system considered here, the two-sided method performs better in approximating the output (node number 1), and in this particular example it even performs better when comparing all the state variables instead of the output only [20]. More details can be found in [20].

The second example is the beam model, which is SISO with 348 states [6]⁷. The model is obtained by spatial discretization of an appropriate partial differential equation. The input represents the force applied to the structure at the free end, and the output is the resulting displacement. The multiplication of the two measures d_2 from the two Arnoldi algorithms in two-sided Arnoldi is used as stopping criterion, as shown in figure 5. Based on the stopping criterion, order 6 is suitable for the reduced system. By this reduced model, the step response of the original system can be approximated very well, as shown in figure 6. No steady state error occurs; this is due to the matching of the first moment, responsible for the "DC gain" of the reduced model.

9 Conclusion and Outlook

In the previous sections we have introduced: (i) some basic concepts of linear model order reduction (for balancing and truncation see [14] and for modal reduction see [5]); (ii) the essential relations in Krylov subspace based projections; (iii) the most commonly used algorithms for the numerical implementation, together with (a) a stopping criterion and (b) some numerical issues.

⁶ In MATLAB, instead of the command A\V(:,i-1), the command [L,U]=lu(A) must be added at the beginning of the algorithm, and in each iteration the command U\(L\V(:,i-1)) can be used to solve the triangular linear equations.
⁷ The model is available online at http://www.tue.nl/niconet/NIC2/benchmodred.html.

Figure 5: Stopping criterion for the beam model using two-sided Arnoldi.
Figure 6: Step response of the original beam model and its 6th order approximation using two-sided Arnoldi.

Due to their simplicity, Krylov subspace based reduction schemes can be applied to very large order systems with up to ten or even hundred thousand state variables. Their disadvantages (compared to the very successful method of Balancing and Truncation) are the facts that there is no guarantee for stability of the reduced model and that there are no error bounds known today. Therefore, recent research goes towards approximating the Balancing and Truncation results using Krylov subspace methods [1].

Another challenging field of research is the reduction of nonlinear system models. One approach is to apply the projections as introduced in section 1 to nonlinear models, resulting in reduced models of the form ẋ_r = V^T f(V x_r, u), where the matrix V is gained from an analysis of simulation data of the original model or from reducing a linearized model. Such approaches are referred to as Proper Orthogonal Decomposition (POD) methods [11]. Other successful approaches are based on isolating the nonlinearities and reducing the linear system parts around them [19, 13]. The nonlinear approach of balancing and truncation by [21, 15] is limited to small systems for computational reasons. POD approaches and the approaches [19, 13], on the other hand, are essentially based on linear considerations, both without guaranteed error bounds for the nonlinear model, but applicable to medium or even large-scale models [19].
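A minimal POD sketch along these lines (all data invented: synthetic snapshots with an artificial low-rank structure, and a toy nonlinearity f):

```python
import numpy as np

rng = np.random.default_rng(8)
n, q, m = 60, 3, 40

# Invented snapshot matrix: m state vectors x(t_k) of some simulation,
# synthesized here so that they lie close to a q-dimensional subspace.
basis_true = np.linalg.qr(rng.standard_normal((n, q)))[0]
X = basis_true @ rng.standard_normal((q, m)) + 1e-6 * rng.standard_normal((n, m))

# POD: V = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :q]
assert s[q] / s[0] < 1e-4     # energy beyond q modes is negligible here

# Projected nonlinear model x_r' = V^T f(V x_r, u) for a toy f:
def f(x, u):
    return -x + 0.1 * np.tanh(x) + u

xr = rng.standard_normal(q)
dxr = V.T @ f(V @ xr, 1.0)
assert dxr.shape == (q,)
```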

References

[1] Antoulas, A. and Sorensen, D. Approximation of Large-Scale Dynamical Systems: An Overview. Int. J. Appl. Math. Comput. Sci., 11(5):1093–1121, 2001.
[2] Arnoldi, W. E. The Principle of Minimized Iterations in the Solution of the Matrix Eigenvalue Problem. Quart. Appl. Math., 9:17–29, 1951.
[3] Bechtold, T., Rudnyi, E. B. and Korvink, J. G. Automatic Order Reduction of Thermo-Electric Models for MEMS: Arnoldi versus Guyan. Proc. 4th ASDAM, 2002.
[4] Boley, D. Krylov space methods on state-space control models. Circuits Syst. Signal Process., 13:733–758, 1994.
[5] Bonvin, D. and Mellichamp, D. A. A Unified Derivation and Critical Review of Modal Approaches to Model Reduction. Int. J. Control, 35:829–848, 1982.
[6] Chahlaoui, Y. and Van Dooren, P. A collection of benchmark examples for model reduction of linear time invariant dynamical systems. SLICOT Working Note 2002-2, 2002.
[7] Cullum, J. and Zhang, T. Two-sided Arnoldi and Nonsymmetric Lanczos Algorithms. SIAM Journal on Matrix Analysis and Applications, 24(2):303–319, 2002.
[8] Freund, R. W. Passive Reduced-Order Modeling via Krylov Subspace Methods. Numerical Analysis Manuscript (00-3-02), March 2000.
[9] Golub, G. H. and Van Loan, C. F. Matrix Computations. The Johns Hopkins University Press, 3rd edition, 1996.
[10] Grimme, E. J. Krylov Projection Methods for Model Reduction. PhD thesis, University of Illinois at Urbana-Champaign, 1997.
[11] Lall, S., Marsden, J. E. and Glavaski, S. Empirical model reduction of controlled nonlinear systems. Technical Report CIT/CDS 98-008, California Institute of Technology, 1998.
[12] Lanczos, C. An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators. J. Res. Nat. Bureau Stan., 45:255–282, 1950.
[13] Lohmann, B. Order Reduction and Determination of Dominant State Variables of Nonlinear Systems. Mathematical Modelling of Systems, 1(2):77–90, 1995.
[14] Moore, B. C. Principal Component Analysis in Linear Systems: Controllability, Observability and Model Reduction. IEEE Trans. on Automatic Control, AC-26:17–31, 1981.
[15] Newman, A. and Krishnaprasad, P. S. Computing Balanced Realizations for Nonlinear Systems. Tech. rep. CDCSS TR 2000-4, Inst. Systems Research, Maryland, 2000.
[16] Salimbahrami, B. and Lohmann, B. Krylov Subspace Methods in Linear Model Order Reduction: Introduction and Invariance Properties. Scientific report, Institute of Automation, University of Bremen, 2002. Available at http://www.iat.uni-bremen.de.
[17] Salimbahrami, B. and Lohmann, B. Modified Lanczos Algorithm in Model Order Reduction of MIMO Linear Systems. Scientific report, Institute of Automation, University of Bremen, 2002. Available at http://www.iat.uni-bremen.de.
[18] Salimbahrami, B., Lohmann, B., Bechtold, T. and Korvink, J. G. Two-sided Arnoldi Algorithm and Its Application in Order Reduction of MEMS. In Proc. 4th Mathmod, Vienna, pages 1021–1028, 2003.
[19] Salimbahrami, B. and Lohmann, B. A Simulation Free Nonlinear Model Order Reduction Approach and Comparison Study. In Proc. 4th Mathmod, Vienna, pages 429–435, 2003.
[20] Salimbahrami, B. and Lohmann, B. Two-sided Arnoldi Method for Model Order Reduction of High Order MIMO Systems. Scientific report, Institute of Automation, University of Bremen, 2002. Available at http://www.iat.uni-bremen.de.
[21] Scherpen, J. Balancing for Nonlinear Systems. Dissertation, University of Twente, The Netherlands, 1994.
[22] Villemagne, C. de and Skelton, R. E. Model reduction using a projection formulation. Int. J. Control, 46:2141–2169, 1987.