
G2ELAB

Model Order Reduction
Partial Report
Mateus Antunes Oliveira Leite
24/04/2015

Partial report containing the theory of model order reduction

1. Topics in Linear Algebra

In order to introduce the concepts of model order reduction in a self-contained manner, some important topics of linear algebra are presented first.

Vector basis

Any vector in ℝ^n can be represented as a linear combination of n linearly independent basis vectors. In Equation (1), b_i denotes the i-th basis vector of ℝ^n and v is the vector being composed.

v = α_1 b_1 + α_2 b_2 + ⋯ + α_n b_n    (1)

In matrix notation this relation can be expressed as in Equation (2).

v = B α    (2)

The columns of the matrix B are the basis vectors and the vector α contains the coefficient of each basis vector. The vector v is said to lie in the column space of B. Since the columns of B are linearly independent, the matrix is nonsingular and its inverse exists. This permits expressing the inverse mapping as in Equation (3).

α = B^{-1} v    (3)

Equation (2) can be regarded as a change of basis. It must be pointed out that vectors are geometric objects and are therefore independent of the basis chosen to represent them: v and α can be regarded as the same geometric vector expressed in two different bases.
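A minimal NumPy sketch of Equations (2) and (3); the basis B and the coefficient vector are arbitrary illustrative values.

```python
import numpy as np

# Columns of B form a (non-orthogonal) basis of R^3; alpha holds the coefficients.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
alpha = np.array([2.0, -1.0, 3.0])

v = B @ alpha                        # Equation (2): compose the vector
alpha_back = np.linalg.solve(B, v)   # Equation (3): recover the coefficients

print(v, alpha_back)                 # alpha_back equals alpha up to round-off
```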

Projections

In this section a simple 3-dimensional example of a projection is presented. The derived results are, however, general and can be applied directly to an arbitrary number of dimensions.

Figure 1 – Projection onto a 2-dimensional subspace

Figure 1 represents the projection of the vector v onto the subspace Ω. A matrix Ω representing this subspace can be constructed by arranging the basis vectors b_1 and b_2 as its columns; any vector lying in the subspace can then be represented by a relation similar to Equation (2). The projected vector is p and e is the difference between v and p, so the relation among v, p and e is given by Equation (4).

v = p + e    (4)

The fact that p is in the column span of Ω allows writing Equation (4) as in Equation (5).

v = Ω α + e    (5)

To determine the vector α, one should choose a subspace Ψ orthogonal to e. Multiplying Equation (5) on the left by Ψ^T eliminates the error term and yields an explicit expression for α, shown in Equation (6).

α = (Ψ^T Ω)^{-1} Ψ^T v    (6)

Therefore the projected vector can be obtained by Equation (7).

p = Ω (Ψ^T Ω)^{-1} Ψ^T v    (7)

The linear operator representing the oblique projection onto Ω of the components of v that are orthogonal to Ψ can be extracted directly from this relation, as expressed in Equation (8).

Π_{Ω,Ψ} = Ω (Ψ^T Ω)^{-1} Ψ^T    (8)

If the subspaces Ω and Ψ are the same, this projection is called an orthogonal projection.
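A numerical sketch of the projector of Equation (8), assuming NumPy; the subspaces Ω and Ψ are random illustrative data, and taking Ψ = Ω gives the orthogonal projector discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = rng.standard_normal((5, 2))   # basis of the target subspace (columns)
Psi   = rng.standard_normal((5, 2))   # test subspace defining the projection direction

# Oblique projector of Equation (8): Omega (Psi^T Omega)^{-1} Psi^T
Pi = Omega @ np.linalg.solve(Psi.T @ Omega, Psi.T)

v = rng.standard_normal(5)
p = Pi @ v                              # projected vector, Equation (7)

print(np.allclose(Pi @ Pi, Pi))         # a projector is idempotent
print(np.allclose(Psi.T @ (v - p), 0))  # the error is orthogonal to span(Psi)
```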

This kind of projection has a very important relation to the following optimization problem: find the vector p lying in the subspace Ω that best approximates the vector v. In the current context, the best approximation is the one that minimizes the squared length of the error. Using the law of cosines, the error vector represented in Figure 1 satisfies Equation (9).

|e|² = v^T v − 2 v^T p + p^T p    (9)

Using the fact that p is in the column space of Ω, the above relation can be written as in Equation (10).

|e|² = v^T v − 2 v^T Ω α + (Ω α)^T Ω α    (10)

Taking the gradient of the right-hand side with respect to α and imposing the condition that it be zero, one can isolate the vector α. This process is shown in (11); see the appendix for the gradient properties used.

∇_α (v^T v − 2 v^T Ω α + (Ω α)^T Ω α) = 0
∇_α (−2 v^T Ω α) + ∇_α [(Ω α)^T Ω α] = 0
−2 Ω^T v + 2 Ω^T Ω α = 0    (11)
Ω^T v = Ω^T Ω α
α = (Ω^T Ω)^{-1} Ω^T v

Therefore the vector that minimizes the squared modulus of the error is given by Equation (12).

p = Ω (Ω^T Ω)^{-1} Ω^T v    (12)

Comparing Equations (8) and (12), one concludes that the optimal approximation is simply the orthogonal projection of v onto the subspace Ω.

A very important application of this fact, which will be used extensively in model order reduction, is the approximate solution of overdetermined systems in the least-squares sense. As an example, imagine that one wants to solve the linear system of Equation (13) with A ∈ ℝ^{n×m}, x ∈ ℝ^m and b ∈ ℝ^n, where n > m.

A x = b    (13)

This problem is overdetermined and can be interpreted geometrically as trying to represent the vector b using the columns of A as basis vectors. Since n basis vectors are needed to span the totality of ℝ^n but only m are available, the available basis spans only an m-dimensional subspace embedded in the higher n-dimensional space, and it is very unlikely that an exact representation exists. One approach to solve the problem in an approximate fashion is to project the vector b onto the column span of A. This results in Equation (14).

A x = A (A^T A)^{-1} A^T b    (14)

One can multiply both sides of the above equation by A^T to obtain Equation (15).

A^T A x = A^T b    (15)

Therefore, to obtain the closest approximation to Equation (13) in the least-squares sense, it is only necessary to multiply the system of equations by the transpose of the coefficient matrix.
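A short sketch of the normal equations (15), assuming NumPy; A and b are random illustrative data and the result is checked against NumPy's own least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 3))   # n = 50 equations, m = 3 unknowns (n > m)
b = rng.standard_normal(50)

# Normal equations, Equation (15): A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Reference least-squares minimizer computed directly by NumPy
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))   # both approaches give the same minimizer
```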

Similarity Transformation

Matrices may be used to represent linear operators; the projection operator above is an example of this. If two bases for the same linear space are related by a change of coordinates as shown in Equation (16), it is natural to wonder how to transform the operator between coordinate systems. One may start with Equation (17), which represents the application of a transformation A to a vector x.

x = T y    (16)

A x = x̄    (17)

Substituting Equation (16) into (17), the relation for y and the transformed operator B may be obtained. This is shown in Equation (18).

B y = T^{-1} A T y = ȳ    (18)

This type of relation has a very important property: the eigenvalues of A are the same as those of B. This can be easily shown by the development in (19).

|T^{-1} A T − λI| = |T^{-1} A T − λ T^{-1} T| = |T^{-1}| |A − λI| |T| = |A − λI|    (19)

In a similar fashion, one may wonder what the relations are for the eigenvalues of a transformation of the kind indicated in Equation (20).

A_r = P^T A P    (20)

For this analysis the focus is not to show that the eigenvalues are the same, which in general they are not, but to demonstrate that their signs are preserved. This is very important because, as will be explained later in this document, the sign of the eigenvalues determines whether the system is stable or not. For a general matrix A there is no guarantee that the resulting reduced system will be stable. If A is symmetric and negative definite, so that the system is stable, it can be decomposed as written in Equation (21).

A = −B^T B    (21)

This allows proving that the reduced matrix is negative semidefinite, as shown in (22).

x^T A_r x = x^T P^T A P x = −x^T P^T B^T B P x = −(B P x)^T (B P x) = −‖B P x‖² ≤ 0    (22)

Singular Value Decomposition

Any matrix A ∈ ℝ^{m×n} may be decomposed as the product of two orthonormal matrices and a diagonal one, as shown in Equation (23).

A = U Σ V^*    (23)

To determine these matrices, one may write the product of Equation (24).

A A^* = U Σ V^* V Σ^* U^* = U Σ Σ^* U^*    (24)

This leads to the conclusion that the matrix U is built from the eigenvectors of the matrix AA^*. A similar argument shows that the matrix V is composed of the eigenvectors of A^*A. The first r = rank(A) columns of U form an orthonormal basis for the range of A, and each nonzero diagonal entry of Σ is the square root of a nonzero eigenvalue of AA^*.
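A small numerical check of Equations (19) and (24), assuming NumPy; the matrices A and T are random illustrative data (T is assumed nonsingular).

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))

B = np.linalg.solve(T, A @ T)                   # B = T^{-1} A T, Equation (18)
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
print(np.allclose(eig_A, eig_B))                # similarity preserves the spectrum, Eq. (19)

U, s, Vt = np.linalg.svd(A)
print(np.allclose(A @ A.T @ U, U @ np.diag(s**2)))   # columns of U are eigenvectors of A A^T, Eq. (24)
```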

The singular value decomposition allows writing any matrix as a sum of rank-1 products. This is written in Equation (25).

A = σ_1 u_1 v_1^* + σ_2 u_2 v_2^* + ⋯ + σ_r u_r v_r^*    (25)

It is possible to show that truncating this sum after its first k terms yields the best rank-k approximation of the matrix A.

2. Model Order Reduction by Projection

State Space Representation

The representation of time-invariant linear systems used in this document is given by Equations (26) and (27).

E ẋ = A x + B u    (26)

y = C x + D u    (27)

In these equations E ∈ ℝ^{n×n}, A ∈ ℝ^{n×n}, B ∈ ℝ^{n×r}, C ∈ ℝ^{s×n}, D ∈ ℝ^{s×r}, x ∈ ℝ^n, u ∈ ℝ^r and y ∈ ℝ^s. The vector x is the state vector, y is the output vector and u is the input vector. For now the matrix D is considered to be all zeros; if this is not the case in a particular application, the reasoning presented below can be easily adapted. It must be pointed out that Equations (26) and (27) are not coupled: the output equation does not feed back into the state equation.

If the matrix E is not singular, it is possible to write the system in a simpler form, as shown in (28).

ẋ = E^{-1} A x + E^{-1} B u = A' x + B' u    (28)

Many textbooks present this formulation as the standard form of the state space system. However, problems arise when E is singular. This indicates the presence of algebraic states that have to be treated explicitly. The first step is to write Equation (26) as in Equation (29).

[ E_11  0 ] [ ẏ_1 ]   [ A_11  A_12 ] [ y_1 ]   [ B_1 ]
[  0    0 ] [ ẏ_2 ] = [ A_21  A_22 ] [ y_2 ] + [ B_2 ] u    (29)

To obtain this form, one may use a permutation matrix P to create the transformation of Equation (30); the permutations of the rows and columns of E must be followed by equivalent permutations of A and B.

x = P y    (30)

Accordingly, the transformed system is given by Equation (31).

P^T E P ẏ = P^T A P y + P^T B u    (31)

At this stage it is possible to apply a Kron reduction to Equation (29), eliminating the algebraic states y_2 and obtaining (32).

E_11 ẏ_1 = (A_11 − A_12 A_22^{-1} A_21) y_1 + (B_1 − A_12 A_22^{-1} B_2) u    (32)
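A hedged Python/NumPy sketch of the Kron reduction of Equation (32); the helper kron_reduce and the 3-state example are illustrative assumptions (in particular, the algebraic block A_22 is assumed invertible).

```python
import numpy as np

def kron_reduce(E, A, B, dyn):
    """Eliminate the algebraic states of E x' = A x + B u, as in Equation (32).

    `dyn` is a boolean mask selecting the dynamic states (nonzero rows of E);
    the remaining states are treated as purely algebraic.
    """
    d, a = np.where(dyn)[0], np.where(~dyn)[0]
    E11 = E[np.ix_(d, d)]
    A11, A12 = A[np.ix_(d, d)], A[np.ix_(d, a)]
    A21, A22 = A[np.ix_(a, d)], A[np.ix_(a, a)]
    B1, B2 = B[d], B[a]
    A_red = A11 - A12 @ np.linalg.solve(A22, A21)
    B_red = B1 - A12 @ np.linalg.solve(A22, B2)
    return E11, A_red, B_red

# Illustrative 3-state system in which the last state is algebraic.
E = np.diag([1.0, 2.0, 0.0])
A = np.array([[-1.0, 0.5, 0.2],
              [ 0.3, -2.0, 0.1],
              [ 0.4, 0.6, -1.5]])
B = np.array([[1.0], [0.0], [0.5]])
print(kron_reduce(E, A, B, dyn=np.array([True, True, False])))
```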

The representation in Equation (32) is a regular state space equation and can be brought directly to the standard form of Equation (28). In the framework of model order reduction, writing the system as in Equation (26) or as in Equation (28) is therefore interchangeable, and both formulations may be used when they are more convenient.

Observability and Reachability

There are two quantities that are very important to characterize the system. Suppose a system written as in Equation (28). The response to an arbitrary input signal is given by Equation (33).

x(t) = e^{A(t−t_0)} x(t_0) + ∫_{t_0}^{t} e^{A(t−τ)} B u(τ) dτ    (33)

For simplicity, but without any loss of generality, the initial time is set to zero. If at the beginning of the simulation the system is found with zero initial conditions, the equation that describes the state evolution is given by (34).

x(t) = ∫_0^t e^{A(t−τ)} B u(τ) dτ    (34)

It is a natural question to ask which input signal u(t) may be used to drive the system to a given state. There may be an infinite number of signals that can accomplish this task. To reduce the number of choices, one may require that the signal be optimal in the sense that it drives the system to the desired state using the minimum amount of energy. If one chooses the input function as in Equation (35), the above identity is satisfied with the least amount of energy [1].

u(τ) = B^* e^{A^*(t−τ)} 𝒫^{-1}(t) x(t)    (35)

The matrix 𝒫, which is called the controllability Gramian, is defined by Equation (36). Some authors use t = ∞ to define this quantity and call it the infinite Gramian.

𝒫 = ∫_0^t e^{A(t−τ)} B B^* e^{A^*(t−τ)} dτ = ∫_0^t e^{Aξ} B B^* e^{A^*ξ} dξ    (36)

The energy in this signal can be written as in (37). The physical interpretation leads to the conclusion that the controllability Gramian measures how hard it is to reach a certain state; states that are difficult to reach are good candidates for truncation.

E_c = ∫_0^t u(τ)^* u(τ) dτ = x_0^* 𝒫^{-1} x_0    (37)
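A sketch of the controllability Gramian, assuming NumPy/SciPy: the finite-horizon integral of Equation (36) is approximated by a coarse quadrature and compared with the infinite Gramian obtained from the Lyapunov equation quoted later as Equation (54). The 2-state system is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[-1.0, 0.2], [0.0, -3.0]])   # stable, so the infinite Gramian exists
B = np.array([[1.0], [0.5]])

# Finite-horizon Gramian of Equation (36), approximated by a simple Riemann sum.
t, dt = 20.0, 1e-3
P_int = sum(expm(A * xi) @ B @ B.T @ expm(A.T * xi) * dt for xi in np.arange(0.0, t, dt))

# Infinite Gramian from the Lyapunov equation A P + P A^T + B B^T = 0 (cf. Equation (54)).
P_lyap = solve_continuous_lyapunov(A, -B @ B.T)
print(np.allclose(P_int, P_lyap, atol=1e-2))   # both agree for a large enough horizon
```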

The above discussion involved solely the states of the system and ignored its output. A dual concept called observability can be derived if one analyses the amount of energy that a certain state delivers to the output when no input signal is present. The system response in this scenario is given by Equation (38).

y(t) = C e^{At} x_0    (38)

The energy of this signal can be calculated in the same way as in Equation (37); the result is given by Equation (39), which is also the definition of the observability Gramian 𝒬.

E_o = x_0^* ∫_0^t e^{A^*τ} C^* C e^{Aτ} dτ x_0 = x_0^* 𝒬 x_0    (39)

Controllability and observability are dual concepts and their relation is shown in Table 1.

Table 1 – Duality relation between controllability and observability

Controllability | Observability
A               | A^*
B               | C^*

Reduction by Projection

If it is assumed that n is a large number, the burden of solving Equation (26) is very high or even prohibitive. Thus a reduction of the system may be used to allow a faster solution with an acceptable loss of accuracy. As presented in Equation (2), any two bases of ℝ^n can be related by a transformation matrix. This transformation matrix can be partitioned into two smaller matrices: V contains q basis vectors and U the last n − q, allowing one to obtain Equation (40).

x = V x_r + U x_s    (40)

Substituting this into Equation (26), it may be written as in (41).

E V ẋ_r = A V x_r + B u + A U x_s − E U ẋ_s    (41)

One may choose the basis in such a way that the vectors of V are more important in representing the dynamics of the system than the vectors of U. The last two terms of the right-hand side can then be interpreted as an error term ϵ, and Equation (41) becomes (42).

E V ẋ_r = A V x_r + B u + ϵ    (42)

The above system in x_r is overdetermined. The same process used in Equations (13), (14) and (15) can be applied to allow its solution; this is shown in (43), where the columns of a matrix W play the role of the test basis.

W^T E V ẋ_r = W^T A V x_r + W^T B u + W^T ϵ    (43)

Up to this point the relation is exact and does not lead to any computational improvement. However, if the last term on the right-hand side of (43) is neglected, one can write the equations of the reduced system. This is shown in Equations (44) and (45).

W^T E V ẋ_r = W^T A V x_r + W^T B u    (44)

y = C V x_r    (45)
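A minimal sketch of the projection of Equations (44)-(45) (summarized in Table 2 below), assuming NumPy; the random system and the Galerkin choice W = V are illustrative assumptions.

```python
import numpy as np

def project_system(E, A, B, C, V, W):
    """Reduced matrices of Equations (44)-(45): W^T E V, W^T A V, W^T B, C V."""
    return W.T @ E @ V, W.T @ A @ V, W.T @ B, C @ V

# Illustrative random system and basis; in practice V and W come from the POD,
# balanced truncation or Krylov methods described in the following sections.
rng = np.random.default_rng(3)
n, q = 10, 3
E, A = np.eye(n), -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))
V = np.linalg.qr(rng.standard_normal((n, q)))[0]
W = V                                            # Galerkin choice W = V
Er, Ar, Br, Cr = project_system(E, A, B, C, V, W)
print(Er.shape, Ar.shape, Br.shape, Cr.shape)    # (3, 3) (3, 3) (3, 1) (1, 3)
```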

In the light of projection operators, Equations (44) and (45) represent the projection of the system onto the subspace spanned by V, with the residual constrained to be orthogonal to the subspace spanned by W. This set of equations allows the solution of an approximation to the original system, but with a reduced order. A relation between the original system and the reduced system matrices can therefore be made; this relation is presented in Table 2.

Table 2 – Reduced system coefficient matrices

Original System | Reduced System
E               | W^T E V
A               | W^T A V
B               | W^T B
C               | C V

The reduced system can be written as in Equations (46) and (47).

E_r ẋ_r = A_r x_r + B_r u    (46)

y = C_r x_r    (47)

To deal with initial conditions, one could use the projection of the initial condition vector onto the span of the reduced basis.

Invariance of the Transfer Function

In the frequency domain, the transfer function of the system is given by Equation (48). This is valid for the full and for the reduced system as well, if one keeps in mind the relations of Table 2.

H = C (sE − A)^{-1} B    (48)

The important aspect of the matrices V and W is their column span. If, instead of using V and W for the projection, one utilizes different matrices with the same column span, the resulting transfer function is unchanged. If the matrices K and L are nonsingular, the matrices V' and W' defined in Equations (49) and (50) have the same column span as V and W, respectively.

V' = V K    (49)

W' = W L    (50)

Writing V = V' K^{-1} and W = W' L^{-1} and substituting into the reduced transfer function, Equation (51) is obtained.

H = C V' K^{-1} ( s (W' L^{-1})^T E V' K^{-1} − (W' L^{-1})^T A V' K^{-1} )^{-1} (W' L^{-1})^T B    (51)

Developing the above expression, the transfer function of the transformed reduced system is found to be identical to that of the original reduced system. Therefore it may be concluded that the reduced transfer function is invariant under such a change of basis.
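A numerical spot-check of this invariance, assuming NumPy; the system, the bases V and W and the (assumed nonsingular) matrices K and L are random illustrative data.

```python
import numpy as np

def reduced_tf(s, E, A, B, C, V, W):
    """Reduced transfer function C V (s W^T E V - W^T A V)^{-1} W^T B (cf. Equation (48))."""
    return C @ V @ np.linalg.solve(s * (W.T @ E @ V) - W.T @ A @ V, W.T @ B)

rng = np.random.default_rng(4)
n, q = 8, 3
E, A = np.eye(n), -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))
V, W = rng.standard_normal((n, q)), rng.standard_normal((n, q))
K, L = rng.standard_normal((q, q)), rng.standard_normal((q, q))

s = 1.0 + 2.0j
H1 = reduced_tf(s, E, A, B, C, V, W)
H2 = reduced_tf(s, E, A, B, C, V @ K, W @ L)   # bases changed as in Equations (49)-(50)
print(np.allclose(H1, H2))                     # the reduced transfer function is unchanged
```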

Balanced Truncation

The observability and reachability of each state depend on the system realization. If one applies a change of basis represented by the matrix T, the transformed Gramians are given by (52). The verification of this fact can be made by direct substitution into the definition of the Gramians.

𝒫̃ = T^{-1} 𝒫 T^{-*}        𝒬̃ = T^* 𝒬 T    (52)

It is possible to choose the matrix T such that both Gramians are equal and diagonal. This can be achieved through the decomposition of the product 𝒫𝒬 shown in (53).

𝒫 𝒬 = T Σ² T^{-1}    (53)

In order to reduce the system, the states associated with small diagonal values of Σ are truncated. The standard way to obtain the Gramians is to solve the Lyapunov equations given by (54). The problem of this method is the burden of calculating the Gramians, which is of the order of n³ operations [2].

A 𝒫 + 𝒫 A^* + B B^* = 0
A^* 𝒬 + 𝒬 A + C^* C = 0    (54)
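A sketch of balanced truncation using the common square-root formulation (Cholesky factors of the Gramians followed by an SVD), assuming SciPy and a stable, illustrative random system; this is one way of realizing the transformation of Equations (52)-(53), not necessarily the implementation used for the results reported below.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    """Square-root balanced truncation of a stable system x' = A x + B u, y = C x."""
    P = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian, Equation (54)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian, Equation (54)
    Lp, Lq = cholesky(P, lower=True), cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                      # s holds the Hankel singular values
    S = np.diag(s[:k] ** -0.5)
    T = Lp @ Vt[:k].T @ S                          # truncated balancing transformation
    Ti = S @ U[:, :k].T @ Lq.T
    return Ti @ A @ T, Ti @ B, C @ T, s

rng = np.random.default_rng(5)
n = 6
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=3)
print(hsv)    # a fast decay of these values justifies truncating the weakly coupled states
```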

Proper Orthogonal Decomposition

The development of this section is largely inspired by [3]; the interested reader may look at this work for further discussion. The proper orthogonal decomposition aims at finding a projection operator Π of fixed rank that minimizes the quadratic error created by the simulation of a lower order system. For the case of continuous time, the quadratic error is given by Equation (55).

ε = ∫_0^T ‖x(t) − Π x(t)‖² dt    (55)

We already know that one may truncate Equation (25) to solve this problem directly; however, the following development gives a lot of insight into the structure of the problem. The integrand is the norm of the projection of x onto the space orthogonal to the projection subspace. As any vector can be written as the sum of its projection onto a subspace and its projection onto the orthogonal complement, minimizing the error is equivalent to maximizing the energy of the projection onto the subspace itself. This is shown in Equation (56).

M = ∫_0^T ‖Π x(t)‖² dt    (56)

Using Equation (12) and imposing the orthonormality of the basis Ω, the development in (57) is possible. In this equation, tr represents the trace.

M = ∫_0^T ‖Π x(t)‖² dt = ∫_0^T ‖Ω^T x(t)‖² dt = tr( Ω^T ∫_0^T x(t) x(t)^T dt Ω )    (57)

This motivates the definition of the POD kernel given by Equation (58).

K = ∫_0^T x(t) x(t)^T dt    (58)

The development can be cast into an optimization problem whose Lagrangian is given by (59); the matrix V collects the basis vectors v_i as its columns (it plays the role of Ω above) and the α_{ij} are the Lagrange multipliers.

ℒ(V, α) = tr(V^T K V) − Σ_{i,j} α_{ij} (v_i^T v_j − δ_{ij})    (59)

It is possible to cast the orthogonality restriction as in Equation (60) if one defines the matrix A as in (61).

ℒ(V, α) = tr(V^T K V) − tr(A^T V^T V) + tr(A)    (60)

A_ii = α_ii,    A_ij = ½ α_ij (i ≠ j)    (61)

Using the properties in the appendix, one may calculate the first-order optimality condition with respect to V to obtain Equation (62).

K V = V A    (62)

This condition implies that V must span a subspace that is invariant under the application of the linear operator K. The first-order optimality condition for the α values leads to the obvious restriction given by Equation (63).

V^T V = I    (63)

Both of these conditions are satisfied if V is built with eigenvectors of K; therefore the eigenvectors we are looking for are the columns of the U matrix of the singular value decomposition of the kernel. To calculate K in a digital computer, a discretization is necessary. Equation (64) shows a way to approximate the kernel from snapshots of the state, with the snapshot matrix S defined in (65).

K = ∫_0^T x(t) x(t)^T dt ≈ Σ_i x(t_i) x(t_i)^T Δt_i = S S^T    (64)

S = [ x_0 √Δt_0   x_1 √Δt_1   ⋯   x_ns √Δt_ns ]    (65)

As the kernel is an n-by-n matrix, the burden of computing its eigenvectors directly may be very high. However, it is possible to avoid this by applying the singular value decomposition to S, as shown in Equation (66).

S S^T = (U Σ V^T)(U Σ V^T)^T = U Σ Σ^T U^T    (66)

To compute U in an efficient way, the 'economy' version of the SVD may be used. Mathematically, this is equivalent to computing V through the eigenvalue decomposition of the product S^T S, which has a size much smaller than the kernel, and then using Equation (67) to obtain only the eigenvectors that are actually needed.

U Σ = S V    (67)

This does not alter the problem, since the column span of the product is unchanged by the multiplication by Σ. It must be pointed out that this is only the mathematical description of the solution; in a computer implementation this procedure may be replaced by more efficient ones.
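A minimal POD sketch, assuming NumPy: an economy SVD of an illustrative random snapshot matrix yields the dominant left singular vectors directly, without ever forming the n-by-n kernel S S^T.

```python
import numpy as np

rng = np.random.default_rng(6)
n, ns, q = 100, 40, 5

# Hypothetical snapshot matrix S of Equation (65): each column is a weighted state sample.
S = rng.standard_normal((n, ns))

# Economy SVD of S (Equations (64)-(67)): the leading left singular vectors span
# the dominant POD subspace.
U, s, Vt = np.linalg.svd(S, full_matrices=False)
V_pod = U[:, :q]                                  # POD basis of rank q

print(V_pod.shape)                                # (100, 5)
print(np.allclose(V_pod.T @ V_pod, np.eye(q)))    # orthonormal, Equation (63)
```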

In this approach only the system states are taken into account; no information is obtained from the system output. This can be easily addressed by also using a second system, called the dual system, that satisfies the relations in Table 1. Using information from these two systems allows approximating the balanced truncation method in a less expensive way.

Moment Matching

The transfer function in Equation (48) can be rewritten as in Equation (68).

H = C (s A^{-1} E − I)^{-1} A^{-1} B    (68)

The central term can be expanded to obtain a series representation of the transfer function. This is shown in (69).

H = − Σ_{j=0}^{∞} C (A^{-1} E)^j A^{-1} B s^j    (69)

The negatives of the coefficients of this series in s are called the moments of the transfer function. These moments are centered at the zero frequency. To obtain the moments for other frequencies, the transfer function can be written as in Equation (70).

H = C ( (s − σ) E − (A − σE) )^{-1} B    (70)

Direct comparison of Equations (48) and (70) allows determining the equivalences pointed out in Table 3.

Table 3 – Equivalence for decentered moments

Centered at zero | Centered at σ
s                | s − σ
A                | A − σE

Using these relations one can directly write the moments for any frequency, and other results obtained for the zero-centered expansion can be extended to an arbitrarily placed frequency. The decentered series is written in Equation (71).

H = − Σ_{j=0}^{∞} C [ (A − σE)^{-1} E ]^j (A − σE)^{-1} B (s − σ)^j    (71)

The moment matching technique aims at choosing the columns of V and W in a way that some of the moments of the transfer function are exactly matched. The zeroth moment of the reduced model is given by Equation (72).

m_0 = C V (W^T A V)^{-1} W^T B    (72)

If A^{-1} B is in the column span of V, there exists a vector r_0 as shown in Equation (73).

V r_0 = A^{-1} B    (73)

Equation (72) can then be written as in (74).

m_0 = C V (W^T A V)^{-1} W^T A V r_0 = C A^{-1} B    (74)

Therefore the zeroth moment of the reduced system is equal to the zeroth moment of the full system. The first moment of the reduced system is given by Equation (75).

m_1 = C V (W^T A V)^{-1} W^T E V (W^T A V)^{-1} W^T B    (75)

Using (73), Equation (75) can be reduced to (76).

m_1 = C V (W^T A V)^{-1} W^T E A^{-1} B    (76)

If A^{-1} E A^{-1} B is in the column span of V, there exists a vector r_1 as shown in Equation (77).

V r_1 = A^{-1} E A^{-1} B    (77)

This relation can be used to write (76) as in Equation (78).

m_1 = C V (W^T A V)^{-1} W^T A V r_1 = C A^{-1} E A^{-1} B    (78)

Therefore, if A^{-1} B and A^{-1} E A^{-1} B are in the column span of V, the zeroth and the first moments match. This process can be continued to match the successive moments. To express this result in a simple way, one may introduce the Krylov subspaces, defined in Equation (79).

𝒦(G, v, q) = [ v   G v   G² v   ⋯   G^{q−1} v ]    (79)

Using this definition, q moments of the transfer function of the full system are matched if the matrix V is given by Equation (80).

V = 𝒦(A^{-1} E, A^{-1} B, q)    (80)

It must be pointed out that nothing was imposed on the matrix W; a simple choice is to make it equal to V. However, a very similar argument shows that W may be chosen so as to match even more moments: if W is chosen in accordance with Equation (81), another q moments of the full system are matched.

W = 𝒦(A^{-T} E^T, A^{-T} C^T, q)    (81)

In some special systems, e.g. impedance probing, if A and E are symmetric and C^T = B, both of the subspaces are the same. Therefore there is no need to calculate both subspaces.
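A sketch of one-sided moment matching, assuming NumPy; the helper krylov_basis and the random test system are illustrative, and only the zeroth-moment identity of Equation (74) is checked.

```python
import numpy as np

def krylov_basis(A, E, B, q):
    """Orthonormalized basis of the Krylov subspace of Equations (79)-(80)."""
    v = np.linalg.solve(A, B).ravel()
    cols = [v]
    for _ in range(q - 1):
        v = np.linalg.solve(A, E @ v)
        cols.append(v)
    # QR keeps the column span, so the matched moments are unchanged (cf. Equations (49)-(51)).
    return np.linalg.qr(np.column_stack(cols))[0]

rng = np.random.default_rng(7)
n, q = 20, 4
E = np.eye(n)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))

V = krylov_basis(A, E, B, q)
W = V                                          # one-sided (Galerkin) variant
Ar, Er, Br, Cr = W.T @ A @ V, W.T @ E @ V, W.T @ B, C @ V

m0_full = C @ np.linalg.solve(A, B)            # C A^{-1} B
m0_red = Cr @ np.linalg.solve(Ar, Br)          # reduced counterpart, Equation (72)
print(np.allclose(m0_full, m0_red))            # zeroth moments coincide, Equation (74)
```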

3. Circuit Simulation

This section is largely related to the work in [4]. One way to model a circuit in the computer is by an abstraction called a graph. Before introducing the linear equations for circuit simulation, some notation and important concepts are introduced.

A graph, denoted by G(V, E), is a pair of two sets: the set of vertices (V) and the set of edges (E), the latter containing tuples of elements of V. For the case of circuit representation, an edge can be thought of as a circuit component (resistor, capacitor, current source, etc.) and the nodes are the connection points. In this particular analysis, only resistors, capacitors, inductors and current sources will be considered. These components and their orientations are illustrated in Figure 2.

Figure 2 – Circuit elements with voltage and current orientations

Suppose the existence of an oriented graph representing the circuit. In order to translate this structure into matrix notation, one utilizes a node-edge incidence matrix A ∈ ℝ^{|V|×|E|} with its elements satisfying A_ij ∈ {−1, 0, 1}. Each column of this matrix is directly associated with an edge of the underlying graph and each row with a node. To build the matrix, each column must contain exactly one entry 1, one entry −1 and |V| − 2 zero entries. The nonzero elements must be placed according to the incidence of the edges on the nodes, and the graph orientation is used to determine the sign of each entry. From this matrix it is possible to derive a reduced matrix by excluding the row representing the node of known potential, usually the ground node. This new matrix is denoted by Ã. Its columns are placed in a way that components of the same kind are side by side, so some submatrices can be identified as shown in (82).

Ã = [ A_s   A_r   A_c   A_l ]    (82)

The vectors i_r, i_c, i_l and i_s contain the currents of each of the different types of circuit elements. Using these submatrices it is possible to write Kirchhoff's current law for the circuit as shown in Equation (83).

A_r i_r + A_c i_c + A_l i_l = A_s i_s    (83)

Direct application of the resistor and capacitor terminal relations leads to Equation (84).

A_r R^{-1} v_r + A_c C dv_c/dt + A_l i_l = A_s i_s    (84)

The relation between the voltage across each element and the potentials of the circuit nodes is given by Equation (85), where the index i is a placeholder that can be replaced by r, c, l or s.

A_i^T v = v_i    (85)

Using this relation, Equation (84) can be rewritten as in Equation (86).

A_r R^{-1} A_r^T v_n + A_c C A_c^T dv_n/dt + A_l i_l = A_s i_s    (86)

Writing the inductor terminal voltages and using Equation (85), it is possible to write Equation (87).

A_l^T v_n = L di_l/dt    (87)

This allows obtaining a unified equation with the node voltages and the inductor currents as unknowns. This is presented in Equation (88).

[ A_c C A_c^T    0 ]  d  [ v_n ]       [ A_r R^{-1} A_r^T   A_l ] [ v_n ]   [ A_s ]
[      0        −L ]  dt [ i_l ]  =  − [ A_l^T               0  ] [ i_l ] + [  0  ] i_s    (88)

If formulated in this fashion, both the right and the left Krylov subspaces are equal. This allows calculating only one subspace while obtaining the precision achieved by two subspaces in the general case. The same phenomenon happens with the original and dual systems in the Proper Orthogonal Decomposition.
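A small NumPy sketch assembling the matrices of Equation (88) for an illustrative two-node circuit; the element values, orientations and incidence submatrices are assumptions chosen only for demonstration.

```python
import numpy as np

# Two non-ground nodes: a resistor between them, a capacitor from each node to ground,
# an inductor from node 2 to ground and a current source feeding node 1.
Ar = np.array([[1.0], [-1.0]])            # resistor incidence (node 1 -> node 2)
Ac = np.array([[1.0, 0.0], [0.0, 1.0]])   # capacitors to ground at nodes 1 and 2
Al = np.array([[0.0], [1.0]])             # inductor to ground at node 2
As = np.array([[1.0], [0.0]])             # current source injecting into node 1

R = np.diag([1.0])                        # ohm
C = np.diag([1e-12, 1e-9])                # farad
L = np.diag([1e-9])                       # henry

nv, nl = Ac.shape[0], Al.shape[1]
E = np.block([[Ac @ C @ Ac.T, np.zeros((nv, nl))],
              [np.zeros((nl, nv)), -L]])
A = -np.block([[Ar @ np.linalg.inv(R) @ Ar.T, Al],
               [Al.T, np.zeros((nl, nl))]])
B = np.vstack([As, np.zeros((nl, 1))])    # unknowns: node voltages and inductor current

print(E.shape, A.shape, B.shape)
print(np.allclose(E, E.T), np.allclose(A, A.T))   # symmetric, as noted above
```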

Finally.Value patterns for the RLC ladder Parameter Value Input capacitor 1 pF Output resistor 1Ω Resistor pattern 1Ω Inductor pattern 1 nH. 10 nH.Table 4 . . 10nF Figure 4 – Bode diagram of the system The resulting reduced order models for an order equal to 20 is shown for the POD method in Figure 5. 1μH Capacitor pattern 1pF. Figure 6 shows the result for the Krylov method with 12 logarithmically distributed expansion points for the same system. 1nF. Figure 7 shows the result for a Balanced Truncation method. The sampling time was one nanosecond.

Figure 5 – POD Approximation

Figure 6 – Krylov Approximation

Figure 7 – Balanced Truncation

5. Appendix – Mathematical Development

Gradient property

n = Σ_k u_k v_k    (89)

dn/dx_i = Σ_k ( (du_k/dx_i) v_k + u_k (dv_k/dx_i) )    (90)

∇_x (u^T v) = (∇_x u^T) v + (∇_x v^T) u    (91)

First gradient of the trace

tr(V^T K V) = Σ_u Σ_l V_lu Σ_k K_lk V_ku    (92)

d tr(V^T K V) / dV_ij = Σ_u Σ_l ( (dV_lu/dV_ij) Σ_k K_lk V_ku + V_lu Σ_k K_lk (dV_ku/dV_ij) )    (93)

d tr(V^T K V) / dV_ij = Σ_k K_ik V_kj + Σ_l (K^T)_il V_lj    (94)

∇_V tr(V^T K V) = K V + K^T V    (95)

Second gradient of the trace

tr(A^T V^T V) = Σ_u Σ_l A_lu Σ_k V_kl V_ku    (96)

d tr(A^T V^T V) / dV_ij = Σ_u Σ_l A_lu Σ_k ( (dV_kl/dV_ij) V_ku + V_kl (dV_ku/dV_ij) )    (97)

d tr(A^T V^T V) / dV_ij = Σ_u V_iu (A^T)_uj + Σ_l V_il A_lj    (98)

∇_V tr(A^T V^T V) = V A^T + V A    (99)