STATE VARIABLE METHODS IN AUTOMATIC CONTROL

KATSUHISA FURUTA
Tokyo Institute of Technology

AKIRA SANO
Keio University, Yokohama

In association with DEREK ATHERTON, Sussex University, UK

JOHN WILEY & SONS
Chichester · New York · Brisbane · Toronto · Singapore

This work is an extended version of that first published by Corona Publishing Company Limited, 10 Sengoku, Bunkyo-ku, Tokyo, Japan.

Copyright © 1988 by Katsuhisa Furuta. All rights reserved. No part of this book may be reproduced by any means, or transmitted, or translated into a machine language, without the written permission of the publisher.

Library of Congress Cataloging in Publication Data: Furuta, Katsuhisa. State variable methods in automatic control / Katsuhisa Furuta and Akira Sano, in association with Derek Atherton. 1. System analysis. 2. Optimization. QA402.F87 1988 629.8'3.

British Library Cataloguing in Publication Data: Furuta, Katsuhisa. State variable methods in automatic control. 1. Automatic control. I. Title. II. Sano, Akira. III. Atherton, Derek. 629.8'32.

ISBN 0 471 91877 6

Typeset by Mathematical Composition Setters Ltd. Printed and bound in Great Britain by Biddles of Guildford.

CONTENTS

1 MATHEMATICAL DESCRIPTION OF LINEAR SYSTEMS
1.1 SYSTEM REPRESENTATION
1.1.1 Input-Output Description
1.1.2 State Variable Description
1.1.3 Relationship between Transfer Function and State Variable Description
1.2 STATE VARIABLE MODELLING
1.2.1 Linearization
1.2.2 Analogies in Physical Systems
1.2.3 Transfer Function and State Variable Description
1.2.4 State Variable Description of an RLC Network
1.3 SOLUTION OF THE LINEAR STATE EQUATION
1.3.1 Homogeneous Linear Equation
1.3.2 Solution of the Inhomogeneous Linear Equation
1.3.3 Calculation of the Transition Matrix
1.4 EQUIVALENT SYSTEMS
1.4.1 Input-Output Relation
1.4.2 Equivalent Systems
1.5 REALIZATION OF SINGLE-INPUT AND SINGLE-OUTPUT LINEAR SYSTEMS
1.5.1 Companion Form (a): Controllable canonical form
1.5.2 Companion Form (b): Observable canonical form
1.5.3 Jordan Canonical Form
1.5.4 Tridiagonal Form
PROBLEMS

2 STRUCTURE OF LINEAR SYSTEMS
2.1 OBSERVABILITY AND CONTROLLABILITY OF TIME-INVARIANT SYSTEMS
2.1.1 The Condition for Controllability
2.1.2 The Condition for Observability
2.1.3 Duality
2.1.4 Output Controllability
2.1.5 Equivalent Systems
2.2 STATE SPACE STRUCTURE OF TIME-INVARIANT SYSTEMS
2.2.1 Controllable Subspace
2.2.2 Unobservable Subspace and Kalman's Canonical Decomposition
2.2.3 Stability and Decomposition of the State Space
2.2.4 Lyapunov Function
PROBLEMS

3 CANONICAL FORMS AND MINIMAL REALIZATIONS
3.1 CANONICAL FORM
3.1.1 Canonical Form
3.1.2 The Definition of Canonical Form and Ackermann's Procedure
3.1.3 Canonical Form and Input-Output Relation
3.2 MINIMAL REALIZATION
3.2.1 Dimension of a Minimal Realization
3.2.2 Kalman's Algorithm for a Minimal Realization from a Transfer Matrix
3.2.3 Mayne's Minimal Realization Algorithm
PROBLEMS

4 STATE FEEDBACK AND DECOUPLING
4.1 STATE FEEDBACK
4.1.1 State Feedback and the Controllable Subspace
4.1.2 Pole Assignment by State Feedback
4.1.3 Pole Assignment in the Controllable Subspace
4.2 THE DECOUPLING PROBLEM
4.2.1 Decoupling by State Feedback
4.2.2 Pole Assignment and Zeros of Decoupled Systems
PROBLEMS

5 OPTIMAL CONTROL AND OBSERVERS
5.1 OPTIMAL CONTROL
5.1.1 Quadratic Criterion Function
5.1.2 The Stability of the Optimal Control System
5.1.3 Frequency Domain Characteristics of a System with Optimal Control
5.1.4 Square Root Locus
5.1.5 Computational Methods of Optimal Control
5.2 OBSERVERS
5.2.1 State Observer
5.2.2 Determination of L in Gopinath's Algorithm
5.2.3 Design of a Functional Observer
5.2.4 Characteristics of a Closed Loop System Incorporating an Observer
5.3 CONTROL SYSTEM FOR STEP COMMAND
5.3.1 Control System Design
5.3.2 Behaviour of Observer to Constant Disturbance
PROBLEMS

6 THE KALMAN FILTER AND STOCHASTIC OPTIMAL CONTROL
6.1 THE LQG PROBLEM
6.2 THE KALMAN FILTER
6.2.1 Properties of the Kalman Filter
6.2.2 Treatment of Various Types of Random Noise
6.3 STOCHASTIC OPTIMAL CONTROL
6.3.1 Perfect State Observation
6.3.2 The Separation Theorem
PROBLEMS

REFERENCES
INDEX

1 MATHEMATICAL DESCRIPTION OF LINEAR SYSTEMS

1.1 SYSTEM REPRESENTATION

1.1.1 Input-Output Description

The input-output description of a system gives a mathematical relationship between the input and output of the system. The impulse response, the transfer function and the frequency response are typical examples of this description. The system shown in Figure 1.1 has m inputs and p outputs, which are written as the vectors u(t) = (u₁, ..., u_m)ᵀ and y(t) = (y₁, ..., y_p)ᵀ respectively. If the time function u is defined only over the interval [t₀, t₁), we write it as u[t₀, t₁). The output at time t of the system generally depends not only on the input applied at t, but also on the input before and/or after t. Therefore the input-output description is given by

y(t) = y(t; u(·)) = ∫_{−∞}^{∞} H(t, τ)u(τ) dτ   (1.1)

where H(t, τ) is the matrix impulse response, and the (i, j) element h_ij(t, τ) is the impulse response associated with the input u_j(t) and the output y_i(t). The physical interpretation of h_ij(t, τ) is observed from the fact that when the impulse function u_j(t) = δ(t − τ) is applied as an input at time t = τ, the output is represented by y_i(t) = h_ij(t, τ). If the characteristics of a system do not change with time, the system is said to be time-invariant or stationary.
The time-invariant linear system has the property that if an input is shifted by an amount of time, the waveform of the output remains the same except for the shift by the same amount of time. Therefore the impulse response H(t, τ) depends only on the difference t − τ, and the input-output description is given by

y(t) = ∫_{−∞}^{∞} H(t − τ)u(τ) dτ   (1.2)

Generally in dynamical systems, the output at time t depends not only on the input at t but also on the input applied before and/or after t.

Figure 1.1 Matrix impulse response

A system is said to be causal if the output at t does not depend on the input after t, that is, it depends only on the input applied before and at time t. Every physically realizable system satisfies the causality property. The impulse response of a causal system therefore satisfies

H(t, τ) = 0 for τ > t   (1.3)

and hence the input-output description is given by

y(t) = y(t; u(−∞, t]) = ∫_{−∞}^{t} H(t, τ)u(τ) dτ   (1.4)

1.1.2 State Variable Description

We consider the output response y(t) after time t₀ of a system excited by the input u[t₀, t). Generally the output y(t) for t ≥ t₀ cannot be uniquely determined by knowledge of the input u[t₀, t) alone. Rewriting (1.4) yields

y(t) = ∫_{−∞}^{t₀} H(t, τ)u(τ) dτ + ∫_{t₀}^{t} H(t, τ)u(τ) dτ   (1.5)

and it is noticed that the output y(t) is excited not only by the input u[t₀, t) but also by the unknown input u(−∞, t₀) before t₀. The first term on the RHS of (1.5) corresponds to the initial condition at time t₀. If the n-tuple of parameters x = (x₁, ..., x_n)ᵀ which specifies the initial condition is given together with the future input u[t₀, t), the output y(t) can be uniquely calculated. x is a function of t₀, and we can represent the output y(t) by

y(t) = y(t; x(t₀), u[t₀, t)) for t ≥ t₀   (1.6)

where x(t₀) is called the state or state variable at time t₀, and n is the order of the system. The state is the minimal sufficient information that enables us to uniquely calculate the future output y(t) on t ≥ t₀ for the input u[t₀, t) without reference to the past input u(−∞, t₀) before time t₀.
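The causal description (1.4) has a direct discrete-time counterpart, y[k] = Σ_{j≤k} h[k−j]u[j], which is convenient for quick numerical experiments. The following sketch is illustrative only; the impulse response and input chosen here are arbitrary, not taken from the text.

```python
# Discrete-time sketch of the causal input-output description (1.4):
# y[k] = sum over j <= k of h[k - j] * u[j].

def causal_response(h, u):
    """Causal convolution of impulse response h with input u."""
    return [sum(h[k - j] * u[j] for j in range(k + 1) if k - j < len(h))
            for k in range(len(u))]

# A decaying scalar impulse response and a unit-step input.
h = [0.5 ** k for k in range(10)]
u = [1.0] * 10

y = causal_response(h, u)
# The step response rises monotonically toward sum(h) = 2 - 0.5**9.
```

Because h[k] = 0 is implied for k ≥ len(h), the output at step k never depends on inputs after step k, which is exactly the causality condition (1.3).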
Example 1.1. Consider the series RLC circuit shown in Figure 1.2, where the input is the voltage u(t) and the output is the capacitor voltage y(t). The transfer function from u to y is easily evaluated to be

Y(s)/U(s) = 1/(2s² + 3s + 1) = 0.5/((s + 0.5)(s + 1))

and the impulse response h(t) is given by

h(t) = e^{−0.5t} − e^{−t}

Denoting the initial time as t₀ and taking account of the input u(−∞, t), the output y(t) is described as follows:

y(t) = ∫_{−∞}^{t} h(t − τ)u(τ) dτ
     = ∫_{−∞}^{t₀} h(t − τ)u(τ) dτ + ∫_{t₀}^{t} h(t − τ)u(τ) dτ
     = e^{−0.5(t−t₀)}x₁ − e^{−(t−t₀)}x₂ + ∫_{t₀}^{t} h(t − τ)u(τ) dτ   (1.7)

where

x₁ = ∫_{−∞}^{t₀} e^{−0.5(t₀−τ)}u(τ) dτ, x₂ = ∫_{−∞}^{t₀} e^{−(t₀−τ)}u(τ) dτ   (1.8)

x₁ and x₂ are both independent of t. Thus if x₁ and x₂ are known, the output y(t) for t ≥ t₀ when the circuit is excited by the input u[t₀, t) can be uniquely calculated. Hence x₁ and x₂ are the minimum amount of information at t₀ for determining the future output response, and therefore we can assign x₁(t) and x₂(t) as the states for any time t.

It follows from (1.7) and (1.8) that

y(t₀) = x₁ − x₂   (1.9)

Figure 1.2 RLC circuit (R = 3 Ω, L = 2 H, C = 1 F)

Differentiating (1.7) with respect to t (and noting that h(0) = 0) yields

ẏ(t) = −0.5 e^{−0.5(t−t₀)}x₁ + e^{−(t−t₀)}x₂ + ∫_{t₀}^{t} [∂h(t − τ)/∂t] u(τ) dτ

and letting t = t₀ in the above equation gives

ẏ(t₀) = −0.5x₁ + x₂   (1.10)

From (1.9) and (1.10) we can write

x₁ = 2y(t₀) + 2ẏ(t₀), x₂ = y(t₀) + 2ẏ(t₀)

This implies that y(t₀) and ẏ(t₀) can also be chosen as state variables. They are the physical quantities related with the capacitor charge and the inductor current respectively. It is thus important to note that the choice of the state variables is not unique. We will later show a standard method for deriving the state equation for a general RLC network, where capacitor voltages and inductor currents are taken as the state variables.

Putting t₀ = t in (1.6), we have

y(t) = g[t, x(t), u(t)]   (1.11)

This implies that the output y(t) can be specified by the state x(t) and the input u(t) at time t. We call g(·) the output function. On the other hand, from (1.6) and (1.11) it is easy to see that x(t) depends on x(t₀) and u[t₀, t)
and is hence represented as

x(t) = ψ(t; x₀, t₀, u[t₀, t)) for x₀ = x(t₀)   (1.12)

where ψ(·) is called the state transition function, since it indicates the transition behaviour from the initial state x₀ to the state x(t) by the application of the input u[t₀, t).

In summary, a dynamical system can be described in the compact form shown in Figure 1.3, by using the state transition function (1.12), the output function (1.11) and incorporating the concept of state variables.

Figure 1.3 Schematic diagram of a dynamical system

We treat, in many cases, the transition function ψ(·) given by the solution of the ordinary differential equation

ẋ(t) = f(x(t), u(t), t) for x(t₀) = x₀   (1.13)

Combined with the output equation (1.11), the description of the dynamical system is summarized as

ẋ(t) = f(x(t), u(t), t), x(t₀) = x₀
y(t) = g(x(t), u(t), t)   (1.14)

Provided that f(·) satisfies the Lipschitz condition with respect to x and is continuous in u and t, and that g(·) is continuous in x, u and t, the state x(t) and the output y(t) uniquely exist from any initial condition x₀ and input u(t). x, u and y are vectors with n, m and p elements respectively, and the space spanned by x is called the n-dimensional state space. If f(·) and g(·) in (1.14) do not include t explicitly, then the dynamical system is time-invariant or stationary. If f(·) and g(·) are linear with respect to x and u, the linear dynamical system is described by:

(a) Time-variant linear system [A(t), B(t), C(t), D(t)]:

ẋ(t) = A(t)x(t) + B(t)u(t) for x(t₀) = x₀   (1.15a)
y(t) = C(t)x(t) + D(t)u(t)   (1.15b)

(b) Time-invariant linear system (A, B, C, D):

ẋ(t) = Ax(t) + Bu(t) for x(t₀) = x₀   (1.16a)
y(t) = Cx(t) + Du(t)   (1.16b)

where A(t), B(t), C(t) and D(t) (or A, B, C and D) are matrices with n × n, n × m, p × n and p × m elements respectively. If the elements are continuous with respect to t, then there exists a unique solution of (1.15).
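Once A, B, C and D are specified, a state equation such as (1.16) can be integrated numerically. The sketch below uses forward Euler; the scalar system, step size and step input are illustrative assumptions, and a fixed-step Euler scheme is only first-order accurate.

```python
# Forward-Euler simulation of the time-invariant linear system (1.16):
#   x' = A x + B u,  y = C x + D u.

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def vadd(a, b):
    return [ai + bi for ai, bi in zip(a, b)]

def simulate(A, B, C, D, x0, u, dt, steps):
    """Integrate x' = Ax + Bu by forward Euler; return the outputs y(k dt)."""
    x, ys = list(x0), []
    for k in range(steps):
        uk = u(k * dt)
        ys.append(vadd(matvec(C, x), matvec(D, uk)))     # y = Cx + Du
        dx = vadd(matvec(A, x), matvec(B, uk))           # x' = Ax + Bu
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return ys

# Scalar example: x' = -x + u, y = x, unit-step input, x(0) = 0,
# whose exact response is y(t) = 1 - e^{-t}.
A, B, C, D = [[-1.0]], [[1.0]], [[1.0]], [[0.0]]
ys = simulate(A, B, C, D, [0.0], lambda t: [1.0], 0.001, 5000)
```

At t = 5 the simulated output is close to the exact value 1 − e^{−5} ≈ 0.993; halving dt roughly halves the remaining discretization error.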
1.1.3 Relationship between Transfer Function and State Variable Description

Another description of the input-output relationship is the Laplace transform of the impulse response matrix, which we call the transfer function matrix. If the first term on the RHS of (1.5) is assumed zero, the output y(t) depends only on the input u[t₀, t). In the time-invariant causal linear system the input-output description, assuming t₀ = 0, is

y(t) = ∫₀ᵗ H(t − τ)u(τ) dτ   (1.17)

Let the Laplace transforms of H(t), y(t) and u(t) be defined by

H(s) = L{H(t)} = ∫₀^∞ e^{−st}H(t) dt
Y(s) = L{y(t)}, U(s) = L{u(t)}   (1.18)

The application of the Laplace transformation to (1.17) gives

Y(s) = H(s)U(s)   (1.19)

where H(s) is called the matrix transfer function.

We now determine the transfer function of the linear system (A, B, C, D) described by (1.16). Applying the Laplace transform to (1.16) leads to

sX(s) − x₀ = AX(s) + BU(s)   (1.20a)
Y(s) = CX(s) + DU(s)   (1.20b)

where X(s) = L{x(t)}, U(s) = L{u(t)} and Y(s) = L{y(t)}. Rewriting (1.20) we have

X(s) = (sI − A)⁻¹x₀ + (sI − A)⁻¹BU(s)   (1.21a)
Y(s) = C(sI − A)⁻¹x₀ + [C(sI − A)⁻¹B + D]U(s)   (1.21b)

Figure 1.4 Block diagrams of a linear dynamical system: (a) state variable description, and (b) input-output description (transfer function matrix)

where I is the identity matrix. In the case when the initial condition is zero, x₀ = 0, the transfer function from u to y is seen to be given by

H(s) = C(sI − A)⁻¹B + D   (1.22)
     = C adj(sI − A)B/det(sI − A) + D   (1.23)

The relation between the descriptions H(s) and (A, B, C, D) is illustrated in Figure 1.4.

Example 1.2. Consider the linear single-input single-output (SISO) system (A, b, c)† described by

ẋ = [0 1 0; 0 −1 −1; 0 0 −3]x + [0; 0; 1]u
y = (1 0 0)x

† This notation is used since for a SISO system b becomes a column vector and c a row vector.

Then

det(sI − A) = det [s −1 0; 0 s+1 1; 0 0 s+3] = s(s + 1)(s + 3)

The transfer function H(s) is given from (1.23) by

H(s) = (1 0 0) · 1/(s(s + 1)(s + 3)) · [(s+1)(s+3) s+3 −1; 0 s(s+3) −s; 0 0 s(s+1)] · [0; 0; 1]
     = −1/(s(s + 1)(s + 3))

There are many methods to compute (sI − A)⁻¹.
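The Faddeev algorithm presented next is straightforward to program. The sketch below (pure Python, with hypothetical helper names) recovers the characteristic polynomial coefficients and the adjugate coefficient matrices for the matrix A of Example 1.2.

```python
# Faddeev (Leverrier) recursion for (sI - A)^{-1}, cf. (1.24)-(1.26):
#   det(sI - A) = s^n + a[n-1] s^{n-1} + ... + a[0]
#   (sI - A)^{-1} = (G[n-1] s^{n-1} + ... + G[0]) / det(sI - A)

def faddeev(A):
    """Return (alphas, Gammas) for the square matrix A."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    def trace(X):
        return sum(X[i][i] for i in range(n))
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    G = I                                    # Gamma_{n-1} = I
    alphas, Gammas = [None] * n, [None] * n
    for k in range(n - 1, -1, -1):
        Gammas[k] = G
        AG = matmul(A, G)
        alphas[k] = -trace(AG) / (n - k)     # alpha_k = -tr(A Gamma_k)/(n-k)
        G = [[AG[i][j] + alphas[k] * I[i][j] for j in range(n)]
             for i in range(n)]              # next Gamma; ends at A G_0 + a_0 I = 0
    return alphas, Gammas

# Matrix A of Example 1.2, with det(sI - A) = s(s+1)(s+3) = s^3 + 4s^2 + 3s.
A = [[0.0, 1.0, 0.0], [0.0, -1.0, -1.0], [0.0, 0.0, -3.0]]
alphas, Gammas = faddeev(A)
```

Running this gives alphas = [0, 3, 4], matching the coefficients of s(s + 1)(s + 3), and the Γ matrices reproduce the adjugate entries found in Example 1.2.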
One is to make use of (1.23), which requires calculation of determinants in both the numerator and denominator, as done above. Here we present an iterative scheme to calculate (sI − A)⁻¹, which is called the Faddeev algorithm. Let the characteristic polynomial of the n × n matrix A be denoted by

det(sI − A) = a(s) = sⁿ + α_{n−1}s^{n−1} + ··· + α₁s + α₀   (1.24)

Then

(sI − A)⁻¹ = (Γ_{n−1}s^{n−1} + Γ_{n−2}s^{n−2} + ··· + Γ₁s + Γ₀)/a(s)   (1.25)

where α_{n−1}, α_{n−2}, ..., α₀ and Γ_{n−1}, Γ_{n−2}, ..., Γ₀ can be calculated in a recursive manner from:

Γ_{n−1} = I,  α_{n−1} = −tr(AΓ_{n−1})
Γ_{n−2} = AΓ_{n−1} + α_{n−1}I,  α_{n−2} = −tr(AΓ_{n−2})/2
Γ_{n−3} = AΓ_{n−2} + α_{n−2}I,  α_{n−3} = −tr(AΓ_{n−3})/3
···
Γ₀ = AΓ₁ + α₁I,  α₀ = −tr(AΓ₀)/n   (1.26a)
0 = AΓ₀ + α₀I   (1.26b)

where tr(X), the trace of X, is the sum of all the diagonal elements of the matrix X.

Example 1.3. Here we again compute (sI − A)⁻¹, which appeared in Example 1.2, but this time using the Faddeev algorithm (1.26):

Γ₂ = I,  α₂ = −tr(A) = 4
Γ₁ = A + 4I = [4 1 0; 0 3 −1; 0 0 1],  α₁ = −tr(AΓ₁)/2 = 3
Γ₀ = AΓ₁ + 3I = [3 3 −1; 0 0 0; 0 0 0],  α₀ = −tr(AΓ₀)/3 = 0

Hence from (1.24) and (1.25), we obtain

(sI − A)⁻¹ = 1/(s³ + 4s² + 3s) · [s²+4s+3 s+3 −1; 0 s²+3s −s; 0 0 s²+s]

The recursive form for {Γᵢ} in (1.26) can be derived as follows. Rearranging (1.25), we obtain

a(s)I = (sI − A)(Γ_{n−1}s^{n−1} + Γ_{n−2}s^{n−2} + ··· + Γ₁s + Γ₀)

and equating the coefficients with the same power of s yields the formula for {Γᵢ} in (1.26). Eliminating Γ_{n−1}, ..., Γ₀ from (1.26) iteratively, we have

Aⁿ + α_{n−1}A^{n−1} + ··· + α₁A + α₀I = 0   (1.27)

which also implies a(A) = 0. This well known result is the Cayley-Hamilton theorem, which will be frequently referred to later. It is easily shown from this theorem that, for any integer i ≥ 0, A^{n+i} can be expressed as a linear combination of I, A, ..., A^{n−1}.

1.2 STATE VARIABLE MODELLING

1.2.1 Linearization

The technique of linearizing nonlinear systems is extremely important since it provides in many cases a standard approach to analysis.
For instance, the method is commonly employed in process control, where a process is regulated around certain set points. The linearization technique is also used in designing a control system so as to keep the output along a nominal trajectory.

It is now assumed that a nonlinear control system is described by the differential equation:

ẋ(t) = f(x(t), u(t), t), x(t₀) = x₀   (1.28a)
y(t) = g(x(t), u(t), t)   (1.28b)

If the nominal control input is defined by u*(t) and the corresponding nominal state by x*(t), the state x*(t) satisfies

ẋ*(t) = f(x*(t), u*(t), t)
y*(t) = g(x*(t), u*(t), t)

We now consider the deviation of the state and the output from their nominal trajectory owing to the fact that the input deviates from u*(t). We define these deviations by

δu(t) = u(t) − u*(t)
δx(t) = x(t) − x*(t), δy(t) = y(t) − y*(t)

If these variations are assumed to be small, we can expand (1.28) in a Taylor series around the nominal value, as follows:

δẋᵢ(t) = Σ_{j=1}^{n} (∂fᵢ(x, u)/∂xⱼ)|_{x*,u*} δxⱼ(t) + Σ_{j=1}^{m} (∂fᵢ(x, u)/∂uⱼ)|_{x*,u*} δuⱼ(t) + O(δx, δu), for i = 1, 2, ..., n   (1.29a)

δyᵢ(t) = Σ_{j=1}^{n} (∂gᵢ(x, u)/∂xⱼ)|_{x*,u*} δxⱼ(t) + Σ_{j=1}^{m} (∂gᵢ(x, u)/∂uⱼ)|_{x*,u*} δuⱼ(t) + O(δx, δu), for i = 1, 2, ..., p   (1.29b)

where O(δx, δu) denotes higher order terms. By introducing the Jacobian matrices

A(t) = [∂fᵢ/∂xⱼ]|_{x*,u*} (n × n), B(t) = [∂fᵢ/∂uⱼ]|_{x*,u*} (n × m)
C(t) = [∂gᵢ/∂xⱼ]|_{x*,u*} (p × n), D(t) = [∂gᵢ/∂uⱼ]|_{x*,u*} (p × m)

we have the linearized system description

δẋ(t) = A(t)δx(t) + B(t)δu(t), for δx(t₀) = δx₀   (1.30a)
δy(t) = C(t)δx(t) + D(t)δu(t)   (1.30b)

This equation describes the variations around the nominal trajectory and has at least first-order accuracy.

If x*, u* and y* are chosen constant as the equilibrium or steady state such that they satisfy

0 = f(x*, u*)   (1.31a)
y* = g(x*, u*)   (1.31b)

the linearized equation corresponding to (1.30) becomes the time-invariant linear system

δẋ(t) = A δx(t) + B δu(t)   (1.32a)
δy(t) = C δx(t) + D δu(t)   (1.32b)

Example 1.4 (Dynamics of a stirred tank). Consider the stirred tank shown in Figure 1.5.
The tank is fed with an incoming flow with constant temperature θᵢ (°C) and flow rate qᵢ(t) (m³/s). We consider the dynamical behaviour of the temperature and the level of the tank, which are later chosen as the state variables. It is assumed that the tank is heated and stirred well, so that the temperature of the outgoing flow equals the temperature θ(t) in the tank. Other symbols are defined as follows: q₀(t) (m³/s) is the outgoing flow rate, v(t) (cal/s) the heat input, c (cal/m³ °C) the specific heat of the flow, h(t) (m) the level of the tank, and S (m²) the cross-sectional area of the tank, so that the volume, V(t), is equal to Sh(t).

From the mass balance equation, we have

S ḣ(t) = qᵢ(t) − q₀(t)   (1.33a)
q₀(t) = α√h(t)   (1.33b)

and from the heat balance equation

(d/dt)[cS h(t)θ(t)] = c θᵢ qᵢ(t) − c θ(t)q₀(t) + v(t)   (1.33c)

Figure 1.5 Stirred tank

We take qᵢ(t) and v(t) as the inputs, and the output y(t) to be

y(t) = (q₀(t); θ(t)) = (α√h(t); θ(t))   (1.34)

We consider now a steady-state situation where all quantities are constant: qᵢ* and q₀* for the flow rates, v* for the heating rate, h* for the level and θ* for the temperature. In this stationary state, the following relations hold:

0 = qᵢ* − q₀*, q₀* = α√h*   (1.35a)
0 = c θᵢ qᵢ* − c θ* q₀* + v*   (1.35b)

and the output is y* = (q₀*; θ*) = (α√h*; θ*).

We now assume only small deviations from the steady state, such that

qᵢ(t) = qᵢ* + δqᵢ(t), v(t) = v* + δv(t)
h(t) = h* + δh(t), θ(t) = θ* + δθ(t)

and expand (1.33a) to (1.33c) in Taylor series around the stationary values to obtain

S δḣ(t) = δqᵢ(t) − (α/(2√h*)) δh(t)
cS h* δθ̇(t) = c(θᵢ − θ*) δqᵢ(t) − c qᵢ* δθ(t) + δv(t)   (1.36)

Using the notation V* = Sh*, q₀* = α√h* = qᵢ* and τ* = V*/qᵢ* (the hold-up time of the tank), these linearized equations can be summarized in vector form. The state equation is

(δḣ; δθ̇) = [−1/(2τ*) 0; 0 −1/τ*](δh; δθ) + [1/S 0; (θᵢ − θ*)/V* 1/(cV*)](δqᵢ; δv)   (1.37a)

and the output equation is

(δq₀; δθ) = [qᵢ*/(2h*) 0; 0 1](δh; δθ)   (1.37b)

Example 1.5 (Model for stabilization of inverted pendulum).
We consider the inverted pendulum shown in Figure 1.6. The pivot of the pendulum is mounted on a carriage which can move in a horizontal direction. The pendulum can be kept balanced at a specified position by applying horizontal forces to drive the carriage.

Figure 1.6 Inverted pendulum

From inspection of Figure 1.6 we construct the differential equations describing the dynamics of the inverted pendulum and the carriage. The horizontal displacement of the pivot on the carriage is ξ(t), while the rotational angle of the pendulum is θ(t). The horizontal and vertical positions of the centre of gravity of the pendulum are given by ξ + l sin θ and l cos θ respectively. The equations of motion can be seen to be

m (d²/dt²)(ξ + l sin θ) = H
m (d²/dt²)(l cos θ) = V − mg
J θ̈ = V l sin θ − H l cos θ − C θ̇
M ξ̈ = u − H − F ξ̇

where the length of the pendulum is 2l, the moment of inertia with respect to the centre of gravity is J = ml²/3, and H and V are horizontal and vertical reaction forces, respectively, on the pivot.
C and F represent the friction coefficients for the rotary motion of the pendulum and the linear motion of the carriage, and u is the force driving the carriage. Eliminating H and V from these equations gives

(J + ml²)θ̈ + (ml cos θ)ξ̈ = −Cθ̇ + mgl sin θ
(M + m)ξ̈ + (ml cos θ)θ̈ = −Fξ̇ + (ml sin θ)θ̇² + u   (1.38)

Using sin θ ≈ θ and cos θ ≈ 1 for small θ, we can derive the linearized equations with respect to δθ about θ* = 0 from (1.38). Then replacing δθ by θ gives

(J + ml²)θ̈ + ml ξ̈ = −Cθ̇ + mgl θ
(M + m)ξ̈ + ml θ̈ = −Fξ̇ + u   (1.39)

Choosing the states x₁ = θ, x₂ = ξ, x₃ = θ̇ and x₄ = ξ̇, we obtain for the linear state space model of the inverted pendulum

(ẋ₁; ẋ₂; ẋ₃; ẋ₄) = [0 0 1 0; 0 0 0 1; (M+m)mgl/Δ 0 −(M+m)C/Δ mlF/Δ; −m²gl²/Δ 0 mlC/Δ −(J+ml²)F/Δ](x₁; x₂; x₃; x₄) + [0; 0; −ml/Δ; (J+ml²)/Δ]u   (1.40)

where Δ = (M + m)J + Mml².

1.2.2 Analogies in Physical Systems

An analogy can be formed between a mechanical system, an electric circuit, and other physical systems. These analogies are used in replacing elements of one type of system with analogous quantities of another type of system. Table 1.1 summarizes analogies of physical quantities between mechanical systems and electric circuits. For instance, Figure 1.7 shows a dynamic translation system and the corresponding electric circuit.

Let y₁(t) and y₂(t) be displacements from the equilibrium position which exist when the force f(t) is zero. Then, the dynamic equations for the motion of the two masses are

M₁ÿ₁ + D₁ẏ₁ + K₁y₁ + K₂(y₁ − y₂) = 0   (1.41a)
M₂ÿ₂ + D₂ẏ₂ + K₂(y₂ − y₁) = f(t)   (1.41b)

Next, we obtain equations for the electric circuit shown in Figure 1.7(b), which is analogous to the mechanical system. If we apply Kirchhoff's voltage law to loops (1) and (2) of the circuit, we obtain

L₁q̈₁ + R₁q̇₁ + q₁/C₁ + (q₁ − q₂)/C₂ = 0 for loop (1)   (1.42a)
L₂q̈₂ + R₂q̇₂ + (q₂ − q₁)/C₂ = e(t) for loop (2)   (1.42b)

where q₁ = ∫i₁ dt and q₂ = ∫i₂ dt.
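The force-voltage analogue can be checked numerically: substituting M_i → L_i, D_i → R_i, K_i → 1/C_i and f(t) → e(t) turns (1.41) into (1.42), so the displacements y_i(t) and the charges q_i(t) obey the same equations. A minimal sketch (forward Euler; the parameter values and the step forcing are illustrative assumptions):

```python
# Numerical check of the force-voltage analogue between (1.41) and (1.42).

def simulate_two_mass(M1, D1, K1, M2, D2, K2, force, dt, steps):
    """Forward-Euler integration of (1.41); returns the (y1, y2) history."""
    y1 = y2 = v1 = v2 = 0.0
    history = []
    for k in range(steps):
        a1 = (-D1 * v1 - K1 * y1 - K2 * (y1 - y2)) / M1
        a2 = (-D2 * v2 - K2 * (y2 - y1) + force(k * dt)) / M2
        y1, y2 = y1 + dt * v1, y2 + dt * v2
        v1, v2 = v1 + dt * a1, v2 + dt * a2
        history.append((y1, y2))
    return history

# Mechanical system: M1=2, D1=0.5, K1=3, M2=1, D2=0.4, K2=2, f(t)=1.
mech = simulate_two_mass(2.0, 0.5, 3.0, 1.0, 0.4, 2.0, lambda t: 1.0, 1e-3, 4000)
# Analogous circuit: L1=2, R1=0.5, 1/C1=3, L2=1, R2=0.4, 1/C2=2, e(t)=1;
# the same routine then integrates (1.42) with q_i in place of y_i.
elec = simulate_two_mass(2.0, 0.5, 3.0, 1.0, 0.4, 2.0, lambda t: 1.0, 1e-3, 4000)
```

Because the mapped parameters agree exactly, the two trajectories agree to the last bit; for the constant forcing used here, y₁ tends toward the static value f/K₁ = 1/3.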
Thus we see that the mechanical system and electric circuit yield similar differential equations, with the analogous quantities shown in columns one and three of Table 1.1. This analogue is often referred to as the force-voltage analogue, and that between columns one and four as the force-current analogue (see Problem P1.4).

Table 1.1 Analogy of mechanical systems and electric circuits

Mechanical translation system | Mechanical rotational system | Electric circuit (a) | Electric circuit (b)
Force f | Torque T | Voltage v | Current i
Velocity v | Angular velocity θ̇ | Current i | Voltage v
Distance y | Angle θ | Electric charge q (= ∫i dt) | Magnetic flux φ (= ∫v dt)
Mass M (f = M dv/dt) | Moment of inertia J (T = J dθ̇/dt) | Inductance L (v = L di/dt) | Capacitance C (i = C dv/dt)
Damping constant D (f = Dv) | Damping constant D (T = Dθ̇) | Resistance R (v = Ri) | Conductance G (i = Gv)
Stiffness K (f = K∫v dt) | Stiffness K (T = K∫θ̇ dt) | Capacitance C (v = (1/C)∫i dt) | Inductance L (i = (1/L)∫v dt)

Figure 1.7 Mechanical system and electric circuit

1.2.3 Transfer Function and State Variable Description

We now consider the relationship between the transfer function and a state variable description by analysing a physical system.

Example 1.6 (Position control of d.c. motor). We consider the position control of an armature-controlled d.c. motor, shown in Figure 1.8, where the angular displacement of the motor shaft θ can be controlled by changing the armature voltage u. The torque T_M generated at the motor shaft depends on the magnetic field and the armature current i_a. In the absence of saturation, T_M is linearly proportional to i_a, that is

T_M = K_T i_a   (1.43)

where K_T is the torque constant (N m/A).
The generated torque T_M drives the load and overcomes the viscous friction, so that the equation of motion is

J_t θ̈ + D_t θ̇ = T_M   (1.44)

where J_t is the total moment of inertia of the motor shaft, including the inertia of the load, rotor and shaft, and D_t is the viscous friction constant. For the armature circuit we have

L_a (di_a/dt) + R_a i_a + V_b = u   (1.45)

where V_b is the back e.m.f. of the armature circuit, which is linearly proportional to the angular velocity of the motor, and is given by

V_b = K_v θ̇   (1.46)

where K_v (V/rad/s) is the e.m.f. constant.

First, we give the transfer function representing the input-output behaviour of the d.c. motor. Taking the Laplace transforms of (1.43) to (1.46), assuming zero initial conditions, and eliminating i_a, V_b and T_M, we obtain the transfer function from u to θ as

θ(s)/U(s) = H(s) = K_T / (s[(L_a s + R_a)(J_t s + D_t) + K_T K_v])   (1.47)

Figure 1.8 D.c. motor schematic

Next, the state variable description of the motor is investigated. Let us choose the state variables as

x₁ = θ, x₂ = θ̇, x₃ = θ̈   (1.48)

Again eliminating i_a, V_b and T_M from (1.43) through (1.46) leads to the differential equation

θ⃛ + (R_a/L_a + D_t/J_t)θ̈ + ((D_t R_a + K_T K_v)/(J_t L_a))θ̇ = (K_T/(J_t L_a))u   (1.49)

It is noted from the choice of the state variables in (1.48) that

ẋ₁ = x₂, ẋ₂ = x₃   (1.50)

so that we obtain the state space description

(ẋ₁; ẋ₂; ẋ₃) = [0 1 0; 0 0 1; 0 −(D_t R_a + K_T K_v)/(J_t L_a) −(R_a/L_a + D_t/J_t)](x₁; x₂; x₃) + [0; 0; K_T/(J_t L_a)]u   (1.51a)
y = (1 0 0)x   (1.51b)

As mentioned before, the choice of the state variables is not unique; therefore we can obtain other state variable descriptions. If, alternatively, the states are taken as

x̄₁ = θ, x̄₂ = θ̇, x̄₃ = i_a   (1.52)

it is easy to see from (1.43) through (1.46) and (1.52) that

L_a (dx̄₃/dt) + R_a x̄₃ + K_v x̄₂ = u
J_t (dx̄₂/dt) + D_t x̄₂ = K_T x̄₃

and then the state variable description has the different form:

(dx̄₁/dt; dx̄₂/dt; dx̄₃/dt) = [0 1 0; 0 −D_t/J_t K_T/J_t; 0 −K_v/L_a −R_a/L_a](x̄₁; x̄₂; x̄₃) + [0; 0; 1/L_a]u   (1.53a)
y = (1 0 0)x̄   (1.53b)

Calculating the transfer function for (1.53) using (1.22), we find, as expected, that the two state variable descriptions (1.51) and (1.53) yield the same transfer function (1.47). The following relationships
exist between the states x and x̄:

x₁ = x̄₁, x₂ = x̄₂, K_T x̄₃ = J_t x₃ + D_t x₂

or equivalently in vector form

x = (x₁; x₂; x₃) = [1 0 0; 0 1 0; 0 −D_t/J_t K_T/J_t](x̄₁; x̄₂; x̄₃) = T x̄   (1.54)

This indicates the non-uniqueness of a state variable description of a given system, and shows that an alternative set of state variables can be obtained by any non-singular transformation T (det T ≠ 0).

1.2.4 State Variable Description of an RLC Network

We explain a standard procedure for deriving the state equations of linear RLC networks through a typical example.

Example 1.7. Consider the linear electric circuit shown in Figure 1.9. If we know the initial conditions at time t₀, such as the magnetic flux φ(t₀) of the inductor and the charges q₁(t₀) and q₂(t₀) of the capacitors, and also the source voltage e_s(t) and the source current i_s(t) for time t ≥ t₀, then the flux φ(t) and the charges q₁(t) and q₂(t) are uniquely determined. This implies that the fluxes and charges can be taken as the state variables, which can uniquely specify the behaviour of the RLC circuit. Assigning the inductor current i_L(t) and the capacitor voltages v₁(t) and v₂(t) as the state variables, since they satisfy the relationships φ(t) = L i_L(t) and q(t) = C v_C(t), the procedure is as follows:

(1) Choose the inductor current i_L(t) and the capacitor voltages v₁(t) and v₂(t) as the states; then

L (di_L/dt) = v_L(t), C₁ (dv₁/dt) = i_C1(t), C₂ (dv₂/dt) = i_C2(t)   (1.55)

(2) Describe v_L(t), i_C1(t) and i_C2(t) on the RHS of (1.55) in terms of the states i_L(t), v₁(t) and v₂(t) and the inputs e_s(t) and i_s(t); for this purpose, modify Figure 1.9 into Figure 1.10, which shows the equivalent circuit comprising only resistors and sources.

Figure 1.9 RLC circuit

Figure 1.10 Equivalent circuit for use with Figure 1.9

Now apply Kirchhoff's voltage and current laws to the circuit of Figure 1.10.
The application of Kirchhoff's current law to the nodes a and b, together with Kirchhoff's voltage law applied to the closed path, yields three linear equations. Solving these three equations with respect to v_L(t), i_C1(t) and i_C2(t) expresses each of them as a linear combination of the states i_L(t), v₁(t) and v₂(t) and of the inputs e_s(t) and i_s(t); we refer to these three relations as (1.56). Finally, substituting (1.56) into (1.55) and rearranging the equations into matrix form gives the state equation

(d/dt)(i_L; v₁; v₂) = A (i_L; v₁; v₂) + B (e_s; i_s)   (1.57a)

and, with the indicated voltage taken as the output, the output equation

y(t) = C (i_L; v₁; v₂) + D (e_s; i_s)   (1.57b)

where the elements of A, B, C and D are fixed by the resistances of the circuit and by L, C₁ and C₂.

1.3 SOLUTION OF THE LINEAR STATE EQUATION

In this section we investigate the solution of the linear state equation

ẋ(t) = A(t)x(t) + B(t)u(t), for x(t₀) = x₀   (1.58)

where x(t) is an n-state vector, u(t) an m-input vector, A(t) and B(t) are (n × n) and (n × m) matrices respectively, and each element is continuous with respect to t. We also discuss the time-invariant linear state equation

ẋ(t) = Ax(t) + Bu(t), for x(t₀) = x₀   (1.59)

1.3.1 Homogeneous Linear Equation

We first consider the solution of the homogeneous equation without any forcing terms, that is

ẋ(t) = A(t)x(t)   (1.60)

Let φ₁(t, t₀), φ₂(t, t₀), ..., φₙ(t, t₀) be the set of solutions of (1.60) associated with the corresponding initial conditions

x₁(t₀) = e₁ = (1; 0; ...; 0), x₂(t₀) = e₂ = (0; 1; ...; 0), ..., xₙ(t₀) = eₙ = (0; 0; ...; 1)   (1.61)

Combining these solutions, we define the n × n matrix

Φ(t, t₀) = (φ₁(t, t₀), φ₂(t, t₀), ..., φₙ(t, t₀))   (1.62)

which is called the transition matrix.

Properties of the transition matrix

(a) Φ(t, t₀) is the unique solution of the matrix differential equation, for all t and t₀:

(∂/∂t)Φ(t, t₀) = A(t)Φ(t, t₀)   (1.63)
Φ(t₀, t₀) = I   (1.64)

Proof. (1.63) and (1.64) are equivalent to the definition of the transition matrix.
The uniqueness of the solution is assured by the continuity of A(t).

(b) The homogeneous equation (1.60) with the initial condition x(t₀) = x₀ has the solution

x(t) = Φ(t, t₀)x₀ for all t   (1.65)

Proof. Substituting (1.65) into (1.60), we have

ẋ(t) = (∂/∂t)Φ(t, t₀)x₀ = A(t)Φ(t, t₀)x₀ = A(t)x(t)

and x(t₀) = Φ(t₀, t₀)x₀ = x₀.

(c) The transition matrix Φ(t, t₀) is non-singular.

Proof. Suppose that Φ(t₁, t₀) is singular for some t₁. Then there exists a non-zero vector e (≠ 0) such that Φ(t₁, t₀)e = 0. The vector defined by ψ(t) = Φ(t, t₀)e is shown from (1.63) to satisfy ψ̇(t) = A(t)ψ(t) and ψ(t₁) = 0; hence it follows that ψ(t) = 0 for all t, i.e. Φ(t, t₀)e = 0 for all t. Since e ≠ 0, Φ(t, t₀) is singular for all t. This contradicts the non-singularity of Φ(t₀, t₀) = I. Hence Φ(t, t₀) is non-singular for all t.

(d) For all t₀, t₁ and t₂,

Φ(t₂, t₀) = Φ(t₂, t₁)Φ(t₁, t₀)   (1.66)

Proof. Let the state variables at times t₀, t₁ and t₂ be denoted by x(t₀), x(t₁) and x(t₂); then

x(t₁) = Φ(t₁, t₀)x(t₀) (x(t₀) → x(t₁))
x(t₂) = Φ(t₂, t₁)x(t₁) (x(t₁) → x(t₂))
x(t₂) = Φ(t₂, t₀)x(t₀) (x(t₀) → x(t₂))

Hence we have

x(t₂) = Φ(t₂, t₁)x(t₁) = Φ(t₂, t₁)Φ(t₁, t₀)x(t₀)

Then from the uniqueness of the solution of (1.63), (1.66) can be established.

(e) For all t and t₀,

Φ⁻¹(t, t₀) = Φ(t₀, t)

Proof. It is seen from (1.66) and (1.64) that

Φ(t₀, t₀) = Φ(t₀, t)Φ(t, t₀) = I

Hence for all t₀ and t, Φ(t₀, t) = Φ⁻¹(t, t₀).

(f) The transition matrix Φ(t, t₀) can be expressed in the series

Φ(t, t₀) = I + ∫_{t₀}^{t} A(τ₁) dτ₁ + ∫_{t₀}^{t} A(τ₁) ∫_{t₀}^{τ₁} A(τ₂) dτ₂ dτ₁ + ···   (1.67)

Proof. By substitution, it is easily verified that (1.67) is the solution of (1.63) with (1.64).

In the case where the matrix A(t) is constant and equal to A, (1.67) becomes
Φ(t, t₀) = I + A(t − t₀) + A²(t − t₀)²/2! + ···   (1.68)

Hence the transition matrix depends only on (t − t₀), and we can write

Φ(t, t₀) = Φ(t − t₀, 0)   (1.69)

The series of (1.68) converges uniformly and absolutely on any finite interval, and extending the notation of the scalar exponential function to matrices we have

e^{At} = I + At + A²t²/2! + ··· = Σ_{k=0}^{∞} Aᵏtᵏ/k!   (1.70)

so that the transition matrix of the time-invariant linear system can be written

Φ(t) = e^{At}   (1.71)

Clearly this satisfies

Φ̇(t) = AΦ(t), Φ(0) = I   (1.72)

The series (1.70) is frequently utilized to numerically compute the transition matrix.

1.3.2 Solution of the Inhomogeneous Linear Equation

The general solution of the inhomogeneous equation (1.58) is shown, via an analogy with a scalar linear differential equation, to consist of the sum of two parts: the first one being a particular solution of (1.58), which we now derive, and the second one being the general solution (1.65) of the homogeneous equation (1.60).
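As remarked below (1.70), the truncated series gives a simple numerical scheme for the transition matrix. A sketch follows; the stopping tolerance and the 2 × 2 test matrix (chosen to have eigenvalues −1 and −2, so that its exponential is known in closed form) are illustrative assumptions, not from the text.

```python
# Truncated-series evaluation of the transition matrix, following (1.70):
#   e^{At} = I + At + (At)^2/2! + ...

def expm_series(A, t, tol=1e-12, max_terms=200):
    """Sum the series (1.70) until the current term is negligible."""
    n = len(A)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    Phi = [[float(i == j) for j in range(n)] for i in range(n)]   # k = 0 term
    term = Phi
    for k in range(1, max_terms):
        # term becomes (At)^k / k!
        term = [[v * t / k for v in row] for row in matmul(term, A)]
        Phi = [[p + v for p, v in zip(pr, tr)] for pr, tr in zip(Phi, term)]
        if max(abs(v) for row in term for v in row) < tol:
            break
    return Phi

# Test matrix with eigenvalues -1 and -2; the (1,1) entry of its
# exponential is 2e^{-t} - e^{-2t}.
A = [[0.0, 1.0], [-2.0, -3.0]]
Phi = expm_series(A, 1.0)
```

For large ‖At‖ the raw series suffers from cancellation, which is why production routines scale and square, or use a Padé approximation, rather than summing (1.70) directly.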
The second term in (1.73) expresses the solution of (1.58) for the case of the zero initial state,

$$\psi(t; 0, t_0, u_{[t_0,t]}) = \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau \tag{1.75}$$

and is the effect of the input $u_{[t_0,t]}$ on the state at time $t$. The linear system has the property that the response of the state comprises the zero-input response and the zero-state response. $\psi(\cdot)$ satisfies the following properties:

(a) Decomposition property: for any $t$, $t_0$, $x_0$ and $u$,

$$\psi(t; x_0, t_0, u_{[t_0,t]}) = \psi(t; x_0, t_0, 0) + \psi(t; 0, t_0, u_{[t_0,t]}).$$

(b) Linearity of the zero-state response: for any $t_0$, $t$, $u^1$, $u^2$, $c_1$, $c_2$,

$$\psi(t; 0, t_0, c_1u^1_{[t_0,t]} + c_2u^2_{[t_0,t]}) = c_1\psi(t; 0, t_0, u^1_{[t_0,t]}) + c_2\psi(t; 0, t_0, u^2_{[t_0,t]}).$$

(c) Linearity of the zero-input response: for any $t_0$, $t$, $x^1$, $x^2$, $c_1$, $c_2$,

$$\psi(t; c_1x^1 + c_2x^2, t_0, 0) = c_1\psi(t; x^1, t_0, 0) + c_2\psi(t; x^2, t_0, 0).$$

1.3.3 Calculation of the Transition Matrix

We now present several procedures for calculating the transition matrix $e^{At}$ for a time-invariant linear system.

(a) Method via Laplace transform

Application of the Laplace transform $\mathcal{L}\{\cdot\}$ to (1.72) yields

$$s\mathcal{L}\{\Phi(t)\} - \Phi(0) = A\mathcal{L}\{\Phi(t)\}$$

hence

$$e^{At} = \mathcal{L}^{-1}\{(sI - A)^{-1}\}. \tag{1.76}$$

A recursive scheme for computing $(sI - A)^{-1}$ was given in Section 1.1.

Example 1.8
Consider the linear system

$$\dot{x} = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}x.$$

The solution is given as follows:

$$(sI - A)^{-1} = \begin{pmatrix} s & -1 \\ 2 & s+3 \end{pmatrix}^{-1} = \frac{1}{(s+1)(s+2)}\begin{pmatrix} s+3 & 1 \\ -2 & s \end{pmatrix}$$

$$e^{At} = \mathcal{L}^{-1}\{(sI - A)^{-1}\} = \begin{pmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ -2e^{-t} + 2e^{-2t} & -e^{-t} + 2e^{-2t} \end{pmatrix}$$

$$x(t) = e^{At}x(0).$$

(b) Method via diagonalization

Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the distinct eigenvalues of $A$ and let $v_i$ be an eigenvector of $A$ associated with $\lambda_i$ for $i = 1, 2, \ldots, n$; then

$$Av_i = \lambda_i v_i. \tag{1.77}$$

The necessary and sufficient condition for (1.77) to have the non-zero solutions $v_i \neq 0$ for $i = 1, \ldots, n$ is that $\lambda_i$ is a root of the characteristic equation

$$\det(A - \lambda I) = 0. \tag{1.78}$$

The eigenvalues $\lambda_i$ for $i = 1, \ldots, n$ are the solutions of (1.78), and are assumed to be all distinct, in which case the eigenvectors $v_1, \ldots, v_n$ are linearly independent. This is proved by contradiction.
Suppose that $v_1, \ldots, v_n$ are linearly dependent; then there exists at least one non-zero $c_i \neq 0$ such that

$$c_1v_1 + c_2v_2 + \cdots + c_nv_n = 0.$$

For instance, if $c_1 \neq 0$, we obtain from the above equation

$$(A - \lambda_2 I)(A - \lambda_3 I)\cdots(A - \lambda_n I)(c_1v_1 + c_2v_2 + \cdots + c_nv_n) = 0. \tag{1.79}$$

Since it follows clearly from (1.77) that $(A - \lambda_j I)v_i = (\lambda_i - \lambda_j)v_i$, (1.79) can be reduced to

$$c_1(\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)\cdots(\lambda_1 - \lambda_n)v_1 = 0.$$

By the assumption that $\lambda_1, \ldots, \lambda_n$ are all distinct, we have $c_1 = 0$. This is a contradiction.

Therefore, since the eigenvectors are linearly independent, the matrix $T$ defined by

$$T = (v_1, v_2, \ldots, v_n) \tag{1.80}$$

is non-singular. Combination of (1.77) and (1.80) gives

$$AT = (\lambda_1 v_1, \lambda_2 v_2, \ldots, \lambda_n v_n) = T\begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix} = T\Lambda \tag{1.81}$$

where $\Lambda$ is a diagonal matrix with the eigenvalues as the diagonal elements. Hence we have

$$T^{-1}AT = \Lambda. \tag{1.82}$$

We call the above operation diagonalization of the matrix $A$.

The transition matrix $e^{At}$ can be calculated by use of the diagonalized matrix $\Lambda$ of $A$, as

$$e^{At} = Te^{\Lambda t}T^{-1}. \tag{1.83}$$

This can easily be verified by use of (1.70) as follows:

$$Te^{\Lambda t}T^{-1} = T\left[I + \Lambda t + \frac{\Lambda^2}{2!}t^2 + \cdots\right]T^{-1} = I + T\Lambda T^{-1}t + \frac{(T\Lambda T^{-1})^2}{2!}t^2 + \cdots = I + At + \frac{A^2}{2!}t^2 + \cdots = e^{At}. \tag{1.84}$$

Example 1.9
Calculate $e^{At}$ for

$$A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & -11 & -6 \end{pmatrix}.$$

From (1.78), the characteristic equation is $(\lambda + 1)(\lambda + 2)(\lambda + 3) = 0$, so $\lambda_1 = -1$, $\lambda_2 = -2$ and $\lambda_3 = -3$. The matrix $T$ associated with the eigenvalues is

$$T = \begin{pmatrix} 1 & 1 & 1 \\ -1 & -2 & -3 \\ 1 & 4 & 9 \end{pmatrix}, \qquad T^{-1} = \begin{pmatrix} 3 & 5/2 & 1/2 \\ -3 & -4 & -1 \\ 1 & 3/2 & 1/2 \end{pmatrix}$$

hence

$$\Lambda = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{pmatrix}.$$

From (1.83), we have

$$e^{At} = T\begin{pmatrix} e^{-t} & 0 & 0 \\ 0 & e^{-2t} & 0 \\ 0 & 0 & e^{-3t} \end{pmatrix}T^{-1} = \begin{pmatrix} 3e^{-t} - 3e^{-2t} + e^{-3t} & \tfrac{5}{2}e^{-t} - 4e^{-2t} + \tfrac{3}{2}e^{-3t} & \tfrac{1}{2}e^{-t} - e^{-2t} + \tfrac{1}{2}e^{-3t} \\ -3e^{-t} + 6e^{-2t} - 3e^{-3t} & -\tfrac{5}{2}e^{-t} + 8e^{-2t} - \tfrac{9}{2}e^{-3t} & -\tfrac{1}{2}e^{-t} + 2e^{-2t} - \tfrac{3}{2}e^{-3t} \\ 3e^{-t} - 12e^{-2t} + 9e^{-3t} & \tfrac{5}{2}e^{-t} - 16e^{-2t} + \tfrac{27}{2}e^{-3t} & \tfrac{1}{2}e^{-t} - 4e^{-2t} + \tfrac{9}{2}e^{-3t} \end{pmatrix}.$$

If the matrix $A$ has repeated eigenvalues, it is not always possible to find a diagonalized matrix like (1.81).
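Before turning to that case, the two computational routes seen so far — the truncated series (1.70) and the diagonalization formula (1.83) — are easy to check numerically. The sketch below uses a companion matrix with eigenvalues $-1, -2, -3$, matching Example 1.9, and SciPy's Padé-based `expm` as a reference:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])   # eigenvalues -1, -2, -3
t = 0.5

# Truncated series (1.70): I + At + (At)^2/2! + ...
S, term = np.eye(3), np.eye(3)
for k in range(1, 41):
    term = term @ (A * t) / k
    S = S + term

# Diagonalization (1.83): e^{At} = T e^{Lambda t} T^{-1}
lam, T = np.linalg.eig(A)             # columns of T are eigenvectors
D = T @ np.diag(np.exp(lam * t)) @ np.linalg.inv(T)

ref = expm(A * t)
assert np.allclose(S, ref) and np.allclose(D.real, ref)
```

Both routes agree with `expm` to machine precision here; in practice the series is only attractive for small $\|At\|$, while diagonalization requires distinct (or at least non-defective) eigenvalues.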
In this case the matrix $A$ takes the Jordan block form, which is discussed further in Section 1.5.3.

(c) Sylvester expansion theorem

Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the distinct eigenvalues of $A$, and let $f(A)$ be a matrix polynomial in $A$. Then the Sylvester theorem states that $f(A)$ can be expressed as

$$f(A) = \sum_{i=1}^{n} f(\lambda_i)\,\frac{(A - \lambda_1 I)\cdots(A - \lambda_{i-1}I)(A - \lambda_{i+1}I)\cdots(A - \lambda_n I)}{(\lambda_i - \lambda_1)\cdots(\lambda_i - \lambda_{i-1})(\lambda_i - \lambda_{i+1})\cdots(\lambda_i - \lambda_n)}. \tag{1.85}$$

The case of $f(A) = e^{At}$ corresponds to the calculation of the transition matrix. We use an example to illustrate the procedure.

Example 1.10
Calculate $f(A) = e^{At}$ with

$$A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}.$$

From (1.78), the characteristic equation is $(\lambda + 1)(\lambda + 2) = 0$, thus $\lambda_1 = -1$ and $\lambda_2 = -2$. Hence $f(\lambda_1) = e^{-t}$, $f(\lambda_2) = e^{-2t}$, and thus from (1.85) we have

$$e^{At} = f(\lambda_1)\frac{A - \lambda_2 I}{\lambda_1 - \lambda_2} + f(\lambda_2)\frac{A - \lambda_1 I}{\lambda_2 - \lambda_1} = e^{-t}\begin{pmatrix} 2 & 1 \\ -2 & -1 \end{pmatrix} + e^{-2t}\begin{pmatrix} -1 & -1 \\ 2 & 2 \end{pmatrix}.$$

(d) Use of the Cayley–Hamilton theorem

We have already seen from the Cayley–Hamilton theorem that for any integer $k \geq 0$ an $n \times n$ matrix $A^{n+k}$ can be expressed as a linear combination of $I, A, \ldots, A^{n-1}$. Thus $e^{At}$ can be written

$$e^{At} = \alpha_0 I + \alpha_1 A + \alpha_2 A^2 + \cdots + \alpha_{n-1}A^{n-1}.$$

In addition, if $A$ has distinct eigenvalues they must satisfy the same equation, that is

$$e^{\lambda_i t} = \alpha_0 + \alpha_1\lambda_i + \alpha_2\lambda_i^2 + \cdots + \alpha_{n-1}\lambda_i^{n-1}$$
for $i = 1, 2, \ldots, n$. These $n$ equations then enable the unknown coefficients $\alpha_0, \alpha_1, \ldots, \alpha_{n-1}$, which are required to evaluate $e^{At}$, to be found.

1.4 EQUIVALENT SYSTEMS

1.4.1 Input–Output Relation

We consider the linear system

$$\dot{x}(t) = A(t)x(t) + B(t)u(t) \tag{1.86a}$$
$$y(t) = C(t)x(t) + D(t)u(t) \tag{1.86b}$$

where $x(t)$, $u(t)$ and $y(t)$ are an $n$-state vector, an $m$-input vector and a $p$-output vector respectively; $A(t)$, $B(t)$, $C(t)$ and $D(t)$ are an $n \times n$, $n \times m$, $p \times n$ and $p \times m$ matrix respectively; and all of them are assumed to be continuous in $t$. The general solution of the state equation (1.86a) is known to be

$$x(t) = \psi(t; x_0, t_0, u_{[t_0,t]}) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau. \tag{1.87}$$

Then, it follows from (1.86) that the output $y(t)$ is

$$y(t) = \eta(t; x_0, t_0, u_{[t_0,t]})$$
$$= C(t)\Phi(t, t_0)x_0 + C(t)\int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau + D(t)u(t) \tag{1.88}$$
$$= \eta(t; x_0, t_0, 0) + \eta(t; 0, t_0, u_{[t_0,t]}). \tag{1.89}$$

Thus the output $y(t)$ can be decomposed into the zero-input response and the zero-state response, as indicated by (1.89). The first term in (1.88), that is the zero-input response, represents the effect of the initial state $x_0$ on the output response $y(t)$. The second and third terms in (1.88), that is the zero-state response, express the output response in the case of a zero initial state, and can be written as

$$\eta(t; 0, t_0, u_{[t_0,t]}) = \int_{t_0}^{t} \{C(t)\Phi(t, \tau)B(\tau) + D(t)\delta^+(t - \tau)\}u(\tau)\,d\tau \tag{1.90}$$

where $\delta^+(\cdot)$ is the delta function defined by

$$\delta^+(t) = \lim_{\Delta \to 0} \delta_\Delta(t), \qquad \delta_\Delta(t) = \begin{cases} 1/\Delta & \text{for } 0 \leq t \leq \Delta \\ 0 & \text{otherwise.} \end{cases}$$

If $H(t, \tau)$ is denoted by

$$H(t, \tau) = C(t)\Phi(t, \tau)B(\tau) + D(t)\delta^+(t - \tau) \qquad \text{for } t \geq \tau \tag{1.91}$$

the output $y(t)$ for the zero initial state $x_0 = 0$ can be described by

$$y(t) = \eta(t; 0, t_0, u_{[t_0,t]}) = \int_{t_0}^{t} H(t, \tau)u(\tau)\,d\tau. \tag{1.92}$$

Hence, it can be seen from (1.4) that $H(t, \tau)$ is the impulse response. We also remark that, if the state variable description $[A(t), B(t), C(t), D(t)]$ is given, the impulse response function is uniquely determined as (1.91).
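For constant matrices with $D = 0$, the impulse response (1.91) specializes to $H(t) = Ce^{At}B$. A minimal numerical sketch, with illustrative matrices, evaluated directly and cross-checked against SciPy's `impulse`:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import impulse

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

ts = np.linspace(0.0, 5.0, 200)
# Direct evaluation of H(t) = C e^{At} B from (1.91) with D = 0.
h_direct = np.array([(C @ expm(A * t) @ B).item() for t in ts])
# Reference: SciPy's state-space impulse response.
_, h_scipy = impulse((A, B, C, np.zeros((1, 1))), T=ts)
assert np.allclose(h_direct, h_scipy, rtol=1e-4, atol=1e-6)
```

The agreement confirms that the impulse response is fixed once $(A, B, C, D)$ is fixed, as remarked above.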
In particular, for the time-invariant linear dynamical system $(A, B, C, D)$

$$\dot{x}(t) = Ax(t) + Bu(t) \tag{1.93a}$$
$$y(t) = Cx(t) + Du(t) \tag{1.93b}$$

the transition matrix $\Phi(t, \tau)$ is given by $\Phi(t - \tau) = e^{A(t-\tau)}$ and the matrix impulse response is

$$H(t, \tau) = Ce^{A(t-\tau)}B + D\delta^+(t - \tau) \qquad \text{for } t \geq \tau.$$

Since $H(t, \tau)$ is a function of only $(t - \tau)$, we can write it as

$$H(t) = Ce^{At}B + D\delta^+(t) \qquad \text{for } t \geq 0. \tag{1.94}$$

By taking the Laplace transform of (1.94), we obtain the transfer function $H(s)$ as

$$H(s) = C(sI - A)^{-1}B + D \tag{1.95}$$

which is the same as (1.22) obtained directly from the Laplace transform of (1.6).

1.4.2 Equivalent Systems

We consider a transformation of the state vector of a linear dynamical system by use of the non-singular matrix $T(t)$, that is

$$\bar{x}(t) = T^{-1}(t)x(t) \quad \text{or} \quad x(t) = T(t)\bar{x}(t) \tag{1.96}$$

where $T(t)$ is non-singular for all $t$ and continuously differentiable. The new state $\bar{x}(t)$ obtained through the transformation (1.96) satisfies

$$\dot{\bar{x}}(t) = \bar{A}(t)\bar{x}(t) + \bar{B}(t)u(t) \tag{1.97a}$$
$$y(t) = \bar{C}(t)\bar{x}(t) + \bar{D}(t)u(t) \tag{1.97b}$$

where

$$\bar{A}(t) = T^{-1}(t)[A(t)T(t) - \dot{T}(t)] \tag{1.98a}$$
$$\bar{B}(t) = T^{-1}(t)B(t) \tag{1.98b}$$
$$\bar{C}(t) = C(t)T(t) \tag{1.98c}$$
$$\bar{D}(t) = D(t). \tag{1.98d}$$

We show below in Theorem 1.2 that the input–output relationship is not changed by the transformation. Thus, if the initial conditions also satisfy (1.96), that is $x(t_0) = T(t_0)\bar{x}(t_0)$, then both linear systems have identical output responses for the same input. In this case the linear dynamical system given by (1.97) is said to be equivalent to the linear system of (1.86), and $T(t)$ is called an equivalence transformation.

Similarly, dynamical systems which are equivalent to the time-invariant linear dynamical system in (1.93) can be derived through the non-singular transformation matrix $T$ given by

$$\bar{x}(t) = T^{-1}x(t) \quad \text{or} \quad x(t) = T\bar{x}(t) \tag{1.99}$$

so that

$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{B}u(t) \tag{1.100a}$$
$$y(t) = \bar{C}\bar{x}(t) + \bar{D}u(t) \tag{1.100b}$$

where

$$\bar{A} = T^{-1}AT, \qquad \bar{B} = T^{-1}B, \qquad \bar{C} = CT, \qquad \bar{D} = D. \tag{1.101}$$

Hence there exist an infinite number of equivalent systems, since the transformation matrix can be arbitrarily chosen.

Theorem 1.2
Equivalent linear systems have identical matrix impulse responses.
Proof: The state variables $x(t)$ and $\bar{x}(t)$ for the linear dynamical systems described by (1.86) and (1.97), respectively, are given by

$$x(t) = \Phi(t, t_0)x(t_0) + \int_{t_0}^{t} \Phi(t, \tau)B(\tau)u(\tau)\,d\tau \tag{1.102}$$
$$\bar{x}(t) = \bar{\Phi}(t, t_0)\bar{x}(t_0) + \int_{t_0}^{t} \bar{\Phi}(t, \tau)\bar{B}(\tau)u(\tau)\,d\tau \tag{1.103}$$

where $\Phi(t, \tau)$ and $\bar{\Phi}(t, \tau)$ are the transition matrices corresponding to $A(t)$ and $\bar{A}(t)$ respectively. Substituting (1.96) into (1.102) and multiplying by $T^{-1}(t)$, we have

$$\bar{x}(t) = T^{-1}(t)\Phi(t, t_0)T(t_0)\bar{x}(t_0) + \int_{t_0}^{t} T^{-1}(t)\Phi(t, \tau)T(\tau)T^{-1}(\tau)B(\tau)u(\tau)\,d\tau. \tag{1.104}$$

Then, it follows from (1.103) and (1.98) that

$$\bar{\Phi}(t, \tau) = T^{-1}(t)\Phi(t, \tau)T(\tau). \tag{1.105}$$

Using (1.105) and (1.98) in (1.91), it is seen that the impulse response $\bar{H}(t, \tau)$ for the dynamical system (1.97) is

$$\bar{H}(t, \tau) = \bar{C}(t)\bar{\Phi}(t, \tau)\bar{B}(\tau) + \bar{D}(t)\delta^+(t - \tau)$$
$$= C(t)T(t)T^{-1}(t)\Phi(t, \tau)T(\tau)T^{-1}(\tau)B(\tau) + D(t)\delta^+(t - \tau)$$
$$= C(t)\Phi(t, \tau)B(\tau) + D(t)\delta^+(t - \tau)$$
$$= H(t, \tau). \tag{1.106}$$

Thus the theorem is established.

Corollary 1.1
Linear time-invariant dynamical systems which are equivalent have identical transfer functions.

Proof: The transfer function associated with the linear system (1.100) is

$$\bar{H}(s) = \bar{C}(sI - \bar{A})^{-1}\bar{B} + \bar{D} = CT(sT^{-1}T - T^{-1}AT)^{-1}T^{-1}B + D$$
$$= CT\,T^{-1}(sI - A)^{-1}T\,T^{-1}B + D = C(sI - A)^{-1}B + D = H(s).$$

Hence all equivalent linear systems have identical transfer functions.

1.5 REALIZATION OF SINGLE-INPUT AND SINGLE-OUTPUT LINEAR SYSTEMS

We consider, initially, a linear multi-input multi-output (MIMO) system with transfer function matrix $H(s)$. If there exists a linear finite-dimensional dynamical system $(A, B, C, D)$ that has the specified $H(s)$, the transfer function matrix $H(s)$ is said to be realizable, and $(A, B, C, D)$ is called a realization of $H(s)$. When the system description $(A, B, C, D)$ is given, $H(s)$ is described by the rational function of $s$ indicated in (1.23).
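Corollary 1.1 is easy to confirm numerically. The sketch below evaluates $H(s) = C(sI - A)^{-1}B + D$ of (1.95) at a few test points for an illustrative system and a random equivalence transformation $T$:

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate H(s) = C (sI - A)^{-1} B + D at a single point s."""
    n = A.shape[0]
    return C @ np.linalg.inv(s * np.eye(n) - A) @ B + D

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2)) + 2 * np.eye(2)   # almost surely non-singular
Abar = np.linalg.inv(T) @ A @ T                   # (1.101)
Bbar = np.linalg.inv(T) @ B
Cbar = C @ T

for s in [1.0, 2.5, 1j]:
    assert np.allclose(tf_eval(A, B, C, D, s), tf_eval(Abar, Bbar, Cbar, D, s))
```

Since $T$ is arbitrary, this also illustrates the remark that a given transfer function has infinitely many state variable realizations.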
The determinant of $(sI - A)$ gives a denominator polynomial of degree $n$ in $s$, while every element of the adjoint matrix of $(sI - A)$ gives a numerator polynomial of degree equal to or less than $n - 1$. If $D = 0$ the transfer function matrix is $H(s) = C(sI - A)^{-1}B$, and for each element the degree of its denominator is at least one higher than that of its numerator. In this case the transfer function is called strictly proper. If $D$ is a constant matrix, the degree of the denominator of each element is equal to or greater than that of its numerator, and the transfer function is then called proper. Hence it can be seen that $H(s)$ is realizable if and only if $H(s)$ is a proper rational matrix.

We now turn our attention to the realization of a single-input single-output rational transfer function

$$H(s) = \frac{b_ns^n + b_{n-1}s^{n-1} + \cdots + b_1s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0} \tag{1.107a}$$
$$= \frac{\beta_{n-1}s^{n-1} + \cdots + \beta_1s + \beta_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0} + d \tag{1.107b}$$

where $\beta_i = b_i - b_na_i$ for $i = 0, 1, \ldots, n-1$ and $d = b_n$. We present methods for constructing state variable descriptions $(A, \mathbf{b}, \mathbf{c}^T, d)$ for the given transfer function (1.107b).

1.5.1 Companion Form (a): Controllable canonical form

Equation (1.107b) being decomposed into two parts, as shown in Figure 1.11, the output $Y(s)$ can be written as

$$Y(s) = Z(s) + dU(s) \tag{1.108}$$
$$\frac{Z(s)}{U(s)} = \frac{\beta_{n-1}s^{n-1} + \cdots + \beta_1s + \beta_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0}. \tag{1.109}$$

Let $X_1(s)$ be defined as in Figure 1.11; then we have

$$\frac{X_1(s)}{U(s)} = \frac{1}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0} \tag{1.110}$$
$$Z(s) = (\beta_{n-1}s^{n-1} + \beta_{n-2}s^{n-2} + \cdots + \beta_1s + \beta_0)X_1(s). \tag{1.111}$$

Figure 1.11 Transfer function representation

If $x_1(t) = \mathcal{L}^{-1}\{X_1(s)\}$ is taken as one of the state variables, it satisfies the linear ordinary differential equation

$$x_1^{(n)}(t) + a_{n-1}x_1^{(n-1)}(t) + \cdots + a_0x_1(t) = u(t). \tag{1.112}$$

Further, if the other $n - 1$ state variables are chosen as

$$x_2 = \dot{x}_1, \quad x_3 = \dot{x}_2, \quad \ldots, \quad x_n = \dot{x}_{n-1} = x_1^{(n-1)},$$

it follows from (1.112) that the state equations are

$$\dot{x}_1 = x_2, \quad \dot{x}_2 = x_3, \quad \ldots, \quad \dot{x}_{n-1} = x_n$$
$$\dot{x}_n = -a_0x_1 - a_1x_2 - \cdots - a_{n-1}x_n + u \tag{1.113}$$

and from (1.108) and (1.111) that the output equation is

$$y(t) = z(t) + du(t) = \beta_0x_1(t) + \beta_1x_2(t) + \cdots + \beta_{n-1}x_n(t) + du(t). \tag{1.114}$$

Rewriting (1.113) and (1.114) in matrix form, we have

$$\dot{x}(t) = Ax(t) + \mathbf{b}u(t) \tag{1.115a}$$
$$y(t) = \mathbf{c}^Tx(t) + du(t) \tag{1.115b}$$

where

$$A = \begin{pmatrix} 0 & 1 & & 0 \\ & \ddots & \ddots & \\ 0 & & 0 & 1 \\ -a_0 & -a_1 & \cdots & -a_{n-1} \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}, \qquad \mathbf{c}^T = (\beta_0, \beta_1, \ldots, \beta_{n-1}).$$

The block diagram of the state space description (1.115) is given in Figure 1.12.

1.5.2 Companion Form (b): Observable canonical form

If the companion form $(A, \mathbf{b}, \mathbf{c}^T, d)$ in (1.115) is a realization of $H(s)$, another companion form $(A^T, \mathbf{c}, \mathbf{b}, d)$ is also a realization of $H(s)$, since

$$H(s) = \mathbf{c}^T(sI - A)^{-1}\mathbf{b} + d = \{\mathbf{b}^T(sI - A^T)^{-1}\mathbf{c}\}^T + d. \tag{1.116}$$

Figure 1.12 Realization by companion form (a)

Thus we can obtain another companion form given by

$$\dot{x} = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\ 1 & 0 & \cdots & 0 & -a_1 \\ 0 & 1 & \cdots & 0 & -a_2 \\ & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -a_{n-1} \end{pmatrix}x + \begin{pmatrix} \beta_0 \\ \beta_1 \\ \beta_2 \\ \vdots \\ \beta_{n-1} \end{pmatrix}u \tag{1.117a}$$

$$y = (0 \ 0 \ \cdots \ 0 \ 1)x + du. \tag{1.117b}$$

Figure 1.13 Realization by companion form (b)

It is seen from Figure 1.13, which gives the block diagram of this realization, that the state variables are assigned by

$$x_n = y - du$$
$$x_{n-1} = \dot{x}_n + a_{n-1}x_n - \beta_{n-1}u$$
$$x_{n-2} = \dot{x}_{n-1} + a_{n-2}x_n - \beta_{n-2}u$$
$$\vdots$$
$$x_1 = \dot{x}_2 + a_1x_n - \beta_1u$$
$$\dot{x}_1 = -a_0x_n + \beta_0u. \tag{1.118}$$

1.5.3 Jordan Canonical Form

When the transfer function (1.107) has distinct poles $\lambda_1, \lambda_2, \ldots, \lambda_n$, $H(s)$ can be described by the partial-fraction expansion

$$H(s) = \sum_{i=1}^{n} \frac{\gamma_i}{s - \lambda_i} + d \tag{1.119}$$

where

$$\gamma_i = \lim_{s \to \lambda_i} (s - \lambda_i)H(s).$$

As illustrated in Figure 1.14, if we assign the state variables $x_1, x_2, \ldots, x_n$ as

$$\frac{X_1(s)}{U(s)} = \frac{1}{s - \lambda_1}, \quad \frac{X_2(s)}{U(s)} = \frac{1}{s - \lambda_2}, \quad \ldots, \quad \frac{X_n(s)}{U(s)} = \frac{1}{s - \lambda_n}$$
Figure 1.14 Realization by Jordan form: case (a) distinct eigenvalues

the output $Y(s)$ is

$$Y(s) = \gamma_1X_1(s) + \gamma_2X_2(s) + \cdots + \gamma_nX_n(s) + dU(s). \tag{1.120}$$

We then obtain the diagonalized state variable description

$$\dot{x}(t) = \Lambda x(t) + \mathbf{b}u(t) \tag{1.121a}$$
$$y(t) = \mathbf{c}^Tx(t) + du(t) \tag{1.121b}$$

where

$$\Lambda = \begin{pmatrix} \lambda_1 & & & 0 \\ & \lambda_2 & & \\ & & \ddots & \\ 0 & & & \lambda_n \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}, \qquad \mathbf{c}^T = (\gamma_1, \gamma_2, \ldots, \gamma_n).$$

Next we consider a transfer function with poles which are not all distinct. We give an example to explain the procedure for obtaining the Jordan block form. Assume that $H(s)$ can be expanded by partial fractions as

$$H(s) = \frac{\gamma_{11}}{(s - \lambda_1)^3} + \frac{\gamma_{12}}{(s - \lambda_1)^2} + \frac{\gamma_{13}}{s - \lambda_1} + \frac{\gamma_4}{s - \lambda_4} + \cdots + \frac{\gamma_n}{s - \lambda_n} + d$$

where

$$\gamma_{11} = \lim_{s \to \lambda_1} (s - \lambda_1)^3H(s), \quad \gamma_{12} = \lim_{s \to \lambda_1} \frac{d}{ds}\{(s - \lambda_1)^3H(s)\}, \quad \gamma_{13} = \lim_{s \to \lambda_1} \frac{1}{2!}\frac{d^2}{ds^2}\{(s - \lambda_1)^3H(s)\}$$
$$\gamma_i = \lim_{s \to \lambda_i} (s - \lambda_i)H(s), \qquad i = 4, \ldots, n.$$

Figure 1.15 Realization by Jordan block form: case (b) repeated eigenvalues

We assign the state variables $x_1, x_2, \ldots, x_n$ as illustrated in Figure 1.15; then

$$\dot{x}_1(t) - \lambda_1x_1(t) = x_2(t)$$
$$\dot{x}_2(t) - \lambda_1x_2(t) = x_3(t)$$
$$\dot{x}_3(t) - \lambda_1x_3(t) = u(t)$$
$$\dot{x}_i(t) - \lambda_ix_i(t) = u(t), \qquad i = 4, \ldots, n$$
$$y(t) = \gamma_{11}x_1(t) + \gamma_{12}x_2(t) + \gamma_{13}x_3(t) + \gamma_4x_4(t) + \cdots + \gamma_nx_n(t) + du(t).$$

Rewriting in matrix form, we have the Jordan block canonical form

$$\dot{x} = \begin{pmatrix} \lambda_1 & 1 & 0 & & & \\ 0 & \lambda_1 & 1 & & & \\ 0 & 0 & \lambda_1 & & & \\ & & & \lambda_4 & & \\ & & & & \ddots & \\ & & & & & \lambda_n \end{pmatrix}x + \begin{pmatrix} 0 \\ 0 \\ 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}u \tag{1.122a}$$

$$y = (\gamma_{11}, \gamma_{12}, \gamma_{13}, \gamma_4, \ldots, \gamma_n)x + du. \tag{1.122b}$$

It is seen that the matrix $A$ consists of one Jordan block, shown dotted, associated with each eigenvalue. Because $\lambda_1$ is of multiplicity three the corresponding Jordan block is three by three. The elements above the diagonal in this block are unity or, in some rather special cases, zero.

Every linear time-invariant system has an equivalent Jordan canonical form. For instance, the non-singular matrix which transforms the companion canonical form (1.115) into the Jordan canonical form (1.121) is given by

$$T = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_n \\ \vdots & \vdots & & \vdots \\ \lambda_1^{n-1} & \lambda_2^{n-1} & \cdots & \lambda_n^{n-1} \end{pmatrix}, \qquad \det T \neq 0 \tag{1.123}$$

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of the companion matrix $A$ in (1.115). This result can be easily proved as follows.
With $A$ taken as in (1.115) we have

$$AT = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ -a_0 & -a_1 & \cdots & -a_{n-1} \end{pmatrix}\begin{pmatrix} 1 & \cdots & 1 \\ \lambda_1 & \cdots & \lambda_n \\ \vdots & & \vdots \\ \lambda_1^{n-1} & \cdots & \lambda_n^{n-1} \end{pmatrix} = \begin{pmatrix} \lambda_1 & \cdots & \lambda_n \\ \lambda_1^2 & \cdots & \lambda_n^2 \\ \vdots & & \vdots \\ \lambda_1^n & \cdots & \lambda_n^n \end{pmatrix} = T\Lambda$$

since each eigenvalue satisfies $\lambda_i^n = -a_0 - a_1\lambda_i - \cdots - a_{n-1}\lambda_i^{n-1}$. Thus $AT = T\Lambda$, giving $\Lambda = T^{-1}AT$, and $T$ is therefore the non-singular matrix which transforms the companion form into the equivalent Jordan form.

1.5.4 Tridiagonal Form

It is assumed that the strictly proper part $H_0(s)$ of the transfer function $H(s)$ can be expanded in a continued fraction as

$$H_0(s) = \cfrac{1}{b_1s + a_1 + \cfrac{1}{b_2s + a_2 + \cfrac{1}{\ddots + \cfrac{1}{b_ns + a_n}}}}$$

where $b_i \neq 0$ for $i = 1, \ldots, n$. If we assign the state variables from

$$\frac{X_1(s)}{U(s) - X_2(s)} = \frac{1}{b_1s + a_1}, \quad \frac{X_i(s)}{X_{i-1}(s) - X_{i+1}(s)} = \frac{1}{b_is + a_i}, \quad \frac{X_n(s)}{X_{n-1}(s)} = \frac{1}{b_ns + a_n}, \qquad Y(s) = X_1(s)$$

we then have the following state equations

$$b_1\dot{x}_1 = -a_1x_1 - x_2 + u$$
$$b_i\dot{x}_i = x_{i-1} - a_ix_i - x_{i+1}, \qquad i = 2, \ldots, n-1$$
$$b_n\dot{x}_n = x_{n-1} - a_nx_n.$$

They can be rewritten in matrix form as

$$\dot{x} = \begin{pmatrix} -a_1/b_1 & -1/b_1 & & 0 \\ 1/b_2 & -a_2/b_2 & -1/b_2 & \\ & \ddots & \ddots & \ddots \\ 0 & & 1/b_n & -a_n/b_n \end{pmatrix}x + \begin{pmatrix} 1/b_1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}u, \qquad y = (1 \ 0 \ \cdots \ 0)x.$$

PROBLEMS

P1.1 Find, by use of the Faddeev algorithm, the transfer function of the linear system $(A, \mathbf{b}, \mathbf{c}^T)$ given by

$$A = \begin{pmatrix} -2 & 0 & 1 \\ \cdots & \cdots & \cdots \\ 0 & -3 & -2 \end{pmatrix}, \qquad \mathbf{c}^T = (-1, 1, 0).$$

P1.2 Find the state variable description of the network given in Figure P1.2, where the input $u(t)$ is the source voltage and the output $y(t)$ is the voltage across the resistor $R$.

[Figure P1.2]

P1.3 Construct the state variable description of the tank system shown in Figure P1.3, where the input $u(t)$ is the incoming flow rate, the output $y(t)$ the outgoing flow rate, $S_1$, $S_2$ and $S_3$ the cross-sectional areas of the tanks, and $h_1$, $h_2$ and $h_3$ the levels. It is assumed that each flow rate $q_2$, $q_3$ and $y$ through the pipes is proportional to the difference in the levels, that is, $q_2 = (h_1 - h_2)/R_2$, where $R_2$ is the flow resistance.

[Figure P1.3]

P1.4 Write node voltage equations for the circuit shown in Figure P1.4 and show that it is the force–current analogue of Figure 1.7(a).

[Figure P1.4]

P1.5 Obtain state variable equations for the network shown in Figure P1.5
using the voltages across the capacitors and the current through the inductance as the state variables. Determine the $A$, $B$, $C$ and $D$ matrices if $v_1(t)$ is the input and $v_2(t)$ the output.

[Figure P1.5]

P1.6 Write the state equations for the network of Figure P1.6 using the state variables $i_1$, $i_2$ and $v$; $e_1$ and $e_2$ are the network inputs and $i_1$ and $i_2$ are the required outputs.

[Figure P1.6]

P1.7 Obtain the differential equations for the system shown in Figure P1.7. Draw a block diagram for this plant and find the transfer function $Y_2(s)/F(s)$. Write the state variable representation for the plant, using the variables $z_1 = x_1$, $z_2 = \dot{x}_1$, $z_3 = x_2$ and $z_4 = \dot{x}_2$.

[Figure P1.7]

P1.8 Draw the electric circuit which is the force–current analogue of the mass, spring, damper system of Figure P1.7.

P1.9 Show that the circuit shown in Figure P1.9 has a transfer function from $v_1$ to $v_2$ of

$$\frac{1}{s^2LC + s(L/R) + 2}$$

and that it can be represented in state variable form with

$$A = \begin{pmatrix} \cdots & \cdots & \cdots \\ \cdots & \cdots & \cdots \\ -a(a+2b) & -2a & a^2b \end{pmatrix}$$

where $a = 1/RC$ and $b = R/L$. Choosing the state variables $x_1 = v_1$, $x_2 = v_2$ and $x_3 = Li$, show that the new $A$ matrix $A'$ is given by

$$A' = \begin{pmatrix} -a & -a & -ab \\ 0 & -a & ab \\ \cdots & -1 & 0 \end{pmatrix}.$$

Determine the transformation matrix $T$ relating the state variables for the two choices of the $A$ matrix.

[Figure P1.9]

P1.10 Obtain the state space representation of the network shown in Figure P1.10 with the currents through the inductors as state variables, $u_1$ and $u_2$ the input voltages and $y_1$ and $y_2$ the output voltages.

[Figure P1.10: $R = 1$ and $L = 2$ for all elements]

P1.11 Write in controllable canonical form the state equation for the differential equation

$$\ddot{y} + 3\dot{y} + 2y = u(t)$$

and draw the corresponding signal flow graph.
If a state variable transformation $x = T\bar{x}$ is used, where $x$ and $\bar{x}$ are respectively the old and the new state variables, determine the new state equations for

$$T = (\cdots).$$

Draw a signal flow graph for this new representation.

P1.12 Obtain a state space representation of the differential equation

$$\ddot{x} + 3\dot{x} + 3x = u + w$$

with the output $y = x$.

P1.13 A system has an $A$ matrix with a modal matrix $U$ (i.e. the matrix which diagonalizes $A$). Show that the modal matrix $V$ of the system with $A$ matrix $A^T$ is, with a suitable choice of normalizing constants, given by

$$V^T = U^{-1}.$$

P1.14 Obtain the response $y(t)$ of the following system using the Laplace transformation method:

$$\dot{x} = (\cdots)x + (\cdots)u, \qquad y = (\cdots)x$$

where $u$ is a unit step function and $x(0) = 0$.

P1.15 A dynamic system is described by the state space equation

$$\dot{x} = Ax + \mathbf{b}u, \qquad y = x_1$$

where

$$A = \begin{pmatrix} 8 & -8 & -2 \\ 4 & -3 & -2 \\ 3 & -4 & 1 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$

(a) Find a non-singular transformation $T$ such that $T^{-1}AT$ is diagonal.
(b) From (a), find $e^{At}$.
(c) Find $e^{At}$ using the Sylvester expansion.
(d) Find $y$ if $u$ is a unit impulse function.

P1.16 A system is governed by the state equation

$$\dot{x}(t) = (\cdots)x(t) + (\cdots)u(t)$$

where $x(t)$ and $u(t)$ are respectively the state and input vectors of the system. Determine the transition matrix of this system and hence obtain an explicit expression for $x(t)$ if

$$u(t) = (\cdots).$$

P1.17 For the system shown in Figure P1.17, write a state space description with $x_1$ and $x_2$ the state variables. If $u(t) = 0$ for all $t$ and $x(2) = (1 \ 0)^T$, find $x(0)$ and $x(3)$.

[Figure P1.17]

P1.18 Find the transition matrix of $A$ if

$$A = \begin{pmatrix} 0 & 1 \\ -4 & -3 \end{pmatrix}.$$

P1.19 The state space equation of a system is

$$\dot{x} = \begin{pmatrix} \cdots & \cdots & \cdots \\ 3 & 0 & 2 \\ -12 & -7 & -6 \end{pmatrix}x + \begin{pmatrix} \cdots \\ 2 \\ 3 \end{pmatrix}u$$

where $x = (x_1, x_2, x_3)^T$. Obtain the scalar differential equation satisfied by $x_1$.

P1.20 Represent the following plants by means of state variable descriptions with $A$ in the controllable canonical form.
(a) $G_p(s) = \cdots$; (b) $G_p(s) = \dfrac{10(s+1)(s+2)}{(s+3)(s+4)}$; (c) $G_p(s) = \cdots$

P1.21 The transfer function $G(s)$ is given by

$$G(s) = \cdots$$

Find (a) the controllable canonical form, (b) the observable canonical form, (c) the Jordan canonical form and (d) the tridiagonal form. Draw the corresponding block diagrams.

P1.22 Verify that $e^{(A+B)t} = e^{At}e^{Bt}$ holds true only if $A$ and $B$ commute, i.e. $AB = BA$.

P1.23 Show that

$$\begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix}^{-1} = \begin{pmatrix} A_{11}^{-1} & -A_{11}^{-1}A_{12}A_{22}^{-1} \\ 0 & A_{22}^{-1} \end{pmatrix}$$

where $A_{11}$ and $A_{22}$ are assumed to be non-singular.

P1.24 Show that

$$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}^{-1} = \begin{pmatrix} A_{11}^{-1} + A_{11}^{-1}A_{12}D^{-1}A_{21}A_{11}^{-1} & -A_{11}^{-1}A_{12}D^{-1} \\ -D^{-1}A_{21}A_{11}^{-1} & D^{-1} \end{pmatrix}$$

where $A_{11}$ and $D = A_{22} - A_{21}A_{11}^{-1}A_{12}$ are assumed to be non-singular.

2 STRUCTURE OF LINEAR SYSTEMS

2.1 OBSERVABILITY AND CONTROLLABILITY OF TIME-INVARIANT SYSTEMS

Let us consider the time-invariant system represented by

$$\dot{x}(t) = Ax(t) + Bu(t) \tag{2.1a}$$
$$y(t) = Cx(t) + Du(t) \tag{2.1b}$$

where $x$, $u$, $y$ are an $n$-vector, an $m$-vector and a $p$-vector. The objective of the control is to transfer the state of the system to a desirable state from the initial state using the input $u$. However, the existence of such an input should be assured; this is the controllability condition. On the other hand, it is sometimes necessary to know all state variables from measurement of the output $y(t)$, whose dimension is less than that of the state. The observability condition assures the construction of the state from the output. These properties are intrinsic to systems and play important roles in linear system theory.

2.1.1 The Condition for Controllability

Definition 2.1 (Controllability)
For the linear system given by (2.1), if there exists an input $u_{[0,t_1]}$ which transfers the initial state $x(0) = x_0$ to the zero state $x(t_1) = 0$ in a finite time $t_1$, the state $x_0$ is said to be controllable. If all initial states are controllable the system is said to be completely controllable.
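As a numerical sketch of this definition, the Gramian-based input used later in the proof of Theorem 2.1, $u(t) = -B^Te^{-A^Tt}W^{-1}(0, t_1)x_0$, steers a controllable system from $x_0$ to the origin in finite time. All matrices below are illustrative, not taken from the text:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, -1.0])
t1 = 2.0

# Sanity check: (A, B) is controllable (rank test of Theorem 2.1(ii)).
assert np.linalg.matrix_rank(np.hstack([B, A @ B])) == 2

# Controllability Gramian W(0, t1) = int_0^{t1} e^{-At} B B^T e^{-A^T t} dt.
W, _ = quad_vec(lambda s: expm(-A * s) @ B @ B.T @ expm(-A.T * s), 0.0, t1)
w = np.linalg.solve(W, x0)               # W^{-1}(0, t1) x0

def rhs(t, x):
    u = -(B.T @ expm(-A.T * t) @ w)      # the steering input u(t)
    return A @ x + B @ u

sol = solve_ivp(rhs, (0.0, t1), x0, rtol=1e-9, atol=1e-12)
assert np.allclose(sol.y[:, -1], 0.0, atol=1e-4)   # state reaches the origin
```

The final assertion confirms $x(t_1) \approx 0$; any controllable initial state can be driven to zero this way, which is exactly what the definition requires.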
From (1.73), the solution of (2.1a) is

$$x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau. \tag{2.2}$$

If the system is controllable, i.e. there exists an input to make $x(t_1) = 0$ at a finite time $t = t_1$, then after premultiplying by $e^{-At_1}$, (2.2) yields

$$x_0 = -\int_0^{t_1} e^{-A\tau}Bu(\tau)\,d\tau. \tag{2.3}$$

Therefore any controllable state satisfies (2.3), and for a completely controllable system every state $x_0 \in R^n$ satisfies (2.3) with some $t_1\,(> 0)$ and $u_{[0,t_1]}$.¹ From (2.3) it is found that complete controllability of a system depends on $A$ and $B$, and is independent of the output matrix $C$. The theorem to check the controllability of the system is given as follows.

Theorem 2.1
The necessary and sufficient condition for the system (2.1) to be completely controllable is given by one of the following conditions:

(i) $W(0, t_1) = \int_0^{t_1} e^{-A\tau}BB^Te^{-A^T\tau}\,d\tau$ is non-singular.

(ii) The controllability matrix

$$\mathscr{C} = (B, AB, A^2B, \ldots, A^{n-1}B) \qquad (n \times nm \text{ matrix})$$

satisfies rank $\mathscr{C} = n$.²

Since condition (ii) can be computed without integration it allows the controllability of a system to be easily checked. As seen from Theorem 2.1, the complete controllability of the system comes from the properties of $A$ and $B$. So one simply states that '$(A, B)$ is controllable'. Before the proof is given some examples are considered.

Example 2.1
The system represented by

$$\dot{x} = \begin{pmatrix} 2 & -2 & 1 \\ 1 & 0 & -3 \\ 0 & 1 & -3 \end{pmatrix}x + \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}u$$

is considered. The rank of the controllability matrix of the given system is 2, as shown below, so the system is not completely controllable:

$$\text{rank } \mathscr{C} = \text{rank}(\mathbf{b}, A\mathbf{b}, A^2\mathbf{b}) = \text{rank}\begin{pmatrix} 1 & 0 & -1 \\ 1 & 1 & -3 \\ 0 & 1 & -2 \end{pmatrix} = 2.$$

Example 2.2
The state equation of the circuit shown in Figure 2.1 can be derived using the method described in Section 1.2.4. Let the state variables be the current of

¹ $R$ denotes the set of real numbers $\{x \mid -\infty < x < \infty\}$, and $R^n$ gives an $n$-dimensional real vector space; $x \in R^n$ denotes that $x$ is an element of $R^n$, i.e. an $n$-dimensional real vector.
² Rank $\mathscr{C}$ denotes the rank of the matrix $\mathscr{C}$, which is equal to the number of linearly independent column (or row) vectors in $\mathscr{C}$.

Figure 2.1 Bridge circuit

the inductance, $i_1$, and the voltage of the capacitance, $v_c$, and let the output be $i_1$; then

$$\frac{d}{dt}\begin{pmatrix} i_1 \\ v_c \end{pmatrix} = (\cdots)\begin{pmatrix} i_1 \\ v_c \end{pmatrix} + (\cdots)u, \qquad y = (1 \ 0)\begin{pmatrix} i_1 \\ v_c \end{pmatrix}.$$

The controllability matrix of the system is

$$\mathscr{C} = (\mathbf{b}, A\mathbf{b}) = (\cdots).$$

Thus we see that under the condition $R_3/(R_1 + R_3) = R_4/(R_2 + R_4)$, that is $R_1R_4 = R_2R_3$, rank $\mathscr{C} = 1$ and the system becomes 'uncontrollable'. This condition is the one required to balance the resistance bridge, and in this case the voltage of the capacitance $v_c$ cannot be varied by any external input $u$.

Proof of Theorem 2.1 (due to Brockett)

Condition (i): Sufficiency. If $W(0, t_1)$ given in (i) is non-singular, the following input can be applied to the system:

$$u(t) = -B^Te^{-A^Tt}W^{-1}(0, t_1)x_0. \tag{2.4}$$

For the input (2.4), the state of the system (2.2) at $t = t_1$ is given by

$$x(t_1) = e^{At_1}x_0 - e^{At_1}\left[\int_0^{t_1} e^{-A\tau}BB^Te^{-A^T\tau}\,d\tau\right]W^{-1}(0, t_1)x_0 = 0 \tag{2.5}$$

for any initial state $x_0$. Therefore the system $(A, B)$ is controllable.

Necessity. Assume that though $W(0, t_1)$ is singular for any $t_1$, the system is controllable. Then for $t_1 > 0$ there exists a non-zero $n$-vector $\alpha$ such that

$$\alpha^TW(0, t_1)\alpha = \int_0^{t_1} \alpha^Te^{-A\tau}BB^Te^{-A^T\tau}\alpha\,d\tau = 0 \tag{2.6}$$

which yields for any $t \geq 0$

$$\alpha^Te^{-At}B = 0^T, \qquad t \geq 0. \tag{2.7}$$

From the assumption of controllability, there exists an input $u$ satisfying (2.3); therefore from (2.3) and (2.7)

$$\alpha^Tx_0 = -\alpha^T\int_0^{t_1} e^{-A\tau}Bu(\tau)\,d\tau = 0 \tag{2.8}$$

holds for any initial state $x_0$. By choosing $x_0 = \alpha$, (2.8) gives $\alpha = 0$, which contradicts the non-zero property of $\alpha$. Therefore the non-singularity of $W(0, t_1)$ is proved.

Condition (ii): Sufficiency. It is first assumed that though rank $\mathscr{C} = n$ the system is not controllable, and by showing that this is a contradiction the controllability of the system is proved. By the above assumption that the system is not controllable and rank $\mathscr{C} = n$, $W(0, t_1)$ is singular.
Therefore (2.7), that is

$$\alpha^Te^{-At}B = 0^T, \qquad t \geq 0, \quad \alpha \neq 0 \tag{2.9}$$

holds. Derivatives of the above equation at $t = 0$ yield

$$\alpha^TA^kB = 0^T, \qquad k = 0, 1, \ldots, n-1 \tag{2.10}$$

which is equivalent to

$$\alpha^T(B, AB, \ldots, A^{n-1}B) = \alpha^T\mathscr{C} = 0^T. \tag{2.11}$$

This contradicts the assumption that rank $\mathscr{C} = n$; hence the system is controllable.

Necessity. It is assumed that the system is completely controllable, but rank $\mathscr{C} < n$ …

2.1.2 The Condition for Observability

Suppose that $M(0, t_1) = \int_0^{t_1} e^{A^T\tau}C^TCe^{A\tau}\,d\tau$ is singular; then there exists a non-zero vector $\alpha$ such that $\alpha^TM(0, t_1)\alpha = 0$, i.e.

$$Ce^{At}\alpha = 0, \qquad t \geq 0.$$

This means that the output is always zero for the non-zero initial condition $x(0) = \alpha \neq 0$, and the state $\alpha$ cannot be distinguished from the zero state from the measured output, which contradicts the observability of the system. Therefore $M(0, t_1)$ should be non-singular for the system to be completely observable.

From the above results, it is clear that observability is dependent only on the properties of the matrices $A$ and $C$, and the following theorem concerning observability is given.

Theorem 2.2
A necessary and sufficient condition for the system (2.1) to be completely observable is one of the following equivalent conditions:

(i) $M(0, t_1) = \int_0^{t_1} e^{A^T\tau}C^TCe^{A\tau}\,d\tau$ is non-singular.

(ii) The observability matrix, defined as the $n \times np$ matrix

$$\mathscr{O} = (C^T, A^TC^T, \ldots, (A^T)^{n-1}C^T),$$

has rank $n$.

Condition (i) has been proved before the theorem, and (ii) can be proved from (i) in a similar manner to Theorem 2.1.

Example 2.3
The observability matrix of the bridge circuit given in Example 2.2 is

$$\mathscr{O} = (\mathbf{c}, A^T\mathbf{c}) = (\cdots).$$

For the balance condition $R_1R_4 = R_2R_3$, rank $\mathscr{O} = 1$ and the system is not observable.

2.1.3 Duality

The controllability matrix $\mathscr{C}$ and the observability matrix $\mathscr{O}$ have a similar structure, in that by replacing $B$ and $A$ by $C^T$ and $A^T$, $\mathscr{C}$ becomes $\mathscr{O}$. Let us consider the system

$$\dot{z}(t) = A^Tz(t) + C^Tv(t) \tag{2.22a}$$
$$w(t) = B^Tz(t) + D^Tv(t). \tag{2.22b}$$

By changing $t \to -t$, $A \to A^T$, $B \to C^T$, $C \to B^T$, $D \to D^T$ in the system (2.1), the system (2.22) is obtained, and this system is said to be the dual system to the system (2.1). From Theorem 2.1 and Theorem 2.2, if the system (2.1) is controllable (observable), its dual system is observable (controllable).
This shows that controllability and observability have a dual relationship, and this relationship will appear again between the optimal regulator and the optimal filter (see Section 6.2.1).

2.1.4 Output Controllability

Controllability is concerned with the transfer of the state to the zero state and is irrelevant with respect to the output. In this section we consider the problem of making the output $y(t)$ zero in a finite time.

Definition 2.3 (Output controllability)
If there exists an input which makes the output of the system, $y(t)$, zero in a finite time $t_1$ when the output at time zero is arbitrary, then the system is said to be completely output controllable.

Concerning output controllability we give the following theorem, similar to Theorem 2.1.

Theorem 2.3
A necessary and sufficient condition for the system (2.1) to be completely output controllable is one of the following conditions:

(i) $W'(0, t_1) = \int_0^{t_1} Ce^{-A\tau}BB^Te^{-A^T\tau}C^T\,d\tau$ is non-singular.

(ii) The output controllability matrix, a $p \times nm$ matrix defined by

$$\mathscr{C}' = (CB, CAB, CA^2B, \ldots, CA^{n-1}B),$$

has rank $p$.

2.1.5 Equivalent Systems

By the non-singular transformation $x = T\bar{x}$, the system (2.1) is transformed into an equivalent system

$$\dot{\bar{x}}(t) = \bar{A}\bar{x}(t) + \bar{B}u(t) \tag{2.23a}$$
$$y(t) = \bar{C}\bar{x}(t) + \bar{D}u(t) \tag{2.23b}$$

as described in Section 1.4.2, where

$$\bar{A} = T^{-1}AT, \qquad \bar{B} = T^{-1}B, \qquad \bar{C} = CT, \qquad \bar{D} = D. \tag{2.24}$$

The controllability matrix of (2.23) is

$$\bar{\mathscr{C}} = (\bar{B}, \bar{A}\bar{B}, \ldots, \bar{A}^{n-1}\bar{B}) = (T^{-1}B, T^{-1}AB, \ldots, T^{-1}A^{n-1}B) = T^{-1}\mathscr{C}. \tag{2.25}$$

Since $T^{-1}$ is non-singular, rank $\bar{\mathscr{C}}$ = rank $\mathscr{C}$. A similar relationship can be shown for the observability matrices, so that the following theorem holds.

Theorem 2.4
Complete controllability and observability are preserved for equivalent systems.

2.2 STATE SPACE STRUCTURE OF TIME-INVARIANT SYSTEMS

The conditions for a system to be completely controllable or completely observable have been given in Theorem 2.1 and Theorem 2.2, and the concepts have been demonstrated by Examples 2.1, 2.2 and 2.3.
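The rank tests of Theorems 2.1(ii) and 2.2(ii) are one-liners in code. The sketch below applies them to an illustrative diagonal system whose second mode is unreachable from the input:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix (B, AB, ..., A^{n-1}B) of Theorem 2.1(ii)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix stacking C, CA, ..., CA^{n-1} (Theorem 2.2(ii))."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])     # the input reaches only the first mode
C = np.array([[1.0, 1.0]])

assert np.linalg.matrix_rank(ctrb(A, B)) == 1   # not completely controllable
assert np.linalg.matrix_rank(obsv(A, C)) == 2   # completely observable
```

The same two functions, applied to $(A^T, C^T)$ and $(A^T, B^T)$, illustrate the duality of Section 2.1.3.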
This section describes the state space structure from the viewpoint of controllability and observability.

2.2.1 Controllable Subspace

The system (2.1) is assumed not to satisfy the complete controllability condition of Theorem 2.1; then although all states may not be controllable there may exist some controllable states. Since a controllable state is represented by (2.3), the set of all controllable states $\mathscr{X}_c$, named the controllable subspace, is given by

$$\mathscr{X}_c = \left\{x \,\middle|\, x = \int_0^{t_1} e^{-A\tau}Bu(\tau)\,d\tau\right\}. \tag{2.26}$$

Since for $x_1, x_2 \in \mathscr{X}_c$, $x_1 + x_2 \in \mathscr{X}_c$, and for $x_1 \in \mathscr{X}_c$ and a real number $a$, $ax_1 \in \mathscr{X}_c$, the set $\mathscr{X}_c$ is a linear space. The controllable subspace $\mathscr{X}_c$ is characterized in the next theorem.

Theorem 2.5
The necessary and sufficient condition for the state $x_0$ of the system (2.1) to be controllable is

$$x_0 \in \mathscr{R}(\mathscr{C}). \tag{2.27}$$

The controllable subspace $\mathscr{X}_c$ is written as

$$\mathscr{X}_c = \mathscr{R}(\mathscr{C}) \tag{2.28}$$

where $\mathscr{R}(\mathscr{C})$ is the range space of the controllability matrix $\mathscr{C}$:

$$\mathscr{R}(\mathscr{C}) = \{x \mid x = \mathscr{C}u, \ u \in R^{nm}\}.$$

Before the proof of this theorem an example is given.

Example 2.4
The controllable subspace of Example 2.1 is given by

$$\mathscr{X}_c = \mathscr{R}(\mathscr{C}) = \mathscr{R}\begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{pmatrix}.$$

The range space of a matrix is the subspace whose basis is formed by a linear combination of the independent column vectors of the matrix. Since here rank $\mathscr{C}$ is two, we can take any two of the three column vectors as the basis vectors. Thus $\mathscr{R}(\mathscr{C})$ is a two-dimensional hyperplane, a subspace, of the three-dimensional space.

Proof of Theorem 2.5
By expanding $e^{-A\tau}$, (2.26) can be rewritten as

$$x = \int_0^{t_1} e^{-A\tau}Bu(\tau)\,d\tau = \int_0^{t_1} \left(I - A\tau + \frac{A^2\tau^2}{2!} - \cdots\right)Bu(\tau)\,d\tau$$
$$= (B, AB, A^2B, \ldots)\begin{pmatrix} \int_0^{t_1} u(\tau)\,d\tau \\ \int_0^{t_1} (-\tau)u(\tau)\,d\tau \\ \int_0^{t_1} \frac{\tau^2}{2!}u(\tau)\,d\tau \\ \vdots \end{pmatrix}$$

where $\int_0^{t_1} u(\tau)\,d\tau$, $\int_0^{t_1}(-\tau)u(\tau)\,d\tau$, $\int_0^{t_1}\frac{\tau^2}{2!}u(\tau)\,d\tau, \ldots$ can be made arbitrary by an appropriate choice of $u(\tau)$. Therefore the controllable subspace $\mathscr{X}_c$ is represented by $\mathscr{R}(B, AB, A^2B, \ldots)$.
From the Cayley-Hamilton theorem, $A^nB$ is given by a linear combination of $B, AB, \ldots, A^{n-1}B$, and by repeating this procedure $A^{n+k}B$ ($k \geq 0$) can be represented by a linear combination of $B, AB, \ldots, A^{n-1}B$. Therefore any controllable state $x = \sum_i A^iBu_i$ in $\mathscr{X}_c$ can be rewritten as an element of $\mathscr{R}([B, AB, \ldots, A^{n-1}B])$, and (2.28) holds.

3.1.2 The Definition of Canonical Form and Ackermann's Procedure

A map $\theta$ on $X$ is said to be a canonical map for $E$ on $X$ if

(1) $x\,E\,\theta(x)$, $x \in X$ (3.21a)

(2) $x\,E\,y \Leftrightarrow \theta(x) = \theta(y)$, $x, y \in X$. (3.21b)

The range of $\theta$, denoted $\mathscr{R}(\theta)$, is said to be a set of canonical forms for $E$ on $X$. Let $X$ be the set of $n$-dimensional systems with $m$ inputs and $p$ outputs; then $(A, B, C)$ is an element of

$$X = \{(A, B, C) \mid A \in R^{n \times n}, B \in R^{n \times m}, C \in R^{p \times n}\}.$$

When $(A, B, C)$ and $(\bar A, \bar B, \bar C)$ are equivalent, that is, there exists a non-singular $T$ satisfying $\bar A = T^{-1}AT$, $\bar B = T^{-1}B$ and $\bar C = CT$, we write $(A, B, C)\,E\,(\bar A, \bar B, \bar C)$, since the relation is an equivalence relation. For the equivalence relation defined by

$$(A, B, C)\,E\,(\bar A, \bar B, \bar C) \;\Leftrightarrow\; (\exists\,T\ \text{non-singular})\;(\bar A = T^{-1}AT,\ \bar B = T^{-1}B,\ \bar C = CT) \qquad (3.22)$$

there are many invariants. Controllability indices are invariants, since if $A^jb_i$ is linearly independent of $b_1, \ldots, b_m, Ab_1, \ldots, A^jb_1, \ldots, A^jb_{i-1}$, then $\bar A^j\bar b_i = T^{-1}A^jb_i$ is also linearly independent of $\bar b_1, \ldots, \bar b_m, \bar A\bar b_1, \ldots, \bar A^j\bar b_{i-1}$. So consider a diagram in which a cross is marked when $A^j$ (at the left of the row) times $b_i$ (at the top of the column) is linearly independent of the already marked elements. If the number of cross marks in the $i$th column is $\sigma_i$, then

$$A^{\sigma_i}b_i = \sum_{j=1}^{m}\ \sum_{k=0}^{\sigma_i\wedge\sigma_j - 1} \alpha_{ijk}A^kb_j \qquad (3.23)$$

where $a \wedge b = \min(a, b)$. From the above relation we see that not only $\{\sigma_i\}$ but also $\{\alpha_{ijk}\}$ are invariants on $X$. $\{\sigma_i\}$ are called the controllability indices or Kronecker indices. Let the set of controllable systems be $X_c$, defined as

$$X_c = \{(A, B, C) \mid (A, B, C) \in X,\ (A, B)\ \text{controllable}\}.$$

The $E$ defined in (3.22) is also an equivalence relation on $X_c$.
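The cross-mark diagram described above translates directly into a rank test: scan $b_1, \ldots, b_m$, then $Ab_1, \ldots, Ab_m$, and so on, marking a column whenever it is independent of all columns marked so far. The sketch below (system matrices are hypothetical) counts the marks column by column to obtain the Kronecker indices $\sigma_i$:

```python
import numpy as np

# Illustrative 3rd-order, 2-input system.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-1., -2., -3.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])
n, m = A.shape[0], B.shape[1]

selected = np.zeros((n, 0))
sigma = [0] * m
for j in range(n):                       # row of the diagram: power A^j
    AjB = np.linalg.matrix_power(A, j) @ B
    for i in range(m):                   # column of the diagram: input b_i
        cand = np.hstack([selected, AjB[:, [i]]])
        if np.linalg.matrix_rank(cand) > selected.shape[1]:
            selected = cand              # mark a cross
            sigma[i] += 1
```

For a controllable system the marks total $n$, consistent with the statement that the Kronecker indices of a system in $X_c$ sum to $n$.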
For a system in $X_c$ the sum of the Kronecker indices is $n$, and $\{\sigma_i\}$ together with $\{\alpha_{ijk}\}$ form a complete invariant for $E$ on $X_c$, since

$$T = [b_1, Ab_1, \ldots, A^{\sigma_1 - 1}b_1,\ b_2, \ldots,\ A^{\sigma_m - 1}b_m]$$

gives a canonical form for the class of controllable equivalent systems on $X_c$. This canonical form is not convenient for general use, and Ackermann gave a procedure to obtain the canonical form produced by Luenberger's algorithm. He defined

$$\tilde\alpha_{ijk} = \alpha_{ijk}, \quad k = 0, 1, \ldots, \min(\sigma_i, \sigma_j) - 1; \qquad \beta_{ij} = \alpha_{ij\delta}, \quad \delta = \min(\sigma_i, \sigma_j) - 1 \qquad (3.24a)$$

where $\{\tilde\alpha_{ijk}\}$ and $\{\beta_{ij}\}$ are called $\alpha$-parameters and $\beta$-parameters respectively. Let

$$a_{ij}^T = [-\tilde\alpha_{ij0},\ -\tilde\alpha_{ij1},\ \ldots,\ -\tilde\alpha_{ij\delta},\ 0, \ldots, 0] \qquad (3.24b)$$

and

$$K_{A,B} = \begin{pmatrix} a_{11}^T & \cdots & a_{1m}^T \\ \vdots & & \vdots \\ a_{m1}^T & \cdots & a_{mm}^T \end{pmatrix} \qquad (3.25a)$$

$$M_{A,B} = \begin{pmatrix} 1 & \beta_{12} & \cdots & \beta_{1m} \\ & 1 & & \vdots \\ & & \ddots & \beta_{m-1,m} \\ & & & 1 \end{pmatrix}; \qquad (3.25b)$$

then the canonical form is given by

$$A_c = \bar A + \bar B M_{A,B}^{-1}K_{A,B} \qquad (3.26a)$$
$$B_c = \bar B M_{A,B} \qquad (3.26b)$$

where $\bar A = \mathrm{diag}[\bar A_1, \ldots, \bar A_m]$, $\bar B = \mathrm{diag}[\bar b_1, \ldots, \bar b_m]$,

$$\bar A_i = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & & 1 \\ 0 & \cdots & & 0 \end{pmatrix}\ (\sigma_i \times \sigma_i\ \text{matrix}), \qquad \bar b_i = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}\ (\sigma_i \times 1\ \text{matrix}).$$

Using Ackermann's procedure, the controllable canonical form of the system (3.19) is derived as follows. With $\sigma_1 = 2$ and $\sigma_2 = 1$, the dependence relations

$$Ab_2 = \alpha_{210}b_1 + \beta_{21}Ab_1 + \alpha_{220}b_2, \qquad A^2b_1 = \alpha_{110}b_1 + \alpha_{111}Ab_1 + \alpha_{120}b_2$$

are evaluated numerically, from which $K_{A,B}$ and $M_{A,B}$ of (3.25) follow, and (3.26) then gives the canonical form $(A_c, B_c)$.

The relationship is explained by considering a simple example. For the controllable system

$$\dot x = \begin{pmatrix} 0 & 1 & 0 \\ \alpha_{110} & \alpha_{111} & \alpha_{210} \\ \alpha_{120} & 0 & \alpha_{220} \end{pmatrix}x + \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}u$$

the columns of the controllability matrix satisfy, with $b_1 = (0, 1, 0)^T$, $b_2 = (0, 0, 1)^T$ and $Ab_1 = (1, \alpha_{111}, 0)^T$,

$$A^2b_1 = \alpha_{110}b_1 + \alpha_{111}Ab_1 + \alpha_{120}b_2, \qquad Ab_2 = \alpha_{210}b_1 + \alpha_{220}b_2;$$

therefore, in the case that all the $\beta$-parameters are zero, the coefficients of the linear combinations giving $A^{\sigma_i}b_i$ are the same as those appearing in the $A$-matrix of the canonical form. However, if the $\beta$-parameters are not equal to zero, such that

$$\dot x = \begin{pmatrix} 0 & 1 & 0 \\ \tilde\alpha_{110} & \tilde\alpha_{111} & \tilde\alpha_{210} \\ \tilde\alpha_{120} & 0 & \tilde\alpha_{220} \end{pmatrix}x + \begin{pmatrix} 0 & 0 \\ 1 & \beta_{21} \\ 0 & 1 \end{pmatrix}u,$$

then with $b_1 = (0, 1, 0)^T$, $b_2 = (0, \beta_{21}, 1)^T$ and $Ab_1 = (1, \tilde\alpha_{111}, 0)^T$,
the dependence relations become

$$A^2b_1 = \tilde\alpha_{110}b_1 + \tilde\alpha_{111}Ab_1 + \tilde\alpha_{120}(b_2 - \beta_{21}b_1)$$
$$A(b_2 - \beta_{21}b_1) = \tilde\alpha_{210}b_1 + \tilde\alpha_{220}(b_2 - \beta_{21}b_1).$$

This relationship shows that the coefficients of the equations

$$A^2b_1 = \alpha_{110}b_1 + \alpha_{111}Ab_1 + \alpha_{120}b_2, \qquad Ab_2 = \alpha_{210}b_1 + \alpha_{211}Ab_1 + \alpha_{220}b_2$$

satisfy

$$\alpha_{111} = \tilde\alpha_{111}, \quad \alpha_{120} = \tilde\alpha_{120}, \quad \alpha_{220} = \tilde\alpha_{220}, \quad \alpha_{211} = \beta_{21}, \quad \alpha_{110} = \tilde\alpha_{110} - \beta_{21}\tilde\alpha_{120}, \quad \alpha_{210} = \tilde\alpha_{210} - \beta_{21}\tilde\alpha_{220}.$$

By generalizing the above example, Ackermann's procedure can be understood as follows. For a general multivariable system, $\tilde B$ is first defined as $\tilde B = BM_{A,B}$, and the $\tilde\alpha$-parameters are obtained from the dependence relations of the columns $\tilde b_i, A\tilde b_i, \ldots$ as in (3.23). The transformation $T$ assembled from these vectors then yields, in the new coordinates, the pair $(A_c, B_c)$ in the canonical form given by Luenberger's algorithm.

3.1.3 Canonical Form and Input-Output Relation

The canonical form discussed has a close connection with the transfer function, which represents the input-output relationship. First the controllable canonical form $(A, b, c^T)$ of a single-input, single-output system is considered. We have

$$\dot x = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ -a_0 & -a_1 & \cdots & -a_{n-1} \end{pmatrix}x + \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}u \qquad (3.27a)$$

$$y = [\beta_0, \beta_1, \ldots, \beta_{n-1}]x \qquad (3.27b)$$

and the transfer function of the system is

$$g(s) = \frac{\beta_{n-1}s^{n-1} + \cdots + \beta_1s + \beta_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0} \qquad (3.27c)$$
which can be proved as follows. To show that (3.27c) follows from (3.27), we first prove

$$(sI - A)^{-1}b = \frac{1}{\phi(s)}\begin{pmatrix} 1 \\ s \\ \vdots \\ s^{n-1} \end{pmatrix}, \qquad \phi(s) = s^n + a_{n-1}s^{n-1} + \cdots + a_0. \qquad (3.28)$$

Premultiplying (3.28) by $(sI - A)$ yields

$$\frac{1}{\phi(s)}\,(sI - A)\begin{pmatrix} 1 \\ s \\ \vdots \\ s^{n-1} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} \qquad (3.29)$$

which is obviously true: the first $n-1$ rows give $s \cdot s^{k-1} - s^k = 0$, and the last row gives $(a_0 + a_1s + \cdots + a_{n-1}s^{n-1} + s^n)/\phi(s) = 1$. Therefore (3.28) holds and (3.27c) follows from (1.23).

This relation between the transfer function and the canonical form of a single-variable system can be generalized to a multivariable system by the structure theorem of Falb and Wolovich.

Theorem 3.2 (structure theorem) Consider the second controllable canonical form of Luenberger $(A_c, B_c, C_c)$ with full rank $B$, represented by the block partitions

$$A_c = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1m} \\ \vdots & & & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mm} \end{pmatrix}, \qquad B_c \qquad (3.30)$$

where $\rho_0 = 0$, $\rho_i = \sigma_1 + \cdots + \sigma_i$, each diagonal block $A_{ii}$ is a $\sigma_i \times \sigma_i$ companion-type block, and the $\rho_i$th rows of $A_c$ and $B_c$ carry the coupling coefficients. The transfer matrix of the system is given by

$$C_c(sI - A_c)^{-1}B_c = C_cS(s)\,\delta^{-1}(s)\,B_m \qquad (3.31)$$

where

$$S(s) = \mathrm{diag}[S_1(s), \ldots, S_m(s)], \qquad S_i(s) = \begin{pmatrix} 1 \\ s \\ \vdots \\ s^{\sigma_i-1} \end{pmatrix} \qquad (3.32a,b)$$

$$\delta(s) = \Lambda(s) - A_mS(s), \qquad \Lambda(s) = \mathrm{diag}[s^{\sigma_1}, \ldots, s^{\sigma_m}] \qquad (3.33)$$

$A_m$ is the $m \times n$ matrix formed from the $\rho_1$th, ..., $\rho_m$th rows of $A_c$ (3.34a), and

$$B_m = \begin{pmatrix} 1 & \beta_{12} & \cdots & \beta_{1m} \\ & 1 & & \vdots \\ & & \ddots & \beta_{m-1,m} \\ & & & 1 \end{pmatrix} \qquad (3.34b)$$

is the non-singular matrix formed from the same rows of $B_c$. The proof is given in the appendix of this chapter.

In the case where $(A, B, C)$ is in an observable canonical form a similar relation exists. Letting Luenberger's second observable canonical form of $(A, B, C)$ be $(A_o, B_o, C_o)$, then $(A_o^T, C_o^T, B_o^T)$ is Luenberger's second controllable canonical form; therefore the transfer matrix of the system from (3.31) is $B_o^T(sI - A_o^T)^{-1}C_o^T$, and taking the transpose, the transfer matrix of $(A_o, B_o, C_o)$ is obtained. Specifically,
$$C_o(sI - A_o)^{-1}B_o = C_p\,\delta_o^{-1}(s)\,S^T(s)\,B_o \qquad (3.35)$$

where $S(s)$ is defined as in (3.32a,b) but with $p$ in place of $m$, and $\delta_o(s)$ and $C_p$ are defined analogously to (3.33) and (3.34) ((3.36), (3.37)).

Example 3.7 Find the transfer matrix of the system given in Example 3.1. Letting the controllable canonical form of $(A, B, C)$ be $(A_c, B_c, C_c)$, and using the fact that equivalent systems have the same transfer matrix,

$$H(s) = C(sI - A)^{-1}B = C_c(sI - A_c)^{-1}B_c = C_cS(s)\,\delta^{-1}(s)\,B_m.$$

Carrying out the computation gives $H(s)$ with the common denominator $s^4 + 10s^3 + 35s^2 + 50s + 24$. The result can be compared with that given by the algorithm of Faddeev (1.26), which for this system yields the characteristic polynomial

$$\det(sI - A) = s^4 + 10s^3 + 35s^2 + 50s + 24$$

together with the matrices $P_1, P_2, P_3$ of the expansion of $\mathrm{adj}(sI - A)$. The transfer matrix is therefore given by

$$H(s) = \frac{CBs^3 + CP_1Bs^2 + CP_2Bs + CP_3B}{s^4 + 10s^3 + 35s^2 + 50s + 24}.$$

3.2 MINIMAL REALIZATION

The construction of a state space representation from an input-output relation such as the transfer function is called a realization. In Chapter 2 it was pointed out that the transfer function of a system is equal to that of its controllable and observable subsystem. From this fact it is understandable that there can exist many state space representations, of different orders, which give the same transfer function. Among these possible state space descriptions, the system of minimal dimension is called a minimal realization. In this section, algorithms to obtain a minimal realization from a transfer function are given.

3.2.1 Dimension of a Minimal Realization

The transfer matrix $H(s)$ is a common representation of a system's input-output relationship.
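The Faddeev(-LeVerrier) recursion used in Example 3.7 can be implemented in a few lines; it produces both the coefficients of $\det(sI - A)$ and the matrices $P_k$ of $\mathrm{adj}(sI - A)$. The companion matrix below is chosen so that the characteristic polynomial reproduces the $s^4 + 10s^3 + 35s^2 + 50s + 24$ of the example:

```python
import numpy as np

def faddeev(A):
    # Faddeev recursion: P_0 = I, c_k = -tr(A P_{k-1})/k, P_k = A P_{k-1} + c_k I.
    # Returns char.-poly coefficients [1, c_1, ..., c_n] and P_0..P_{n-1}, with
    # adj(sI - A) = P_0 s^(n-1) + P_1 s^(n-2) + ... + P_{n-1}.
    n = A.shape[0]
    P = np.eye(n)
    coeffs, Ps = [1.0], [P]
    for k in range(1, n + 1):
        AP = A @ P
        c = -np.trace(AP) / k
        coeffs.append(c)
        P = AP + c * np.eye(n)
        if k < n:
            Ps.append(P)
    return coeffs, Ps, P      # final P must vanish (Cayley-Hamilton)

A = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [-24., -50., -35., -10.]])
coeffs, Ps, last = faddeev(A)
```

The vanishing of the final iterate is exactly the Cayley-Hamilton theorem, and the $P_k$ give the numerator matrices $CP_kB$ appearing in the transfer matrix formula above.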
For a multivariable system there are many forms for its representation. We give below several of these for strictly proper systems, which by definition satisfy $\lim_{s\to\infty} H(s) = 0$.

(i)
$$H(s) = \begin{pmatrix} \dfrac{p_{11}(s)}{q_{11}(s)} & \cdots & \dfrac{p_{1m}(s)}{q_{1m}(s)} \\ \vdots & & \vdots \\ \dfrac{p_{p1}(s)}{q_{p1}(s)} & \cdots & \dfrac{p_{pm}(s)}{q_{pm}(s)} \end{pmatrix} \qquad (3.38)$$

(ii)
$$H(s) = \frac{\bar H(s)}{q(s)} \qquad (3.39)$$

where

$$q(s) = s^r + a_1s^{r-1} + \cdots + a_r, \quad a_i \in R$$
$$\bar H(s) = H_{r-1}s^{r-1} + \cdots + H_0, \quad H_i \in R^{p \times m}.$$

(iii)
$$H(s) = N(s)D^{-1}(s) \qquad (3.40a)$$

with $N(s) = N_{r-1}s^{r-1} + \cdots + N_0$, $N_i \in R^{p \times m}$, and $D(s) = D_rs^r + D_{r-1}s^{r-1} + \cdots + D_0$, $D_i \in R^{m \times m}$. There is also a dual form to (3.40a):

$$H(s) = \bar N^{-1}(s)\bar D(s) \qquad (3.40b)$$

with $\bar N(s) = \bar N_rs^r + \cdots + \bar N_0$, $\bar N_i \in R^{p \times p}$, and $\bar D(s) = \bar D_{r-1}s^{r-1} + \cdots + \bar D_0$, $\bar D_i \in R^{p \times m}$.

(iv)
$$H(s) = H_0s^{-1} + H_1s^{-2} + H_2s^{-3} + \cdots \qquad (3.41)$$

In this representation the coefficient matrices $H_i$ of (3.41) are called Markov parameters. If the transfer matrix of the system $(A, B, C)$ is $H(s)$, that is,

$$H(s) = C(sI - A)^{-1}B = CBs^{-1} + CABs^{-2} + \cdots$$

then the Markov parameter $H_i$ is related to $(A, B, C)$ by

$$H_i = CA^iB, \quad i = 0, 1, 2, \ldots \qquad (3.42)$$

Therefore if there exist $A, B, C$ satisfying (3.42) then $(A, B, C)$ is a realization of $H(s)$. Theorem 2.8 of Chapter 2 shows that the completely controllable and observable subsystem of $(A, B, C)$ gives the same transfer matrix. There is, however, the question of whether there can be a system of lower order than the completely controllable and observable one. The next theorem answers this question.

Theorem 3.3 A realization $(A, B, C)$ of the given transfer function matrix $H(s)$ is a minimal realization if and only if it is completely controllable and observable.

Proof From Theorem 2.8 it is known that the minimal realization is controllable and observable.
Therefore it will be proved that a controllable and observable system is a minimal realization. Let a controllable and observable $n$th-order system $(A, B, C)$ be a realization of $H(s)$, and assume that there exists a lower-order system $(A', B', C')$ of dimension $n'$ ($< n$) realizing the same transfer matrix.

4.1.1 State Feedback and the Controllable Subspace

$$x_1^T(A + BF)^iBG = x_1^TA^iBG = 0^T$$

which yields $x_1^T\mathscr{C}_2 = 0^T$, from which $\mathscr{R}(\mathscr{C}_2) \subseteq \mathscr{R}(\mathscr{C}_1)$ is derived. The converse statement is proved as follows. $x_1^T\mathscr{C}_2 = 0^T$ means $x_1^T(A + BF)^iBG = 0^T$ for $i = 0, 1, \ldots, n-1$. Since $G$ is non-singular, successively

$$0^T = x_1^TB, \quad 0^T = x_1^TAB, \quad \ldots, \quad 0^T = x_1^T(A + BF)^iBG = x_1^T(A + BF)^{i-1}(A + BF)BG = \cdots = x_1^TA^iBG.$$

Therefore $\mathscr{R}(\mathscr{C}_2) \supseteq \mathscr{R}(\mathscr{C}_1)$ and $\mathscr{R}(\mathscr{C}_2) = \mathscr{R}(\mathscr{C}_1)$.

4.1.2 Pole Assignment by State Feedback

Theorem 4.1 shows that the controllable subspace does not vary under state feedback, which indicates that the controllability of the system $(A + BF, B, C)$ is the same as that of the system $(A, B, C)$. However the poles, that is the mode characteristics of the state space, can be changed by state feedback. This is illustrated by the following example.

Example 4.2 For a controllable single-input system

$$\dot x = Ax + bu$$

where the characteristic equation of $A$ is

$$\phi(s) = \det(sI - A) = s^n + a_{n-1}s^{n-1} + \cdots + a_0,$$

we determine the control law

$$u = f^Tx + v$$

so that the characteristic equation of the closed loop system obtained on substituting the control law, that is of

$$\dot x = (A + bf^T)x + bv,$$

has the predetermined form

$$\phi_\gamma(s) = \det(sI - A - bf^T) = s^n + \gamma_{n-1}s^{n-1} + \cdots + \gamma_0.$$

The required $f^T$ can be determined as follows (Gopinath). The relationship

$$\det(sI - A - bf^T) = \det(sI - A)\,\bigl(1 - f^T(sI - A)^{-1}b\bigr)$$

gives

$$s^n + a_{n-1}s^{n-1} + \cdots + a_0 - f^T(P_0b\,s^{n-1} + P_1b\,s^{n-2} + \cdots + P_{n-1}b) = s^n + \gamma_{n-1}s^{n-1} + \cdots + \gamma_0 \qquad (4.4)$$

where $\mathrm{adj}(sI - A) = P_0s^{n-1} + P_1s^{n-2} + \cdots + P_{n-1}$. Comparing coefficients of powers of $s$ in (4.4) gives

$$f^T[P_0b, P_1b, \ldots, P_{n-1}b] = [a_{n-1} - \gamma_{n-1},\ a_{n-2} - \gamma_{n-2},\ \ldots,\ a_0 - \gamma_0].$$

The $P_i$ are given by Faddeev's algorithm (1.26), that is

$$P_0 = I, \quad P_1 = A + a_{n-1}I, \quad P_2 = A^2 + a_{n-1}A + a_{n-2}I, \ \ldots$$

so that $[P_0b, P_1b, \ldots, P_{n-1}b]$ can be written

$$[P_0b, P_1b, \ldots, P_{n-1}b] = \mathscr{C}\begin{pmatrix} 1 & a_{n-1} & a_{n-2} & \cdots & a_1 \\ & 1 & a_{n-1} & \cdots & a_2 \\ & & \ddots & & \vdots \\ & & & & 1 \end{pmatrix}$$

where $\mathscr{C} = [b, Ab, \ldots, A^{n-1}b]$.
Since the system is controllable, $\mathscr{C}$ is non-singular and we have the required result

$$f^T = [a_{n-1} - \gamma_{n-1},\ \ldots,\ a_0 - \gamma_0]\begin{pmatrix} 1 & a_{n-1} & \cdots & a_1 \\ & 1 & & \vdots \\ & & \ddots & a_{n-1} \\ & & & 1 \end{pmatrix}^{-1}\mathscr{C}^{-1}. \qquad (4.5)$$

The algorithm (4.5) involves the parameters of the characteristic equation of $A$, which therefore have to be evaluated. Ackermann proposed another pole-allocation algorithm for single-input systems, in which $f^T$ is given by

$$f^T = -[0, \ldots, 0, 1]\,\mathscr{C}^{-1}\phi_\gamma(A) \qquad (4.6)$$

where $\phi_\gamma(s) = s^n + \gamma_{n-1}s^{n-1} + \cdots + \gamma_0$ is the desired characteristic polynomial. The proof is as follows. We write the relationship between the system with state feedback and its controllable canonical form, that is

$$T^{-1}(A + bf^T)T = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & & 1 \\ -\gamma_0 & -\gamma_1 & \cdots & -\gamma_{n-1} \end{pmatrix}$$

where $T^{-1}$ is given by (3.8). On premultiplying both sides of the above relationship by $[0, \ldots, 0, 1]$ we obtain (4.6).

Equations (4.5) and (4.6) indicate that for any given $\gamma_i$, the corresponding state feedback $f^T$ can be determined if and only if the system is controllable. This result can be extended to multivariable systems, as originally shown by Wonham.

Theorem 4.2 For a linear multivariable system $(A, B, C)$ a state feedback $F$ can be found such that the characteristic equation of the feedback system has arbitrary real coefficients if and only if $(A, B)$ is controllable.

Proof Since $(A, B, C)$ has the same characteristic equation as its equivalent system $(\bar A, \bar B, \bar C)$, $\det(sI - A) = \det T^{-1}\det(sI - \bar A)\det T$. Also the feedback law $F$ of the system is related to that of the equivalent system by $\bar F = FT$, and the following relationship exists:

$$\det(sI - A - BF) = \det T^{-1}\det(sI - \bar A - \bar B\bar F)\det T.$$

This equation shows that the pole assignability of $(A, B, C)$ is the same as that of $(\bar A, \bar B, \bar C)$. If $(A, B)$ is not controllable, then from Theorem 2.6 it has an equivalent system of the form

$$\bar A = \begin{pmatrix} A_c & A_{12} \\ 0 & A_{22} \end{pmatrix}, \qquad \bar B = \begin{pmatrix} B_c \\ 0 \end{pmatrix}, \qquad \bar F = [F_c, F_1]. \qquad (4.7)$$

The characteristic determinant of this system is given by

$$\det(sI - \bar A - \bar B\bar F) = \det\begin{pmatrix} sI - A_c - B_cF_c & -A_{12} - B_cF_1 \\ 0 & sI - A_{22} \end{pmatrix} = \det(sI - A_c - B_cF_c)\,\det(sI - A_{22})$$

which shows that all the characteristic roots of $A$ cannot be changed by state feedback.
Controllability of $(A, B)$ is thus a necessary condition for all the poles to be assigned arbitrarily. Sufficiency can be proved by determining the feedback law $F$ for a controllable system $(A, B)$ so that the resulting closed loop system has a preassigned characteristic equation. Let the controllability indices of the system be $\{\sigma_i\}$; then one of the equivalent systems can be represented by the controllable canonical form, in which each block $E_i$ is a $\sigma_i \times \sigma_i$ shift matrix, the rows $a_i^T$ collect the remaining entries of $\bar A$, and the corresponding rows of $\bar B$ form a non-singular (unit upper triangular) matrix $B_m$. Defining

$$\bar F = B_m^{-1}\hat F \qquad (4.8)$$

the rows of $\hat F$ can be chosen first to cancel the rows $a_i^T$ and then to link the blocks into a single chain, so that

$$\bar A + \bar B\bar F = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ -\gamma_0 & -\gamma_1 & \cdots & -\gamma_{n-1} \end{pmatrix}$$

whose characteristic equation $s^n + \gamma_{n-1}s^{n-1} + \cdots + \gamma_0 = 0$ is specified a priori. This indicates that the feedback matrix

$$F = \bar FT^{-1} \qquad (4.9)$$

makes the characteristic equation of $A + BF$ of the desired form.

Example 4.3 Find the control law $u = Fx$ for the fourth-order, two-input system of Example 3.1 so that the poles of the closed loop system are $-1$, $-2$, $-1 \pm j$. From Example 3.1 the controllable canonical form of the system is known, with $x$ related to $\bar x$ by $x = T\bar x$. Since the desired characteristic equation is

$$\det(sI - A - BF) = (s + 1)(s + 2)(s^2 + 2s + 2) = s^4 + 5s^3 + 10s^2 + 10s + 4$$

and $\sigma_1 = 3$, $\sigma_2 = 1$, $\bar F$ is given from (4.8), and transforming back by (4.9) gives the required $F$.

Theorem 4.2 shows that a control law can be determined for a controllable system to provide it with any desirable characteristic equation. Similar results exist for the observable system.
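Ackermann's formula (4.6) is easy to apply numerically. The sketch below places the poles of a double integrator (an illustrative system, not one of the text's examples) at $-1$ and $-2$; $\phi_\gamma(A)$ is evaluated by Horner's scheme:

```python
import numpy as np

def acker(A, b, poles):
    # Ackermann's formula (4.6): u = f^T x places the poles of A + b f^T.
    n = A.shape[0]
    C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    phi = np.real(np.poly(poles))       # desired characteristic polynomial
    phiA = np.zeros_like(A)
    for c in phi:                       # Horner evaluation of phi(A)
        phiA = phiA @ A + c * np.eye(n)
    return -np.array([0.] * (n - 1) + [1.]) @ np.linalg.inv(C) @ phiA

A = np.array([[0., 1.], [0., 0.]])      # double integrator (illustrative)
b = np.array([[0.], [1.]])
f = acker(A, b, [-1.0, -2.0])
cl_poles = np.linalg.eigvals(A + b @ f.reshape(1, -1))
```

The closed-loop eigenvalues land at the requested locations, which is only possible because $(A, b)$ is controllable, in agreement with Theorem 4.2.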
If $(A, C)$ is observable, it is known that $(A^T, C^T)$ is controllable; therefore there exists a $K$ so that the characteristic equation

$$\det(sI - A^T - C^TK^T) = \det(sI - A - KC) = 0$$

has any arbitrary set of roots symmetric with respect to the real axis.

Corollary If $(A, C)$ is observable there exists a $K$ satisfying, for predetermined $\gamma_i$,

$$\det(sI - A - KC) = s^n + \gamma_{n-1}s^{n-1} + \cdots + \gamma_0.$$

4.1.3 Pole Assignment in the Controllable Subspace

In the previous section it was shown that if a system is completely controllable, a feedback matrix $F$ can be determined so that the characteristic equation is of a desirable form. In this section we discuss pole assignment for a system which is not controllable. By Theorem 4.1 the controllable subspace of $(A, B, C)$, represented by $\mathscr{R}(\mathscr{C}_1)$, is equal to that of the feedback system $(A + BF, B, C)$, represented by $\mathscr{R}(\mathscr{C}_2)$. $\mathscr{R}(\mathscr{C}_1)$ is $A$-invariant as shown in Section 2.2.1; similarly $\mathscr{R}(\mathscr{C}_2)$ is $(A + BF)$-invariant. Therefore $\mathscr{R}(\mathscr{C}_1)$ is $(A + BF)$-invariant and

$$(A + BF)\mathscr{R}(\mathscr{C}_1) \subseteq \mathscr{R}(\mathscr{C}_1). \qquad (4.10)$$

Let the rank of $\mathscr{C}_1$ be $n_c$ and let the basis of $\mathscr{R}(\mathscr{C}_1)$ ($= \mathscr{R}(\mathscr{C}_2)$) be $v_1, \ldots, v_{n_c}$. Then from Theorem 2.6 there exists a controllable pair $(A_c, B_c)$ for $A, B$ satisfying

$$(A + BF)[v_1, \ldots, v_{n_c}] = [v_1, \ldots, v_{n_c}](A_c + B_cF_c) \qquad (4.10)'$$

where $F_c = F[v_1, \ldots, v_{n_c}]$. Since $(A_c, B_c)$ is controllable, $F_c$ can be determined to satisfy $\det(sI - A_c - B_cF_c) = s^{n_c} + \gamma_{n_c-1}s^{n_c-1} + \cdots + \gamma_0$ for the given $\gamma_i$. Let

$$\phi_c(s) = s^{n_c} + \gamma_{n_c-1}s^{n_c-1} + \cdots + \gamma_0;$$

then the Cayley-Hamilton theorem yields

$$\phi_c(A_c + B_cF_c) = 0. \qquad (4.11)$$

On the other hand, (4.10)' gives

$$(A + BF)^i[v_1, \ldots, v_{n_c}] = [v_1, \ldots, v_{n_c}](A_c + B_cF_c)^i \quad (i = 0, 1, 2, \ldots) \qquad (4.12)$$

and therefore (4.12) and (4.11) yield

$$\phi_c(A + BF)[v_1, \ldots, v_{n_c}] = [v_1, \ldots, v_{n_c}]\phi_c(A_c + B_cF_c) = 0. \qquad (4.13)$$

In Section 2.2.3 it was shown that the general solution of $\dot x = (A + BF)x$ is given by (2.49) for an initial condition in a certain subspace. For $x(0) \in \mathscr{R}(\mathscr{C}_1)$, since $\phi_c(A + BF)$ annihilates $\mathscr{R}(\mathscr{C}_1)$, $x(t)$ is given by a linear combination of the modes (2.49) corresponding to the roots of $\phi_c(s) = 0$.

Theorem 4.3 There exists a feedback matrix $F$ to realize the given characteristic equation $\phi_c(s)$ corresponding to the controllable subspace $\mathscr{X}_c$
of $(A, B)$, satisfying $\phi_c(A + BF)\mathscr{X}_c = 0$.

When some characteristic roots of $A$ have positive real parts, let $\phi^+(s)$ denote the polynomial which has these characteristic roots as its zeros. Then for any initial condition in $\mathscr{X}^+(A)$, where $\mathscr{X}^+(A) = \mathscr{N}(\phi^+(A))$ is the corresponding unstable subspace, the solution of $\dot x = Ax$ is known to diverge, as we have seen in Chapter 2. Therefore if

$$\mathscr{X}^+(A) \subseteq \mathscr{R}(\mathscr{C}) \qquad (4.14)$$

there exists a feedback matrix which makes the closed loop system stable. So if (4.14) is satisfied, the system is said to be 'stabilizable', or alternatively $(A, B)$ is said to be stabilizable. When $(A^T, C^T)$ is stabilizable, $(A, C)$ is said to be 'detectable'. Thus 'detectability' is the dual concept of 'stabilizability'.

Example 4.4 Check whether the given third-order system is stabilizable. The controllable subspace of the system is spanned by two independent columns of its controllability matrix. Since the characteristic roots of $A$ are $-1, 1, 2$, $\phi^+(s) = (s - 1)(s - 2)$, so $\phi^+(A) = (A - I)(A - 2I)$; the null space $\mathscr{N}(\phi^+(A))$ is found to be contained in $\mathscr{R}(\mathscr{C})$, and so the system is found to be stabilizable.

4.2 THE DECOUPLING PROBLEM

4.2.1 Decoupling by State Feedback

A system with $m$ inputs and $m$ outputs is said to be decoupled if and only if its transfer function matrix is

$$H(s) = \mathrm{diag}[h_{11}(s), h_{22}(s), \ldots, h_{mm}(s)] \qquad (4.15)$$

where each $h_{ii}(s)$ is not identically zero. For a decoupled system the effect of the $i$th input appears only in the $i$th output. In this section we consider decoupling of the system

$$\dot x = Ax + Bu, \quad x(0) = x_0 \qquad (4.16a)$$
$$y = Cx \qquad (4.16b)$$

where $u \in R^m$, $x \in R^n$, $A \in R^{n \times n}$, $B \in R^{n \times m}$, $C \in R^{m \times n}$, by the control law

$$u = Fx + Gv. \qquad (4.17)$$

This problem is called decoupling by state feedback. When the control law (4.17) is employed, the resultant system becomes

$$\dot x = (A + BF)x + BGv \qquad (4.18a)$$
$$y = Cx \qquad (4.18b)$$

and the transfer function matrix is given by

$$H_{FG}(s) = C(sI - A - BF)^{-1}BG. \qquad (4.18c)$$

Therefore decoupling by state feedback requires one to find the control matrices $F$ and $G$ which make $H_{FG}(s)$ diagonal and non-singular.
Before treating the decoupling problem we first consider the relation between the transfer function $H_{FG}(s)$ and that of (4.16). The transfer function matrix of (4.16) is given by

$$H(s) = C(sI - A)^{-1}B. \qquad (4.16c)$$

Proposition 4.1 The transfer function matrix $H_{FG}(s)$ of (4.18c) is related to $H(s)$ of (4.16c) by

$$H_{FG}(s) = H(s)[I + F(sI - A - BF)^{-1}B]G = H(s)[I - F(sI - A)^{-1}B]^{-1}G. \qquad (4.19)$$

Proof

$$H_{FG}(s) = C(sI - A - BF)^{-1}BG = C(sI - A)^{-1}[(sI - A - BF) + BF](sI - A - BF)^{-1}BG = H(s)[I + F(sI - A - BF)^{-1}B]G.$$

Now consider

$$[I + F(sI - A - BF)^{-1}B][I - F(sI - A)^{-1}B] = I - F(sI - A)^{-1}B + F(sI - A - BF)^{-1}[(sI - A) - BF](sI - A)^{-1}B = I.$$

Therefore

$$[I + F(sI - A - BF)^{-1}B] = [I - F(sI - A)^{-1}B]^{-1} \qquad (4.20)$$

and (4.19) is proved.

(Figure 4.2 Series compensator.)

Proposition 4.1 indicates that controlling the system (4.16) with the control law (4.17) is equivalent to compensating the system (4.16) serially using the compensator

$$H_c(s) = [I + F(sI - A - BF)^{-1}B]G \qquad (4.21)$$

which can be represented by the state space equations

$$\dot x_c = (A + BF)x_c + BGv \qquad (4.22a)$$
$$y_c = Fx_c + Gv \qquad (4.22b)$$

where $v$ is the input to the compensator and $y_c$ is its output. The original feedback control system is shown in Figure 4.1 and its equivalent representation with the series precompensator is shown in Figure 4.2. Using this proposition, the necessary and sufficient condition for the system to be decoupled by state feedback control is given by the following theorem.

Theorem 4.4 There exists a control law of the form (4.17) to decouple the system (4.16) if and only if the matrix

$$B^* = \begin{pmatrix} c_1^TA^{\sigma_1}B \\ \vdots \\ c_m^TA^{\sigma_m}B \end{pmatrix} \qquad (4.23)$$

is non-singular, where $c_i^T$ is the $i$th row of $C$ and $\sigma_i$ ($i = 1, 2, \ldots, m$) is the integer given by

$$\sigma_i = \begin{cases} \min\{j \mid c_i^TA^jB \neq 0^T\} \\ n - 1 & \text{if } c_i^TA^jB = 0^T\ \text{for all}\ j. \end{cases} \qquad (4.24)$$
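The quantities $\sigma_i$ and $B^*$ of (4.23)-(4.24), and the decoupling law built from them, can be computed mechanically. The sketch below uses an illustrative three-state, two-input, two-output system (not Example 4.5) and confirms the integrator-decoupled closed loop by evaluating $H_{FG}(s)$ at a sample point:

```python
import numpy as np

# Hypothetical system data for illustration.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [-1., -2., -3.]])
B = np.array([[0., 0.], [1., 0.], [0., 1.]])
C = np.array([[1., 0., 0.], [0., 0., 1.]])

n = A.shape[0]
sigma, Bstar, Cstar = [], [], []
for ci in C:
    row, k = ci @ B, 0
    while np.allclose(row, 0) and k < n - 1:   # sigma_i of (4.24)
        k += 1
        row = ci @ np.linalg.matrix_power(A, k) @ B
    sigma.append(k)
    Bstar.append(row)                          # row c_i^T A^sigma_i B of (4.23)
    Cstar.append(ci @ np.linalg.matrix_power(A, k + 1))
Bstar, Cstar = np.array(Bstar), np.array(Cstar)

G = np.linalg.inv(Bstar)                       # decoupling gains
F = -G @ Cstar

# Closed-loop transfer matrix (4.18c) at a test frequency s = 2:
s = 2.0
H = C @ np.linalg.inv(s * np.eye(n) - A - B @ F) @ B @ G
```

For this data $\sigma_1 = 1$, $\sigma_2 = 0$, so the decoupled transfer matrix should be $\mathrm{diag}(s^{-2}, s^{-1})$, i.e. $\mathrm{diag}(0.25, 0.5)$ at $s = 2$.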
Proof Necessity is proved first. The $i$th row of the transfer function $H(s)$ of (4.16) can be expanded in powers of $s^{-1}$ as

$$h_i(s)^T = c_i^T(sI - A)^{-1}B = c_i^TBs^{-1} + c_i^TABs^{-2} + \cdots = s^{-\sigma_i-1}\bigl[c_i^TA^{\sigma_i}B + c_i^TA^{\sigma_i+1}(sI - A)^{-1}B\bigr]. \qquad (4.25)$$

Using $B^*$ of (4.23), $H(s)$ can be written as

$$H(s) = \mathrm{diag}(s^{-\sigma_1-1}, \ldots, s^{-\sigma_m-1})\,[B^* + C^*(sI - A)^{-1}B] \qquad (4.26)$$

where

$$C^* = \begin{pmatrix} c_1^TA^{\sigma_1+1} \\ \vdots \\ c_m^TA^{\sigma_m+1} \end{pmatrix}. \qquad (4.27)$$

Thus from Proposition 4.1 the transfer function matrix of the feedback system is given by

$$H_{FG}(s) = \mathrm{diag}(s^{-\sigma_1-1}, \ldots, s^{-\sigma_m-1})\,[B^* + C^*(sI - A)^{-1}B]\,[I + F(sI - A - BF)^{-1}B]G. \qquad (4.28)$$

If $H_{FG}(s)$ is to be non-singular and diagonal then $[B^* + C^*(sI - A)^{-1}B][I + F(sI - A - BF)^{-1}B]G$ should be non-singular and diagonal. Thus $G$ should be non-singular, and the coefficient matrix of $s^0$ of the polynomial part, $B^*G$, should be diagonal. Since $G$ is non-singular and all row vectors of $B^*$ are non-zero, $B^*G$ is non-singular. Thus the non-singularity of $B^*$ is a necessary condition for decoupling.

The sufficiency is proved by constructing the control law to decouple the system as follows. If $B^*$ is non-singular, let $F$ and $G$ be chosen as

$$G = B^{*-1} \qquad (4.29a)$$
$$F = -B^{*-1}C^*. \qquad (4.29b)$$

Now from Proposition 4.1,

$$H_{FG}(s) = \mathrm{diag}(s^{-\sigma_1-1}, \ldots, s^{-\sigma_m-1})\,[B^* + C^*(sI - A)^{-1}B]\,[G^{-1} - G^{-1}F(sI - A)^{-1}B]^{-1}. \qquad (4.30)$$

Substituting (4.29) into (4.30) gives

$$H_{FG}(s) = \mathrm{diag}(s^{-\sigma_1-1}, \ldots, s^{-\sigma_m-1})\,[B^* + C^*(sI - A)^{-1}B]\,[B^* + C^*(sI - A)^{-1}B]^{-1} = \mathrm{diag}(s^{-\sigma_1-1}, \ldots, s^{-\sigma_m-1}) \qquad (4.31)$$

and the system is decoupled. A decoupled system having $s^{-\sigma_i-1}$ as its diagonal elements, as in (4.31), is called an integrator decoupled system.

Example 4.5 The given third-order system with two inputs and two outputs is required to be integrator decoupled by the control law (4.17).

Solution The transfer function matrix of the system is calculated, and its internal structure is depicted in Figure 4.3.
(Figure 4.3 System of Example 4.5.)

Since $c_1^TB = (1, 0) \neq 0^T$, $\sigma_1 = 0$; since $c_2^TB = 0^T$ and $c_2^TAB = (0, 1) \neq 0^T$, $\sigma_2 = 1$. Therefore $B^*$ of (4.23) and $C^*$ of (4.27) are formed; $B^* = I$ is non-singular, so the system can be decoupled, and from (4.29) the control law is $G = I$ and $F = -C^*$. The system is therefore integrator decoupled and its structure is depicted in Figure 4.4. As can be seen from (4.18), the control law does not change the output matrix $C$; the measurement structure is therefore unchanged, and decoupling is achieved by cancelling the effect of one internal path by another.

4.2.2 Pole Assignment and Zeros of Decoupled Systems

In this section, the control law matrices $F$ and $G$ are determined so that the decoupled system has preassigned poles. This is given by the following theorem.

Theorem 4.5 When the system (4.16) can be decoupled by state feedback and $F$ and $G$ are determined from

$$G^{-1} = B^*, \qquad (-G^{-1}F)_i = c_i^TA^{\sigma_i+1} + \alpha_{i\sigma_i}c_i^TA^{\sigma_i} + \cdots + \alpha_{i0}c_i^T \qquad (4.32)$$

where $(-G^{-1}F)_i$ denotes the $i$th row vector of $-G^{-1}F$, then the resultant feedback system has the transfer function

$$H_{FG}(s) = \mathrm{diag}\left[\frac{1}{s^{\sigma_1+1} + \alpha_{1\sigma_1}s^{\sigma_1} + \cdots + \alpha_{10}},\ \ldots,\ \frac{1}{s^{\sigma_m+1} + \alpha_{m\sigma_m}s^{\sigma_m} + \cdots + \alpha_{m0}}\right]. \qquad (4.33)$$

Proof Let the $i$th row vector of the transfer function matrix of (4.16) be denoted by $h_i(s)^T$ and multiply it by $(s^{\sigma_i+1} + \alpha_{i\sigma_i}s^{\sigma_i} + \cdots + \alpha_{i0})$; then from (4.25)

$$(s^{\sigma_i+1} + \alpha_{i\sigma_i}s^{\sigma_i} + \cdots + \alpha_{i0})\,h_i(s)^T = c_i^TA^{\sigma_i}B + \hat c_i^T(sI - A)^{-1}B \qquad (4.34)$$

where

$$\hat c_i^T = c_i^TA^{\sigma_i+1} + \alpha_{i\sigma_i}c_i^TA^{\sigma_i} + \cdots + \alpha_{i0}c_i^T. \qquad (4.35)$$

(The lower-order terms collapse because $c_i^TA^jB = 0^T$ for $j < \sigma_i$.) Dividing both sides of (4.34) by $(s^{\sigma_i+1} + \alpha_{i\sigma_i}s^{\sigma_i} + \cdots + \alpha_{i0})$, $H(s)$ is given by
$$H(s) = \mathrm{diag}\bigl[(s^{\sigma_1+1} + \alpha_{1\sigma_1}s^{\sigma_1} + \cdots + \alpha_{10})^{-1},\ \ldots,\ (s^{\sigma_m+1} + \cdots + \alpha_{m0})^{-1}\bigr]\,[B^* + \hat C^*(sI - A)^{-1}B] \qquad (4.36)$$

where $\hat C^* = [\hat c_1, \ldots, \hat c_m]^T$. Then from (4.19) and (4.36)

$$H_{FG}(s) = \mathrm{diag}\bigl[(s^{\sigma_1+1} + \cdots + \alpha_{10})^{-1}, \ldots\bigr]\,[B^* + \hat C^*(sI - A)^{-1}B]\,[G^{-1} - G^{-1}F(sI - A)^{-1}B]^{-1}. \qquad (4.37)$$

Thus by choosing $G$ and $F$ as in (4.32), so that $G^{-1} = B^*$ and $-G^{-1}F = \hat C^*$, the transfer function matrix of the closed loop system is given by (4.33).

Example 4.6 In Example 4.5 the integrator decoupled system was obtained. In this example the same system is required to be decoupled to give

$$H_{FG}(s) = \mathrm{diag}[(s + 1)^{-1}, (s + 2)^{-1}].$$

From Example 4.5, $B^* = I$ and therefore $G = I$. Since $\alpha_{10} = 1$, $\alpha_{21} = 2$, $\alpha_{20} = 0$, (4.32) gives $F$, and this yields

$$H_{FG}(s) = \mathrm{diag}[(s + 1)^{-1}, (s + 2)^{-1}].$$

In the above example, a 3rd order system has been decoupled to give a 2nd order system without finite zeros. Thus a pole-zero cancellation has taken place. In the next theorem the zero positions are considered in the decoupling.

Theorem 4.6 The linear system (4.16) can be decoupled by the feedback control law (4.17), with the $i$th diagonal element of the transfer function of the decoupled system having the numerator $n_i(s)$, if and only if the $i$th row of $H(s)$, denoted by $h_i(s)^T$, is represented by

$$h_i(s)^T = c_i^T(sI - A)^{-1}B = n_i(s)\,\hat c_i^T(sI - A)^{-1}B \quad (i = 1, \ldots, m) \qquad (4.38)$$

and

$$\hat B^* = \begin{pmatrix} \hat c_1^TA^{\hat\sigma_1}B \\ \vdots \\ \hat c_m^TA^{\hat\sigma_m}B \end{pmatrix} \qquad (4.23)'$$

is non-singular, where $\hat\sigma_i$ is

$$\hat\sigma_i = \begin{cases} \min\{j \mid \hat c_i^TA^jB \neq 0^T\} \\ n - 1 & \text{if } \hat c_i^TA^jB = 0^T\ \text{for all}\ j. \end{cases} \qquad (4.24)'$$

Proof Sufficiency is proved first. Let $\hat C = [\hat c_1, \ldots, \hat c_m]^T$; then from (4.38) the transfer function matrix $H(s)$ is

$$H(s) = \mathrm{diag}[n_1(s), n_2(s), \ldots, n_m(s)]\,\hat C(sI - A)^{-1}B. \qquad (4.39)$$

From Theorem 4.5 and (4.23)', $(A, B, \hat C)$ can be decoupled to give

$$H_{FG}(s) = \mathrm{diag}[n_1(s), \ldots, n_m(s)]\ \mathrm{diag}[d_1^{-1}(s), \ldots, d_m^{-1}(s)].$$

Therefore the $i$th component of the decoupled transfer function is represented by

$$h_{ii}(s) = n_i(s)/d_i(s), \quad i = 1, 2, \ldots, m \qquad (4.40)$$

where $d_i(s)$ is determined so as to have no common factors with $n_i(s)$. Thus sufficiency is proved.
Necessity is proved next. When the diagonal element of the decoupled system is given by (4.40), the $i$th row vector of the decoupled transfer function is

$$h_{FG,i}(s)^T = c_i^T(sI - A - BF)^{-1}BG = \frac{n_i(s)}{d_i(s)}\,e_i^T. \qquad (4.41)$$

From (4.41) there exists a $\hat c_i$ satisfying

$$h_{FG,i}(s)^T = n_i(s)\,\hat c_i^T(sI - A - BF)^{-1}BG. \qquad (4.41)'$$

On the other hand, from (4.19), (4.41) and (4.41)' it is easily shown that

$$c_i^T(sI - A)^{-1}B\,[I - F(sI - A)^{-1}B]^{-1}G = n_i(s)\,\hat c_i^T(sI - A)^{-1}B\,[I - F(sI - A)^{-1}B]^{-1}G,$$

and (4.38) is then derived by removing the common right factor $[I - F(sI - A)^{-1}B]^{-1}G$.

From the above theorem it is found that the numerator terms of the decoupled system are given by the properties of the original system and cannot be designed arbitrarily. Next, an algorithm to determine the numerator terms using Wolovich and Falb's structure theorem is presented.

The Algorithm to Obtain the Numerator Terms of Decoupled Systems

The 1st step: Luenberger's canonical form $(A_c, B_c, C_c)$ is calculated for the given system $(A, B, C)$. Then from the structure theorem the transfer function is given by

$$H(s) = C_cS(s)\,\delta^{-1}(s)\,B_m$$

where $S(s)$, $\delta(s)$ and $B_m$ are as defined in (3.32)-(3.34), and the coupling rows $a_i^T$ and the $\beta_{ij}$ are read off from the rows of $A_c$ and $B_c$.

The 2nd step: A $\hat C_c$ satisfying

$$C_cS(s)\,\delta^{-1}(s)\,B_m = \mathrm{diag}[n_1(s), \ldots, n_m(s)]\,\hat C_cS(s)\,\delta^{-1}(s)\,B_m \qquad (4.42)$$

is determined.

The 3rd step: The control law $F_c$ and $G_c$ to decouple $(A_c, B_c, \hat C_c)$ is determined, and if necessary $F_c$ and $G_c$ are transformed to $F$ and $G$ for $(A, B, C)$.

Using the previous example, the given algorithm is illustrated. Since the system is already in controllable canonical form, the 1st step gives $H(s)$ directly by the structure theorem. The 2nd step yields $n_1(s) = 1$ and $n_2(s) = s$. The 3rd step: $\hat c_1^TB = [1, 0]$, $\hat c_2^TB = [0, 0]$ and $\hat c_2^TAB = [0, 1]$ yield $\hat B^* = I$. So $G = I$, and from (4.32) $F$ is given by
the rows of (4.32). By choosing $\alpha_{10} = 1$, $\alpha_{21} = 3$, $\alpha_{20} = 2$, $F$ is determined, and the resultant transfer function is

$$H_{FG}(s) = \mathrm{diag}\left[\frac{1}{s + 1},\ \frac{s}{s^2 + 3s + 2}\right]$$

and in this case all the poles are found to be assignable. This, however, is not generally the case for decoupled systems.

PROBLEMS

P4.1 Find a feedback $f^T$ to assign the poles $-1$, $-1 \pm j$ for the given third-order pairs $(A, b)$.

P4.2 Find a feedback $f^T$ to assign the poles $-1 \pm j$ for the given second-order pairs $(A, b)$.

P4.3 Check whether the systems given in P2.1 are stabilizable or not.

P4.4 Check whether the given systems can be decoupled or not, and if a system can be decoupled, find $F$ and $G$ to yield the integrator decoupled form.

P4.5 Show that for the $n$-dimensional system with $m$ inputs and $p$ outputs the output feedback control $u = \bar k f^Ty$ can assign $p$ poles by choosing $f$ as

$$f^T = [\phi(\lambda_1), \ldots, \phi(\lambda_p)]\,[C\,\mathrm{adj}(\lambda_1I - A)B\bar k, \ldots, C\,\mathrm{adj}(\lambda_pI - A)B\bar k]^{-1} \qquad (4.43)$$

for the given $\bar k$, where $\phi(s) = \det(sI - A)$ and $\lambda_1, \ldots, \lambda_p$ are the poles to be assigned.

P4.6 Show that the given pair $(A, b)$ is controllable. Show that the $A$ matrix has two unstable eigenvalues, and determine a state feedback control law to move these eigenvalues to $-3$ and $-2$ whilst leaving the third eigenvalue unchanged.

P4.7 Determine a state feedback control law for the given third-order system in companion form to place its closed loop poles at $-2$, $-3$ and $-5$.

5 OPTIMAL CONTROL AND OBSERVERS

5.1 OPTIMAL CONTROL

5.1.1 Quadratic Criterion Function

Control can be considered to be a manipulation to achieve or to fulfil a given objective, such as to transfer the state to a desirable one and/or to exclude the effect of a disturbance.
When more than a single control can fulfil the given objective, the most desirable control in the sense of minimizing a given criterion function can be used. Such a control is said to be an optimal control. If the criterion function is the time required to attain the objective state, the minimum-time control is optimal; if the criterion function is the energy required, then the minimum-energy control is optimal. Many kinds of functionals can be considered as criterion functions, although in this book only the quadratic criterion function is discussed, since it is mathematically tractable and thus commonly used for the design of controls for linear multivariable systems. For the linear controllable multivariable system

$$\dot x = Ax + Bu, \quad x(0) = x_0 \qquad (5.1)$$

with controllable pair $(A, B)$, we consider the following quadratic criterion function:

$$J = \frac{1}{2}\int_0^\infty \bigl(\|x(t)\|_Q^2 + \|u(t)\|_R^2\bigr)\,dt \qquad (5.2)$$

where $R$ is a symmetric positive definite matrix, and $Q$ is a symmetric positive semidefinite matrix written as $Q = H^TH$ with observable pair $(A, H)$. ($\|x\|_Q^2$ denotes the quadratic form $x^TQx$; likewise $\|u\|_R^2 = u^TRu$.) The optimal control problem for a linear multivariable system with the quadratic criterion function is one of the most common problems in linear system theory. The optimal control for the quadratic criterion is given by the following theorem.

Theorem 5.1 The optimal control for the controllable system (5.1) which minimizes the criterion function

$$J = \frac{1}{2}\|x(t_1)\|_{P_1}^2 + \frac{1}{2}\int_0^{t_1}\bigl(\|x(t)\|_Q^2 + \|u(t)\|_R^2\bigr)\,dt \qquad (5.3)$$

is given by

$$u(t) = -R^{-1}B^TP(t; P_1, t_1)\,x(t) \qquad (5.4)$$

where $P_1$ is positive definite, $Q$ is positive semidefinite written as $Q = H^TH$ with observable pair $(A, H)$, $R$ is a positive definite symmetric matrix, and $P(t; P_1, t_1)$ is a solution of the Riccati differential equation

$$-\dot P = A^TP + PA + Q - PBR^{-1}B^TP \qquad (5.5a)$$

with the terminal condition

$$P(t_1) = P_1. \qquad (5.5b)$$

The minimum value of the criterion function is given by

$$\min J = \tfrac{1}{2}\|x_0\|^2_{P(0;P_1,t_1)}. \qquad (5.6)$$

The proof is given in the appendix at the end of the chapter.
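For constant $A, B, Q, R$ the steady-state solution of (5.5) satisfies the algebraic Riccati equation $A^TP + PA + Q - PBR^{-1}B^TP = 0$. One standard way to compute it is the Newton-Kleinman iteration, which replaces the quadratic equation by a sequence of linear Lyapunov equations. The sketch below (double integrator with $Q = I$, $R = I$: illustrative data, not from the text) solves each Lyapunov equation by the vec/Kronecker trick:

```python
import numpy as np

def lyap(Acl, M):
    # Solve Acl^T P + P Acl + M = 0 for P via vec(): the equation becomes
    # (I (x) Acl^T + Acl^T (x) I) vec(P) = -vec(M) in column-major ordering.
    n = Acl.shape[0]
    K = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    p = np.linalg.solve(K, -M.reshape(n * n, order='F'))
    return p.reshape((n, n), order='F')

A = np.array([[0., 1.], [0., 0.]])      # double integrator (illustrative)
B = np.array([[0.], [1.]])
Q = np.eye(2)
Rinv = np.array([[1.]])                 # R = I, so R^-1 = I

K = np.array([[1., 1.]])                # initial stabilizing gain, u = -K x
for _ in range(20):
    # Lyapunov step: (A-BK)^T P + P (A-BK) + Q + K^T R K = 0  (R = I here)
    P = lyap(A - B @ K, Q + K.T @ K)
    K = Rinv @ B.T @ P                  # gain update

residual = A.T @ P + P @ A + Q - P @ B @ Rinv @ B.T @ P
```

For this data the limiting solution can also be found by hand from the algebraic Riccati equation: $P = \begin{pmatrix}\sqrt{3} & 1\\ 1 & \sqrt{3}\end{pmatrix}$, and $\min J = \tfrac{1}{2}x_0^TPx_0$ by (5.6).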
In particular, when t_f → ∞ and P_f = 0, let the limit of the solution of (5.5), if it exists, be P(∞), that is

  lim_{t_f→∞} P(0; 0, t_f) = P(∞).    (5.7)

It can be shown that if the system (5.1) is controllable the limit exists. This can be proved as follows. When the system is controllable, there exists a control making the state zero in a finite time t̄ (< t_f), so there exists a β such that

  min J = ½ ‖x₀‖²_{P(0; 0, t_f)} ≤ ½ ∫₀^{t̄} ( ‖x(t)‖²_Q + ‖u(t)‖²_R ) dt ≤ ½ β ‖x₀‖²    (5.8)

for finite matrices Q and R. Equation (5.8) shows that the limit of the solution P(∞) exists, since P(0; 0, t_f) is bounded and non-decreasing with respect to t_f. For constant matrices A, B, Q, R, the limit of the solution does not depend on a shift of time, since

  lim_{t_f→∞} P(t₀ + h; 0, t_f + h) = P(∞),  ∀h

so P(∞) is a constant matrix. Since P(∞) is positive definite from (5.6) (see Appendix) when (A, H) is observable, P(∞) is the constant, symmetric, positive definite matrix solution of the algebraic Riccati equation

  AᵀP + PA + Q − PBR⁻¹BᵀP = 0.    (5.9)

Corollary 5.1 If the linear system (5.1) is controllable and (A, H) is observable, the optimal control minimizing the criterion function (5.2) is given by

  u(t) = −R⁻¹BᵀP x(t)    (5.4)′

where P is the symmetric, positive definite solution of (5.9).

Example 5.1 Find the optimal control with the criterion function

  J = ½ ∫₀^∞ ( q₁x₁² + q₂x₂² + r u² ) dt    (5.10)

for the system

  ẋ = [ 0 1 ; −a₀ −a₁ ] x + [ 0 ; 1 ] u.    (5.11)

Let the solution of the Riccati equation be denoted by

  P = [ p₁₁ p₁₂ ; p₁₂ p₂₂ ].

For the (1,1), (1,2) and (2,2) elements of (5.9), respectively, we obtain

  −2a₀p₁₂ + q₁ − p₁₂²r⁻¹ = 0
  p₁₁ − a₀p₂₂ − a₁p₁₂ − p₁₂p₂₂r⁻¹ = 0
  2p₁₂ − 2a₁p₂₂ + q₂ − p₂₂²r⁻¹ = 0    (5.12)

which yield the solutions

  p₁₂ = −ra₀ ± r(a₀² + q₁/r)^{1/2}
  p₂₂ = −ra₁ ± r(a₁² + (q₂ + 2p₁₂)/r)^{1/2}
  p₁₁ = a₁p₁₂ + a₀p₂₂ + p₁₂p₂₂r⁻¹.

Since P is positive definite we require p₁₁ > 0, p₂₂ > 0, p₁₁p₂₂ − p₁₂² > 0. Thus, the elements of P are

  p₁₂ = −ra₀ + r(a₀² + q₁/r)^{1/2}    (5.13a)
  p₂₂ = −ra₁ + r(a₁² + (q₂ + 2p₁₂)/r)^{1/2}    (5.13b)
  p₁₁ = a₁p₁₂ + a₀p₂₂ + p₁₂p₂₂r⁻¹    (5.13c)

and the optimal control
is given by

  u(t) = −r⁻¹(p₁₂, p₂₂)x = −[−a₀ + (a₀² + q₁/r)^{1/2}] x₁ − [−a₁ + (a₁² + (q₂ + 2p₁₂)/r)^{1/2}] x₂.    (5.14)

Characteristic properties of systems with optimal control and the numerical computation of the optimal control are discussed in the following sections.

5.1.2 The Stability of the Optimal Control System

The optimal control for the system (5.1) with the quadratic criterion function (5.2) has been proved to be given by

  u = Fx    (5.15)

where

  F = −R⁻¹BᵀP.    (5.16)

Substituting (5.15) into (5.1), the closed-loop system is given by

  ẋ = (A + BF)x,  x(0) = x₀.    (5.17)

The stability of this closed-loop system is discussed in this section. Using (5.16), (5.9) can be rewritten as

  (A + BF)ᵀP + P(A + BF) = −(Q + FᵀRF).    (5.18)

If Q and P are positive definite then, from the Lyapunov equation (2.52), (A + BF) is stable. Even if Q is only symmetric positive semidefinite and expressed as

  Q = HᵀH    (5.19)

the closed loop is still stable, as shown by the following theorem.

Theorem 5.2 If the linear system (A, B, H) is controllable and observable, then the closed-loop system using the optimal control (5.15) which minimizes the quadratic criterion function (5.2) is asymptotically stable.

Proof Let the symmetric positive definite solution of (5.9) be P, and define the functional

  V(x) = ‖x‖²_P    (5.20)

which gives a positive value for any x except x = 0. Let x be the solution of (5.17); then the derivative of V(x) is derived from (5.18) as

  V̇(x) = ẋᵀPx + xᵀPẋ = xᵀ[(A + BF)ᵀP + P(A + BF)]x
      = −( ‖Hx‖² + ‖R⁻¹BᵀPx‖²_R ) ≤ 0.    (5.21)

Assume that V̇(x) = 0 for some x ≠ 0; then BᵀPx = 0 and Hx = 0 should be simultaneously satisfied. Therefore Fx = 0 and

  Hx = H e^{(A+BF)t}x₀ = H e^{At}x₀ = 0

which contradicts the observability of (A, H). Therefore

  V̇(x) < 0,  x ≠ 0

and V(x) is a Lyapunov function for (5.17), so (5.17) is asymptotically stable.

The poles of the closed-loop system (5.17) are the roots of the characteristic equation det(sI
− A − BF) = det(sI − A + BR⁻¹BᵀP) = 0.    (5.22)

Example 5.2 Find the poles for the optimal closed-loop system of Example 5.1. Since F is a 1 × 2 matrix, it can be written as fᵀ = (f₀, f₁), and from (5.13) and (5.16) it is given by

  fᵀ = ( a₀ − (a₀² + q₁/r)^{1/2},  a₁ − (a₁² + (q₂ + 2p₁₂)/r)^{1/2} ).    (5.23)

B is a 2 × 1 matrix, so if it is denoted by b the characteristic equation of the closed-loop system is

  det(sI − A − bfᵀ) = s² + (a₁ − f₁)s + (a₀ − f₀)
    = s² + (a₁² + (q₂ + 2p₁₂)/r)^{1/2} s + (a₀² + q₁/r)^{1/2} = 0    (5.24)

and the poles are given by

  s = −½ (κ + 2ω)^{1/2} ± ½ (κ − 2ω)^{1/2},  κ = a₁² + q₂/r − 2a₀,  ω = (a₀² + q₁/r)^{1/2}.    (5.25)

(When κ − 2ω < 0 the second term is imaginary and the poles form a complex conjugate pair.)

Example 5.3 For the linear system

  ẋ = [ 0 1 ; −1 0 ] x + [ 0 ; 1 ] u    (5.26)

find how the poles of the closed-loop system move when using the optimal control minimizing

  J = ½ ∫₀^∞ ( x₁² + r u² ) dt    (5.27)

as r varies. From (5.13) and (5.14) with a₀ = 1, a₁ = 0, q₁ = 1, q₂ = 0, the optimal control is

  u = −[(1 + 1/r)^{1/2} − 1] x₁ − [2(1 + 1/r)^{1/2} − 2]^{1/2} x₂    (5.28)

and the poles of the closed-loop system are given from (5.25) as

  s = (1/√2) { −[(1 + 1/r)^{1/2} − 1]^{1/2} ± j[(1 + 1/r)^{1/2} + 1]^{1/2} }.    (5.29)

By Theorem 5.2 the poles of the closed-loop system for any r (> 0) are located in the left half of the complex plane. The location of the poles as r varies is shown in Figure 5.1.

Figure 5.1 Pole locations with variation of r

5.1.3 Frequency Domain Characteristics of a System with Optimal Control

In the previous sections we have shown that the optimal control for a quadratic criterion function is given by state feedback computed from the solution of a Riccati equation. In this section we discuss the frequency domain properties of the single-input system with optimal control. The system considered is a controllable single-input system represented by

  ẋ = Ax + bu,  x(0) = x₀    (5.1)′

with the criterion function

  J = ½ ∫₀^∞ ( ‖Hx‖² + r|u|² ) dt.    (5.2)′

From Corollary 5.1, the optimal control

  u = fᵀx    (5.15)′

is characterized by the following equations:

  fᵀ = −r⁻¹bᵀP    (5.16)′
  AᵀP + PA + HᵀH − r ffᵀ = 0.    (5.18)′

This control law may be characterized in the frequency domain, without using the solution of the Riccati equation, by the Kalman equation.
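As a quick numerical check of Examples 5.2–5.3 (pure Python; the values of r are illustrative), the gain (5.28) can be formed, the closed-loop characteristic polynomial solved, and the roots compared with the closed form (5.29):

```python
import cmath

# Numerical check of Example 5.3: for several r, the closed-loop polynomial
# s^2 - f1*s + (1 - f0), built from the gain (5.28), has the poles (5.29).
for r in (0.1, 1.0, 10.0):
    w = (1 + 1 / r) ** 0.5
    f0, f1 = 1 - w, -(2 * w - 2) ** 0.5          # u = f0*x1 + f1*x2, eq. (5.28)
    c1, c0 = -f1, 1 - f0                          # s^2 + c1*s + c0
    roots = [(-c1 + sgn * cmath.sqrt(c1 ** 2 - 4 * c0)) / 2 for sgn in (1, -1)]
    pole = (-((w - 1) ** 0.5) + 1j * (w + 1) ** 0.5) / 2 ** 0.5   # eq. (5.29)
    assert min(abs(z - pole) for z in roots) < 1e-12
    assert all(z.real < 0 for z in roots)         # stable for every r > 0
print("poles match (5.29) and lie in the left half plane")
```

As r grows the poles approach ±j, the stable reflections of the open-loop poles of (5.26), in agreement with Figure 5.1.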
Theorem 5.3 If (5.1)′ is controllable, the necessary and sufficient condition for the control law fᵀ given by (5.15)′ to be the optimal control minimizing (5.2)′ is that the closed-loop system with feedback fᵀ is stable and the following Kalman equation holds for almost all ω:

  r[1 − fᵀ(−jωI − A)⁻¹b][1 − fᵀ(jωI − A)⁻¹b] = r + ‖H(jωI − A)⁻¹b‖².    (5.30)

Proof (Necessity) Since (5.18)′ holds for the optimal control law fᵀ, adding sP to and subtracting sP from (5.18)′ gives

  (−sI − Aᵀ)P + P(sI − A) + r ffᵀ = HᵀH.

Premultiplying by bᵀ(−sI − Aᵀ)⁻¹ and postmultiplying by (sI − A)⁻¹b, and using bᵀP = −r fᵀ, yields

  −r fᵀ(sI − A)⁻¹b − r bᵀ(−sI − Aᵀ)⁻¹f + r[bᵀ(−sI − Aᵀ)⁻¹f][fᵀ(sI − A)⁻¹b]
    = bᵀ(−sI − Aᵀ)⁻¹HᵀH(sI − A)⁻¹b.

Adding r to both sides gives

  r[1 − fᵀ(−sI − A)⁻¹b][1 − fᵀ(sI − A)⁻¹b] = r + [H(−sI − A)⁻¹b]ᵀ[H(sI − A)⁻¹b]    (5.31)

which on setting s = jω is (5.30).

Example 5.4 Find what criterion function corresponds to a given stabilizing feedback fᵀ = (f₀, f₁) for the system of Example 5.1, where stability of the closed loop requires a₁ − f₁ > 0 and a₀ − f₀ > 0. Denoting the characteristic equation of the open-loop system by

  φ(s) = s² + a₁s + a₀

the left side of (5.31) can be written as

  [φ(−s) − (f₀ − f₁s)][φ(s) − (f₀ + f₁s)] / [φ(−s)φ(s)]
   = 1 + [ (2a₁f₁ − 2f₀ − f₁²)s² + (f₀² − 2a₀f₀) ] / [φ(−s)φ(s)].

The right side of (5.31) with r = 1 is given by

  1 + (q₁ − q₂s²) / [φ(−s)φ(s)]

where Q = [ q₁ q₁₂ ; q₂₁ q₂ ]. From the equality of the above expressions, q₁₂ and q₂₁, which are equal, may be chosen arbitrarily (their contributions cancel), and

  q₁ = f₀² − 2a₀f₀ = (f₀ − a₀)² − a₀²
  q₂ = f₁² − 2a₁f₁ + 2f₀ = (f₁ − a₁)² − a₁² + 2f₀.    (5.33)

5.1.4 Square Root Locus

For Example 5.3 the poles of the closed-loop system with optimal control are depicted in Figure 5.1 as r varies. These poles are roots of the Kalman equation which, as (5.31) shows, is a function of s², so that it has roots {−λᵢ} if {λᵢ} are roots, and the locus of these roots as a function of r is called the square root locus. Since the locus is symmetrical with respect to the imaginary axis, only the left half of the figure, as shown in Figure 5.1, is usually drawn. In this section the characteristics of the locus are discussed using the Kalman equation.

The optimal control of (5.1)′ minimizing the criterion function (5.2)′ has been given by (5.15)′ and (5.16)′. Since there exist the relationships

  det(sI − A − bfᵀ) = det(sI − A)[1 − fᵀ(sI − A)⁻¹b]

and φ(−s)φ(s) = det(−sI − A)det(sI
− A), multiplying both sides of (5.31) by φ(−s)φ(s) we obtain

  φ_f(−s)φ_f(s) = φ(−s)φ(s) + ρ φ(−s)‖H(sI − A)⁻¹b‖² φ(s)    (5.34)

where ρ = 1/r and

  φ_f(s) = det(sI − A − bfᵀ).    (5.35)

Now, since ‖H(sI − A)⁻¹b‖² can be expressed as

  ‖H(sI − A)⁻¹b‖² = nᵀ(−s)n(s) / [φ(−s)φ(s)],  n(s) = H adj(sI − A)b    (5.36)

substituting (5.36) into (5.34) yields

  φ_f(−s)φ_f(s) = φ(−s)φ(s) + ρ nᵀ(−s)n(s).    (5.34)′

Since (A, b, H) is controllable and observable, from Theorem 5.2 all the roots of φ_f(s) = 0 have negative real parts; thus all the roots of the right-hand side of (5.34)′ with negative real parts are those of φ_f(s) = 0. The roots of φ_f(−s)φ_f(s) = 0 as ρ varies constitute the square root locus. Letting

  φ(s) = ∏_{i=1}^{n} (s − pᵢ),  n(s) = β ∏_{i=1}^{l} (s − zᵢ)    (5.37)

(5.34)′ can be rewritten as

  ∏_{i=1}^{n} (s − pᵢ)(s + pᵢ) + ρβ²(−1)^{n−l} ∏_{i=1}^{l} (s − zᵢ)(s + zᵢ) = 0.    (5.38)

From (5.38), when ρ = 0 the roots of φ_f(s) = 0 are those of φ(−s)φ(s) = 0 with negative real part, and when ρ → ∞, l roots of φ_f(s) = 0 approach those of n(−s)n(s) = 0 with negative real part. From (5.38) the rest of the roots become unbounded, satisfying

  s^{2n} + ρβ²(−1)^{n−l} s^{2l} = 0.

Letting ρ → ∞, the n − l unbounded roots of φ_f(s) = 0 therefore satisfy

  s^{2(n−l)} = (−1)^{n−l+1} ρβ²    (5.39)

and those with negative real parts can be written as

  s = (ρβ²)^{1/[2(n−l)]} e^{jθ_k},  θ_k = π/2 + (2k + 1)π/[2(n − l)],  k = 0, 1, …, n − l − 1    (5.40)

that is, they approach a Butterworth configuration on a circle of radius (ρβ²)^{1/[2(n−l)]}.

In the case of Example 5.3, φ(−s)φ(s) = (s² + 1)², nᵀ(−s)n(s) = 1, so that n = 2 and l = 0, and the left half of the square root locus is as given in Figure 5.1.

5.1.5 Computational Methods of Optimal Control

To compute the optimal control for a given quadratic criterion function, the positive definite solution P of the algebraic Riccati equation (5.9) must be calculated. In the case of a single-input system, however, the optimal control can be directly calculated from the Kalman equation (5.31) without the need to first obtain P.
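This direct calculation can be sketched for the system of Example 5.3, where the right side of the polynomial Kalman equation (5.34)′ is (s² + 1)² + 1/r: take its stable roots, form φ_f(s), and read off the gain. (For this particular (A, b) the Ackermann step described in the next section reduces to fᵀ = −(α₀ − 1, α₁); the value of r is illustrative.)

```python
import cmath

# Spectral-factorization computation of the optimal gain for Example 5.3:
# stable roots of (s^2+1)^2 + 1/r = 0, i.e. s^2 = -1 +- j/sqrt(r).
r = 1.0
stable = []
for s2 in (-1 + 1j / r ** 0.5, -1 - 1j / r ** 0.5):
    s = cmath.sqrt(s2)
    stable.append(s if s.real < 0 else -s)       # keep the left-half-plane root
a1 = -(stable[0] + stable[1]).real               # phi_f(s) = s^2 + a1*s + a0
a0 = (stable[0] * stable[1]).real
# for A = [[0,1],[-1,0]], b = [0,1] the Ackermann step gives f' = -(a0 - 1, a1):
f0, f1 = 1 - a0, -a1
# compare with the closed form (5.28): f0 = 1 - w, f1 = -sqrt(2w - 2), w = sqrt(1 + 1/r)
w = (1 + 1 / r) ** 0.5
print(abs(f0 - (1 - w)) < 1e-12 and abs(f1 + (2 * w - 2) ** 0.5) < 1e-12)  # True
```

The same stable roots, traced as r varies, reproduce the square root locus of Figure 5.1.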
In this section computational procedures for obtaining the optimal control of a single-input system and for the positive definite solution of the algebraic Riccati equation are described.

Computation of the optimal control of a single-input system

For a single-input system, A, b, Q, r are assumed to be given and the optimal control fᵀ may be calculated as follows:

1st step: Multiply (5.31) by φ(s)φ(−s) to give

  det(−sI − A − bfᵀ)det(sI − A − bfᵀ) = φ(−s)φ(s) + r⁻¹ bᵀ adjᵀ(−sI − A) Q adj(sI − A) b    (5.41)

where

  (sI − A)⁻¹ = adj(sI − A) / det(sI − A).    (5.42)

The roots of the right side of (5.41), which are of the form

  λ₁, −λ₁, λ₂, −λ₂, …, λ_n, −λ_n

are calculated.

2nd step: Find the n roots with negative real parts from the above roots of (5.41). Let them be denoted by {λᵢ⁻}, and calculate

  φ_f(s) = ∏_{i=1}^{n} (s − λᵢ⁻) = sⁿ + α_{n−1}s^{n−1} + … + α₀.

3rd step: Find the control law fᵀ satisfying

  det(sI − A − bfᵀ) = sⁿ + α_{n−1}s^{n−1} + … + α₀.    (5.43)

This can be done using the Ackermann algorithm

  fᵀ = −[0, …, 0, 1][b, Ab, …, A^{n−1}b]⁻¹ φ_f(A).    (5.44)

In the case of a multi-input system where a control to stabilize the closed-loop system is to be determined, one approach is to choose the first input u₁ as the optimal control of (A, b₁), such that

  u₁ = f₁ᵀx    (5.45)

and then u₂ is calculated for (A + b₁f₁ᵀ, b₂) and so on, where uᵢ is determined for (A + b₁f₁ᵀ + … + b_{i−1}f_{i−1}ᵀ, bᵢ).

Positive definite solution of the algebraic Riccati equation

The computation of the matrix solution P of the algebraic Riccati equation is quite complicated using any method. Kalman proposed calculating P as the limit of the solution of the Riccati differential equation integrated backward from P(t_f) = 0. Kleinman suggested calculation of P from the limit of the iteration

  (A − BR⁻¹BᵀP_k)ᵀP_{k+1} + P_{k+1}(A − BR⁻¹BᵀP_k) = −Q − P_k BR⁻¹BᵀP_k.

Both these methods sometimes require long computation times if the initial value is not appropriately chosen. This section describes a more straightforward computation method due to Potter.
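The eigenvector construction attributed here to Potter can be sketched with numpy (assumed available; the system matrices below are illustrative, not from the text): the Riccati solution is assembled from the stable eigenvectors of a 2n × 2n matrix built from (A, B, Q, R), as detailed in the numbered steps that follow.

```python
import numpy as np

# Sketch of Potter's method: the stable eigenvectors [v_i; w_i] of the
# Hamiltonian-type matrix (5.46) give P = [w_1 ... w_n][v_1 ... v_n]^(-1).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.0])               # Q = H'H with H = [1 0], (A, H) observable
R = np.array([[1.0]])
M = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
              [-Q, -A.T]])
lam, vec = np.linalg.eig(M)
stable = lam.real < 0                 # the n eigenvalues with negative real part
V = vec[:2, stable]                   # v_i: top halves of the stable eigenvectors
W = vec[2:, stable]                   # w_i = P v_i: bottom halves
P = np.real(W @ np.linalg.inv(V))
# P should solve the algebraic Riccati equation (5.9) and be positive definite
res = A.T @ P + P @ A + Q - P @ B @ np.linalg.inv(R) @ B.T @ P
print(np.allclose(res, 0) and np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
```

For this double-integrator example the computed P is [[√2, 1], [1, √2]], matching the direct solution of (5.9).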
The method proceeds as follows:

1st step: The 2n × 2n matrix

  M = [ A  −BR⁻¹Bᵀ ; −Q  −Aᵀ ]    (5.46)

is calculated for (A, B, Q, R), and all its eigenvalues λ₁, −λ₁, λ₂, −λ₂, …, λ_n, −λ_n are obtained. {λᵢ} and {−λᵢ} are the eigenvalues since

  det(sI − M) = det [ sI − A  BR⁻¹Bᵀ ; Q  sI + Aᵀ ]
   = det(sI − A + BR⁻¹BᵀP) det(sI + Aᵀ − PBR⁻¹Bᵀ)    (5.47)

where, after the block similarity transformation by [ I 0 ; P I ], the (2,1) block is seen from the Riccati equation to be the zero block. The first factor contributes the stable eigenvalues λᵢ of the optimal closed loop A − BR⁻¹BᵀP, and the second their negatives −λᵢ.

2nd step: Let the eigenvector or generalized eigenvector of M corresponding to λᵢ, which has negative real part, be

  [ vᵢ ; wᵢ ].

Then the solution of the Riccati equation is given by

  P = [w₁, …, w_n][v₁, …, v_n]⁻¹.    (5.48)

The algorithm can be proved as follows. Since P satisfies

  AᵀP + PA + Q − PBR⁻¹BᵀP = 0

which can be written

  (sI + Aᵀ)P − P(sI − A + BR⁻¹BᵀP) + Q = 0    (5.49)

then if the eigenvalues of (A − BR⁻¹BᵀP) are λ₁, …, λ_n, their real parts are negative and their corresponding eigenvectors satisfy

  (λᵢI − A + BR⁻¹BᵀP)vᵢ = 0.

Using this relationship in (5.49) yields

  (λᵢI + Aᵀ)Pvᵢ + Qvᵢ = 0.    (5.50)

Letting wᵢ = Pvᵢ, the above two expressions may be written as

  [ A  −BR⁻¹Bᵀ ; −Q  −Aᵀ ] [ vᵢ ; wᵢ ] = λᵢ [ vᵢ ; wᵢ ]

which means that [vᵢᵀ, wᵢᵀ]ᵀ is an eigenvector of M corresponding to λᵢ, and (5.48) simply expresses P[v₁, …, v_n] = [w₁, …, w_n]. In the case of multiple eigenvalues a similar proof can be given using generalized eigenvectors.

5.2 OBSERVERS

5.2.1 State Observer

This section is concerned with the linear system

  ẋ = Ax + Bu,  x(0) = x₀    (5.51a)
  y = Cx    (5.51b)

where the input u ∈ Rᵐ, state x ∈ Rⁿ, output y ∈ Rᵖ, A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{p×n}. It is assumed that the input and output of the system can be measured, but not the state, which is needed to implement the control law. This section discusses the realization of a state observer which reconstructs the state from the measured input and output. Before presenting the details of the observer we consider a simple problem.
Example 5.5 When the initial state x₀ of (5.51) is unknown, can the state x be estimated by the state x̂ of a system with the same structure and the same input,

  x̂̇ = Ax̂ + Bu,  x̂(0) = 0?    (5.51a)′

This problem can be stated as whether the state of a system can be estimated by the state of a model system when the two systems, with different initial conditions, receive the same input. Subtracting (5.51a)′ from (5.51a) and letting

  e = x − x̂    (5.52)

yields

  ė = Ae,  e(0) = x₀.    (5.53)

This equation shows that provided (5.53) is asymptotically stable, that is all the eigenvalues of A have negative real parts, then e(t) → 0 as t → ∞, which means that x̂(t) approaches x(t) as t increases; otherwise x̂(t) does not approach x(t) as t increases.

From this example, we see that (5.51a)′ cannot be used in all cases to estimate the state of (5.51). Thus, instead of using (5.51a)′, the following system, which also uses the measured output and is given by

  x̂̇ = Ax̂ + Bu + K(y − Cx̂),  x̂(0) = 0    (5.54)

may be considered, where K ∈ R^{n×p}. By subtracting (5.54) from (5.51a), we obtain

  ė = Ae − KCe = (A − KC)e,  e(0) = x₀    (5.55)

where e(t) = x(t) − x̂(t). If K is chosen such that (5.55) is asymptotically stable, then x̂(t) approaches x(t) as t increases. The existence of such a K follows from the duals of Theorems 4.2 and 4.3: if (A, C) is detectable, there exists a K such that e(t) → 0 as t → ∞, and if (A, C) is observable, K can be chosen so that the characteristic equation of (5.55) takes any given form. This means that for an observable system the error in (5.55) may be made to vanish with arbitrary modes. If (A, C) is detectable and K is chosen so that (5.55) is asymptotically stable, (5.54) is called a state observer of (5.51).

This kind of observer is not always practically useful, since its dimension is the same as that of the system, which is often too high. Its dimension can be decreased by using the output together with an observer whose state z estimates Ux, such that

  lim_{t→∞} [z(t) − Ux(t)] = 0,  U ∈ R^{(n−p)×n}.    (5.56)
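The behaviour of the full-order observer (5.54) can be checked with a short simulation of the error dynamics (5.55); A, C and K below are illustrative values (K is chosen so that A − KC has poles −1 and −2):

```python
# Error dynamics (5.55) of the identity observer (5.54): with K chosen so
# that A - KC is stable, e(t) = x(t) - xhat(t) decays for any e(0).
# The values of A, C and K are illustrative, not from the text.
A = [[0.0, 1.0], [0.0, 0.0]]
C = [1.0, 0.0]
K = [3.0, 2.0]            # gives det(sI - A + KC) = s^2 + 3s + 2, poles -1, -2
F = [[A[i][j] - K[i] * C[j] for j in range(2)] for i in range(2)]  # A - KC
e, dt = [1.0, -1.0], 1e-3
for _ in range(int(10.0 / dt)):       # integrate edot = (A - KC) e up to t = 10
    e = [e[i] + dt * sum(F[i][j] * e[j] for j in range(2)) for i in range(2)]
print(abs(e[0]) < 1e-3 and abs(e[1]) < 1e-3)   # True: the estimate converges
```

The same loop with K = [0, 0] reproduces the failure mode of Example 5.5: the error obeys ė = Ae and does not decay, since A here is not stable.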
(2x0 asi 65.56) 150 STATE VARIABLE METHODS IN AUTOMATIC CONTROL, If the row vectors of U are linearly independent of those of C and then i vy") an (4) 2) -woss 5 This equation indicates that if 2(7)—> Ux(1) (f+) then x(1) can be estimated from y and 2. The dimension of z is the same as that of Ux, and from (5.57) the dimension of z to estimate the state of the system is equal to n—p, and it may be written as Az + By + Ju, x0) =0 (5.58a) a= 16, aif] (6.586) where Ae RUCP*U-M, fg RUMI, Je RODIN To find the condition that 2(¢) + Ux(2) as ¢-* «©, premultiplying (5.51) by U and subtracting (5.58a) gives b= Ag+ (UB- Jus (WA AU- Boy 39) where = Ux ~z. TU & approaches zero as ¢ increases independent ofthe input u(t) and the state, then 8(@) approaches x(t) by choosing = By ( l 16,Bi(2)=% 65.59) The system (5.58) is called a minimal order state observer. Definition 5.1 If the linear system gi fen by (5.58) produces the output & satisfying lim (xt) —8() Yu, Yo Ingo i Output Prt Soe of pont Cbeerer Block diagram of plant snd observed OPTIMAL CONTROL AND OBSERVERS 131 for any input and initial condition, (5.58) is a minimal order state observer of 6.51). From (5.55) the condition for the observer is given by the following theorem, Theorem 5.4 ‘The n—p-dimensional observer with u and y of (5.51) as inputs, given by Ax+ By + Ju, 20) =0 (5.58a) = Cr+ By (5.58b) is the minimal order state observer of the controllable system (5.51) if and only if it satisfies the following conditions. é cape (i) There exists a Ue R' satisfying ua ~ AU (5.60a) = UB (5.606) Cu+ BC= I, (5.60¢) (ii) All the eigenvalues of A have negative real parts. (5.60d) The structure of the observer and the plant whose state is to be estimated is depicted in Figures 3.2 and 5.3 Design methods for the minimal order state observer have been proposed by many researchers, These methods can be classified into two categories. 
One is to determine (Â, B̂, Ĉ, D̂, J) after an appropriate form of U is given.

Figure 5.3 Configuration of plant and observer

The other is to determine (Â, B̂) first; a corresponding U is then obtained to calculate Ĉ, D̂, and J. A typical design algorithm for the first approach has been given by Gopinath and is presented below.

First (5.60a) and (5.60c) are rewritten as

  UA = [ Â  B̂ ] [ U ; C ],  [ Ĉ  D̂ ] [ U ; C ] = I_n

and then, if [ U ; C ] is non-singular, (Â, B̂, Ĉ, D̂) can be calculated from

  [ Â  B̂ ] = UA [ U ; C ]⁻¹,  [ Ĉ  D̂ ] = [ U ; C ]⁻¹    (5.61)

and J is obtained from (5.60b) when U is given. This minimal order state observer does not necessarily satisfy (5.60d). Gopinath proposed a design procedure using a non-singular U with prespecified structure and parameters chosen to satisfy (5.60d).

Observer design algorithm of Gopinath

(1) The plant (A, B, C) whose state is to be estimated is considered. First C# ∈ R^{(n−p)×n} is constructed such that the n × n matrix

  T = [ C# ; C ]    (5.62)

is non-singular.

(2) Using T of (5.62), the system (Ā, B̄, C̄) equivalent to (A, B, C) is calculated from

  Ā = TAT⁻¹    (5.63a)
  B̄ = TB    (5.63b)
  C̄ = CT⁻¹ = (0, I_p).    (5.63c)

(3) For the equivalent system the minimal order state observer (Â, B̂, Ĉ, D̂, J) is determined. Using C̄ given by (5.63c), U satisfying (5.57) is chosen as

  U = (I_{n−p}, −L).    (5.64)

(4) For U given by (5.64), the following relations are derived from (5.61), with Ā and B̄ partitioned conformably into blocks Āᵢⱼ and B̄ᵢ:

  Â = Ā₁₁ − LĀ₂₁    (5.65a)
  B̂ = Ā₁₁L − LĀ₂₁L + Ā₁₂ − LĀ₂₂    (5.65b)
  Ĉ = [ I_{n−p} ; 0 ],  D̂ = [ L ; I_p ]    (5.65c)
  J = B̄₁ − LB̄₂.    (5.65d)

If (Ā, C̄) is observable,

  rank [ C̄ ; C̄Ā ; … ; C̄Ā^{n−1} ] = n    (5.66)

and since C̄Ā = (Ā₂₁, Ā₂₂), C̄Ā² = (Ā₂₁Ā₁₁ + Ā₂₂Ā₂₁, ·), …, this rank condition indicates that the first n − p column vectors, which involve only Ā₂₁, Ā₂₁Ā₁₁, …, are linearly independent.
Therefore (Ā₁₁, Ā₂₁) is observable, and Â of (5.65a) can be designed to have any given characteristic roots by an appropriate choice of L.

(5) Since (Â, B̂, Ĉ, D̂, J) is the observer of (Ā, B̄, C̄), the output of this observer estimates x̄ = Tx, so the state of (A, B, C) is estimated as

  x̂ = T⁻¹( Ĉz + D̂y ).

Therefore the observer of (A, B, C) has the parameters (Â, B̂, T⁻¹Ĉ, T⁻¹D̂, J).

Example 5.6 For a third-order system with two outputs, find the minimal order observer, of dimension n − p = 1, with its single pole at −1. Proceeding according to Gopinath's algorithm:

(1) C# is chosen so that T of (5.62) is non-singular.
(2) The equivalent system (Ā, B̄, C̄) with C̄ = (0, I₂) is calculated from (5.63).
(3) U is chosen as U = (1, −L), L = (l₁, l₂).
(4) From (5.65), Â = Ā₁₁ − LĀ₂₁ and the remaining parameters follow; choosing l₁ = −1 and l₂ = 2 places the single observer pole at −1.
(5) Therefore the observer for (A, B, C) is obtained with the parameters of step (5) above, and here J = B̄₁ − LB̄₂ = 0.

Figure 5.4 Observer for Example 5.6

The designed observer is shown in Figure 5.4, which indicates that the observer does not use the input of the system. Therefore this observer can be used in situations where the input is unknown.

5.2.2 Determination of L in Gopinath's Algorithm

In the previous section an algorithm to design a minimal order state observer was presented. In the algorithm, one of the most important steps is to determine U, which is equivalent to finding L in (5.64). When L is given, the observer is designed from (5.65). L should be given so that all the eigenvalues of

  Â = Ā₁₁ − LĀ₂₁

have negative real parts. Such an L can be determined (1) by pole assignment, so that the characteristic equation of Â, det(sI − Â) = 0, has appropriate roots, or (2) by the optimal feedback −Lᵀ
is controllable and from Theorem 4.2, the coefficients of the characteristic equation det(sf— Ay, + Lx) = deus! — Afi + ALL") =0 are known to be specified by the choice of L. In the case that Aj, is cyclic such that there exists k to make [k, Arik, ... iy?” "k} linearly indepen- dent, an L to assign the poles of A can easily be obtained. So in this pact, this design approach is given. When Ay; is not cyclic, an L; (© make ‘Au ~ Lids: = Ais cyclic is chosen and then Ai is considered as Avi to determine L’ from Ag~ Lan Ay ~ Lin) b’Ans When Aisi eyetic and Ars, An) is observable, a-77 such that (Ais, 7'Aas) is observable can be determined. * ce det(sl— A) = det(st — Ay, + by"An) wen) =det(sf — Ay + y'An(sf— Au) (5.67) det{sf — Aji) = 5°74 + ctep- 18" 8 to toe 1 (sI-Any! (Dy-parst Ph + + Po) det(s rueture and realization problems in the theory of ‘See, for example, M. Heymann, Lectures, No. 208, Springer-Verlag (1975). dynamical systems, CISM Course and OPTIMAL CONTROL AND OBSERVERS 187 For the characteristic equation of A to be written as aet{st = AY 8"? + By pas N 4 4 By (5.67) requires Beri maew\ [AT yp-1 eee enn Ere rn Bo ~ ax VAT o So Vis given by VAD n-y-t\ “' {Bap — up t VADs 93) | Br-p am eny-2 5 i (5.68) VATo Boao By the dual of (4.6), it can be rewritten as y'An Pee PCat re Manan eg OY | Ba pz arpa emi ent nn Bo- eo and from I", L is determined. (5.68)" (2) Determination of L from optimal control ‘The optimal feedback law ~ L" for (41), A3,) with a quadratic criterion function is known to give all roots of det(s! Ay, + LAn) = det(st — AT; + ALLL") =0 with negative real parts. 
So in this part L is determined using this approach. Since

  ẋ = Ā₁₁ᵀx + Ā₂₁ᵀu

is controllable, the optimal control minimizing the criterion function

  J = ½ ∫₀^∞ ( ‖Hx‖² + ‖u‖²_R ) dt

is given from Corollary 5.1 by

  u = −R⁻¹Ā₂₁P̄x

where (Ā₁₁ᵀ, H) is observable and P̄ is the symmetric positive definite solution of the Riccati equation

  Ā₁₁P̄ + P̄Ā₁₁ᵀ + HᵀH − P̄Ā₂₁ᵀR⁻¹Ā₂₁P̄ = 0.    (5.69)

From Theorem 5.2, all the eigenvalues of Ā₁₁ᵀ − Ā₂₁ᵀR⁻¹Ā₂₁P̄ have negative real parts, and therefore

  Lᵀ = R⁻¹Ā₂₁P̄    (5.70)

makes Â of (5.65a) stable. The choice of H and R determines the poles of Â.

Example 5.7 An observer is designed for the system in Example 5.6 by the method proposed in (1). In this case Ā₁₁ is cyclic. Let γᵀ = (0, 1), so that L = lγᵀ = (0, l) with scalar l. Since the pole of the observer is specified as −1, the desired characteristic polynomial is det(sI − Â) = s + 1, and from (5.68) l = −5, that is L = (0, −5). Substituting into B̂ and J of step (4) in Example 5.6, the resulting observer follows as in step (5) of Example 5.6. The observer designed is shown in Figure 5.5.

From the examples, we see that the observer for the system (A, B, C) cannot be uniquely determined simply by specifying the observer poles. But from (5.60) and (5.61), the observer is uniquely determined if U is specified.

5.2.3 Design of a Functional Observer

Until now all the state variables have been estimated by the observer. Generally, however, it may not be necessary to estimate all the state variables, but simply Fx, which is required for control. In the case where F is a row vector fᵀ, the observer to estimate the scalar fᵀx is called a functional observer, and when F is a matrix, the observer to estimate Fx is called a linear functional observer. The construction of the minimal order functional observer is a current topic, but we still do not have many powerful algorithms for its determination.
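Before turning to functional observers, the determination of L from optimal control above can be made completely explicit in the scalar case (the values of Ā₁₁, Ā₂₁, H and R below are illustrative scalars, not from the text): the dual Riccati equation (5.69) has a closed-form positive root, and the resulting Â from (5.65a) and (5.70) is always stable.

```python
# Scalar illustration of (5.69)-(5.70): the observer gain L = R^(-1)*A21*Pbar
# from the dual Riccati equation always yields a stable Ahat = A11 - L*A21.
# The values a, c, h, r are illustrative (A11 = a, A21 = c, H = h, R = r).
a, c, h, r = 1.0, 2.0, 1.0, 0.5
# scalar version of (5.69): 2*a*P + h**2 - (c**2/r)*P**2 = 0, positive root:
P = (r / c ** 2) * (a + (a ** 2 + h ** 2 * c ** 2 / r) ** 0.5)
L = c * P / r                           # (5.70)
Ahat = a - L * c                        # (5.65a)
# Ahat collapses to -sqrt(a^2 + h^2 c^2 / r), which is negative for any data
print(abs(Ahat + (a ** 2 + h ** 2 * c ** 2 / r) ** 0.5) < 1e-12, Ahat < 0)
```

Note that Ahat is stable even when the open-loop Ā₁₁ = a is unstable, in line with Theorem 5.2 applied to the dual pair.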
In this section a possible algorithm for its design is presented. … even when the state in the control law is the estimate from the observer.

Example 5.9 The response of the controlled variable is considered when an observer is employed in the case of Example 5.8. The observer is designed using the Gopinath algorithm, so the system is first transformed into the form (5.63) using T; then from (5.65) the observer parameters are obtained.

Figure 5.10 Response of system of Example 5.9

Thus, using x = T⁻¹x̄, it follows that the estimate is

  x̂ = T⁻¹( Ĉz + D̂y ).

When r = 1 and w = 1 the response of the controlled variable to the step reference signal is as shown in Figure 5.10.

APPENDIX

(A) Proof that P(∞) is Positive Definite

With the optimal control (5.4), the system is represented by

  ẋ = (A − BR⁻¹BᵀP)x,  x(0) = x₀

and the criterion function is given by

  min J = ½ ‖x₀‖²_P = ½ ∫₀^∞ ( ‖Hx(t)‖² + ‖R⁻¹BᵀPx(t)‖²_R ) dt.

Assume that (A, H) is observable, that P is positive semidefinite but not positive definite, and let x₀ ≠ 0 be such that Px₀ = 0. Then

  ½ ∫₀^∞ ( ‖Hx(t)‖² + ‖R⁻¹BᵀPx(t)‖²_R ) dt = ½ ‖x₀‖²_P = 0

so that Hx(t) ≡ 0 and BᵀPx(t) ≡ 0 along the trajectory; the latter means x(t) = e^{At}x₀, and the former then gives He^{At}x₀ ≡ 0. This means that x₀ is an element of an A-invariant subspace V satisfying PV = 0 and HV = 0, and x₀ ∈ V ≠ {0} contradicts the observability of (A, H). Therefore P is positive definite.

(B) The Proof of Theorem 5.1

If x(t) is the solution of (5.1), there exists for an arbitrary differentiable symmetric matrix S(t) the relation

  ∫₀^{t_f} (d/dt)(xᵀSx) dt = xᵀ(t_f)S(t_f)x(t_f) − xᵀ(0)S(0)x(0)
   = ∫₀^{t_f} { xᵀ(Ṡ + AᵀS + SA)x + uᵀBᵀSx + xᵀSBu } dt.

Since S is arbitrary, it is taken to satisfy

  Ṡ + AᵀS + SA = SBR⁻¹BᵀS − Q,  S(t_f) = P_f

which, taking S(t) = P(t), is equal to (5.5). So the following relation is derived:

  0 = −xᵀ(t_f)P_f x(t_f) + xᵀ(0)P(0)x(0) + ∫₀^{t_f} { xᵀ(PBR⁻¹BᵀP − Q)x + xᵀPBu + uᵀBᵀPx } dt.

By adding the above relation, multiplied by 1/2, to the criterion function (5.3),

  J = ½ ‖x₀‖²_{P(0)} + ½ ∫₀^{t_f} { ‖u‖²_R + xᵀPBR⁻¹BᵀPx + xᵀPBu + uᵀBᵀPx } dt
   = ½ ‖x₀‖²_{P(0)} + ½ ∫₀^{t_f} ‖u + R⁻¹BᵀPx‖²_R dt.    (5.3)′

This equation shows that the optimal control minimizing J is given by (5.4), and that the minimum value is (5.6).
(C) Proof that when (A, C) is Observable and A is Cyclic there exists a γ such that (A, γᵀC) is Observable

Such a γ always exists, as follows. Let (A, C) be observable, let cᵢᵀ be the i-th row of C, let Φᵢ be the cyclic subspace generated by cᵢᵀ, and let βᵢ denote the minimal polynomial of cᵢᵀ. Then the whole space is spanned as

  Rⁿ = Φ₁ + Φ₂ + … + Φₚ

and the minimal polynomial of A is

  α(λ) = LCM(β₁, …, βₚ)

which, by the cyclicity of A, coincides with the characteristic polynomial. Coefficients γ₁, …, γₚ can then be chosen so that

  cᵀ = γ₁c₁ᵀ + γ₂c₂ᵀ + … + γₚcₚᵀ

has minimal polynomial α(λ), so that cᵀ = γᵀC generates Rⁿ and (A, γᵀC) is observable.

This uses the fact that if dᵀ generates a cyclic subspace with minimal polynomial ψ(λ), and eᵀ has minimal polynomial σ(λ) coprime to ψ(λ), then dᵀ + eᵀ has minimal polynomial ψ(λ)σ(λ). Indeed, since ψ and σ are coprime there exist polynomials ρ and τ such that

  ψρ + στ = 1

and therefore

  dᵀ = dᵀ[ψ(A)ρ(A) + σ(A)τ(A)] = dᵀσ(A)τ(A) = (dᵀ + eᵀ)σ(A)τ(A)

since dᵀψ(A) = 0 and eᵀσ(A) = 0. Thus dᵀ (and likewise eᵀ) lies in the cyclic subspace generated by dᵀ + eᵀ, whose minimal polynomial must therefore be divisible by both ψ and σ, hence equal to ψσ. Grouping the rows cᵢᵀ according to coprime factors of α(λ) and applying this fact gives the required cᵀ.

PROBLEMS

P5.1 Find the optimal control for the given third-order linear system ẋ = Ax + bu which minimizes the criterion function

  J = ½ ∫₀^∞ ( xᵀ diag(q₁, 2, q₃) x + u² ) dt.

P5.2 When a₀ = 2, a₁ = 3 in the system (5.11), and q₁ = 1, q₂ = 0 in the criterion (5.10), how does the optimal control vary with r?

P5.3 If

  J = ½ ∫₀^∞ ( ‖x‖²_Q + 2xᵀWu + ‖u‖²_R ) dt

is used as the criterion function instead of (5.2), then find the optimal control u for (5.1).

P5.4 Design observers for the systems (a) and (b) given.

P5.5 Show that the functional observer of the given system can be realized by a first-order system.
6 THE KALMAN FILTER AND STOCHASTIC OPTIMAL CONTROL

6.1 THE LQG PROBLEM

We showed in Chapter 5 that the optimal feedback control for deterministic linear systems is given in the form of a linear combination of the state variables when we wish to minimize a performance index which is quadratic in the state and control input. When all of the state variables are not accessible, a state observer can play an important role in reconstructing the states to implement the optimal control. In this chapter we focus our attention on a stochastic version of the linear optimal control problem and consider the situation where the system is perturbed by random disturbances and only some of the states can be measured, through noise-corrupted output data. We confine the discussion to the basic principles of the stochastic control problem, and so some of the details of more mathematically rigorous treatments will be omitted.

The stochastic linear system considered is described by

  ẋ(t) = Ax(t) + Bu(t) + Γv(t),  x(0) = x₀    (6.1)
  y(t) = Cx(t) + w(t)    (6.2)

where x(t) is an n-vector of the state variables, u(t) is an m-vector of the control input, y(t) is a p-vector of the measured output, v(t) is an r-vector of the system disturbances, and w(t) is a p-vector of measurement noise. The matrices A, B, Γ, and C have constant coefficients and are of dimension n × n, n × m, n × r, and p × n respectively. Both v(t) and w(t) are assumed to be white Gaussian random variables having zero means and to be statistically independent of each other, that is

  E{v(t)} = 0,  cov{v(t), v(τ)} = Vδ(t − τ)    (6.3)
  E{w(t)} = 0,  cov{w(t), w(τ)} = Wδ(t − τ)    (6.4)
  cov{v(t), w(τ)} = 0    (6.5)

where E{a} = ā is the expected value of the random variable a, and cov{a, b} = E{(a − ā)(b − b̄)ᵀ} denotes the covariance matrix for the
It is also assumed that V is a non-negative Gefinite covariance matrix and W is a positive delinite covariance matt The initial state xy is also assumed to be a Gaussian random variable with ‘mean Sy and covariance Yo, and to be independent of both v(t) and w(t). Therefore, in mathematical terms, E|xo} =%o, covixo, Xo} = Zo (6.6) Elxov" (1) = Bixaw"()} = 0 (6.66) ‘The control input u(#) should be manipulated to force the system to behave in a desired manner. A quadratic performance index is introduced, as in Chapter 5, 10 evaluate the quality of the system behaviour. Since v(t), W(0), and xo are random variables, we need (o evaludie the expected value of the performance index, that is s-te[f fil x(o b+ || mo i de 6 where Q is a symmetric non-negative definite matrix and R is a symmetric positive definite matrix, The stochastic optimization problem for the linear system described by (6.1) and (6.2) which is subjected to Gaussian random disturbances and measurement noise with a control specified to minimize the quadratic performance index (6.7) is called an LQG problem. The term LQG otiginates from ‘Linear, Quadratic and Gaussian’. As will be shown later, the LQG problem consists of an optimal filtering and an optimal stochastic control problem. In Section 6.2, we study the Kalman filter which can estimate all of the state variables in an optimal fashion based on the set of measured output data. In Section 6.3, we then give the optimal solution for the LQG problem. 62 THE KALMAN FILTER The optimal estimation problem considered in this section is that of how to estimate the state of a stochastic linear system with the assumption that the measurement data obtained is noise corrupted. The estimation operation is called optimal when an estimate is determined in accordance with the minimization of some criterion or loss function, which represents a ‘quantitative measure of how good the estimate is. 
If we are concerned with the estimation error, it is reasonable to take a non-negative loss function of the estimation error, such as a quadratic function. We adopt the mean square error as the criterion, that is

    J = E{‖x(t) − x̂(t)‖²} = tr E{x̃(t)x̃ᵀ(t)}                     (6.8a)

where x̂(t) is the estimate of the state x(t) of the linear system (6.1) which minimizes the measure (6.8a), on the assumption that the set of measurement data Y_t = {y(τ) for 0 ≤ τ ≤ t} is available and that the input function u(t) is known. x̃(t) is the estimation error defined by

    x̃(t) = x(t) − x̂(t)                                           (6.8b)

The optimal estimate minimizing the quadratic loss function (6.8a) is called the least squares estimate and has the following properties:

(a) The optimal estimate x̂(t) is the conditional expectation of x(t) given the measured data, that is

    x̂(t) = x̂°(t) = E{x(t) | Y_t}                                 (6.9)

(b) The estimate x̂(t) is an unbiased estimate, which means that

    E{x̂(t)} = E{x(t)}                                            (6.10)

Proof. Equation (6.8a) can be rewritten as

    J = E{ E{‖x(t) − x̂(t)‖² | Y_t} }                             (6.11)

Let J′ be denoted by

    J′ = E{‖x(t) − x̂(t)‖² | Y_t}

Then it is noticed that the estimate x̂(t) that minimizes the criterion J′ also minimizes J. The criterion J′ can be written as

    J′ = E{‖x(t) − x̂°(t) + x̂°(t) − x̂(t)‖² | Y_t}
       = E{‖x(t) − x̂°(t)‖² | Y_t} + 2E{[x(t) − x̂°(t)]ᵀ[x̂°(t) − x̂(t)] | Y_t}
         + E{‖x̂°(t) − x̂(t)‖² | Y_t}
       = E{‖x(t) − x̂°(t)‖² | Y_t} + ‖x̂°(t) − x̂(t)‖²

since x̂°(t) − x̂(t) is determined by Y_t and E{x(t) − x̂°(t) | Y_t} = 0. Therefore, the x̂(t) that minimizes J′ is given by x̂(t) = x̂°(t), and (6.9) is established.

We restrict our attention to an estimate of the form

    x̂(t) = h(t) + ∫₀ᵗ H(t, τ) y(τ) dτ                            (6.12)

where h(t) is an n-vector and H(t, τ) an n × p matrix. It is assumed in (6.12) that the optimal estimate is obtained through a linear operation on the measured data observed up to the present time t. The problem is to determine both h(t) and H(t, τ) such that the mean square error criterion (6.8a) is minimized, as illustrated in Fig. 6.1.

    Figure 6.1 The optimal filter: H(t, τ) is chosen so that E{‖x̃(t)‖²} is minimized
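Property (a) above can be illustrated numerically: for jointly Gaussian variables the conditional mean E{x | y} is linear in y (which is what motivates the linear form (6.12)), and no other linear estimate achieves a smaller mean square error. A Monte Carlo sketch, with arbitrary illustrative (co)variances:

```python
import numpy as np

# Monte Carlo check that the conditional mean minimizes the mean square
# error for jointly Gaussian (x, y). Covariances are illustrative values.
rng = np.random.default_rng(3)
Sxx, Sxy, Syy = 2.0, 1.2, 1.5          # scalar (co)variances, zero means
cov = np.array([[Sxx, Sxy], [Sxy, Syy]])
samples = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
x, y = samples[:, 0], samples[:, 1]

# conditional-mean estimate: E{x | y} = (Sxy / Syy) * y  (linear in y)
xhat_cond = (Sxy / Syy) * y
mse_cond = np.mean((x - xhat_cond) ** 2)   # should approach Sxx - Sxy^2/Syy

# any other linear gain does worse
mse_other = np.mean((x - 0.5 * y) ** 2)
```

Here the theoretical minimum is Sxx − Sxy²/Syy = 1.04, while the mismatched gain 0.5 gives Sxx − 2(0.5)Sxy + (0.5)²Syy = 1.175; the sample averages reproduce this ordering.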
It is known that, if all of the random variables are Gaussian, the linear estimate given by (6.12) is optimal. It follows from the unbiasedness property (b) of the estimate x̂(t) that

    E{x̂(t)} = h(t) + ∫₀ᵗ H(t, τ) E{y(τ)} dτ = E{x(t)}

and hence

    h(t) = x̄(t) − ∫₀ᵗ H(t, τ) ȳ(τ) dτ                            (6.13)

where x̄(t) = E{x(t)} and ȳ(t) = E{y(t)}. Hence we have

    ȳ(τ) = E{y(τ)} = C x̄(τ)                                      (6.14)

    x̂(t) = x̄(t) + ∫₀ᵗ H(t, τ)[y(τ) − ȳ(τ)] dτ                   (6.15)

Now the problem is reduced to that of determining the optimal kernel H(t, τ), which is given by the following theorem.

Theorem 6.1. A necessary and sufficient condition for x̂(t) to be the optimal estimate is that the kernel H(t, τ) satisfies the Wiener–Hopf integral equation

    cov{x(t), y(σ)} = ∫₀ᵗ H(t, τ) cov{y(τ), y(σ)} dτ   for 0 ≤ σ ≤ t    (6.16)

As the time t tends to infinity, the solution Σ(t) converges to the positive constant

    lim_{t→∞} Σ(t) = (VW)^{1/2}

Then we have the stationary Kalman filter given by

    x̂̇(t) = −(V/W)^{1/2} x̂(t) + (V/W)^{1/2} y(t)

Example 6.2. Consider the following second-order system perturbed by a stochastic disturbance, that is

    ẍ(t) + ω²x(t) = v(t),   y(t) = x(t) + w(t)

Assigning the state variables as x₁(t) = x(t) and x₂(t) = ẋ(t), the system can be written as

    (ẋ₁(t))   (  0   1 ) (x₁(t))   ( 0 )
    (ẋ₂(t)) = ( −ω²  0 ) (x₂(t)) + ( 1 ) v(t)

    y(t) = (1  0) (x₁(t)  x₂(t))ᵀ + w(t)

It follows from (6.21) that each element of the covariance matrix satisfies

    Σ̇₁₁(t) = 2Σ₁₂(t) − W⁻¹Σ₁₁²(t)
    Σ̇₁₂(t) = Σ₂₂(t) − ω²Σ₁₁(t) − W⁻¹Σ₁₁(t)Σ₁₂(t)
    Σ̇₂₂(t) = −2ω²Σ₁₂(t) + V − W⁻¹Σ₁₂²(t)

with the boundary condition Σ(0) = Σ₀. The steady-state solutions for infinite t of the above equations should be non-negative definite and are given by

    Σ̄₁₁ = W[2(γ − ω²)]^{1/2},   Σ̄₂₂ = γΣ̄₁₁,   Σ̄₁₂ = W(γ − ω²)

where γ = (ω⁴ + V/W)^{1/2}. The stationary Kalman filter is thus described by

    x̂̇₁(t) = x̂₂(t) + [2(γ − ω²)]^{1/2} [y(t) − x̂₁(t)]
    x̂̇₂(t) = −ω²x̂₁(t) + (γ − ω²)[y(t) − x̂₁(t)]

Note that although the system itself is not asymptotically stable, the covariance matrix tends to a constant matrix. The asymptotic properties of the Kalman filter are discussed in the following section.

6.2.1 Properties of the Kalman Filter

Asymptotic characteristics

We pay attention only to the stationary property of the covariance equation (6.21) because of the difficulties associated with studying directly the asymptotic behaviour of the random variable x̃(t).
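The stationary behaviour of the covariance equation can be illustrated by integrating the three scalar Riccati equations of Example 6.2 forward in time and comparing the result with the closed-form steady-state expressions given there. A sketch, using forward Euler and arbitrary illustrative values of ω, V, W:

```python
import numpy as np

# Forward-Euler integration of the Riccati equations of Example 6.2,
# compared with the closed-form steady state. Values are illustrative.
omega, V, W = 1.0, 2.0, 0.5

s11 = s12 = s22 = 0.0          # Sigma(0) = 0 for the check
dt = 1e-3
for _ in range(200_000):       # integrate to t = 200
    d11 = 2 * s12 - s11**2 / W
    d12 = s22 - omega**2 * s11 - s11 * s12 / W
    d22 = -2 * omega**2 * s12 + V - s12**2 / W
    s11 += d11 * dt
    s12 += d12 * dt
    s22 += d22 * dt

# closed-form steady state from Example 6.2
gamma = np.sqrt(omega**4 + V / W)
s11_bar = W * np.sqrt(2 * (gamma - omega**2))
s12_bar = W * (gamma - omega**2)
s22_bar = gamma * s11_bar
```

Because a fixed point of the Euler map is exactly a zero of the right-hand side, the integrated values converge to the analytic steady state rather than to a discretization-biased one.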
The Riccati equation (6.21) which the covariance matrix Σ(t) satisfies has a similar form to (5.5a), which was used to solve the optimal control problem presented in Chapter 5. By making comparisons of (6.21) and (5.5a) it is seen that the following dual relations hold between the optimal regulator and the optimal state estimation problems:

    Σ ↔ P,   A ↔ Aᵀ,   Cᵀ ↔ B,   W ↔ R,   ΓVΓᵀ ↔ Q,   K ↔ Fᵀ,   t ↔ tf − t

By considering the results of the optimal regulator problem, we can obtain the asymptotic characteristics of the Kalman filter. The following theorems can be derived corresponding to Corollary 5.1 and Theorem 5.2.

Theorem 6.3. If the linear dynamical system (A, C) is observable, the solution of (6.21) for infinite t tends to a non-negative definite matrix Σ̄ which satisfies

    0 = AΣ̄ + Σ̄Aᵀ + ΓVΓᵀ − Σ̄CᵀW⁻¹CΣ̄                              (6.22)

Theorem 6.4. If the linear system (A, Γ, C) is both controllable and observable, then the solution of (6.21) tends to a unique positive definite matrix Σ̄ and the matrix (A − Σ̄CᵀW⁻¹C) is stable.

Innovations process

The innovations process ν(t) is defined by

    ν(t) = y(t) − Cx̂(t)                                          (6.23)

The innovations process is a forcing term for the Kalman filter of (6.19), which contributes to the operation of corrections in the state estimate. ν(t) may also be written, using (6.2) and (6.8b), as

    ν(t) = Cx(t) + w(t) − Cx̂(t) = Cx̃(t) + w(t)                   (6.24)

We now discuss properties of the innovations process ν(t). Since x̂(t) is an unbiased estimate, it follows from (6.4) that

    E{ν(t)} = 0                                                  (6.25)

The covariance of ν(t) is given by

    E{ν(t)νᵀ(τ)} = C E{x̃(t)x̃ᵀ(τ)}Cᵀ + C E{x̃(t)wᵀ(τ)}
                   + E{w(t)x̃ᵀ(τ)}Cᵀ + E{w(t)wᵀ(τ)}              (6.26)

It is easily shown from (6.1), (6.2) and (6.19) that the estimation error x̃(t) satisfies

    x̃̇(t) = [A − K(t)C] x̃(t) + Γv(t) − K(t)w(t)                  (6.27)

and thus x̃(t) does not depend on the known input u(t).
The solution of (6.27) is

    x̃(t) = Φ(t, τ)x̃(τ) + ∫_τ^t Φ(t, σ)[Γv(σ) − K(σ)w(σ)] dσ     (6.28)

where Φ(t, σ) is the transition matrix of A − K(t)C. Substituting (6.28) into (6.26), and making use of the noise characteristics given from (6.3) to (6.6), we obtain

    E{ν(t)νᵀ(τ)} = CΦ(t, τ)E{x̃(τ)x̃ᵀ(τ)}Cᵀ − CΦ(t, τ)K(τ)W + Wδ(t − τ)   for t ≥ τ

Then it follows, by use of (6.20), that

    E{ν(t)νᵀ(τ)} = Wδ(t − τ)                                     (6.29)

This implies that there is no information left in the innovations process ν(t) if x̂(t) is the optimal estimate.

Theorem 6.5. The innovations process defined by (6.23) is a white Gaussian process with mean and covariance matrix given by

    E{ν(t)} = 0   and   E{ν(t)νᵀ(τ)} = Wδ(t − τ)                 (6.30)

6.2.2 Treatment of Various Types of Random Noise

The Kalman filter given in Theorem 6.2 assumes the conditions from (6.3) to (6.6) for the random noise signals. In this section, we make some generalizations to the Kalman filter for the case where the system disturbance is correlated with the measurement noise, and the case where the system disturbance and measurement noise are not white. Since the known input u(t) plays no significant role in the Kalman filter, we assume that u(t) = 0.

Correlated system and measurement noises

We assume here that the system disturbance v(t) and the measurement noise w(t) are correlated, so that

    cov{v(t), w(τ)} = S δ(t − τ)                                 (6.31)

Since the derivation of the optimal filter for this case is similar to the previous derivation of Theorem 6.2, where v(t) and w(t) are uncorrelated, we present only the results.

Theorem 6.6. Assume that v(t) and w(t) are characterized by (6.3), (6.4), (6.6) and (6.31). Then the optimal state estimate x̂(t) for the linear system described by (6.1) and (6.2) is given by

    x̂̇(t) = Ax̂(t) + K(t)[y(t) − Cx̂(t)],   x̂(0) = x̄₀             (6.32)
    K(t) = [Σ(t)Cᵀ + ΓS] W⁻¹                                     (6.33)
    Σ̇(t) = AΣ(t) + Σ(t)Aᵀ + ΓVΓᵀ
           − [Σ(t)Cᵀ + ΓS] W⁻¹ [Σ(t)Cᵀ + ΓS]ᵀ,   Σ(0) = Σ₀       (6.34)

Example 6.3. Consider the linear system

    ẋ(t) = ax(t) + v(t),   y(t) = x(t) + ρv(t)

where a < 0 and ρ > 0. Let the variance of v(t) be denoted by V. Then we obtain W = ρ²V and S = ρV.
It follows from (6.34) that

    Σ̇(t) = 2(a − 1/ρ)Σ(t) − Σ²(t)/(ρ²V),   Σ(0) = Σ₀

Since the stationary solution for infinite t is Σ̄ = lim_{t→∞} Σ(t) = 0, it will be seen that the estimate x̂(t) approaches the true value of x(t). The stationary Kalman filter is given by

    x̂̇(t) = ax̂(t) + (1/ρ)[y(t) − x̂(t)]

Coloured system disturbance

In this section we consider the optimal filter for the case when the system disturbance is not white. This disturbance can be modelled as the output of a linear dynamical system driven by a white noise input. Then, by adjoining the dynamical system associated with the disturbance generation to the original linear system, we obtain an augmented dynamical system which has a white noise input, for which we can design a Kalman filter.

Instead of condition (6.3), let v(t) be modelled as the output of the q-dimensional linear system

    λ̇(t) = Mλ(t) + Nξ(t),   λ(0) = λ₀                            (6.35a)
    v(t) = Gλ(t)                                                 (6.35b)

where ξ(t) and λ₀ are Gaussian random variables characterized by

    E{λ₀} = 0,   E{λ₀λ₀ᵀ} = Λ₀                                   (6.36)
    E{ξ(t)} = 0,   E{ξ(t)ξᵀ(τ)} = Ξ δ(t − τ)                     (6.37)
    E{λ₀ξᵀ(t)} = 0                                               (6.38)

It is also assumed that ξ(t) is statistically independent of x(0) and w(t) and that M is a stable matrix.
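The adjoining step described above can be sketched concretely: substituting v(t) = Gλ(t) into (6.1) and stacking the state as (x, λ) gives an augmented linear system driven only by the white noises ξ(t) and w(t). The block structure below follows directly from (6.1) and (6.35); the numerical matrices are arbitrary illustrative data.

```python
import numpy as np

# Augmentation for a coloured disturbance: with v(t) = G*lambda(t),
#   d/dt [x; lam] = [[A, Gamma G], [0, M]] [x; lam] + [[0], [N]] xi(t)
#   y = [C, 0] [x; lam] + w(t)
# Numerical matrices are illustrative only.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])      # n x n
Gamma = np.array([[0.0], [1.0]])              # n x r
C = np.array([[1.0, 0.0]])                    # p x n
M = np.array([[-3.0]])                        # q x q, stable
N = np.array([[1.0]])                         # q x (dim of xi)
G = np.array([[1.0]])                         # r x q

n, q = A.shape[0], M.shape[0]

A_aug = np.block([[A, Gamma @ G],
                  [np.zeros((q, n)), M]])     # (n+q) x (n+q)
B_xi = np.vstack([np.zeros((n, N.shape[1])), N])
C_aug = np.hstack([C, np.zeros((C.shape[0], q))])
```

A standard Kalman filter can then be designed for (A_aug, B_xi, C_aug) exactly as in Theorem 6.2, since its disturbance input ξ(t) is white.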
