© 1989 International Federation of Automatic Control. Pergamon Press plc.
Brief Paper
An Algorithm for the Computation of the Structured Complex Stability Radius*
DIEDERICH HINRICHSEN,†‡ BERND KELB† and ARNO LINNEMANN§
Key Words: Linear systems; state space; control systems analysis; stability; robustness; parameter perturbation; computational methods; Hamiltonian matrices.
Abstract: Perturbed state space models of the form ẋ = (A + BDC)x are considered, where A, B, C are given matrices, A is stable, and D is an unknown complex perturbation matrix. A reliable and efficient algorithm for the computation of the norm of the minimal norm destabilizing matrix D is presented. The performance of the algorithm and possible applications are illustrated.
1. Introduction

A FUNDAMENTAL problem of robustness analysis is to determine to what extent a stable nominal system remains stable when subject to parameter perturbations. In this paper we consider perturbed system equations of the form

Σ_D: ẋ(t) = (A + BDC)x(t)   (1)

where A ∈ K^{n×n} is the nominal system matrix, B ∈ K^{n×m} and C ∈ K^{p×n} are given matrices defining the structure of the perturbations, D ∈ K^{m×p} is the unknown perturbation matrix, and K = R or C. Suppose that the nominal system ẋ = Ax is stable, i.e. the spectrum σ(A) is contained in the open left half plane C₋ of C. Various bounds δ > 0 have been derived in the recent literature which guarantee stability of all systems (1) satisfying ||D|| < δ, where ||·|| is a matrix norm on K^{m×p} (Patel and Toda, 1980; Yedavalli, 1985, 1986; Yedavalli and Liang, 1986; Qiu and Davison, 1986; Petersen and Hollot, 1986; Zhou and Khargonekar, 1987; Hyland and Bernstein, 1987). The question arises which of these bounds are tight in the sense that there exist destabilizing perturbations the norms of which come arbitrarily close to that bound. For the unstructured case B = C = I this question was addressed independently in Van Loan (1985), Hinrichsen and Pritchard (1986a) and Martin (1987), and for the structured case (B, C arbitrary) in Hinrichsen and Pritchard (1986b). In the latter paper the structured stability radius of the system (1) is defined by

r_K(A, B, C) = inf {||D||; D ∈ K^{m×p}, σ(A + BDC) ∩ C̄₊ ≠ ∅}   (2)

where C̄₊ denotes the closed right half plane and ||D|| the spectral norm (i.e. largest singular value) of D.

* Received 22 December 1987; revised 18 October 1988; received in final form 23 February 1989. The original version of this paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor R. V. Patel under the direction of Editor H. Kwakernaak.
† Institut für Dynamische Systeme, Fachbereich Mathematik, Universität Bremen, Postfach 330440, D-2800 Bremen 33, F.R.G.
‡ Author to whom correspondence should be addressed.
§ Fachbereich Elektrotechnik, Gesamthochschule Kassel, Postfach 101380, D-3500 Kassel, F.R.G.

The real stability radius (K = R, i.e. only real perturbations D are considered in (2)) is of particular interest for applications, but more difficult to analyse (Van Loan, 1985; Hinrichsen and Pritchard, 1986a). To date, computable formulae are only available for the special cases m = 1 or p = 1 (Biernacki et al., 1987; Hinrichsen and Pritchard, 1988a). The complex case, where perturbations D ∈ C^{m×p} are allowed in (2), is more tractable, and a substantial body of theoretical results is available, including extensions to infinite-dimensional and time-varying linear systems (Hinrichsen and Pritchard, 1986a, b; Martin, 1987; Martin and Hewer, 1987; Dickmann, 1987; Hinrichsen and Motscha, 1987; Pritchard and Townley, 1987, 1989; Hinrichsen et al., 1987). Comparing r_R and r_C, it is important to note that the complex stability radius, while a conservative measure of robustness in the real case, offers advantages in dealing with nonlinear and time-varying perturbations. For instance, the robustness results for nonlinear time-varying perturbations derived in Hinrichsen and Pritchard (1986b) do not hold if r_C is replaced by r_R. The matrices B and C should not be considered as input and output matrices, but as scaling matrices defining the structure of the perturbation BDC of A. By the possibility to choose B and C ad libitum, the stability radius is a very flexible tool of robustness analysis. Setting B = C = I one obtains the unstructured complex stability radius r_C(A) = r_C(A, I, I), which measures the distance of A from the set of unstable matrices in C^{n×n}. Choosing m = p = 1 and B, C as unit vectors, the effect of perturbations of individual entries in A can be studied. Another special choice of B and C allows the robustness analysis of stable polynomials (cf. Section 4.3), which presently has become a very active area of research (Kharitonov, 1978; Barmish, 1984; Soh et al., 1985; Bose, 1985; Argoun, 1987; Hinrichsen and Pritchard, 1988b).

Finally, we shall see (Section 2) that the determination of r_C(A, B, C) is equivalent to the computation of the H∞-norm

||G||_{H∞} = max_{ω∈R} ||G(iω)||

of the associated "transfer function" G(s) = C(sI − A)^{-1}B. Because of this fact, an algorithm for the computation of r_C(A, B, C) is of some interest for H∞-control theory, cf. Francis (1987). Moreover, since the available formulae for r_R(A, B, C) in the cases m = 1 or p = 1 require the maximization of strictly proper rational functions, it can also be applied to compute the real stability radius, see Hinrichsen and Pritchard (1988a). In this paper a reliable bisectional algorithm for the computation of r_C(A, B, C) is presented (Section 3) which is based on a characterization of r_C(A, B, C) by means of a parametrized Hamiltonian matrix (Section 2). A similar algorithm for the special case B = C = I has been described in Byers (1986). However, this computational method is
slow. We therefore develop in Section 3 a more efficient and equally reliable algorithm. In Section 4, the performance and applicability of the algorithm are illustrated by various examples.
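The equivalence r_C(A, B, C) = 1/||G||_{H∞} noted above already suggests a naive numerical approach: evaluate the largest singular value of G(iω) on a frequency grid and invert the observed peak. The sketch below (Python with NumPy, an assumed environment; the paper's own experiments used PRO-MATLAB) makes this concrete. Note that a finite sweep only lower-bounds the supremum, and therefore only upper-bounds r_C; removing this weakness is exactly what the Hamiltonian-based algorithm of Section 3 achieves.

```python
import numpy as np

def gain(A, B, C, w):
    """Largest singular value of G(iw) = C (iwI - A)^{-1} B."""
    n = A.shape[0]
    G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
    return np.linalg.svd(G, compute_uv=False)[0]

def stability_radius_grid(A, B, C, ws):
    """Crude estimate of r_C = 1 / sup_w ||G(iw)||.
    A finite sweep only lower-bounds the sup, so this only upper-bounds r_C."""
    peak = max(gain(A, B, C, w) for w in ws)
    return 1.0 / peak

# Scalar illustration (not from the paper): A = -1, B = C = 1 gives
# G(iw) = 1/(iw + 1), ||G||_inf = 1 (attained at w = 0), hence r_C = 1;
# indeed D = 1 is a smallest destabilizing perturbation (pole moves to 0).
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
r = stability_radius_grid(A, B, C, np.linspace(-5.0, 5.0, 2001))
```

The grid here happens to contain the maximizing frequency ω = 0, so the estimate is exact; Section 4.1 shows a transfer function for which such luck cannot be relied upon.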
2. Hamiltonian matrices and the structured stability radius
Throughout the paper we suppose that A ∈ C^{n×n} is stable and B ∈ C^{n×m}, C ∈ C^{p×n} are given matrices defining the structure of the perturbations. In this section we establish a relation between r_C(A, B, C) and the parametrized Hamiltonian matrix

H_ρ = H_ρ(A, B, C) = [ A       BB* ]
                     [ −ρC*C   −A* ],   ρ > 0,

where M* denotes the conjugate transpose of a matrix M. The following lemmata are required. For convenience, we set 0^{-1} := ∞.

Lemma 1. Let G(s) = C(sI − A)^{-1}B. Then

r_C(A, B, C) = [max_{ω∈R} ||G(iω)||]^{-1} = ||G||_{H∞}^{-1}.   (3)

A proof can be found in Hinrichsen and Pritchard (1986b).

Lemma 2. For all ω ∈ R and ρ > 0

iω ∈ σ(H_ρ) ⇔ ρ^{-1} ∈ σ(G*(iω)G(iω)).

Proof. The equivalence follows from the identities

det (sI − H_ρ) = det (sI − A) det [(sI + A*) + ρC*C(sI − A)^{-1}BB*]
 = det (sI − A) det (sI + A*) det [I + ρ(sI + A*)^{-1}C*G(s)B*]
 = (−1)^n det (sI − A) det (−sI − A*) det [I − ρB*(−sI − A*)^{-1}C*G(s)]
 = (−1)^n (−ρ)^m det (sI − A) det (−sI − A*) det [G(−s̄)*G(s) − ρ^{-1}I].

Moreover, for s = iω the factors det (iωI − A) and det (−iωI − A*) do not vanish, since A is stable, and G(−s̄)* = G*(iω). □

Proposition 1. ρ ≥ r_C²(A, B, C) ⇔ σ(H_ρ) ∩ iR ≠ ∅.

Proof. Suppose iω₀ ∈ σ(H_ρ). Then Lemmata 1 and 2 imply

ρ^{-1} ≤ ||G(iω₀)||² ≤ r_C^{-2}(A, B, C),

i.e. ρ ≥ r_C²(A, B, C). Conversely, ρ ≥ r_C²(A, B, C) implies

ρ^{-1/2} ≤ r_C^{-1}(A, B, C) = ||G||_{H∞}.

Since lim_{|ω|→∞} ||G(iω)|| = 0 there exists ω₀ ∈ R such that ρ^{-1/2} = ||G(iω₀)||, i.e. ρ^{-1} ∈ σ(G*(iω₀)G(iω₀)). The result now follows from Lemma 2. □

In Hinrichsen and Motscha (1987) Proposition 1 was derived from a characterization of r_C(A, B, C) via a parametrized algebraic Riccati equation. An independent proof for the special case m = p = n, B = C = I is contained in Byers (1986). Moreover, a referee of this paper has informed us that a result equivalent to Proposition 1 can be found in the lecture notes (Doyle, 1984). As a consequence of the above proof we note that, with ρ̂ := r_C²(A, B, C),

||G(iω₀)|| = ||G||_{H∞} ⇔ iω₀ ∈ σ(H_ρ̂),   (4)

i.e. the set of all global maxima of the function ω ↦ ||G(iω)|| is known once σ(H_ρ̂) has been computed. Proposition 1 yields the following graphical characterization of the structured stability radius: drawing the eigenloci of H_ρ as ρ increases from 0 to ∞, r_C²(A, B, C) is that value of ρ for which one (or more) of the eigenloci hit the imaginary axis for the first time. By (4) the frequencies ω₀ where the eigenloci hit the imaginary axis are exactly those frequencies which maximize ||G(iω)||. Furthermore, if ρ is increased beyond r_C²(A, B, C), there remains at least one eigenvalue of H_ρ on the imaginary axis.

3. The algorithm

We assume G(s) = C(sI − A)^{-1}B ≢ 0, so that r_C(A, B, C) is finite by Lemma 1. Starting from initial estimates ρ₀⁻, ρ₀⁺ of r_C²(A, B, C) such that, with k = 0,

0 ≤ ρ_k⁻ < r_C²(A, B, C) < ρ_k⁺ < ∞,   (5)

one can compute successively better estimates by the following algorithm. Suppose that in the kth step estimates ρ_k⁻ and ρ_k⁺ are given satisfying (5). Take ρ_k such that

ρ_k⁻ < ρ_k < ρ_k⁺.

If H_ρ_k has eigenvalues on the imaginary axis then set ρ_{k+1}⁻ = ρ_k⁻ and ρ_{k+1}⁺ = ρ_k. Otherwise set ρ_{k+1}⁻ = ρ_k and ρ_{k+1}⁺ = ρ_k⁺. Increase k. By Proposition 1 the inequalities (5) are satisfied for each k ≥ 0. The performance of the algorithm crucially depends on the strategy for the choice of ρ_k, and on the initial estimates ρ₀⁻ and ρ₀⁺.

3.1. Initial estimates. It is to be expected that the maximum of ||G(iω)|| will be attained near the poles of G(s). This idea leads to the following lemma. Suppose that ω_j ∈ R, j ∈ n = {1, 2, ..., n}, are chosen such that the imaginary parts of the eigenvalues of A are among the ω_j and ω₁, ..., ω_n are mutually distinct.

Lemma 3.

ρ₀⁺ := [max_{j∈n} ||G(iω_j)||]^{-2} ≥ r_C²(A, B, C).   (6)

Moreover

ρ₀⁺ = ∞ ⇔ r_C(A, B, C) = ∞.

Proof. The result follows from Lemma 1 and the fact that a nonzero G(s) can have at most n − 1 finite zeros. □

Computational experiments with random data suggest that the upper bound ρ₀⁺ is usually an excellent estimate of r_C²(A, B, C). Lower estimates ρ₀⁻ can be derived from sufficient stability criteria in the literature (see Section 1). Here we set ρ₀⁻ = 0.

3.2. Choice of ρ_k. The simplest strategy is to use bisection

ρ_k = (ρ_k⁻ + ρ_k⁺)/2.

With this choice, the algorithm is guaranteed to converge, but convergence is rather slow: after k iterations the estimate of r_C²(A, B, C) will be precise up to an error bound 2^{-k}(ρ₀⁺ − ρ₀⁻). To accelerate the algorithm it seems reasonable to use the information about previously computed values of the function ρ ↦ σ(H_ρ). Excellent results are obtained with the following extrapolation technique, which is motivated by the observation that the graph of ρ ↦ dist (σ(H_ρ), iR) has an almost parabolic shape in many examples (Fig. 1).

FIG. 1. Parabolic extrapolation of d: ρ ↦ dist (σ(H_ρ), iR).
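Proposition 1 and the distance function d(ρ) = dist(σ(H_ρ), iR) are easy to probe numerically. The sketch below (Python with NumPy, an assumed environment; the scalar data A = −1, B = C = 1 are illustrative and not from the paper) builds H_ρ and evaluates d(ρ) on either side of r_C² = 1.

```python
import numpy as np

def hamiltonian(A, B, C, rho):
    """H_rho = [[A, BB*], [-rho C*C, -A*]] from Section 2."""
    return np.block([[A, B @ B.conj().T],
                     [-rho * C.conj().T @ C, -A.conj().T]])

def d(A, B, C, rho):
    """d(rho) = dist(sigma(H_rho), iR); zero iff rho >= r_C^2 (Proposition 1)."""
    return np.min(np.abs(np.linalg.eigvals(hamiltonian(A, B, C, rho)).real))

# Scalar example A = -1, B = C = 1: G(s) = 1/(s+1), ||G||_inf = 1, r_C^2 = 1.
# H_rho = [[-1, 1], [-rho, 1]] has eigenvalues +/- sqrt(1 - rho): real for
# rho < 1 and purely imaginary for rho >= 1, as Proposition 1 predicts.
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
d_below = d(A, B, C, 0.75)   # eigenvalues +/- 0.5, none on the axis
d_above = d(A, B, C, 2.0)    # eigenvalues +/- i, both on the axis
```

Tracking d(ρ) as ρ grows reproduces the near-parabolic decay to zero at ρ = r_C² that motivates the extrapolation discussed next.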
Hence, if ρ_k < r_C²(A, B, C) and ρ_l is the most recent estimate such that ρ_l < ρ_k, l ≤ k − 1, the graph of the function d: ρ ↦ dist (σ(H_ρ), iR) is expected to have its smallest zero near

ρ̃_{k+1} = ρ_k + d_k² (ρ_k − ρ_l)/(d_l² − d_k²),   (7)

where d_j := dist (σ(H_ρ_j), iR). Since this value usually overestimates r_C²(A, B, C), and the extrapolation step is only performed when r_C²(A, B, C) is underestimated, we reduce the value of ρ̃_{k+1} in order to activate this efficient step more frequently:

ρ_{k+1} = 2^{-h} ρ_k + (1 − 2^{-h}) ρ̃_{k+1}.   (8)

Here h is the number of previous underestimations ρ_j < r_C²(A, B, C), j ≤ k. Note that, as the algorithm proceeds, the reduction becomes less and less significant. A very promising speed-up has been observed with the above extrapolation procedure. However, a theoretical result about the order of convergence has not yet been established.
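The two update steps can be sketched as follows. This is a minimal rendering of (7) and (8) as reconstructed here from the degraded print; in particular the damping weight 2^{-h} in (8) is a reading of the original, not a certainty. If d² is exactly linear in ρ with zero at ρ*, the extrapolation (7) recovers ρ* in one step.

```python
def extrapolate(rho_l, d_l, rho_k, d_k):
    """Parabolic extrapolation (7): extrapolate d^2 linearly through
    (rho_l, d_l^2) and (rho_k, d_k^2) to its smallest zero."""
    return rho_k + d_k**2 * (rho_k - rho_l) / (d_l**2 - d_k**2)

def damped_update(rho_k, rho_tilde, h):
    """Damping step (8): pull the extrapolated value back towards rho_k.
    h counts previous underestimations, so the reduction fades out as the
    algorithm proceeds (2^-h -> 0)."""
    w = 2.0 ** (-h)
    return w * rho_k + (1.0 - w) * rho_tilde

# With d(rho)^2 = 2 - rho (zero at rho* = 2), two samples suffice:
rho_star = extrapolate(0.5, 1.5**0.5, 1.0, 1.0)   # exactly 2.0
```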
3.3. Stabrad 1 and Stabrad 2. The upper and lower bounds given in Section 3.1 together with the bisection strategy of Section 3.2 lead to the following algorithm.

Algorithm Stabrad 1

(1) Input: A, B, C, δ > 0, ε > 0.
(2) Determine the eigenvalues of A. Stop if A has an eigenvalue with nonnegative real part.
(3) Determine ρ₀⁺ as given by Lemma 3 and set ρ₀⁻ = 0, ρ₀ = ρ₀⁺/2, k = 0.
(4) Determine the eigenvalues of H_ρ_k and let α := dist (σ(H_ρ_k), iR).
(5) If α < δ, then ρ_{k+1}⁺ = ρ_k, ρ_{k+1}⁻ = ρ_k⁻, ρ_{k+1} = (ρ_{k+1}⁻ + ρ_{k+1}⁺)/2.
(6) If α ≥ δ, then ρ_{k+1}⁻ = ρ_k, ρ_{k+1}⁺ = ρ_k⁺, ρ_{k+1} = (ρ_{k+1}⁻ + ρ_{k+1}⁺)/2.
(7) If |√ρ_{k+1}⁺ − √ρ_{k+1}⁻| > ε √ρ_{k+1}, then set k := k + 1 and go to (4).
(8) Stop: √ρ_{k+1} approximates r_C(A, B, C).

The improved algorithm Stabrad 2 replaces the bisection in step (6) by the extrapolation (8), (7). Both versions have been implemented on a SUN 3/50 workstation using the PRO-MATLAB software package in double precision (approx. 14 significant digits). The eigenvalues of H_ρ_k are computed by a QR algorithm which does not exploit the Hamiltonian structure of H_ρ_k. Further improvement in execution time can be expected from the use of a symplectic QR-like algorithm (Van Loan, 1984).

3.4. Comparison of performance. In order to compare the efficiency of Stabrad 1 and Stabrad 2, the stability radius has been computed for random triples (A, B, C) with stable A, n = 20 and m = p = 3. Choosing δ = 10^{-8}, ε = 10^{-6}, the two approximate values r_C⁽¹⁾ and r_C⁽²⁾ of r_C(A, B, C), as computed by Stabrad 1 and Stabrad 2, always coincided up to a relative error |r_C⁽¹⁾ − r_C⁽²⁾|/r_C⁽¹⁾ < ε. Using the extrapolation (8), (7) in step (6) resulted in an average speed-up by a factor greater than or equal to 2.5 in execution time. Further experiments showed that this factor is independent of the order n of A but depends on the required accuracy. If more accurate estimates are required the observed factor increases.

We conclude this subsection with some brief comments concerning the numerical correctness of the computed value of r_C(A, B, C). The correctness crucially depends on the ability to decide whether H_ρ_k has an eigenvalue on the imaginary axis or not. Difficulties have only been observed when the algorithms were applied to highly unbalanced matrices A. In these cases overestimations of r_C(A, B, C) were caused by rounding errors in the computation of σ(H_ρ_k) being larger than the threshold δ.

We also tested Stabrad 2 against the algorithm of Motscha (1988) for the computation of the unstructured stability radius r_C(A, I, I). Again the results coincided up to the required accuracy. Applying both algorithms to random matrices A of order n = 10, Stabrad 2 was slightly faster on the average. More importantly, it showed a steady performance whereas the execution time of Motscha's algorithm varied considerably depending on the specific example.

4. Examples and applications

4.1. Problems of global optimization. By Lemma 1, the computation of r_C(A, B, C) is equivalent to the computation of the H∞-norm of G(s) = C(sI − A)^{-1}B. This norm is usually determined by evaluating ||G(iω)|| for many values of ω or by using nonlinear optimization algorithms. However, these maximizers will in general only yield a local maximum (Van Loan, 1985), whereas the algorithms presented in this paper yield a global one. This is illustrated by Fig. 2, which shows the gain plot of the transfer function

g(s) = 100(s + 1.8i)(s + 2.01i)(s + 2.19i)(s + 2.60i) / [(s + 10)(s + 0.12 + 2i)²(s + 0.1 + 2.2i)²(s + 1.3 + 1.9i)(s + 0.12 ± 1.7i)].

FIG. 2. The function ω ↦ |g(iω)|.

The frequency corresponding to the initial estimate (ρ₀⁺)^{-1/2} = 0.092 is ω₀ = 0, whereas the maximum value 0.165 of |g(iω)| is obtained at ω_max = 2.107. It is evident from Fig. 2 that a local maximizer starting at ω₀ will lead to a local maximum of |g(iω)| which is not a global one. A local maximizer will only find the global maximum if it starts in the small neighbourhood [2.01, 2.19] of ω_max. Hence this example also illustrates that a local maximizer with random initial values may have problems finding the global maximum. The algorithm Stabrad 2 has been applied to a state space realization of the above g(s) in companion form and determined the correct value r_C(A, B, C) = 6.056 = 0.165^{-1}.
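A condensed rendering of the bisection variant (Stabrad 1) is sketched below in Python with NumPy, an assumed environment. It compresses steps (4)-(7) into one loop; for simplicity the pole-based initial bound of Lemma 3 is replaced by a user-supplied upper bound rho_plus on r_C², and the function name stabrad1 and the scalar test data are illustrative, not from the paper.

```python
import numpy as np

def hamiltonian(A, B, C, rho):
    """H_rho = [[A, BB*], [-rho C*C, -A*]] from Section 2."""
    return np.block([[A, B @ B.conj().T],
                     [-rho * C.conj().T @ C, -A.conj().T]])

def stabrad1(A, B, C, rho_plus, delta=1e-10, eps=1e-8):
    """Bisection on rho (an estimate of r_C^2), following Algorithm Stabrad 1.
    rho_plus must upper-bound r_C^2 (here user-supplied instead of Lemma 3)."""
    lo, hi = 0.0, rho_plus
    while hi**0.5 - lo**0.5 > eps * max(hi**0.5, 1e-300):
        rho = 0.5 * (lo + hi)
        # alpha = dist(sigma(H_rho), iR), as in step (4)
        alpha = np.min(np.abs(np.linalg.eigvals(hamiltonian(A, B, C, rho)).real))
        if alpha < delta:      # eigenvalue numerically on the axis: rho >= r_C^2
            hi = rho
        else:                  # no eigenvalue on the axis: rho < r_C^2
            lo = rho
    return hi**0.5             # sqrt(rho) approximates r_C

# Scalar example A = -1, B = C = 1, for which r_C = 1 exactly:
A = np.array([[-1.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
r = stabrad1(A, B, C, rho_plus=4.0)
```

Because the loop decides only whether H_ρ touches the imaginary axis, each iteration costs one (here unstructured) eigenvalue computation, which is the operation a symplectic QR-like method would accelerate.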
4.2. Stability of parametrized systems. In many applications model uncertainty is not adequately representable by a hyperball or an ellipsoid of system matrices around a given nominal matrix but is described by a parametrized model set with a given range of parameter values. In such a situation one can try to cover the model set by a number of balls or ellipsoids centred at suitably chosen system matrices. We illustrate the procedure by a model of a satellite controlled by linear state feedback, as described in Franklin et al. (1986). The equations are given by

Σ_D: ẋ = [A(d₀, k₀) + BDC]x   (9)

where

A(d₀, k₀) = [ 0            1           0            0
              −10k₀        −10d₀       10k₀         10d₀
              0            0           0            1
              0.595 + k₀   0.275 + d₀  −1.32 − k₀   −1.66 − d₀ ],

D = [Δk  Δd].
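The matrix entries above are a best-effort reconstruction from a degraded scan. As a consistency check, the sketch below (Python with NumPy, an assumed environment) verifies that the parameter uncertainty then enters exactly in the structured form BDC of (9); the vectors b and Cs are one hypothetical factorization chosen for this purpose and are not given in the text, and the numerical parameter values are arbitrary.

```python
import numpy as np

def A_sat(d, k):
    """Closed-loop satellite matrix as reconstructed above (entry signs are a
    best-effort reading of the garbled print)."""
    return np.array([
        [0.0,        1.0,        0.0,       0.0],
        [-10.0 * k, -10.0 * d,   10.0 * k,  10.0 * d],
        [0.0,        0.0,        0.0,       1.0],
        [0.595 + k,  0.275 + d, -1.32 - k, -1.66 - d],
    ])

# Hypothetical factorization: with these b and Cs,
#   A(d0 + dd, k0 + dk) = A(d0, k0) + b @ [[dk, dd]] @ Cs
# for all dk, dd, i.e. D = [dk, dd] enters in the structured form B D C.
b = np.array([[0.0], [-10.0], [0.0], [1.0]])
Cs = np.array([[1.0, 0.0, -1.0,  0.0],
               [0.0, 1.0,  0.0, -1.0]])

d0, k0, dd, dk = 0.04, 0.09, 0.013, -0.007   # arbitrary illustrative values
lhs = A_sat(d0 + dd, k0 + dk)
rhs = A_sat(d0, k0) + b @ np.array([[dk, dd]]) @ Cs
```

The identity holds for every (Δk, Δd) because the perturbation of rows 2 and 4 is a common rank-one pattern scaled by −10 and 1 respectively.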
Here, k = k₀ + Δk is an (unknown) torque constant, k₀ its nominal value, d = d₀ + Δd is an (unknown) viscous damping constant, and d₀ its nominal value. Computing the stability radii r_C(A, B, C) for seven suitably chosen values of (k₀, d₀) we obtain the asymptotic stability of the uncertain closed-loop system (9) for all parameter pairs (k, d) contained in one of the seven circular regions shown in Fig. 3. These disks cover the parameter set given in Franklin et al. (1986) for which stability has to be guaranteed.

FIG. 3. Covering of the parameter set.

4.3. Robust stability of polynomials. Consider a Hurwitz polynomial p(s, a) = sⁿ + a_{n−1}s^{n−1} + ⋯ + a₁s + a₀ the coefficients a_{j−1} of which are perturbed to

a_{j−1}(d) = a_{j−1} − Σ_{i=1}^{p} d_i c_{ij},   j ∈ n,   (10)

where C = (c_{ij}) ∈ K^{p×n} is a given matrix and d = [d₁, ..., d_p] is an unknown disturbance vector. We identify p(s, a) with its coefficient vector a = [a₀, ..., a_{n−1}] ∈ K^{1×n} and define the stability radius of p(s, a) (with respect to the perturbation structure C) by

r_K(a, C) = inf {||d||; d ∈ K^{1×p}, ∃λ ∈ C̄₊: p(λ, a(d)) = 0}.   (11)

Then it is easily seen that r_K(a, C) = r_K(A, B, C), where B = [0, ..., 0, 1]ᵀ ∈ Rⁿ and A is the companion matrix of the polynomial p(s, a). Thus Stabrad 2 can be applied to compute the complex stability radius r_C(a, C). Moreover, since the available formulae for r_R(a, C) only require the maximization of strictly proper rational functions (Hinrichsen and Pritchard, 1988b), Stabrad 2 can also be applied to compute the real stability radius of any Hurwitz polynomial under arbitrary affine perturbations. Experiments with random polynomials and with special ill-conditioned polynomials confirm the remarkable reliability of the algorithm. Further details and many examples can be found in Hinrichsen and Pritchard (1988b).

5. Conclusions

The stability radius measures the stability robustness of linear state space models when subject to structured complex-valued parameter perturbations. An algorithm for the computation of this radius is presented which is based on a test of a Hamiltonian matrix for eigenvalues on the imaginary axis. The algorithm is iterative and has guaranteed performance.

References

Argoun, M. B. (1987). Stability of a Hurwitz polynomial under coefficient perturbations: necessary and sufficient conditions. Int. J. Control, 45, 739-744.
Barmish, B. R. (1984). Invariance of the strict Hurwitz property for polynomials with perturbed coefficients. IEEE Trans. Aut. Control, AC-29, 935-936.
Biernacki, R. M., H. Hwang and S. P. Bhattacharyya (1987). Robust stability with structured real parameter perturbations. IEEE Trans. Aut. Control, AC-32(6), 495-506.
Bose, N. K. (1985). A system-theoretic approach to stability of sets of polynomials. Contemporary Math., 47, 25-34.
Byers, R. (1986). A bisection method for measuring the distance of a stable matrix to the unstable matrices. CRSC Technical Report Series No. 11108601.
Dickmann, A. (1987). On the robustness of multivariable linear feedback systems in state space representation. IEEE Trans. Aut. Control, AC-32, 407-410.
Doyle, J. (1984). Lecture Notes. ONR/Honeywell Workshop.
Francis, B. A. (1987). A Course in H∞ Control Theory. Lecture Notes in Control and Information Sciences No. 88. Springer, Berlin.
Franklin, G. F., J. D. Powell and A. Emami-Naeini (1986). Feedback Control of Dynamic Systems. Addison-Wesley, Reading, Massachusetts.
Hinrichsen, D. and A. J. Pritchard (1986a). Stability radii of linear systems. Syst. Control Lett., 7, 1-10.
Hinrichsen, D. and A. J. Pritchard (1986b). Stability radius for structured perturbations and the algebraic Riccati equation. Syst. Control Lett., 8, 105-113.
Hinrichsen, D., A. Ilchmann and A. J. Pritchard (1987). Robustness of stability of time-varying linear systems. Institut für Dynamische Systeme, Universität Bremen, Report No. 161. Also J. Diff. Equ. (in press).
Hinrichsen, D. and M. Motscha (1987). Optimization problems in the robustness analysis of linear state space systems. In A. Gomez, F. Guerra, M. A. Jimenez and G. López (Eds), Approximation and Optimization, Lecture Notes in Mathematics 1354. Springer, Berlin.
Hinrichsen, D. and A. J. Pritchard (1988a). New robustness results for linear systems under real perturbations. Proc. 27th IEEE Conf. on Decision and Control, Austin, pp. 1410-1414.
Hinrichsen, D. and A. J. Pritchard (1988b). An application of state space methods to obtain explicit formulae for robustness measures of polynomials. In M. Milanese et al. (Eds), Proc. Int. Workshop "Robustness in Identification and Control", Torino.
Hyland, D. C. and D. S. Bernstein (1987). The majorant Lyapunov equation: a nonnegative matrix equation for robust stability and performance of large scale systems. IEEE Trans. Aut. Control, AC-32(11), 1005-1013.
Kharitonov, V. L. (1978). Asymptotic stability of an equilibrium position of a family of systems of linear differential equations. Diff. Uravn., 14, 2086-2088.
Martin, J. M. (1987). State-space measures for stability robustness. IEEE Trans. Aut. Control, AC-32(6), 509-512.
Martin, J. M. and G. A. Hewer (1987). Smallest destabilizing perturbations for linear systems. Int. J. Control, 45(5), 1495-1507.
Motscha, M. (1988). An algorithm to compute the complex stability radius. Int. J. Control, 48(6), 2417-2428.
Patel, R. V. and M. Toda (1980). Quantitative measures of robustness for multivariable systems. Proc. Joint Aut. Control Conf., Paper TD8-A.
Petersen, I. R. and C. V. Hollot (1986). A Riccati equation approach to the stabilization of uncertain systems. Automatica, 22, 397-411.
Pritchard, A. J. and S. Townley (1987). A stability radius for infinite dimensional systems. Proc. Conf. Distributed Parameter Systems, Lecture Notes in Control and Information Sciences, Vol. 102.
Pritchard, A. J. and S. Townley (1989). Robustness of linear systems. J. Diff. Equ., 77(2), 254-286.
Qiu, L. and E. J. Davison (1986). New perturbation bounds for the robust stability of linear state space models. Proc. 25th Conf. on Decision and Control, Athens, pp. 751-755.
Soh, C. B., G. S. Berger and K. P. Dabke (1985). On the stability properties of polynomials with perturbed coefficients. IEEE Trans. Aut. Control, AC-30(10), 1033-1036.
Van Loan, C. (1984). A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix. Linear Algebra Applic., 61, 233-251.
Van Loan, C. (1985). How near is a stable matrix to an unstable matrix? Contemporary Math., 47, 465-478.
Yedavalli, R. K. (1985). Improved measures of stability robustness for linear state space models. IEEE Trans. Aut. Control, AC-30, 577-579.
Yedavalli, R. K. (1986). Stability analysis of interval matrices: another sufficient condition. Int. J. Control, 43(3), 767-772.
Yedavalli, R. K. and Z. Liang (1986). Reduced conservatism in stability robustness bound by state transformations. IEEE Trans. Aut. Control, AC-31(9), 863-866.
Zhou, K. and P. P. Khargonekar (1987). Stability robustness bounds for linear state-space models with structured uncertainty. IEEE Trans. Aut. Control, AC-32(7), 621-623.