DOI : 10.1016/j.automatica.2008.05.022
hal-00347166, version 1 - 15 Dec 2008

Abstract

This paper presents a new controller validation method for linear multivariable time-invariant models. Classical prediction error system identification methods deliver uncertainty regions which are nonstandard in the robust control literature. Our controller validation criterion computes an upper bound for the worst case performance, measured in terms of the H∞-norm of a weighted closed loop transfer matrix, achieved by a given controller over all plants in such uncertainty sets. This upper bound on the worst case performance is computed via an LMI-based optimization problem and is deduced via the separation of graphs framework. Our main technical contribution is to derive, within that framework, a very general parametrization for the set of multipliers corresponding to the nonstandard uncertainty regions resulting from PE identification of MIMO systems. The proposed approach also allows for iterative experiment design. The results of this paper are asymptotic in the data length, and it is assumed that the model structure is flexible enough to capture the true system.

Key words: Identification for robust control; Experiment design; Closed loop system identification; Controller validation; Multivariable plants; Linear multivariable feedback; LMI optimization.
∆ ∈ ∆. In [16], such a parametrization (called set of multipliers) is derived for some classical uncertainty sets encountered in the robustness analysis literature. However, the set ∆ corresponding to the MIMO systems in the uncertainty region D is not at all classical. The set ∆ has indeed a very particular structure where the parameter vector θ of the systems in D is repeated on the diagonal. To the best of our knowledge, such a structure has never been considered in the literature. Consequently, our main technical contribution is to determine a (very general) parametrization for the set of multipliers corresponding to the set ∆.

Based on this set of multipliers, we develop an optimization problem involving linear matrix inequality (LMI) constraints which allows us to compute an upper bound for the worst case performance achieved by a controller over the plants in D. We can only obtain an upper bound for the worst case performance since we apply the separation of graphs theorem restricting attention to a set of multipliers which is parametrized in an explicit way. However, this intrinsic conservatism is here strongly reduced by the choice of a very general set of multipliers (see [17] for an analysis of this conservatism reduction).

Another contribution of this paper is to show how we can perform experiment design using the results on the validation of a controller for performance (worst case performance computation). The experiment design problem is also formulated as an LMI-based optimization problem.

Note that experiment design for MIMO systems is also treated in [6]. However, unlike the approach to experiment design proposed in the present paper, the approach presented in [6] is non-convex and the results can only

1 In [3], it is shown that the results in [4] can be extended to the case of multiple input single output systems, modulo the choice of a particular model structure. However, the extension of the result in [3] to MIMO systems is not possible.

…the maximum singular value. The symbol ⊗ denotes the Kronecker product and ⋆ the Redheffer star product. Let M denote a complex matrix partitioned as

  M = [ M11  M12 ]
      [ M21  M22 ]  ∈ C^{(p1+p2)×(q1+q2)}.   (1)

An (upper) LFT with respect to ∆ ∈ C^{q1×p1} is defined as the map F(M, •) : C^{q1×p1} → C^{p2×q2} with F(M, ∆) = M22 + M21 ∆ (I − M11 ∆)^{−1} M12, provided that the inverse (I − M11 ∆)^{−1} exists, see [23]. The time shift operator is given by q, i.e. q r(t) = r(t + 1), q^{−1} r(t) = r(t − 1). Given an m × n matrix X, the operation Y = vec(X) produces the vector Y of size 1 × mn that contains the rows of the matrix X, stacked adjacent to each other.

3 Prediction Error Identification Aspects

We consider the following multivariable true system with output y ∈ R^{ny} and input u ∈ R^{nu}:

  S : y(t) = Go(q) u(t) + v(t),  v(t) = Ho(q) e(t),   (2)

where Go(q) is a stable transfer matrix. Furthermore, e(t) ∈ R^{ny} is white noise and Ho(q) is a stable, inversely stable, monic transfer matrix. The power spectrum of the additive noise v(t) is therefore given by Φv(ω) = Ho(e^{jω}) Λo Ho*(e^{jω}), where Λo is the covariance of e(t), i.e. E{e(t) e^T(t)} = Λo. The true system will be identified within the following model structure, chosen globally identifiable:

  M : y(t) = G(q, θ) u(t) + v(t),  v(t) = H(q, θ) e(t),   (3)

where the vector θ ∈ R^n represents the parameters to be identified. We assume that this model structure has been chosen in such a way that it is flexible enough to capture the true system, i.e. there exists a parameter vector θo for which Go(q) = G(q, θo) and Ho(q) = H(q, θo). The parameter estimate is then picked as

  θ̂N = arg min_θ (1/(2N)) Σ_{t=1}^{N} (y(t) − ŷ(t, θ))^T Λo^{−1} (y(t) − ŷ(t, θ)),   (4)

where the predictor ŷ(t, θ) is given by the stable filter

  ŷ(t, θ) = H^{−1}(q, θ) G(q, θ) u(t) + [I − H^{−1}(q, θ)] y(t),   (5)

see [19,13]. Notice that (4) requires knowledge of the true noise covariance. One way of handling this contradiction in a practical situation is to estimate Λo iteratively, see Section 15.2 in [13]. Both open loop and closed loop identification of the true system (2) are considered. For closed loop identification, the setup is represented in Figure 1.

Note that U and P(θo) depend on the true, and unknown, system. In controller validation, for a given model θ̂N, we can construct a confidence region U_θ̂N for θo, where

  U_θ̂N = { θ : (θ − θ̂N)^T P̂N^{−1} (θ − θ̂N) < χ }   (10)

with P̂N an estimate of (1/N) P(θo) [13]. The true parameter vector θo is contained, to a certain probability, in U_θ̂N. A corresponding confidence region for Go is thus:

  D_θ̂N = { G(q, θ) | θ ∈ U_θ̂N }.   (11)

In experiment design, the need for a priori system knowledge is a well known Achilles' heel [9,14]. One common approach to handle this issue in practice is to replace θo by an initial estimate in (8) and then to use sequential procedures where the input design is altered on-line as more information becomes available, see e.g. [7].
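The ellipsoidal confidence region (10) is straightforward to handle numerically. The following is a minimal sketch, with illustrative numbers that are not from the paper; in practice χ is taken as a chi-square quantile matching the desired probability level and the dimension of θ.

```python
import numpy as np

def in_confidence_region(theta, theta_hat, P_hat_inv, chi):
    """Check membership in the ellipsoid (10):
    (theta - theta_hat)^T P_hat^{-1} (theta - theta_hat) < chi."""
    d = theta - theta_hat
    return float(d @ P_hat_inv @ d) < chi

# Illustrative numbers (not from the paper): identity covariance estimate,
# chi chosen as the 95% chi-square quantile for 2 parameters.
theta_hat = np.array([1.0, -0.5])
P_hat_inv = np.eye(2)
chi = 5.99  # approx. chi2_{0.95}(2)

print(in_confidence_region(np.array([1.5, -0.5]), theta_hat, P_hat_inv, chi))  # True
print(in_confidence_region(np.array([4.0, -0.5]), theta_hat, P_hat_inv, chi))  # False
```

The same quadratic form reappears in (34) below, which is why the multiplier set is parametrized directly in terms of P̂N^{−1} and χ.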
Remark 1 Also the noise model H(θ) can be included in the performance function (12). The same LFT technique can be used in this case.

Based on the reasoning above, a very important quantity in order to validate the controller C(θ̂N) designed with the identified model is the worst case performance achieved over the plants in the confidence region D_θ̂N:

  σWC(C(e^{jω}, θ̂N), D_θ̂N) = max_{θ ∈ U_θ̂N} ||J(G(e^{jω}, θ), C(e^{jω}, θ̂N))||.   (13)

The worst case performance (13) is thus a frequency function. Since σWC(C(e^{jω}, θ̂N), D_θ̂N) is an upper bound of the performance ||J(Go(e^{jω}), C(e^{jω}, θ̂N))|| of the loop obtained with the true system, the controller can be validated by checking whether, for a given level γ,

  ||J(G(e^{jω}, θ), C(e^{jω}, θ̂N))|| < γ,  ∀θ ∈ U_θ̂N.   (14)

5 A Tractable Formulation of the Worst Case Performance Problem in the Robustness Analysis Framework

Computing the worst case performance at one frequency ω is thus equivalent to determining the smallest γ such that the constraint (14) holds. The constraint (14) as such is not tractable. However, we will show in this section that the constraint (14) is satisfied if an LMI-based optimization problem is solved. For this purpose, we will show that J(G(θ), C) can be expressed as an LFT in (a function of) θ. (For an extensive discussion on LFT techniques, see e.g. [23].) This will allow us to use the following classical result of the robustness analysis literature, which we present in the sequel in its general form (see Proposition 1) before particularizing it to the problem considered in the present paper. This result considers a transfer matrix J of dimension ny,J × nu,J that can be expressed, for some given stable transfer matrix M(q), as an LFT in a parametric uncertainty ∆. The uncertainty ∆ varies in a given set ∆, which contains the point 0: 3

  J(q, ∆) = F(M(q), ∆).   (15)

In order to be able to use the result of Proposition 1 to verify a constraint similar to (14) for this uncertain transfer matrix J, it is required to associate to the parametric uncertainty set ∆ a so-called set of multipliers. The set A of multipliers used here is a set of affinely parametrized Hermitian matrices A satisfying:

  A ∈ A  ⇒  [ I ]* [ A11   A12 ] [ I ]
            [ ∆ ]  [ A12*  A22 ] [ ∆ ]  > 0,  ∀∆ ∈ ∆,   (16)

where the partitioned matrix in the middle is A itself. The worst case performance level is then obtained from

  min_γ γ

subject to

  [ M(e^{jω}) ]* [ A11    0   A12    0      ] [ M(e^{jω}) ]
  [     I     ]  [ 0      I   0      0      ] [     I     ]  < 0.   (18)
                 [ A12*   0   A22    0      ]
                 [ 0      0   0    −γ² I    ]

PROOF. By the small gain theorem, expression (17) is equivalent to the fact that det(I − M(e^{jω}) diag(∆, ∆ext)) ≠ 0, ∀∆ ∈ ∆ and for all complex matrices ∆ext of dimension nu,J × ny,J such that ||∆ext|| < 1/γ, see e.g. [23,18]. The sufficient condition presented in the statement of the proposition is then a consequence of the separation of graphs theorem [15,8,16].

Remark 2 It is important to stress that, in order to verify (17) at different frequencies ω, the condition (18) can be verified with different matrices A ∈ A.

3 Note that the set ∆ must satisfy some additional minor assumptions which are classically met, see e.g. [16]. Note also that the result holds for any type of uncertainty, but we restrict our attention to parametric uncertainties since it is the case considered in this paper.
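The LFT evaluation F(M, ∆) = M22 + M21 ∆ (I − M11 ∆)^{−1} M12 used throughout this section can be sketched numerically as follows; the partition sizes and matrices below are illustrative, not from the paper.

```python
import numpy as np

def upper_lft(M, p1, q1, Delta):
    """Evaluate F(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12,
    where M is partitioned as in (1) with M11 of size p1 x q1."""
    M11, M12 = M[:p1, :q1], M[:p1, q1:]
    M21, M22 = M[p1:, :q1], M[p1:, q1:]
    I = np.eye(p1)
    return M22 + M21 @ Delta @ np.linalg.solve(I - M11 @ Delta, M12)

# Sanity check: with M11 = 0, M12 = M21 = I, M22 = 0, F(M, Delta) = Delta.
M = np.block([[np.zeros((2, 2)), np.eye(2)],
              [np.eye(2), np.zeros((2, 2))]])
Delta = np.array([[0.3, 0.1], [0.0, -0.2]])
print(np.allclose(upper_lft(M, 2, 2, Delta), Delta))  # True
```

Note that the well-posedness condition of the paper (existence of (I − M11 ∆)^{−1}) corresponds here to the linear solve being nonsingular.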
The broader the parametrization of the matrices A in the set of multipliers, the smaller the conservatism related to the non-necessary (only sufficient) formulation of the condition (18). Proposition 1 formulates the constraint (17) as a convex feasibility problem. In order to use this proposition to formulate the constraint (14) also as a convex feasibility problem, it is required to rewrite the functional J(G(e^{jω}, θ), C(e^{jω}, θ̂N)) as an LFT in which θ represents the uncertainty, and then to determine the set A of multipliers corresponding to the uncertainty involved in this LFT.

5.1 LFT representation of J(G(θ), C(θ̂N))

As mentioned above, the functional J must be formulated as an LFT. A first step towards this end is to note that G(q, θ) is a matrix of transfer functions for which each entry is a rational function of θ. Consequently, each entry of G(q, θ) is an LFT in θ. It is then a direct consequence of the properties of LFT functions that G(θ) is an LFT in

  m(θ) = Iñ ⊗ θ,   (19)

where the value of ñ depends on the particular parametrization (see the following example), but is always smaller than or equal to nu·ny (the number of entries of the matrix G(q, θ)).

Example 1 An autoregressive exogenous (ARX) model with 2 inputs/2 outputs is given by

  G(q, θ) = 1/(1 + Z1^T θ) [ Z2^T θ   Z3^T θ ]
                            [ Z4^T θ   Z5^T θ ],   (20)

  H(q, θ) = 1/(1 + Z1^T θ) I2,   (21)

where θ ∈ R^5 and Zi = q^{−1} ei with ei defining the i:th unit vector. The LFT description of G(q, θ) is F(X, m(θ)) with

  X = [ X11  X12 ],  X11 = [ −Z1^T    0    ],  X12 = I,
      [ X21  X22 ]         [   0    −Z1^T  ]

  X21 = [ Z2^T  Z3^T ],  X22 = 0,  m(θ) = [ θ  0 ]
        [ Z4^T  Z5^T ]                     [ 0  θ ].

Consequently, for this model structure ñ = 2.

Since G(θ) is an LFT in m(θ), it is also an LFT in m(δθ), where δθ = θ − θ̂N, that is, G(θ) = F(MG, m(δθ)) for some known MG. Note that we consider δθ instead of θ in order to fulfill the requirement of Proposition 1 of a parametric uncertainty varying in a set containing 0. Indeed, δθ is constrained to lie in Ũ, where

  Ũ = { δθ : δθ^T P̂N^{−1} δθ < χ }.   (22)

Now that we have shown that G(θ) is an LFT in m(δθ), we recall that J(G(θ), C) has been chosen as an LFT in G(θ), i.e. J(G(θ), C) = F(MJ, G(θ)) for some known MJ. Combining these two facts, we conclude that the transfer matrix J(G(θ), C) is also an LFT in m(δθ). Indeed,

  J(G(q, θ), C(q, θ̂N)) = F(M(q), m(δθ)),   (23)

where M(q) = MG(q) ⋆ MJ(q). Note that, as required by Proposition 1, M(q) is here stable since the loop [C(q, θ̂N) G(q, θ̂N)] is stable.

5.2 Set A of multipliers

Given the LFT representation (23), in order to use Proposition 1 to obtain a convex sufficient condition for (14) to hold, we need to determine the set A of multipliers corresponding to the uncertainty m(δθ) present in the LFT (23). The matrices A ∈ A must therefore satisfy (16) for the parametric uncertainty m(δθ) with δθ ∈ Ũ, with Ũ defined in (22). This set of multipliers is given in the following proposition. Before giving this proposition, it is important to recall that the more general the parametrization of the matrices A ∈ A is, the less conservative the sufficient condition presented in Proposition 1 is for the verification of (17). Consequently, in Proposition 2, we present the most general affine parametrization we could find for the set of multipliers corresponding to the LFT (23).

Proposition 2 Consider the uncertainty m(δθ) with δθ ∈ Ũ defined in (22). Define the parametrized set A of matrices A as

  A = { A | A = [ A11   A12 ]
                [ A12*  A22 ] }.   (24)

In this set the matrices A11, A12 and A22 are affinely parametrized as follows:

  A11 = A0,   (25)

  A12 = [ j p11^T  j p12^T  …  j p1ñ^T ]   [   0       p̃12^T  …  p̃1ñ^T ]
        [ j p12^T  j p22^T  …  j p2ñ^T ] + [ −p̃12^T     0     …  p̃2ñ^T ]
        [   ⋮        ⋮           ⋮     ]   [   ⋮                   ⋮    ]
        [ j p1ñ^T    …     …  j pññ^T ]   [ −p̃1ñ^T     …     …    0   ],   (26)

  A22 = −(1/χ) A0 ⊗ P̂N^{−1} − j Ã + B̃.   (27)
1) A0 is a positive definite complex Hermitian matrix of dimension ñ × ñ;
2) Ã ∈ R^{nñ×nñ} belongs to the set

  { Ã = [ L11  L12  …  L1ñ ]
        [ L12  L22  …  L2ñ ]
        [  ⋮    ⋮        ⋮ ]
        [ L1ñ  L2ñ  …  Lññ ]  |  Lij = −Lij^T ∈ R^{n×n} };   (28)

3) B̃ ∈ R^{nñ×nñ} belongs to the set

  { B̃ = [   0      K12    …       K1ñ      ]
        [ −K12      0     …        ⋮       ]
        [   ⋮             ⋱    K(ñ−1)ñ     ]
        [ −K1ñ      …   −K(ñ−1)ñ    0      ]  |  Kij = −Kij^T ∈ R^{n×n} };   (29)

4) plm, p̃lm ∈ R^n, l = 1, 2, …, ñ and m = 1, 2, …, ñ.

Then for any A ∈ A we have

  [   I    ]*    [   I    ]
  [ m(δθ) ]   A  [ m(δθ) ]  > 0,  ∀δθ ∈ Ũ.   (30)

PROOF. First, notice that, for all A12, Ã and B̃ as defined in the statement of the proposition, we have:

  A12 m(δθ) + (m(δθ))^T A12* = 0,   (31)
  (m(δθ))^T j Ã m(δθ) = 0,   (32)
  (m(δθ))^T B̃ m(δθ) = 0.   (33)

Therefore, for every matrix A ∈ A, we can write

  [   I    ]*    [   I    ]
  [ m(δθ) ]   A  [ m(δθ) ]  = (1 − δθ^T ((1/χ) P̂N^{−1}) δθ) A0,   (34)

and we observe that this matrix is indeed positive definite when δθ ∈ Ũ since A0 > 0.

Remark 3 The determination of the set of multipliers corresponding to the uncertainty involved in the LFT (23) is the main technical contribution of this paper. Indeed, to our knowledge, such an uncertainty structure has never been considered in the robustness analysis literature. An interesting feature of this set of multipliers is the skew-symmetric multipliers Ã and B̃ in the diagonal block A22. Indeed, skew-symmetric terms (such as A12 here) are generally only present in the off-diagonal blocks.

Remark 4 The matrices A ∈ A are explicit functions of the inverse P̂N^{−1} of the estimated covariance matrix. This is not an issue here since P̂N is known. However, this observation will be important when we consider experiment design in Section 6.

5.3 Sufficient convex formulation of constraint (14)

Gathering the LFT representation (23), the set of multipliers defined in Proposition 2 and the result of Proposition 1, we can now formulate the constraint (14) as a convex feasibility problem involving LMI constraints.

Theorem 1 Consider the constraint (14) at a particular frequency ω and suppose that γ is given. Then, the constraint (14) holds if there exists a matrix A as parametrized in Proposition 2 such that the following LMI holds:

  [ M(e^{jω}) ]* [ A11    0   A12    0      ] [ M(e^{jω}) ]
  [     I     ]  [ 0      I   0      0      ] [     I     ]  < 0   (35)
                 [ A12*   0   A22    0      ]
                 [ 0      0   0    −γ² I    ]

for M(q) as defined in (23).

PROOF. This theorem is a direct consequence of Proposition 1. Indeed, the functional J is represented by the LFT (23) and the matrices A as defined in Proposition 2 represent the set of multipliers for the uncertainty involved in (23). Note that (35) is indeed an LMI since the matrix A is linearly parametrized in the elements A0, Ã, B̃, plm and p̃lm, l = 1, 2, …, ñ; m = 1, 2, …, ñ.

5.4 Computing an upper bound for the worst case performance

Observe that (35) is not only affine in the matrices A11, A12, A22, but also in the variable γ². Based on this observation, we are now ready to derive an algorithm to compute a frequency-wise upper bound for the worst case performance achieved by the designed controller C(θ̂N) over the plants in the confidence region D_θ̂N.

Theorem 2 Consider, at a particular frequency ω, the worst case performance σWC(C(e^{jω}, θ̂N), D_θ̂N) achieved by a controller C(θ̂N) over the plants in a confidence region D_θ̂N. An upper bound for σWC(C(e^{jω}, θ̂N), D_θ̂N) can be obtained by solving the LMI-based optimization problem consisting of determining the smallest value γ²opt of γ² for which there exists a matrix A in the set A defined in (24) such that (35) holds. The upper bound on σWC(C(e^{jω}, θ̂N), D_θ̂N) is then given by √(γ²opt).
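The structural properties (31)-(33) behind Proposition 2 can be checked numerically. The sketch below uses illustrative dimensions and random data (not from the paper): it builds m(δθ) = Iñ ⊗ δθ and one multiplier A following the parametrization (24)-(29), then verifies the identity (34).

```python
import numpy as np
rng = np.random.default_rng(0)

n, n_tilde, chi = 3, 2, 8.0  # illustrative sizes: theta in R^3, ñ = 2

def m_of(dtheta):
    # m(dtheta) = I_ñ ⊗ dtheta, cf. (19): parameter vector repeated on the diagonal
    return np.kron(np.eye(n_tilde), dtheta.reshape(-1, 1))

# A0: Hermitian positive definite (ñ x ñ), cf. item 1)
B = rng.standard_normal((n_tilde, n_tilde)) + 1j * rng.standard_normal((n_tilde, n_tilde))
A0 = B @ B.conj().T + n_tilde * np.eye(n_tilde)

# An illustrative estimate of P_hat^{-1}: symmetric positive definite (n x n)
C = rng.standard_normal((n, n))
P_inv = C @ C.T + np.eye(n)

# A12, cf. (26): symmetric imaginary part (p_lm = p_ml), skew real part (p_tilde)
p = rng.standard_normal((n_tilde, n_tilde, n))
p = p + p.transpose(1, 0, 2)                 # p_lm = p_ml
pt = rng.standard_normal((n_tilde, n_tilde, n))
pt = pt - pt.transpose(1, 0, 2)              # p_tilde_lm = -p_tilde_ml, zero diagonal
A12 = np.zeros((n_tilde, n_tilde * n), dtype=complex)
for l in range(n_tilde):
    for mm in range(n_tilde):
        A12[l, mm * n:(mm + 1) * n] = 1j * p[l, mm] + pt[l, mm]

# A_tilde, cf. (28): symmetric block pattern, each block L_lm skew-symmetric
L = rng.standard_normal((n_tilde, n_tilde, n, n))
L = L - L.transpose(0, 1, 3, 2)              # blocks skew-symmetric
L = (L + L.transpose(1, 0, 2, 3)) / 2        # L_lm = L_ml
A_t = np.block([[L[l, mm] for mm in range(n_tilde)] for l in range(n_tilde)])

# B_tilde, cf. (29): zero diagonal blocks, K_lm above / -K_lm below, K_lm skew
K = rng.standard_normal((n_tilde, n_tilde, n, n))
K = K - K.transpose(0, 1, 3, 2)
B_t = np.zeros((n_tilde * n, n_tilde * n))
for l in range(n_tilde):
    for mm in range(l + 1, n_tilde):
        B_t[l*n:(l+1)*n, mm*n:(mm+1)*n] = K[l, mm]
        B_t[mm*n:(mm+1)*n, l*n:(l+1)*n] = -K[l, mm]

A22 = -(1 / chi) * np.kron(A0, P_inv) - 1j * A_t + B_t   # cf. (27)

# Verify (34): [I; m]^* A [I; m] = (1 - dtheta^T (P_inv/chi) dtheta) A0
dtheta = 0.1 * rng.standard_normal(n)
M = m_of(dtheta)
lhs = A0 + A12 @ M + M.T @ A12.conj().T + M.T @ A22 @ M
rhs = (1 - dtheta @ P_inv @ dtheta / chi) * A0
print(np.allclose(lhs, rhs))  # True
```

In other words, the j-symmetric and skew-symmetric parts contribute nothing to the quadratic form, which is exactly why such a rich multiplier set still satisfies (30) on the ellipsoid Ũ.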
PROOF. The worst case performance at the frequency ω can be computed by finding the smallest γ such that (14) holds. Consequently, this theorem is a direct consequence of Theorem 1.

Since Theorem 2 only delivers an upper bound for σWC(C(e^{jω}, θ̂N), D_θ̂N), it may be important to check how conservative this upper bound is. Due to the very general parametrization of the set of multipliers, this conservatism should be limited (see [17]). However, in order to have a good idea of the conservatism, we will, besides the upper bound, also compute a lower bound for σWC(C(e^{jω}, θ̂N), D_θ̂N). This can be done e.g. by gridding the uncertainty region U_θ̂N and taking, at each ω, the maximal value of ||J(G(e^{jω}, θ), C(e^{jω}, θ̂N))|| over the θ in this grid.

6 Experiment Design

The inverse of the asymptotic covariance matrix P(θo) decomposes into an excitation-dependent part and a noise-dependent part:

  P^{−1}(θo) = R1(Φr(ω), θo) + R2(θo).   (36)

Proposition 3 The matrices R1(Φr(ω), θo) and R2(θo) in (36) are given by

  R1(Φr(ω), θo) = (1/2π) ∫_{−π}^{π} Γr(e^{jω}, θo) (Λo^{−1} ⊗ Φr(e^{jω})) Γr*(e^{jω}, θo) dω,   (37)

  R2(θo) = (1/2π) ∫_{−π}^{π} Γe(e^{jω}, θo) (Λo^{−1} ⊗ Λo) Γe*(e^{jω}, θo) dω,   (38)

where

  Γr(q, θ) = [ vec(Fr1(q, θ)) ]
             [ vec(Fr2(q, θ)) ]
             [       ⋮        ]
             [ vec(Frn(q, θ)) ],   (39)

and Γe(q, θ) is defined analogously, with rows vec(Fei(q, θ)), i = 1, …, n.

The excitation spectrum Φr(ω) is parametrized as a finite expansion

  Φr(ω) = Σ_{k=−m}^{m} ck e^{−jωk},   (44)

where m is a user-choice and the matrices ck ∈ R^{nu×nu}, k = −m, …, m, must be such that c−k = ck^T
and such that Φr(ω) ≥ 0, ∀ω. Using the Positive Real Lemma [22,21], the constraint that Φr(ω) ≥ 0, ∀ω, can be recast into an LMI constraint on the matrices ck, k = −m, …, m (see [12,11,5]). The matrices ck, k = −m, …, m, in (44) entirely determine the spectrum Φr(ω) and are the actual decision variables of the experiment design problem. In this respect, it is important to note that any affine function of Φr(ω) (such as e.g. P^{−1}(θo)) is also an affine function of the frequency-independent matrices ck, k = −m, …, m.

In this experiment design problem, our objective is to determine the matrices ck, k = −m, …, m, in (44) in such a way that the corresponding spectrum Φr(ω) is the least disturbing spectrum 4 which nevertheless guarantees that the covariance matrix P(θo) is small enough for (45) to hold:

  ||J(G(e^{jω}, θ), C(e^{jω}, θ̂N))|| < γmax(ω),  ∀ω, ∀θ ∈ U.   (45)

In (45), γmax(ω) is a given threshold representing the minimal admissible performance. Furthermore, θ̂N and C(θ̂N) are the ones corresponding to the experiment we want to design and are thus unknown at the very moment we design the experiment. They are therefore replaced by initial estimates, cf. Section 3. Reasonable initial estimates (see e.g. [5]) are those obtained in the initial experiment 5. See also [1].

4 i.e. the one with smallest Tr ∫_{−π}^{π} (Φur(ω) + Φyr(ω)) dω, where ur and yr are defined in (6)-(7). This expression is affine in Φr(ω) and thus also in the matrices ck, k = −m, …, m, defining Φr(ω).
5 The parameter vector θ̂N identified in the first experiment is generally also used to approximate θo in (36).

The constraint (45) is an infinite-dimensional constraint since it must hold at each frequency. This constraint can nevertheless be made finite-dimensional by considering a finite frequency grid Ω instead of the whole frequency range. The corresponding optimization problem is thus made up of one constraint per frequency in the finite grid Ω (see [11]). Given the result of Theorem 1, this problem can be reformulated as: determine the matrices ck, k = −m, …, m, corresponding to the least costly spectrum for which we can still find, ∀ω ∈ Ω, a matrix A(ω) ∈ A for which it holds that:

  [ M(e^{jω}) ]* [ A11(ω)    0   A12(ω)    0            ] [ M(e^{jω}) ]
  [     I     ]  [ 0         I   0         0            ] [     I     ]  < 0.   (46)
                 [ A12*(ω)   0   A22(ω)    0            ]
                 [ 0         0   0    −γmax²(ω) I       ]

Here, we use the notation A11(ω) etc. to stress that these matrices can be different at each frequency (see Remark 2). Note also that, in the parametrization of the set of multipliers A, P̂N^{−1} is replaced by N P^{−1}(θo). The constraint (46) at each ω ∈ Ω depends on the decision variables ck, k = −m, …, m, via the parametrization of A(ω). Indeed, A22(ω) is a function of P^{−1}(θo) and thus of the matrices ck, k = −m, …, m, via (36) and (44). Even though P^{−1}(θo) is affine in the matrices ck, k = −m, …, m, defining Φr(ω) (see above), the constraint (46) is not an LMI due to the product of P^{−1}(θo) and the multiplier A0(ω), which is also a decision variable. The only solution is then to modify the set A of multipliers by fixing A0(ω) to a constant matrix, e.g. the identity matrix. For a fixed A0(ω), (46) is an LMI in the matrices ck, k = −m, …, m, and these matrices can therefore be easily determined. Fixing A0(ω) nevertheless increases conservatism since the set of multipliers is then less general, and it is furthermore very unlikely that the chosen A0(ω) is optimal for the considered optimization problem. In order to improve the choice of A0(ω), we propose the following iterative procedure inspired by the so-called D-K iterations [23]. In a first step, we determine the matrices ck, k = −m, …, m, with a fixed A0(ω) as already described. This delivers the matrices ck,int. In a second step, we then parametrize the matrices ck, k = −m, …, m, as ck = λ ck,int with λ a to-be-optimized scalar, and we determine by dichotomy the smallest λ for which we can find, at each frequency ω ∈ Ω, a matrix A(ω) ∈ A for which (46) holds, but this time with a free A0(ω) ((46) is indeed an LMI in A0 for a fixed λ, i.e. for fixed ck, k = −m, …, m). Denote, ∀ω ∈ Ω, by A0,int(ω) the value for A0(ω) found with this optimal λ. Next we repeat the first step with A0(ω) now fixed to A0,int(ω), etc. Note that multiplying the matrices ck,int by a scalar λ corresponds to multiplying the corresponding spectrum by λ, i.e. λΦr,int(ω).

Remark 5 In case the goal of the experiment design is to reduce the duration of the experiment, we can by dichotomy find the smallest N for which (46) holds ∀ω ∈ Ω, for a given Φr(ω).

7 Numerical Illustration

Consider the model in Example 1 with the true dynamics of the system defined by

  θo = (−0.7558, 0.4577, −0.2180, 0.2180, −0.4577)^T   (47)

and Λo = 0.2 I2. First a closed loop identification experiment with N = 1000 data is performed. The controller in the loop is Cid = Cstart where

  Cstart(q) = 1/(Z^T ηs,3) [  Z^T ηs,1   Z^T ηs,2 ]
                            [ −Z^T ηs,2  −Z^T ηs,1 ].   (48)

Here Z = (1 q^{−1} q^{−2} … q^{−6})^T and ηs,1, ηs,2, ηs,3 are given in Appendix B. Furthermore, since the controller Cstart is of sufficient order, it is so that the closed loop
identification can be performed consistently using the excitation of the noise e(t) exclusively, i.e. with r(t) = 0. 6 In this way a first parameter estimate θ̂N,1 is obtained.

Now the proposed controller validation procedure will be applied. The closed loop expression (12) is formed with a controller C1 = C1(θ̂N,1) designed from the identified model, given by

  C1(q) = 1/(Z^T η1,5) [ Z^T η1,1   Z^T η1,2 ]
                        [ Z^T η1,3   Z^T η1,4 ],   (51)

where η1,1, η1,2, …, η1,5 are given in Appendix B. The scalar χ in (8) is here chosen in such a way that θo belongs to U_θ̂N,1 with a probability of 95%. The largest acceptable worst case performance is specified as γmax(ω) = 1.05, ∀ω, and we will now use Theorem 2 to validate whether this requirement is fulfilled with the controller C1 or not. The computed upper bound of the worst case performance is plotted in Figure 2 for a frequency grid of 100 points. It is clearly larger than our specified γmax(ω). The controller C1 is therefore invalidated.

In order to get an idea of the conservatism of this upper bound, a lower bound is calculated using gridding and plotted in Figure 2. It is clear that the lower bound is close to the upper bound. Thus in this case the conservatism is small.

Since the controller C1 is deemed invalidated, we now perform experiment design for Φr(ω). The newly designed identification experiment will deliver a model G(θ̂N,2) and a model based controller C2 = C2(θ̂N,2) which guarantees the worst case performance γmax(ω). We still consider closed loop identification with Cid = Cstart in the loop. This controller Cstart is also used as initial estimate for C(θ̂N,2) in (45). The design is based on knowledge of θo, cf. Section 3. Furthermore, we have chosen m = 9 in (44). The choice of A0(ω) is improved through 2 D-K like iterations, which decreases the cost function by 32%. The spectrum Φr(ω) designed after these 2 iterations is shown in Figure 3. A new parameter estimate identified with 1000 data generated with the designed excitation signal r(t) 7 is given by:

  θ̂N,2 = (−0.7570, 0.4527, −0.2087, 0.2133, −0.4711)^T   (52)

and a controller C2 = C2(θ̂N,2) is designed (with the same H∞ method as in the first experiment) as

  C2(q) = 1/(Z^T η2,5) [ Z^T η2,1   Z^T η2,2 ]
                        [ Z^T η2,3   Z^T η2,4 ],   (53)

where η2,1, η2,2, …, η2,5 are given in Appendix B. In Figure 2 the resulting upper bound for the worst case performance achieved by C2 over the plants in the uncertainty region identified along with θ̂N,2 is plotted. For each ω, it is clearly equal to or below γmax(ω) and therefore C2 is validated. Also here, a lower bound for the worst case performance is calculated using gridding. It almost coincides with the upper bound.

6 We have used MATLAB Version 7.3.0.267 (R2006b) with Control System Toolbox.
7 There are many possible realizations corresponding to one spectrum, see e.g. [11]. Here the signal r(t) is realized as filtered white noise.

8 Conclusions

This paper presents a new controller validation method for multivariable models obtained with standard PE identification methods. The confidence regions of such models are nonstandard in classical robust control theory. The main contribution is a parametrization of a certain set of multipliers which allows a classical robust control theory framework to be used for the considered confidence regions. Therefore, this paper is part of the wide-spread effort to connect PE system identification and robust control. Only variance errors are considered.

Fig. 3. Solid line: singular values of Φr(ω). Dashed line: 1/WS.

A Proof of Proposition 3

Define Ψ(t, θ) = (d/dθ) ŷ(t, θ) and denote the columns of the matrix Ψ^T(t, θ) by Ψi^T(t, θ) ∈ R^{ny}, i = 1, …, n. With (6)-(7) in (5) we obtain

  Ψi^T(t, θo) = Fri(q, θo) r(t) + Fei(q, θo) e(t).   (A.1)

Now, the transpose of (A.1) can be written

  Ψi(t, θo) = vec(Fri(q, θo)) (Iny ⊗ r(t)) + vec(Fei(q, θo)) (Iny ⊗ e(t)),   (A.2)

which is the i:th row of Ψ(t, θo). This implies that

  Ψ(t, θo) = [ Ψ1(t, θo) ]
             [ Ψ2(t, θo) ]
             [     ⋮     ]
             [ Ψn(t, θo) ]  =  Γr(q, θo) (Iny ⊗ r(t)) + Γe(q, θo) (Iny ⊗ e(t)).   (A.3)

A similar open loop expression for Ψ(t, θo) is found in [24]. When N → ∞ and the model structure is flexible enough to capture the true system, we have

  P^{−1}(θo) = E Ψ(t, θo) Λo^{−1} Ψ^T(t, θo)   (A.4)

[13]. Expressions (37) and (38) are obtained by inserting (A.3) in (A.4), which concludes the proof.

B Controller Parameters

References

[1] M. Barenthin, X. Bombois, and H. Hjalmarsson. Mixed H∞ and H2 input design for multivariable systems. In Proceedings of the 14th IFAC Symposium on System Identification, Newcastle, Australia, March 2006.
[2] M. Barenthin and H. Hjalmarsson. Identification and control: Joint input design and H∞ state feedback with ellipsoidal parametric uncertainty via LMIs. Automatica, 44(2):543–551, February 2008.
[3] X. Bombois and P. Date. Connecting PE identification and robust control theory: the multiple-input single-output case. Part II: controller validation. In Proceedings of the 13th IFAC Symposium on System Identification, Rotterdam, The Netherlands, 2003.
[4] X. Bombois, M. Gevers, G. Scorletti, and B.D.O. Anderson. Robustness analysis tools for an uncertainty set obtained by prediction error identification. Automatica, 37(10):1629–1636, 2001.
[5] X. Bombois, G. Scorletti, M. Gevers, P. Van den Hof, and R. Hildebrand. Least costly identification experiment for control. Automatica, 42(10):1651–1662, October 2006.
[6] B.L. Cooley and J.H. Lee. Control-relevant experiment design for multivariable systems described by expansions in orthonormal bases. Automatica, 37:273–281, 2001.
[7] L. Gerencsér and H. Hjalmarsson. Adaptive input design in system identification. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, December 2005.
[8] K-C. Goh and M.G. Safonov. Robust analysis, sectors, and quadratic functionals. In Proceedings of the 34th IEEE Conference on Decision and Control, New Orleans, LA, USA, 1995.
[9] G.C. Goodwin and R.L. Payne. Dynamic System Identification: Experiment Design and Data Analysis. Academic Press, New York, 1977.
[10] R. Hildebrand and M. Gevers. Identification for control: Optimal input design with respect to a worst case ν-gap cost function. SIAM Journal on Control and Optimization, 41(5):1586–1608, 2003.
[24] Y.C. Zhu. Black-box identification of MIMO transfer functions: asymptotic properties of prediction error models. International Journal of Adaptive Control and Signal Processing, 3:357–373, 1989.