
Minimum Variance Control (Controlled Autoregressive Moving Average (CARMA) Model)

We start with the CARMA model


A(q^{-1}) y(t) = q^{-d} B(q^{-1}) u(t) + C(q^{-1}) e(t)   (1)

where

A(q^{-1}) = 1 + a_1 q^{-1} + ... + a_{na} q^{-na},   (2)
B(q^{-1}) = b_0 + b_1 q^{-1} + ... + b_{nb} q^{-nb},   (3)
C(q^{-1}) = 1 + c_1 q^{-1} + ... + c_{nc} q^{-nc}.   (4)


Assumptions:

1. e(t) is a sequence of independent random variables with variance σ_e^2.

2. There are no common factors in (A(q^{-1}), C(q^{-1})), or in (A(q^{-1}), B(q^{-1})).

3. C(q^{-1}) has all its zeros inside the unit circle.
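As an illustration (not part of the original notes), the CARMA model (1) can be simulated directly from its difference-equation form. A minimal sketch; the coefficient values are hypothetical, chosen to match the worked example later in these notes:

```python
import numpy as np

def simulate_carma(a, b, c, d, n=1000, sigma_e=1.0, u=None, seed=0):
    """Simulate A(q^-1) y(t) = q^-d B(q^-1) u(t) + C(q^-1) e(t).

    a: [a1, ..., a_na], b: [b0, ..., b_nb], c: [c1, ..., c_nc]; d: time delay.
    """
    rng = np.random.default_rng(seed)
    e = sigma_e * rng.standard_normal(n)
    if u is None:
        u = np.zeros(n)                      # open loop: no control input
    y = np.zeros(n)
    for t in range(n):
        y[t] = e[t]
        for i, ai in enumerate(a, 1):        # -a1 y(t-1) - a2 y(t-2) - ...
            if t - i >= 0:
                y[t] -= ai * y[t - i]
        for j, bj in enumerate(b):           # b0 u(t-d) + b1 u(t-d-1) + ...
            if t - d - j >= 0:
                y[t] += bj * u[t - d - j]
        for k, ck in enumerate(c, 1):        # c1 e(t-1) + ...
            if t - k >= 0:
                y[t] += ck * e[t - k]
    return y, e

# Hypothetical coefficients (same numbers as the worked example below)
y, e = simulate_carma(a=[-0.5, -0.1], b=[1.0], c=[0.2], d=2)
print(y.var())
```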

Or, equivalently,

y(t + d) = [B(q^{-1}) / A(q^{-1})] u(t) + [C(q^{-1}) / A(q^{-1})] e(t + d)   (5)

and our aim is to choose the control u(t) to minimize the output variance

J = E{y^2(t + d)}   (6)

where d is the time delay.

Main result: To minimize the output variance J = E{y^2(t + d)}, the controller takes the form

u(t) = - (G / (B F)) y(t)   (7)

with polynomials of the form

F(q^{-1}) = 1 + f_1 q^{-1} + f_2 q^{-2} + ... + f_{d-1} q^{-(d-1)}
G(q^{-1}) = g_0 + g_1 q^{-1} + g_2 q^{-2} + ... + g_{ng} q^{-ng},   ng = max(na - 1, nc - d)   (8)

satisfying the polynomial identity

C = A F + q^{-d} G   (9)
Proof: Substituting the identity (9) into (5) gives

y(t + d) = [B(q^{-1}) / A(q^{-1})] u(t) + [G(q^{-1}) / A(q^{-1})] e(t) + F e(t + d)   (10)

At time t, the present and past noise terms e(t), e(t - 1), ... are given by the system equation (Eq. (1)):

e(t) = [A(q^{-1}) / C(q^{-1})] y(t) - q^{-d} [B(q^{-1}) / C(q^{-1})] u(t)   (11)
Substituting this into (10), and applying the
identity
C - q^{-d} G = A F   (12)
gives

y(t + d) = [B(q^{-1}) / A(q^{-1})] u(t)
           + [G(q^{-1}) / A(q^{-1})] { [A(q^{-1}) / C(q^{-1})] y(t) - q^{-d} [B(q^{-1}) / C(q^{-1})] u(t) } + F e(t + d)

         = { B(q^{-1}) / A(q^{-1}) - q^{-d} B(q^{-1}) G(q^{-1}) / [A(q^{-1}) C(q^{-1})] } u(t)
           + [G(q^{-1}) / C(q^{-1})] y(t) + F e(t + d)

         = { B(q^{-1}) [C(q^{-1}) - q^{-d} G(q^{-1})] / [A(q^{-1}) C(q^{-1})] } u(t)
           + [G(q^{-1}) / C(q^{-1})] y(t) + F e(t + d)

         = { B(q^{-1}) A(q^{-1}) F(q^{-1}) / [A(q^{-1}) C(q^{-1})] } u(t)
           + [G(q^{-1}) / C(q^{-1})] y(t) + F e(t + d)

         = (B F / C) u(t) + (G / C) y(t) + F e(t + d)   (13)

Continuing ...

y(t + d) = [ (B F / C) u(t) + (G / C) y(t) ] + F e(t + d)   (14)

where the bracketed term is ŷ(t + d | t), the d-step-ahead prediction of y(t + d) based on the data up to time t.

F e(t + d) = y(t + d) - ŷ(t + d | t)
           = e(t + d) + f_1 e(t + d - 1) + ... + f_{d-1} e(t + 1)   (15)

is the output prediction error arising from the (unknown) noise sources e(t + 1), e(t + 2), ..., e(t + d). This term cannot be controlled by using u(t).

The term

(B F / C) u(t) + (G / C) y(t)

depends on input/output information up to time t, and is controllable by using u(t).
The objective function, the variance of y(t + d), is

J = E{y^2(t + d)} = E{[ŷ(t + d | t) + F e(t + d)]^2}
  = E{ŷ(t + d | t)^2} + E{[F e(t + d)]^2} + 2 E{ŷ(t + d | t) F e(t + d)}
  = E{ŷ(t + d | t)^2} + σ_e^2 (1 + f_1^2 + ... + f_{d-1}^2)   (16)

(NB: The assumption that e(t) is an independent random sequence is essential here, so that the cross term E{ŷ(t + d | t) F e(t + d)} vanishes.)

Minimizing J means that we choose u(t) such that

(B F / C) u(t) + (G / C) y(t) = 0   (17)

This gives us the minimum variance controller

u(t) = - (G / (B F)) y(t)   (18)

Calculation of the Optimal Predictors

The polynomials F and G can be determined by polynomial division; an explicit formula for their coefficients can also be given. Equating the coefficients of equal powers of q^{-1} in

C = A F + q^{-d} G   (19)

gives the following equations:

c_1 = a_1 + f_1
c_2 = a_2 + a_1 f_1 + f_2
...
c_{d-1} = a_{d-1} + a_{d-2} f_1 + ... + a_1 f_{d-2} + f_{d-1}
c_d = a_d + a_{d-1} f_1 + ... + a_1 f_{d-1} + g_0
c_{d+1} = a_{d+1} + a_d f_1 + ... + a_2 f_{d-1} + g_1
...
c_{ng} = a_{ng} + a_{ng-1} f_1 + ... + a_{ng-d+1} f_{d-1} + g_{ng-d}
0 = a_{ng} f_1 + a_{ng-1} f_2 + ... + a_2 f_{ng-1} + g_{ng-d+1}
...
0 = a_{ng+1} f_{d-1} + g_{ng}   (20)

These equations are easy to solve recursively.
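The recursion above can be sketched in code. This is a minimal sketch, not part of the original notes; it follows the coefficient-matching convention of (19)-(20), with out-of-range coefficients treated as zero, and the function name is illustrative.

```python
def solve_diophantine(a, c, d):
    """Solve C = A F + q^-d G for F (monic, degree d-1) and G.

    a: [1, a1, ..., a_na]; c: [1, c1, ..., c_nc]; returns (f, g) coefficient lists.
    """
    na, nc = len(a) - 1, len(c) - 1
    ng = max(na - 1, nc - d)

    def coef(p, k):
        # Coefficient of q^-k, zero outside the polynomial's degree
        return p[k] if 0 <= k < len(p) else 0.0

    # F by matching powers q^-1 .. q^-(d-1): c_k = f_k + sum_i a_i f_{k-i}
    f = [1.0]
    for k in range(1, d):
        f.append(coef(c, k) - sum(coef(a, i) * f[k - i] for i in range(1, k + 1)))

    # G by matching powers q^-d onward: g_{k-d} = c_k - sum_m a_{k-m} f_m
    g = []
    for k in range(d, d + ng + 1):
        g.append(coef(c, k) - sum(coef(a, k - m) * f[m] for m in range(d)))
    return f, g

f, g = solve_diophantine([1, -0.5, -0.1], [1, 0.2], d=2)
print(f, g)   # expect approximately F = 1 + 0.7 q^-1, G = 0.45 + 0.07 q^-1
```

Running it on the example system from the next page reproduces the values obtained there by hand.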
Example: Assume the system is

y(t) = -a_1 y(t - 1) - a_2 y(t - 2) + b_0 u(t - 2) + c_1 e(t - 1) + e(t)   (21)

Find the minimum variance control that minimizes E{y^2(t + 2)}.

Parameter estimation:

y(t) = θ̂^T φ(t) + e(t)   (22)

where

θ̂ = [-â_1, -â_2, b̂_0, ĉ_1]^T   (23)

φ(t) = [y(t - 1), y(t - 2), u(t - 2), e(t - 1)]^T   (24)

(NB: Use the RLS algorithm with a pseudo-linear model for the parameter estimates.)
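A minimal sketch of such a pseudo-linear (extended) recursive least squares estimator, assuming the regressor structure (24) and using the posterior residual ê(t) in place of the unmeasured e(t). The simulated system and all variable names are illustrative, using the coefficient values of the example below:

```python
import numpy as np

def els_step(theta, P, phi, y_t, lam=1.0):
    """One recursive-least-squares update (pseudo-linear / extended form)."""
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    eps = y_t - theta @ phi                  # prior prediction error
    theta = theta + K * eps
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    return theta, P, eps

# Simulate the system (21) with a1 = -0.5, a2 = -0.1, b0 = 1, c1 = 0.2
rng = np.random.default_rng(1)
n = 2000
e = rng.standard_normal(n)
u = rng.standard_normal(n)                   # open-loop excitation for identification
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5*y[t-1] + 0.1*y[t-2] + u[t-2] + 0.2*e[t-1] + e[t]

theta = np.zeros(4)                          # estimate of [-a1, -a2, b0, c1]
P = 1e3 * np.eye(4)
e_hat = np.zeros(n)                          # posterior residuals standing in for e(t)
for t in range(2, n):
    phi = np.array([y[t-1], y[t-2], u[t-2], e_hat[t-1]])
    theta, P, _ = els_step(theta, P, phi, y[t])
    e_hat[t] = y[t] - theta @ phi
print(theta)                                 # should approach [0.5, 0.1, 1.0, 0.2]
```

Feeding ê(t - 1) back into the regressor is what makes the model "pseudo-linear": the regressor itself depends on the current parameter estimate.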

If

â_1 = -0.5,  â_2 = -0.1,  b̂_0 = 1,  ĉ_1 = 0.2,   (25)

then

A(q^{-1}) = 1 - 0.5 q^{-1} - 0.1 q^{-2}
B(q^{-1}) = 1
C(q^{-1}) = 1 + 0.2 q^{-1}
d = 2,  ng = 1.   (26)
Substituting this into

C = A F + q^{-d} G   (27)

gives

1 + 0.2 q^{-1} = (1 - 0.5 q^{-1} - 0.1 q^{-2})(1 + f_1 q^{-1}) + q^{-2} (g_0 + g_1 q^{-1})   (28)

Equivalently, matching the coefficients of q^{-1}, q^{-2}, and q^{-3}:

0.2 = -0.5 + f_1
0 = -0.1 - 0.5 f_1 + g_0
0 = -0.1 f_1 + g_1   (29)
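The solution can be checked numerically by expanding A·F + q^{-2}·G as a polynomial product and comparing with C. A small sketch (not in the original notes):

```python
import numpy as np

# Check identity (27)-(29): expand A*F + q^-2 * G and compare with C.
A = np.array([1.0, -0.5, -0.1])          # 1 - 0.5 q^-1 - 0.1 q^-2
F = np.array([1.0, 0.7])                 # 1 + 0.7 q^-1
G = np.array([0.45, 0.07])               # 0.45 + 0.07 q^-1
AF = np.convolve(A, F)                   # polynomial product: coeffs of q^0 .. q^-3
q2G = np.concatenate([np.zeros(2), G])   # shift G by q^-2
C = AF + q2G
print(C)                                 # ≈ [1, 0.2, 0, 0], i.e. C = 1 + 0.2 q^-1
```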

This gives f_1 = 0.7, g_0 = 0.45, g_1 = 0.07. The minimum variance controller is

u(t) = - (G / (B F)) y(t) = - [(0.45 + 0.07 q^{-1}) / (1 + 0.7 q^{-1})] y(t)   (30)

or, equivalently, the control law

u(t) = -0.45 y(t) - 0.07 y(t - 1) - 0.7 u(t - 1)   (31)
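Closing the loop with (31) should leave y(t + 2) = F e(t + 2), so by (16) the output variance should approach σ_e^2 (1 + f_1^2) = 1.49 for σ_e = 1. A simulation sketch (illustrative, not from the original notes):

```python
import numpy as np

# Run the control law (31) on the system (21) with sigma_e = 1;
# the theoretical minimum output variance (16) is 1 + 0.7^2 = 1.49.
rng = np.random.default_rng(2)
n = 20000
e = rng.standard_normal(n)
y = np.zeros(n)
u = np.zeros(n)
for t in range(n):
    # System: y(t) = 0.5 y(t-1) + 0.1 y(t-2) + u(t-2) + 0.2 e(t-1) + e(t)
    y[t] = e[t]
    if t >= 1:
        y[t] += 0.5*y[t-1] + 0.2*e[t-1]
    if t >= 2:
        y[t] += 0.1*y[t-2] + u[t-2]
    # Minimum variance control law (31)
    u[t] = -0.45*y[t]
    if t >= 1:
        u[t] += -0.07*y[t-1] - 0.7*u[t-1]
print(y[100:].var())                     # should be close to 1.49
```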
[Figure: Performance of MVC for the above example; output y plotted against t (0 to 1000) for the open loop and for MVC. The horizontal lines indicate the levels of the mean square errors.]
Minimum variance control algorithm (a summary)

1. Measure the current output y(t).

2. Recall past y's and u's and form the data vector φ(t).

3. Predict the estimated output ŷ(t) from φ(t) and θ̂ (the model).

4. Recalculate the new parameter estimate θ̂ from the estimator, which gives new Â, B̂, Ĉ.

5. Calculate u(t) from the control law.

6. Steps 1-5 are repeated at each sampling period.
