Automatica, Vol. 23, No. 2, pp. 137-148, 1987    0005-1098/87 $3.00+0.00
Printed in Great Britain.    Pergamon Journals Ltd.
© 1987 International Federation of Automatic Control

Generalized Predictive Control - Part I. The Basic Algorithm*

D. W. CLARKE†, C. MOHTADI† and P. S. TUFFS‡

A new member of the family of long-range predictive controllers is shown to be suitable for the adaptive control of processes with varying parameters, dead-time and model order.

Key Words - Adaptive control; predictive control; LQ control; self-tuning control; nonminimum-phase plant.
Abstract - Current self-tuning algorithms lack robustness to prior choices of either dead-time or model order. A novel method - generalized predictive control or GPC - is developed which is shown by simulation studies to be superior to accepted techniques such as generalized minimum-variance and pole-placement. This receding-horizon method depends on predicting the plant's output over several steps based on assumptions about future control actions. One assumption - that there is a "control horizon" beyond which all control increments become zero - is shown to be beneficial both in terms of robustness and for providing simplified calculations. Choosing particular values of the output and control horizons produces as subsets of the method various useful algorithms such as GMV, EPSAC, Peterka's predictive controller (1984, Automatica, 20, 39-50) and Ydstie's extended-horizon design (1984, IFAC 9th World Congress, Budapest, Hungary). Hence GPC can be used either to control a "simple" plant (e.g. open-loop stable) with little prior knowledge or a more complex plant such as one which is nonminimum-phase, open-loop unstable and has variable dead-time. In particular GPC seems to be unaffected (unlike pole-placement strategies) if the plant model is overparameterized. Furthermore, as offsets are eliminated as a consequence of assuming a CARIMA plant model, GPC is a contender for general self-tuning applications. This is verified by a comparative simulation study.

* Received 2 March 1985; revised 7 July 1985; revised 3 March 1986; revised 22 September 1986. The original version of this paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor M. Gevers under the direction of Editor P. C. Parks.
† Department of Engineering Science, Parks Road, Oxford OX1 3PJ, U.K.
‡ Aluminum Company of America, Alcoa Technical Center, Pittsburgh, PA 15069, U.S.A.

1. INTRODUCTION

ALTHOUGH self-tuning and adaptive control have made much progress over the previous decade, both in terms of theoretical understanding and practical applications, no one method proposed so far is suitable as a "general purpose" algorithm for the stable control of the majority of real processes. To be considered for this role a method must be applicable to:
(1) a nonminimum-phase plant: most continuous-time transfer functions tend to exhibit discrete-time zeros outside the unit circle when sampled at a fast enough rate (see Clarke, 1984);
(2) an open-loop unstable plant or a plant with badly-damped poles, such as a flexible spacecraft or robots;
(3) a plant with variable or unknown dead-time: some methods (e.g. minimum-variance self-tuners, Åström and Wittenmark, 1973) are highly sensitive to the assumptions made about the dead-time, and approaches (e.g. Kurz and Goedecke, 1981) which attempt to estimate the dead-time using operating data tend to be complex and lack robustness;
(4) a plant with unknown order: pole-placement and LQG self-tuners perform badly if the order of the plant is overestimated, because of pole/zero cancellations in the identified model, unless special precautions are taken.

The method described in this paper - Generalized Predictive Control or GPC - appears to overcome these problems in one algorithm. It is capable of stable control of processes with variable parameters, with variable dead-time, and with a model order which changes instantaneously, provided that the input/output data are sufficiently rich to allow reasonable plant identification. It is effective with a plant which is simultaneously nonminimum-phase and open-loop unstable and whose model is overparameterized by the estimation scheme, without special precautions being taken. Hence it is suited to high-performance applications such as the control of flexible systems.

Hitherto the principal applied self-tuning methods have been based on the "Generalized Minimum-Variance" approach (Clarke and Gawthrop, 1975, 1979) and the pole-placement algorithm (Wellstead et al., 1979; Åström and Wittenmark, 1980).


The implicit GMV self-tuner (so-called because the controller parameters are directly estimated without an intermediate calculation) is robust against model order assumptions but can perform badly if the plant dead-time varies. The explicit pole-placement method, in which a Diophantine equation is numerically solved as a bridging step between plant identification and the control calculation, can cope with variable dead-time but not with a model with overspecified order. Its good behaviour with variable dead-time is due to overparameterization of the numerator dynamics B(q^-1); this means that the order of the denominator dynamics has to be chosen with great care to avoid singularities in the resolution of the Diophantine identity. The LQG approach, being based on an explicit plant formulation, can deal with variable dead-time, but as it is a predictive method it can also cope with overparameterization.

The type of plant that a self-tuner is expected to control varies widely. On the one hand, many industrial processes have "simple" models: low order with real poles and probably with dead-time. For some critical loops, however, the model might be more complex, such as open-loop unstable, with underdamped poles or multiple integrators. It will be shown that GPC has a readily-understandable default operation which can be used for a simple plant without needing the detailed prior design of many adaptive methods. Moreover, at a slight increase of computational time more complex processes can be accommodated by GPC within the basic framework.

All industrial plants are subjected to load-disturbances which tend to be in the form of random steps at random times in the deterministic case or of Brownian motion in stochastic systems. To achieve offset-free closed-loop behaviour given these disturbances the controller must possess inherent integral action. It is seen that GPC adopts an integrator as a natural consequence of its assumption about the basic plant model, unlike the majority of designs where integrators are added in an ad hoc way.

2. THE CARIMA PLANT MODEL AND OUTPUT PREDICTION

When considering regulation about a particular operating point, even a non-linear plant generally admits a locally-linearized model:

    A(q^-1)y(t) = B(q^-1)u(t - 1) + x(t)    (1)

where A and B are polynomials in the backward shift operator q^-1:

    A(q^-1) = 1 + a_1 q^-1 + ... + a_na q^-na
    B(q^-1) = b_0 + b_1 q^-1 + ... + b_nb q^-nb.

If the plant has a non-zero dead-time the leading elements of the polynomial B(q^-1) are zero. In (1), u(t) is the control input, y(t) is the measured variable or output, and x(t) is a disturbance term.

In the literature x(t) has been considered to be of moving-average form:

    x(t) = C(q^-1)ξ(t)    (2)

where C(q^-1) = 1 + c_1 q^-1 + ... + c_nc q^-nc. In this equation ξ(t) is an uncorrelated random sequence, and combining with (1) we obtain the CARMA (Controlled Auto-Regressive and Moving-Average) model:

    A(q^-1)y(t) = B(q^-1)u(t - 1) + C(q^-1)ξ(t).    (3)

Though much self-tuning theory is based on this model, it seems to be inappropriate for many industrial applications in which disturbances are non-stationary. In practice, two principal disturbances are encountered: random steps occurring at random times (for example, changes in material quality) and Brownian motion (found in plants relying on energy balance). In both these cases an appropriate model is:

    x(t) = C(q^-1)ξ(t)/Δ    (4)

where Δ is the differencing operator 1 - q^-1. Coupled with (1) this gives the CARIMA model (integrated moving-average):

    A(q^-1)y(t) = B(q^-1)u(t - 1) + C(q^-1)ξ(t)/Δ.

This model has been used by Tuffs and Clarke (1985) to derive GMV and pole-placement self-tuners with inherent integral action. For simplicity in the development here C(q^-1) is chosen to be 1 (alternatively C^-1 is truncated and absorbed into the A and B polynomials; see Part II for the case of general C) to give the model:

    A(q^-1)y(t) = B(q^-1)u(t - 1) + ξ(t)/Δ.    (5)

To derive a j-step ahead predictor of y(t + j) based on (5) consider the identity:

    1 = E_j(q^-1)AΔ + q^-j F_j(q^-1)    (6)

where E_j and F_j are polynomials uniquely defined given A(q^-1) and the prediction interval j.
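The identity (6) determines E_j and F_j by dividing 1 by AΔ until the remainder contains the factor q^-j. A minimal numerical sketch (the function name and array layout are illustrative, not from the paper; coefficients are stored in ascending powers of q^-1):

```python
import numpy as np

def diophantine(a_tilde, j):
    """Solve 1 = E_j(q^-1) * a_tilde + q^-j * F_j(q^-1) for the monic
    polynomial a_tilde = A * (1 - q^-1), by j steps of long division."""
    n = len(a_tilde)
    rem = np.zeros(j + n)
    rem[0] = 1.0                      # the "1" being divided
    E = np.zeros(j)
    for i in range(j):
        E[i] = rem[i]                 # a_tilde is monic, so no division needed
        rem[i:i + n] -= E[i] * np.asarray(a_tilde, dtype=float)
    return E, rem[j:j + n - 1]        # F_j has degree equal to deg(A)
```

For example, with A = 1 - 0.8q^-1 (so AΔ = 1 - 1.8q^-1 + 0.8q^-2) and j = 1, this returns E_1 = [1] and F_1 = [1.8, -0.8], consistent with the initialization E_1 = 1, F_1 = q(1 - AΔ) used in Section 2.1.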

If (5) is multiplied by E_j Δ q^j we have:

    E_j A Δ y(t + j) = E_j B Δ u(t + j - 1) + E_j ξ(t + j)

and substituting for E_j A Δ from (6) gives:

    y(t + j) = E_j B Δ u(t + j - 1) + F_j y(t) + E_j ξ(t + j).    (7)

As E_j(q^-1) is of degree j - 1 the noise components are all in the future, so that the optimal predictor, given measured output data up to time t and any given u(t + i) for i > 1, is clearly:

    ŷ(t + j | t) = G_j Δu(t + j - 1) + F_j y(t)    (8)

where G_j(q^-1) = E_j B. Note that G_j(q^-1) = B(q^-1)[1 - q^-j F_j(q^-1)]/A(q^-1)Δ, so that one way of computing G_j is simply to consider the Z-transform of the plant's step-response and to take the first j terms (Clarke and Zhang, 1985).

In the development of the GMV self-tuning controller only one prediction ŷ(t + k | t) is used, where k is the assumed value of the plant's dead-time. Here we consider a whole set of predictions for which j runs from a minimum up to a large value: these are termed the minimum and maximum "prediction horizons". For j < k the prediction process ŷ(t + j | t) depends entirely on available data, but for larger j assumptions need to be made about future control actions: these assumptions are the cornerstone of the GPC approach.

2.1. Recursion of the Diophantine equation
One way to implement long-range prediction is to have a bank of self-tuning predictors, one for each horizon j; this is the approach of De Keyser and Van Cauwenberghe (1982, 1983) and of the MUSMAR method (Mosca et al., 1984). Alternatively, (6) can be resolved numerically for E_j and F_j for the whole range of js being considered. Both these methods are computationally expensive. Instead, a simpler and more effective scheme is to use recursion of the Diophantine equation, so that the polynomials E_{j+1} and F_{j+1} are obtained given the values of E_j and F_j.

Suppose for clarity of notation E = E_j, R = E_{j+1}, F = F_j, S = F_{j+1}, and consider the two Diophantine equations with Ã defined as AΔ:

    1 = EÃ + q^-j F    (9)
    1 = RÃ + q^-(j+1) S.    (10)

Subtracting (9) from (10) gives:

    0 = Ã(R - E) + q^-j(q^-1 S - F).

The polynomial R - E is of degree j and may be split into two parts:

    R - E = R̃ + r_j q^-j

so that:

    ÃR̃ + q^-j(q^-1 S - F + Ãr_j) = 0.

Clearly then R̃ = 0, and also S is given by S = q(F - Ãr_j). As Ã has a unit leading element we have:

    r_j = f_0    (11a)
    s_i = f_{i+1} - ã_{i+1} r_j    (11b)

for i = 0 to the degree of S(q^-1); and:

    R(q^-1) = E(q^-1) + q^-j r_j    (12)
    G_{j+1} = B(q^-1)R(q^-1).    (13)

Hence, given the plant polynomials A(q^-1) and B(q^-1) and one solution E_j(q^-1) and F_j(q^-1), (11) can be used to obtain F_{j+1}(q^-1) and (12) to give E_{j+1}(q^-1), and so on, with little computational effort. To initialize the iterations note that for j = 1:

    1 = E_1 Ã + q^-1 F_1

and as the leading element of Ã is 1:

    E_1 = 1,  F_1 = q(1 - Ã).

The calculations involved, therefore, are straightforward and simpler than those required when using a separate predictor for each output horizon.

3. THE PREDICTIVE CONTROL LAW

Suppose a future set-point or reference sequence [w(t + j); j = 1, 2, ...] is available. In most cases w(t + j) will be a constant w equal to the current set-point w(t), though sometimes (as in batch process control or robotics) future variations in w(t + j) would be known.
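The Diophantine recursion (11)-(13) of Section 2.1 can be sketched numerically as follows (array layout and names are illustrative; coefficients are in ascending powers of q^-1, with A monic):

```python
import numpy as np

def gpc_predictors(a, b, N):
    """Build (E_j, F_j, G_j) for j = 1..N using the recursion (11)-(13),
    starting from E_1 = 1 and F_1 = q(1 - a_tilde), a_tilde = A * (1 - q^-1)."""
    a_t = np.convolve(a, [1.0, -1.0])            # a_tilde = A * delta
    E = np.array([1.0])
    F = -a_t[1:]                                 # F_1 = q(1 - a_tilde)
    out = []
    for j in range(1, N + 1):
        out.append((E.copy(), F.copy(), np.convolve(E, b)))  # G_j = E_j * B
        r = F[0]                                 # (11a): r_j = f_0
        F = np.append(F[1:], 0.0) - a_t[1:] * r  # (11b): s_i = f_{i+1} - a~_{i+1} r_j
        E = np.append(E, r)                      # (12): E_{j+1} = E_j + r_j q^-j
    return out
```

Each G_j = E_j B then begins with the plant's step-response parameters g_0, ..., g_{j-1}, which is what the matrix G of Section 3 is assembled from.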


FIG. 1. Set-point, control and outputs in GPC.
As in the IDCOM algorithm (Richalet et al., 1978) it might be considered that a smoothed approach from the current output y(t) to w is required, which is obtainable from the simple first-order lag model:

    w(t) = y(t)
    w(t + j) = αw(t + j - 1) + (1 - α)w,  j = 1, 2, ...

where α near 1 gives a slow transition from the current measured variable to the real set-point w. GPC is capable of considering both constant and varying future set-points.

The objective of the predictive control law is then to drive future plant outputs y(t + j) "close" to w(t + j) in some sense, as shown in Fig. 1, bearing in mind the control activity required to do so. This is done using a receding-horizon approach for which at each sample-instant t:
(1) the future set-point sequence w(t + j) is calculated;
(2) the prediction model of (8) is used to generate a set of predicted outputs ŷ(t + j | t) with corresponding predicted system errors e(t + j) = w(t + j) - ŷ(t + j | t), noting that ŷ(t + j | t) for j > k depends in part on future control signals u(t + i) which are yet to be determined;
(3) some appropriate quadratic function of the future errors and controls is minimized, assuming that after some "control horizon" further increments in control are zero, to provide a suggested sequence of future controls u(t + j);
(4) the first element u(t) of the sequence is asserted and the appropriate data vectors shifted so that the calculations can be repeated at the next sample instant.

Note that the effective control law is stationary, unlike a fixed-horizon LQ policy. However, in the self-tuned case new estimates of the plant model parameters require new values for the controller polynomials, which means that the fast Diophantine recursion is useful for adaptive control applications of the GPC method.

Consider a cost function of the form:

    J(N1, N2) = E{ Σ_{j=N1}^{N2} [y(t + j) - w(t + j)]^2 + Σ_{j=1}^{N2} λ(j)[Δu(t + j - 1)]^2 }    (14)

where:
    N1 is the minimum costing horizon;
    N2 is the maximum costing horizon, and
    λ(j) is a control-weighting sequence.

The expectation in (14) is conditioned on data up to time t, assuming no future measurements are available (i.e. the set of control signals is applied in open-loop in the sequel). As mentioned earlier, the first control is applied and the minimization is repeated at the next sample. The resulting control law belongs to the class known as Open-Loop-Feedback-Optimal controllers; Appendix A examines the relation between GPC and Closed-Loop-Feedback-Optimal control when the disturbance process is autoregressive. It is seen that costing on the control is over all future inputs which affect the outputs included in J. In general N2 is chosen to encompass all the response which is significantly affected by the current control; it is reasonable that it should at least be greater than the degree of B(q^-1), as then all states contribute to the cost (Kailath, 1980), but more typically N2 is set to approximate the rise-time of the plant.
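For a given candidate control sequence, the bracketed quantity in (14) is a finite sum and can be evaluated directly; a minimal sketch of its deterministic value (argument layout is illustrative):

```python
def gpc_cost(y_pred, w, du, n1, n2, lam):
    """Deterministic value of the cost (14): y_pred[j-1] and w[j-1] hold
    y(t+j) and w(t+j) for j = 1..N2, du[j-1] holds Delta-u(t+j-1), and
    lam[j-1] is the control weighting lambda(j)."""
    tracking = sum((y_pred[j - 1] - w[j - 1]) ** 2 for j in range(n1, n2 + 1))
    effort = sum(lam[j - 1] * du[j - 1] ** 2 for j in range(1, n2 + 1))
    return tracking + effort
```

Note that the tracking term starts at j = N1 while the control-effort term runs over all N2 increments, mirroring the two summations in (14).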


N1 can often be taken as 1; if it is known a priori that the dead-time of the plant is at least k sample-intervals then N1 can be chosen as k or more to minimize computations. It is found, however, that a large class of plant models can be stabilized by GPC with default values of 1 and 10 for N1 and N2. Part II provides a theoretical justification for these choices of horizon. For simplicity in the derivation below, λ(j) is set to the constant λ, N1 to 1 and N2 to N: the "output horizon".

Recall that (7) models the future outputs:

    y(t + 1) = G_1 Δu(t) + F_1 y(t) + E_1 ξ(t + 1)
    y(t + 2) = G_2 Δu(t + 1) + F_2 y(t) + E_2 ξ(t + 2)
    ...
    y(t + N) = G_N Δu(t + N - 1) + F_N y(t) + E_N ξ(t + N).

Consider y(t + j). It consists of three terms: one depending on future control actions yet to be determined, one depending on past known controls together with filtered measured variables, and one depending on future noise signals. The assumption that the controls are to be performed in open-loop is tantamount to ignoring the future noise sequence {ξ(t + j)} in calculating the predictions. Let f(t + j) be that component of ŷ(t + j) composed of signals which are known at time t, so that for example:

    f(t + 1) = [G_1(q^-1) - g_10]Δu(t) + F_1 y(t), and
    f(t + 2) = q[G_2(q^-1) - q^-1 g_21 - g_20]Δu(t) + F_2 y(t), etc.

where G_j(q^-1) = g_j0 + g_j1 q^-1 + ...

Then the equations above can be written in the key vector form:

    ŷ = Gu + f    (15)

where the vectors are all N x 1:

    ŷ = [ŷ(t + 1), ŷ(t + 2), ..., ŷ(t + N)]^T
    u = [Δu(t), Δu(t + 1), ..., Δu(t + N - 1)]^T
    f = [f(t + 1), f(t + 2), ..., f(t + N)]^T.

As indicated earlier, the first j terms in G_j(q^-1) are the parameters of the step-response, and therefore g_ji = g_i for i = 0, 1, 2, ... < j, independent of the particular G polynomial. The matrix G is then lower-triangular of dimension N x N:

    G = | g_0      0        ...  0   |
        | g_1      g_0      ...  0   |
        | ...                        |
        | g_{N-1}  g_{N-2}  ...  g_0 |

Note that if the plant dead-time is k > 1 the first k - 1 rows of G will be null, but if instead N1 is assumed to be equal to k the leading element is non-zero. However, as k will not in general be known in the self-tuning case, one key feature of the GPC approach is that a stable solution is possible even if the leading rows of G are zero.

From the definitions of the vectors above and with:

    w = [w(t + 1), w(t + 2), ..., w(t + N)]^T

the expectation of the cost-function of (14) can be written:

    J1 = E{J(1, N)} = E{(y - w)^T(y - w) + λu^T u}    (16)

i.e.

    J1 = (Gu + f - w)^T(Gu + f - w) + λu^T u.

The minimization of J1, assuming no constraints on future controls, results in the projected control-increment vector:

    u = (G^T G + λI)^-1 G^T (w - f).    (17)

Note that the first element of u is Δu(t), so that the current control u(t) is given by:

    u(t) = u(t - 1) + ḡ^T (w - f)    (18)

where ḡ^T is the first row of (G^T G + λI)^-1 G^T. Hence the control includes integral action, which provides zero offset provided that for a constant set-point w(t + i) = w, say, the vector f involves a unit steady-state gain in the feedback path.

Now the Diophantine equation (6) for q = 1 gives

    1 = E_j(1)A(1)Δ(1) + F_j(1)

and as Δ(1) = 0 then F_j(1) = 1, so that f(t + j) = F_j y(t) is a signal whose mean value equals that of y(t). Furthermore, defining F̃_j(q^-1) to be E_j(q^-1)A(q^-1) gives:

    F_j(q^-1)y(t) = (1 - F̃_j(q^-1)Δ)y(t) = y(t) - F̃_j Δy(t)

which shows that if y(t) is the constant y, so that Δy(t) = 0, then the component F_j(q^-1)y(t) reduces to y. This, together with the control given by (18), ensures offset-free behaviour by integral action.

3.1. The control horizon
The dimension of the matrix involved in (17) is N x N. Although in the non-adaptive case the inversion need be performed once only, in a self-tuning version the computational load of inverting at each sample would be excessive. Moreover, if the wrong value for dead-time is assumed, G^T G is singular, and hence a finite non-zero value of the weighting λ would be required for a realizable control law, which is inconvenient because the "correct" value for λ would not be known a priori.

The real power of the GPC approach lies in the assumptions made about future control actions. Instead of allowing them to be "free" as in the above development, GPC borrows an idea from the Dynamic Matrix Control method of Cutler and Ramaker (1980). This is that after an interval NU < N2 the projected control increments are assumed to be zero, i.e.:

    Δu(t + j - 1) = 0,  j > NU.    (19)

The value NU is called the "control horizon". In cost-function terms this is equivalent to placing effectively infinite weights on control changes after some future time. For example, if NU = 1 only one control change (i.e. Δu(t)) is considered, after which the controls u(t + j) are all taken to be equal to u(t). Suppose for this case that at time t there is a step change in w(t) and that N is large. The choice of u(t) made by GPC is the optimal "mean-level" controller which, if sustained, would move the settled plant output to w with the same dynamics as the open-loop plant. This control law (at least for a simple stable plant) gives actuations which are generally smooth and sluggish. Larger values of NU, on the other hand, provide more active controls.

One useful intuitive interpretation of the use of a control horizon is in the stabilization of nonminimum-phase plant. If the control weighting λ is set to zero, the optimal control which minimizes J1 is a cancellation law which attempts to remove the process dynamics by using an inverse plant model in the controller. As is well known, such (minimum-variance) laws are in practice unstable, because they involve growing modes in the control signal corresponding to the plant's nonminimum-phase zeros. Constraining these modes by placing infinite costing on future control increments stabilizes the resulting closed-loop even if the weighting λ is zero.

The use of NU < N moreover significantly reduces the computational burden, for the vector ũ is then of dimension NU and the prediction equations reduce to:

    ŷ = G1 ũ + f

where:

    G1 = | g_0      0        ...  0         |
         | g_1      g_0      ...  0         |
         | ...                    g_0       |
         | g_{N-1}  g_{N-2}  ...  g_{N-NU}  |    is N x NU.

The corresponding control law is given by:

    ũ = (G1^T G1 + λI)^-1 G1^T (w - f)    (20)

and the matrix involved in the inversion is of the much reduced dimension NU x NU. In particular, if NU = 1 (as is usefully chosen for a "simple" plant), this reduces to a scalar computation. An example of the computations involved is given in Appendix B.

3.2. Choice of the output and control horizons
Simulation exercises on a variety of plant models, including stable, unstable and nonminimum-phase processes with variable dead-time, have shown how N1, N2 and NU should best be selected. These studies also suggest that the method is robust against these choices, giving the user a wide latitude in his design.

3.2.1. N1: the minimum output horizon. If the dead-time k is exactly known there is no point in setting N1 to be less than k, since there would then be superfluous calculations in that the corresponding outputs cannot be affected by the first action u(t). If k is not known or is variable, then N1 can be set to 1 with no loss of stability and the degree of B(q^-1) increased to encompass all possible values of k.

3.2.2. N2: the maximum output horizon. If the plant has an initially negative-going nonminimum-phase response, N2 should be chosen so that the later positive-going output samples are included in the cost: in discrete-time this implies that N2 exceeds the degree of B(q^-1), as demonstrated in Appendix B. In practice, however, a rather larger value of N2 is suggested, corresponding more closely to the rise-time of the plant.

3.2.3. NU: the control horizon. This is an important design parameter.
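Collecting (15), (19) and (20): given the step-response coefficients g_i and the free response f, the banded matrix G1 and the first control increment follow directly. A minimal sketch (names and the example values are illustrative, not from the paper):

```python
import numpy as np

def gpc_increment(g, f, w, NU, lam):
    """First control increment Delta-u(t) from (20):
    u~ = (G1'G1 + lam*I)^-1 G1' (w - f), where G1 is the N x NU banded
    matrix built from the step-response coefficients g[0..N-1]."""
    N = len(f)
    G1 = np.zeros((N, NU))
    for col in range(NU):
        G1[col:, col] = g[:N - col]          # shifted copies of the step response
    u = np.linalg.solve(G1.T @ G1 + lam * np.eye(NU), G1.T @ (np.asarray(w) - np.asarray(f)))
    return u[0]                              # receding horizon: apply only the first move
```

With NU = 1 the inversion collapses to the scalar Δu(t) = Σ g_i (w_i - f_i) / (Σ g_i^2 + λ), the "simple plant" default discussed in Section 3.2.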

For a simple plant (e.g. open-loop stable though with possible dead-time and nonminimum-phasedness) a value of NU of 1 gives generally acceptable control. Increasing NU makes the control and the corresponding output response more active until a stage is reached where any further increase in NU makes little difference. An increased value of NU is more appropriate for complex systems, where it is found that good control is achieved when NU is at least equal to the number of unstable or badly-damped poles.

One interpretation of these rules for choosing NU is as follows. Recall that GPC is a receding-horizon LQ control law for which the future control sequence is recalculated at each sample. For a simple, stable plant a control sequence following a step in the set-point is generally well-behaved (for example, it would not change sign). Hence there would not be significant corrections to the control at the next sample even if NU = 1. However, it is known from general state-space considerations that a plant of order n needs n different control values for, say, a dead-beat response. With a complex system these values might well change sign frequently, so that a short control horizon would not allow enough degrees of freedom in the derivation of the current action.

Generally it is found that a value of NU of 1 is adequate for typical industrial plant models, whereas if, say, a modal model is to be stabilized, NU should be set equal to the number of poles near the stability boundary. If further damping of the control action is then required, λ can be increased from zero. Note in particular that, unlike with GMV, GPC can be used with a nonminimum-phase plant even if λ is zero.

The above discussion implies that GPC can be considered in two ways. For process control a default setting of N1 = 1, N2 equal to the plant rise-time and NU = 1 can be used to give reasonable performance. For high-performance applications such as the control of coupled oscillators a larger value of NU is desirable.

4. RELATIONSHIP WITH OTHER APPROACHES

GPC depends on the integration of five key ideas: the assumption of a CARIMA rather than a CARMA plant model, the use of long-range prediction over a finite horizon greater than the dead-time of the plant and at least equal to the model order, recursion of the Diophantine equation, the consideration of weighting of control increments in the cost-function, and the choice of a control horizon after which projected control increments are taken to be zero. Many of these ideas have arisen in the literature in one form or another, but not in the particular way described here, and it is their judicious combination which gives GPC its power. Nevertheless, it is useful to see how previous successful methods can be considered as subsets of the GPC approach, so that accepted theoretical (e.g. convergence and stability) and practical results can be extended to this new method.

The concept of using long-range prediction as a potentially robust control tool is due to Richalet et al. (1978) in the IDCOM algorithm. This method, though reportedly having some industrial success, is restricted by its assumption of a weighting-sequence model (all-zeros), with an ad hoc way of solving the offset problem and with no weighting on control action. Hence it is unsuitable for unstable or nonminimum-phase open-loop plants. The DMC algorithm of Cutler and Ramaker (1980) is based on step-response models but does include the key idea of a control horizon. Hence it is effective for nonminimum-phase plants but not for open-loop unstable processes. Again the offset problem is dealt with heuristically, and moreover the use of a step-response model means that the savings in parameterization using an A(q^-1) polynomial are not available. Clarke and Zhang (1985) compare the IDCOM and DMC designs.

The GMV approach of Clarke and Gawthrop (1975) for a plant with known dead-time k can be seen to be a special case of GPC in which both the minimum and maximum horizons N1 and N2 are set to k and only one control signal (the current control u(t) or Δu(t)) is weighted. This method is known to be robust against overspecification of model order, but it can only stabilize a certain class of nonminimum-phase plant, for which the control weighting λ has to be chosen with reasonable care. Moreover, GMV is sensitive to varying dead-time unless λ is large, with correspondingly poor control. GPC shares the robustness properties of GMV without its drawbacks.

By choosing N1 = N2 = d > k and NU = 1 with λ = 0, GPC becomes Ydstie's (1984) extended-horizon approach. This has been shown theoretically to be a stabilizing controller for a stable nonminimum-phase plant. Ydstie, however, uses a CARMA model and the method has not been shown to stabilize open-loop unstable processes. Indeed, for a plant with poorly damped poles simulation experience shows that the extended-horizon method is unstable, unlike the GPC design. This is because in this case more than one future output needs to be accounted for in the cost-function for stability (i.e. N2 > N1).

With N1 = 1, N2 = d > k and NU = 1 with λ = 0 and a CARIMA model, GPC reduces to the independently-derived EPSAC algorithm (De Keyser and Van Cauwenberghe, 1985). This was shown by the authors to be a particularly useful method with several practical applications: this is verified by the simulations described below.


TABLE 1. TRANSFER-FUNCTIONS
OF THE SIMU-
with N 2 = 10 are similar to those of EPSAC. Part LATED MODELS
II of this paper gives a stability result for these Number Samples Model
settings of the G PC horizons. I
Peterka's elegant predictive controller (1984) is 1-79
I + IOs + 40s2
nearest in philosophy to GPC: it uses a CARIMA- e - 2.'$
2 80-159
like model and a similar cost-function, though I + IOs + 4Os2
concentrating on the case where N 2 --+ 00. The e - 2.'$
3 160-239
algorithmic development is then rather different 1+ IOs
I
from GPC, relying on matrix factorization and 4 240-319
1+ 10s
decomposition for the finite-stage case, instead of I
5 320-400
105(1 + 2.5s)
the rather more direct formulation described here.
Though Peterka considers two values of control
weighting (λ₁ for the first set of controls and λ₂ for the final steps), he does not consider the useful GPC case where λ₂ = ∞. GPC is essentially a finite-stage approach with a restricted control horizon. This means (though not considered here) that constraints on both future controls and control rates can be taken into account by suitable modifications of the GPC calculations, a generalization which is impossible to achieve in infinite-stage designs. Nevertheless, the Peterka procedure is an important and applicable approach to adaptive control.

5. A SIMULATION STUDY
The objective of this study is to show how an adaptive GPC implementation can cope with a plant which changes in dead-time, in order and in parameters, compared with a fixed PID regulator, with a GMV self-tuner and with a pole-placement self-tuner. The PID regulator was chosen to give good control of the initial plant model, which for this case was assumed to be known. For simplicity, all the adaptive methods used a standard recursive-least-squares parameter estimator with a fixed forgetting-factor of 0.9, and no noise was included in the simulation.

The graphs display results over 400 samples of simulation for each method, showing the control signal in the range −100 to 100 and the set-point w(t) with the output y(t) in the range −10 to 80. The control signal to the simulated process was clipped to lie in [−100, 100], but no constraint was placed on the plant output, so that the limitations on y(t) seen in Fig. 4 are graphical, the actual output exceeding the displayed values.

Each simulation was organized as follows. During the first 10 samples the control signal was fixed at 10 and the estimator (initialized with parameters [1, 0, 0, ...]) was enabled for the adaptive controllers. A sequence of set-point changes between three distinct levels was provided, with switching every 20 samples. After every 80 samples the simulated continuous-time plant was changed, following the models given in Table 1. It is seen that these changes in dynamics are large, and though it is difficult to imagine a real plant varying in such a drastic way, the simulations were chosen to illustrate the relative robustness and adaptivity of the methods. The models are given as Laplace transforms and the sampling interval was chosen in all cases to be 1 s.

Figure 2 shows the behaviour of a fixed digital PID regulator which was chosen to give reasonable if rather sluggish control of the initial plant. It was implemented in interacting PID form with numerator dynamics of (1 + 10s + 25s²) and with a gain of 12. For models 1 and 5 this controller gave acceptable results, but its performance was poor for the other models, with evident excessive gain. Despite integral action (with desaturation), offset is seen with models 3 and 4 due to persistent control saturation.

The first adaptive controller to be considered was the GMV approach of Clarke and Gawthrop (1975, 1979) using design transfer-functions:

P(q⁻¹) = (1 − 0.5q⁻¹)/0.5 and Q(q⁻¹) = (1 − q⁻¹)/(1 − 0.5q⁻¹),

with the "detuned model-reference" interpretation. This implicit self-tuner used a k-step-ahead predictor model with 2 F(q⁻¹) and 5 G(q⁻¹) parameters and with an adopted fixed value for k of 2. The detuned version of GMV was chosen as the dead-time was known to be varying, and the use of Q(q⁻¹) makes the design less sensitive to the value of k. Note in particular that for models 2 and 3 the real dead-time is greater than the two samples assumed.

The simulation results using GMV are shown in Fig. 3; reasonable if not particularly "tight" control was achieved for models 1, 3, 4 and 5. The weighting of control increments (as found in other cases) contributes to the overshoots in the step responses. The behaviour with model 2 was, however, less acceptable, with poorly damped transients. Nevertheless, the adaptation mechanism worked well and the responses are certainly better than with the nonadaptive PID controller.
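The recursive-least-squares estimator with a fixed forgetting factor shared by the adaptive controllers above can be sketched as follows (an illustrative fragment, not the authors' code; the model structure and initialization used in the example are hypothetical stand-ins):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.9):
    """One recursive-least-squares step with forgetting factor lam.

    theta -- current parameter estimate; P -- covariance matrix;
    phi -- regressor vector; y -- new measurement.
    """
    err = y - phi @ theta                    # a priori prediction error
    gain = P @ phi / (lam + phi @ P @ phi)   # estimator gain vector
    theta = theta + gain * err
    P = (P - np.outer(gain, phi @ P)) / lam  # discount old information
    return theta, P
```

With a forgetting factor of 0.9 the estimator discounts old data with a memory of roughly 1/(1 − 0.9) = 10 samples, which is what allows it to track the plant switches occurring every 80 samples.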
[Figure: output y and control signal over 400 samples, with system changes marked.]
FIG. 2. The fixed PID controller with the variable plant.

[Figure: output y and control signal over 400 samples, with system changes marked.]
FIG. 3. The behaviour of the adaptive GMV controller.
An increasingly popular method in adaptive control is pole-placement (Wellstead et al., 1979), as variations in plant dead-time can be catered for using an augmented B(q⁻¹) polynomial. To cover the range of possibilities of dead-time here, 2 A(q⁻¹) and 6 B(q⁻¹) parameters were estimated, and the desired pole-position was specified by a polynomial P(q⁻¹) = 1 − 0.5q⁻¹ (i.e. like the GMV case without detuning). Figure 4 shows how an adaptive pole-placer which solves the corresponding Diophantine equation at each sample coped with the set of simulated models. For cases 1, 2 and 5 the output response is good, with less overshoot than the GMV approach. For the first-order models 3 and 4, however, the algorithm is entirely ineffective due to singularities in the Diophantine solution. This verifies the observation that pole-placement is sensitive to model-order changes. In other cases, even though the response is good, the control is rather "jittery".

The GPC controller involved the same numbers of A and B (2 and 6) parameters as in the pole-placement simulation. Default settings of the output and control horizons were chosen with N₁ = 1, N₂ = 10 and NU = 1 throughout, as it has been found that this gives robust performance. Figure 5 shows the excellent behaviour achieved in all cases
[Figure: output y and control signal over 400 samples, with system changes marked.]
FIG. 4. The behaviour of the adaptive pole-placer.

[Figure: output y and control signal over 400 samples, with system changes marked.]
FIG. 5. The behaviour of the adaptive GPC algorithm.

by the GPC algorithm. For each new model at most two steps in the set-point were required for full adaptation but, more importantly, there is no sign of instability, unlike with all the other controllers.

6. CONCLUSIONS
This paper has described a new robust algorithm which is suitable for challenging adaptive control applications. The method is simple to derive and to implement in a computer; indeed, for short control horizons GPC can be mounted in a microcomputer. A simulation study shows that GPC is superior to currently accepted adaptive controllers when used on a plant which has large dynamic variations.

Montague and Morris (1985) report a comparative study of GPC, LQG, GMV and pole-placement algorithms for control of a heat-exchanger with variable dead-time (12–85 s, sampled at 6 s intervals) and for controlling the biomass of a penicillin fermentation process. The controllers were programmed on an IBM PC in proFortran/proPascal. Their paper concludes that the GPC approach behaved consistently in practice and that "the LQG and GPC algorithms gave the best all-round performance", the GPC method being preferred for long time-delay processes, and it confirms that GPC is simple to implement and to use.

The GPC method is a further generalization of the well-known GMV approach, and so can be equipped with design polynomials and transfer-functions which have interpretations as in the GMV case. The companion paper, Part II, which follows, explores these ideas and presents simulations which show how GPC can be used for more demanding control tasks.

REFERENCES
Astrom, K. J. and B. Wittenmark (1973). On self-tuning regulators. Automatica, 9, 185-199.
Astrom, K. J. and B. Wittenmark (1980). Self-tuning controllers based on pole-zero placement. Proc. IEE, 127D, 120-130.
Bertsekas, D. P. (1976). Dynamic Programming and Stochastic Control. Academic Press, New York.
Clarke, D. W. (1984). Self-tuning control of nonminimum-phase systems. Automatica, 20, 501-517.
Clarke, D. W. and P. J. Gawthrop (1975). Self-tuning controller. Proc. IEE, 122, 929-934.
Clarke, D. W. and P. J. Gawthrop (1979). Self-tuning control. Proc. IEE, 126, 633-640.
Clarke, D. W. and L. Zhang (1985). Does long-range predictive control work? IEE Conference "Control 85", Cambridge.
Cutler, C. R. and B. L. Ramaker (1980). Dynamic Matrix Control: A Computer Control Algorithm. JACC, San Francisco.
De Keyser, R. M. C. and A. R. Van Cauwenberghe (1982). Typical application possibilities for self-tuning predictive control. IFAC Symp. Ident. Syst. Param. Est., Washington.
De Keyser, R. M. C. and A. R. Van Cauwenberghe (1983). Microcomputer-controlled servo system based on self-adaptive long-range prediction. Advances in Measurement and Control (MECO 1983).
De Keyser, R. M. C. and A. R. Van Cauwenberghe (1985). Extended prediction self-adaptive control. IFAC Symp. Ident. Syst. Param. Est., York.
Kailath, T. (1980). Linear Systems. Prentice-Hall, Englewood Cliffs, NJ.
Kurz, H. and W. Goedecke (1981). Digital parameter-adaptive control of processes with unknown dead time. Automatica, 17, 245-252.
Montague, G. A. and A. J. Morris (1985). Application of adaptive control: a heat exchanger system and a penicillin fermentation process. Third Workshop on the Theory and Application of Self-tuning and Adaptive Control, Oxford, U.K.
Mosca, E., G. Zappa and C. Manfredi (1984). Multistep horizon self-tuning controllers: the MUSMAR approach. IFAC 9th World Congress, Budapest, Hungary.
Peterka, V. (1984). Predictor-based self-tuning control. Automatica, 20, 39-50.
Richalet, J., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to industrial processes. Automatica, 14, 413-428.
Schweppe, F. C. (1973). Uncertain Dynamic Systems. Prentice-Hall, Englewood Cliffs, NJ.
Tuffs, P. S. and D. W. Clarke (1985). Self-tuning control of offset: a unified approach. Proc. IEE, 132D, 100-110.
Wellstead, P. E., D. Prager and P. Zanker (1979). Pole assignment self-tuning regulator. Proc. IEE, 126, 781-787.
Ydstie, B. E. (1984). Extended horizon adaptive control. IFAC 9th World Congress, Budapest, Hungary.

APPENDIX A. MINIMIZATION PROPERTIES OF GPC
Consider the performance index

J = (y − w)ᵀ(y − w) + λũᵀũ, where y = Gũ + f + e,

where y, w, ũ are as defined in the main text and

e = [E₁ξ(t + 1), E₂ξ(t + 2), ..., E_N ξ(t + N)]ᵀ.

Clearly, as e is a stochastic process, the expected value of the cost subject to appropriate conditions must be minimized. The cost to be minimized therefore becomes

J₁ = E{(y − w)ᵀ(y − w) + λũᵀũ}.

Two different assumptions will be made below about the class of admissible controllers for the minimization of this cost. Only models of autoregressive type are considered:

A(q⁻¹)y(t) = B(q⁻¹)u(t − 1) + ξ(t)/Δ.

In the general case of CARIMA models, where the disturbances are non-white, the predictions ŷ(t + i) are only optimal asymptotically, as they have poles at the zeros of C(q⁻¹) and the effect of initial conditions is only eliminated as time tends to infinity. The closed-loop policy requires an optimal Kalman filter with time-varying filter gains, see Schweppe (1973); the parameters of the C-polynomial are only the gains of the steady-state Kalman filter once convergence is attained. Thus the closed-loop and the open-loop-feedback policies examined below are different in the case of coloured noise.

A.1. RELATION OF GPC AND OPEN-LOOP-FEEDBACK-OPTIMAL CONTROL
In this strategy it is assumed that the future control signals are independent of future measurements (i.e. all will be performed in open-loop) and we calculate the set ũ such that it minimizes the cost J₁. The first control in the sequence is applied and at the next sample the calculations are repeated. The cost becomes:

J₁ = E{ũᵀ(GᵀG + λI)ũ + 2ũᵀ(Gᵀ(f − w) + Gᵀe) + fᵀf + eᵀe + wᵀw + 2fᵀ(e − w) − 2eᵀw}.

Because of the assumption above, E{ũᵀGᵀe} = 0; note also that the minimization is only affected by the first two terms. By putting the first derivative of the cost equal to zero one obtains:

ũ = (GᵀG + λI)⁻¹Gᵀ(w − f).

A.2. RELATION OF GPC AND CLOSED-LOOP-FEEDBACK-OPTIMAL CONTROL
Choose the set ũ such that the cost J₁ is minimized subject to the condition that Δu(i) is only dependent on data up to time i. This is the statement of causality. In order to find the set ũ, assume that by some means {Δu(t), Δu(t + 1), ..., Δu(t + N − 2)} is available. Then

J₁ = E{(Δu(t), ..., Δu(t + N − 1))(GᵀG + λI)(Δu(t), ..., Δu(t + N − 1))ᵀ + 2ũᵀ(Gᵀ(f − w)) + 2(Δu(t), ..., Δu(t + N − 1))Gᵀe} + extra terms independent of the minimization.

Note that in evaluating Δu(t + N − 1) the last row of Gᵀe is g₀Σeᵢξ(t + N − i), and E{Δu(t + N − 1)ξ(t + N)} = 0 by the assumption of partial information. Therefore Δu(t + N − 1) can be calculated assuming ξ(t + N) = 0. From general state-space considerations it is evident that when minimizing a quadratic cost, assuming a linear plant model with additive noise, the resulting controller is a linear function of the available measurements (i.e. u(t) = −K x̂(t), where K is the feedback gain and x̂(t) is the state estimate from the appropriate Kalman filter). Hence

Δu(t + N − 1) = linear function(Δu(t + N − 2), Δu(t + N − 3), ..., ξ(t + N − 1), ξ(t + N − 2), ..., y(t), y(t − 1), ...).

In calculating Δu(t + N − 2) the following expectations are zero:

E{Δu(t + N − 2)ξ(t + N)} = 0,
E{Δu(t + N − 2)ξ(t + N − 1)} = 0.

In order to work out E{Δu(t + N − 1)Δu(t + N − 2)}, the part of the expectation associated with ξ(t + N − 1) is set to zero, as the control signal Δu(t + N − 2) cannot be a function of ξ(t + N − 1). As Δu(t + N − 1) is a linear function of ξ(t + N − 1), this implies that, as far as the calculation of Δu(t + N − 2) is concerned, Δu(t + N − 1) could be calculated by assuming that ξ(t + N − 1) and ξ(t + N) were zero. Similarly, the same argument applies for the rest of the control signals at each step i, assuming E{Δu(t + i)ξ(t + j)} = 0 for j > i. This implies that each control signal Δu(t + i) is the ith control in the sequence assuming that ξ(t + j) = 0 for j > i. Hence for Δu(t) the solution amounts to calculating the first control signal setting ξ(t + j) for j > 0 to zero, the same as that considered in A.1 for an open-loop-feedback-optimal controller. Note that in the case of NU < N₂ the minimization is performed subject to the constraint that the last N₂ − NU + 1 control signals are all equal. Consideration of causality of the controller implies that Δu(t + NU − 1) is the first free signal in the sequence that can be calculated as a function of data available up to time t + NU − 1, and the argument above follows accordingly. Therefore, the GPC (OLFO) minimization is equivalent to the closed-loop policy for models of regression type.

APPENDIX B. AN EXAMPLE FOR A FIRST-ORDER PROCESS
This appendix examines the relation between N₂ and the closed-loop pole position for a simple example. Consider a nonminimum-phase first-order process (first-order + fractional dead-time):

(1 + a₁q⁻¹)y(t) = (b₀ + b₁q⁻¹)u(t − 1)
(1 − 0.9q⁻¹)y(t) = (1 + 2q⁻¹)u(t − 1).

B.1. E AND F PARAMETERS
From the recurrence relationships in the main text it follows that:

eⱼ = 1 − a₁eⱼ₋₁; fⱼ₀ = eⱼ; fⱼ₁ = a₁eⱼ₋₁

e₀ = 1: [1.0]; f₁₀ = 1 − a₁: [1.9]; f₁₁ = a₁: [−0.9]
e₁ = 1 − a₁: [1.9]; f₂₀ = 1 − a₁(1 − a₁): [2.71]; f₂₁ = a₁(1 − a₁): [−1.71]
e₂ = 1 − a₁(1 − a₁): [2.71]; f₃₀ = 1 − a₁(1 − a₁(1 − a₁)): [3.439]; f₃₁ = a₁(1 − a₁(1 − a₁)): [−2.439].

B.2. CONTROLLER PARAMETERS

g₀ = b₀: [1.0]; g₁₁ = b₁: [2.0]
g₁ = b₀(1 − a₁) + b₁: [3.9]; g₂₂ = b₁(1 − a₁): [3.8]
g₂ = b₀(1 − a₁(1 − a₁)) + b₁(1 − a₁): [6.51]; g₃₃ = b₁(1 − a₁(1 − a₁)): [5.42].

Assuming NU = 1, the effect of N₂ on the control calculation and the closed-loop pole position is now examined. The control law is

Δu(t) = Σⱼ gⱼ₋₁(w − fⱼ₀y(t) − fⱼ₁y(t − 1) − gⱼⱼΔu(t − 1)) / Σⱼ (gⱼ₋₁)²,

with the sums taken over j = 1, ..., N₂. For a controller of the form R(q⁻¹)Δu(t) = w(t) − S(q⁻¹)y(t):

s₀ = Σ gⱼ₋₁fⱼ₀ / Σ gⱼ₋₁,
s₁ = Σ gⱼ₋₁fⱼ₁ / Σ gⱼ₋₁,
r₀ = Σ (gⱼ₋₁)² / Σ gⱼ₋₁,
r₁ = Σ gⱼ₋₁gⱼⱼ / Σ gⱼ₋₁,

and the closed-loop pole-positions are at the roots of RAΔ + q⁻¹SB.

N₂ = 1:

Δu(t) = [w − 1.9y(t) + 0.9y(t − 1) − 2Δu(t − 1)]
RAΔ + q⁻¹SB ⇒ (1 + 2q⁻¹).

This is the minimum-prototype controller which, because of the cancellation of the nonminimum-phase zero, is unstable.

N₂ = 2 (i.e. greater than deg(B)):

Δu(t) = [w − 1.9y(t) + 0.9y(t − 1) − 2Δu(t − 1) + 3.9(w − 2.71y(t) + 1.71y(t − 1) − 3.8Δu(t − 1))]/16.21

or:

Δu(t) = [4.9w − 12.469y(t) + 7.569y(t − 1) − 16.82Δu(t − 1)]/16.21

and:

RAΔ + q⁻¹SB ⇒ (1 − 0.09q⁻¹).

The closed-loop pole is within the unit circle and the closed loop is therefore stable. In fact it can easily be demonstrated that for all first-order processes N₂ > deg B(q⁻¹) + k − 1 stabilizes an open-loop stable plant, provided that sign(Σgⱼ) = sign(d.c. gain).

N₂ = 3:

Δu(t) = [11.41w − 34.857y(t) + 23.447y(t − 1) − 52.104Δu(t − 1)]/58.591

and:

RAΔ + q⁻¹SB ⇒ (1 − 0.416q⁻¹).

The pole is again within the unit circle. Note that the closed-loop pole tends towards the open-loop pole as N₂ increases (see Part II).
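The recursions and control-law coefficients above are easily checked numerically. The following fragment is a verification sketch using only the quantities defined in this appendix; it reproduces the N₂ = 3 case (the text's denominator 58.591 is 58.5901 before rounding):

```python
# Plant of Appendix B: (1 - 0.9 q^-1) y(t) = (1 + 2 q^-1) u(t-1)
a1, b0, b1 = -0.9, 1.0, 2.0
N2 = 3

# B.1 recursions: e_j = 1 - a1*e_{j-1}, f_j0 = e_j, f_j1 = a1*e_{j-1}
e = [1.0]
f0, f1 = [], []
for _ in range(N2):
    f1.append(a1 * e[-1])
    e.append(1.0 - a1 * e[-1])
    f0.append(e[-1])

# B.2 parameters: g_j = b0*e_j + b1*e_{j-1} and g_jj = b1*e_{j-1}
g = [b0 * e[j] + (b1 * e[j - 1] if j > 0 else 0.0) for j in range(N2)]
gjj = [b1 * e[j] for j in range(N2)]

# NU = 1 law: coefficients of w, y(t), y(t-1) and du(t-1)
den = sum(gi ** 2 for gi in g)               # ~58.59
cw = sum(g)                                  # ~11.41
cy = sum(gi * fa for gi, fa in zip(g, f0))   # ~34.857
cy1 = -sum(gi * fb for gi, fb in zip(g, f1)) # ~23.447
cdu = sum(gi * gg for gi, gg in zip(g, gjj)) # ~52.104
```

Dividing each coefficient by den recovers the Δu(t) law quoted above for N₂ = 3.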
