January 2013
Delft
University of
Technology
Artikelnummer 06917690017
Contents
1 Introduction
1.1 Model predictive control . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.2 History of Model Predictive Control . . . . . . . . . . . . . . . . . . . . . .
9
9
10
.
.
.
.
.
.
.
.
13
13
13
15
20
21
25
28
29
3 Prediction
3.1 Noiseless case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3.2 Noisy case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
3.3 The use of a priori knowledge in making predictions . . . . . . . . . . . . .
31
32
33
36
4 Standard formulation
4.1 The performance index . . . . . . . . . . . .
4.2 Handling constraints . . . . . . . . . . . . .
4.2.1 Equality constraints . . . . . . . . .
4.2.2 Inequality constraints . . . . . . . . .
4.3 Structuring the input with orthogonal basis
functions . . . . . . . . . . . . . . . . . . .
4.4 The standard predictive control problem . .
4.5 Examples . . . . . . . . . . . . . . . . . . .
.
.
.
.
39
39
44
46
48
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . .
50
51
52
55
56
56
58
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
CONTENTS
5.2
5.3
5.4
5.5
6 Stability
6.1 Stability for the LTI case . . . . . . . . . . . . . . .
6.2 Stability for the inequality constrained case . . . .
6.3 Modications for guaranteed stability . . . . . . . .
6.4 Relation to IMC scheme and Youla parametrization
6.4.1 The IMC scheme . . . . . . . . . . . . . . .
6.4.2 The Youla parametrization . . . . . . . . . .
6.5 Robustness . . . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
62
65
66
67
69
72
76
89
89
91
96
97
.
.
.
.
.
.
.
101
101
105
105
116
116
121
124
.
.
.
.
.
131
132
134
137
140
146
.
.
.
.
.
151
152
152
154
158
163
167
177
181
CONTENTS
183
Appendix D: Model descriptions
D.1 Impulse and step response models . . . . . . . . . . . . . . . . . . . . . . . 183
D.2 Polynomial models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
Appendix E: Performance indices
201
205
206
207
209
Index
257
References
259
CONTENTS
Abbreviations
CARIMA model
CARMA model
DMC
EPSAC
GPC
IDCOM
IIO model
IO model
LQ control
LQPC
MAC
MBPC
MHC
MIMO
MPC
PFC
QDMC
RHC
SISO
SPC
UPC
Chapter 1
Introduction
1.1
10
CHAPTER 1. Introduction
For linear models with linear constraints and a quadratic performance index the
solution can be found using quadratic programming algorithms.
Receding horizon principle: Predictive control uses the socalled receding horizon principle. This means that after computation of the optimal control sequence, only the
rst control sample will be implemented, subsequently the horizon is shifted one
sample and the optimization is restarted with new information of the measurements.
1.2
Since 1970 various techniques have been developed for the design of model based control
systems for robust multivariable control of industrial processes ([10], [24], [25], [29], [43],
[46]). Predictive control was pioneered simultaneously by Richalet et al. [53], [54] and
Cutler & Ramaker [18]. The rst implemented algorithms and successful applications were
reported in the referenced papers. Model Predictive Control technology has evolved from
a basic multivariable process control technology to a technology that enables operation
of processes within well dened operating constraints [2], [6]), [50]. The main reasons for
increasing acceptance of MPC technology by the process industry since 1985 are clear:
MPC is a model based controller design procedure, which can easily handle processes
with large timedelays, nonminimum phase and unstable processes.
It is an easytotune method, in principle there are only three basic parameters to be
tuned.
Industrial processes have their limitations in, for instance, valve capacity and other
technological requirements and are supposed to deliver output products with some
prespecied quality specications. MPC can handle these constraints in a systematic
way during the design and implementation of the controller.
Finally MPC can handle structural changes, such as sensor and actuator failures,
changes in system parameters and system structure by adapting the control strategy
on a samplebysample basis.
However, the main reasons for its popularity are the constrainthandling capabilities, the
easy extension to multivariable processes and most of all increased prot as discussed in the
rst section. From academic side the interest in MPC mainly came from the eld of selftuning control. The problem of Minimum Variance control (
Astrom & Wittenmark [3]) was
studied while minimizing the performance index J(u, k) = E {( r(k + d) y(k + d) )2 }
at time k, where y(k) is the process output signal, u(k) is the control signal, r(k) is the
reference signal, E() stands for expectation and d is the process deadtime. To overcome
stability problems with nonminimum phase plants, the performance index was modied
by adding a penalty on the control signal u(k). Later this u(k) in the performance index
was replaced by the increment of the control signal u(k) = u(k) u(k 1) to guarantee
11
a zero steadystate error. To handle a wider class of unstable and nonminimum phase
systems and systems with poorly known delay the Generalized Predictive Control (GPC)
scheme (Clarke et al. [13],[14]) was introduced with a quadratic performance index.
In GPC mostly polynomial based models are used. For instance, Controlled AutoRegressive Moving Average (CARMA) models or Controlled AutoRegressive Integrated Moving
Average (CARIMA) models are popular. These models describe the process using a minimum number of parameters and therefore lead to eective and compact algorithms. Most
GPCliterature in this area is based on SingleInput SingleOutput (SISO) models. However, the extension to MultipleInput MultipleOutput (MIMO) systems is straightforward
as was shown by De Vries & Verbruggen [21] using a MIMO polynomial model, and by
Kinnaert [34] using a statespace model.
This text covers stateoftheart technologies for model predictive process control that are
good candidates for future generations of industrial model predictive control systems. Like
all other controller design methodologies, MPC also has its drawbacks:
A detailed process model is required. This means that either one must have a good
insight in the physical behavior of the plant or system identication methods have
to be applied to obtain a good model.
The methodology is open, and many variations have led to a large number of MPCmethods. We mention IDCOM (Richalet et al. [54]), DMC (Cutler & Ramaker [17]),
EPSAC (De Keyser and van Cauwenberghe [20]), MAC (Rouhani & Mehra [57]),
QDMC (Garca & Morshedi [27]), GPC (Clarke et al. [14][15]), PFC (Richalet et al.
[52]), UPC (Soeterboek [61]).
Although, in practice, stability and robustness are easily obtained by accurate tuning,
theoretical analysis of stability and robustness properties are dicult to derive.
Still, in industry, for supervisory optimizing control of multivariable processes MPC is often
preferred over other controller design methods, such as PID, LQ and H . A PID controller
is also easily tuned but can only be straightforwardly applied to SISO systems. LQ and
H can be applied to MIMO systems, but cannot handle signal constraints in an adequate
way. These techniques also exhibit diculties in realizing robust performance for varying
operating conditions. Essential in model predictive control is the explicit use of a model
that can simulate dynamic behavior of the process at a certain operating point. In this
respect, model predictive control diers from most of the model based control technologies
that have been studied in academia in the sixties, seventies and eighties. Academic research
has been focusing on the use of models for controller design and robustness analysis of
control systems for quite some time. With their initial work on internal model based
control, Garcia and Morari [26] made a rst step towards bridging academic research in
the area of process control and industrial developments in this area. Signicant progress
has been made in understanding the behavior of model predictive control systems, and a lot
of results have been obtained on stability, robustness and performance of MPC (Soeterboek
[61], Camacho and Bordons [11], Maciejowski [44], Rossiter [56]).
12
CHAPTER 1. Introduction
Since the pioneering work at the end of the seventies and early eighties, MPC has become
the most widely applied supervisory control technique in the process industry. Many papers
report successful applications (see Richalet [52], and Qin and Badgewell [50]).
Chapter 2
The basics of model predictive
control
2.1
Introduction
MPC is more of a methodology than a single technique. The dierence in the various
methods is mainly the way the problem is translated into a mathematical formulation, so
that the problem becomes solvable in the limited time interval available for calculation of
adequate process manipulations in response to external inuences on the process behavior (Disturbances). However, in all methods ve important items are part of the design
procedure:
1. Process model and disturbance model
2. Performance index
3. Constraints
4. Optimization
5. Receding horizon principle
In the following sections these ve ingredients of MPC will be discussed.
2.2
14
The models required for these tasks do not necessarily have to be the same. The model
applied for prediction may dier from the model applied for calculation of the next control
action. In practice though both models are almost always chosen to be the same.
As the models play such an important role in model predictive control the models are
discussed in this section. The models applied are so called InputOutput (IO) models.
These models describe the inputoutput behavior of the process. The MPC controller
discussed in the sequel explicitly assumes that the superposition theorem holds. This
requires that all models used in the controller design are chosen to be linear. Two types
of IO models are applied:
Direct InputOutput models (IO models) in which the input signal is directly applied
to the model.
Increment InputOutput models (IIO models) in which the increments of the input
signal are applied to the model instead of the input directly.
The following assumptions are made with respect to the models that will be used:
1. Linear
2. Time invariant
3. Discrete time
4. Causal
5. Finite order
Linearity is assumed to allow the use of the superposition theorem. The second assumption time invariance enables the use of a model made on the basis of observations of
process behavior at a certain time for simulation of process behavior at another arbitrary
time instant. The discrete time representation of process dynamics supports the sampling
mechanisms required for calculating control actions with an MPC algorithm implemented
in a sequential operating system like a computer. Causality implies that the model does
not anticipate future changes of its inputs. This aligns quite well with the behavior of
physical, chemical, biological systems. The assumption regarding the order of the model
to be nite implies that models and model predictions can always be described by a set of
explicit equations. Of course, these assumptions have to be validated against the actual
process behavior as part of the process modeling or process identication phase.
In this chapter only discrete time models will be considered. The control algorithm will
always be implemented on a digital computer. The design of a discrete time controller is
obvious. Further a linear timeinvariant continuous time model can always be transformed
into a linear timeinvariant discrete time model using a zeroorder hold ztransformation.
The most general description of linear time invariant systems is the state space description. Models, given in a impulse/step response or transfer function structure, can easily be
converted into state space models. Computations can be done numerically more reliable if
15
based on (balanced) state space models instead of impulse/step response or transfer function models, especially for multivariable systems (Kailath [33]). Also system identication,
using the inputoutput data of a system, can be done using reliable and fast algorithms
(Ljung [41]; Verhaegen & Dewilde [71]; Van Overschee [70]).
2.2.1
(2.1)
in which Go (q) is the process model, Fo (q) is the disturbance model, Ho (q) is the noise
model, y(k) is the output signal, u(k) is the input signal, do (k) is a known disturbance
signal, eo (k) is zeromean white noise (ZMWN) and q is the shiftoperator q 1 y(k) =
y(k 1). We assume Go (q) to be strictly proper, which means that y(k) does not depend
on the present value of u(k), but only on past values u(k j), j > 0.
eo (k)
Ho
u(k) Go
?
e
6
y(k)

Fo
6
do (k)
(2.2)
(2.3)
and so the transfer functions Go (q), Fo (q) and Ho (q) can be given as
Go (q) = Co ( q I Ao )1 Bo
Fo (q) = Co ( q I Ao )1 Lo + DF
Ho (q) = Co ( q I Ao )1 Ko + DH
(2.4)
(2.5)
(2.6)
16
eo (k)
?
u(k)
Ko
Bo
Lo
DH
?
e xo (k+1)J
]
6
J
J
J
q 1
xo (k) Co
Ao
?
e
6
y(k)
DF
6
do (k)
(2.7)
where di (k) is a known disturbance signal and ei (k) is ZMWN. A state space representation
for this system is given by
xi (k + 1) = Ai xi (k) + Ki ei (k) + Li di(k) + Bi u(k)
y(k) = Ci xi (k) + DH ei (k) + DF di (k)
(2.8)
(2.9)
(2.10)
(2.11)
(2.12)
17
Ki =
DH
Ko
Ci =
I Co
Li =
DF
Lo
18
(2.13)
where (q) = (1 q 1 ). In appendix D, we will show how the other models, like impulse
response models, step response models and polynomial models relate to state space models.
N2
j=1
19
Let the reference be constant r(k) = rss = 0 for large k, and let k . Then u and y
will reach their steadystate and so J(k) = N2 Jss for k , where
Jss = yss rss 2 + 2 uss 2
= Go (1)uss rss 2 + 2 uss2
T
= uTss GTo (1)Go(1) + 2 I uss 2uTssGTo (1)rss + rss
rss
Minimizing Jss over uss means that
Jss /uss = 2 GTo (1)Go (1) + 2 I uss 2GTo (1)rss = 0
so
1
uss = GTo (1)Go(1) + 2 I
GTo (1)rss
The steadystate output becomes
1
GTo (1)rss
yss = Go (1) GTo (1)Go (1) + 2 I
It is clear that yss = rss for > 0 and for this IO model there will always be a steadystate
error for rss = 0 and > 0.
Steadystate behavior of an IIO model:
Consider the above model in IIO form, so Gi (q) = 1 Go (q) under the same assumptions
as before, i.e. Go (1) is welldened. Moreover, eo = 0 and do = 0 which implies that ei = 0
and di = 0. Let it be controlled by a stabilizing predictive controller, minimizing the IIO
performance index
J(k) =
N2
j=1
In steady state the output is given by yss = Go (1)uss and the increment input becomes
uss = 0, because the input signal u = uss has become constant. In steadystate we will
reach the situation that the performance index becomes J(k) = N2 Jss for k , where
Jss = yss rss 2
The optimum Jss = 0 is obtained for yss = rss which means that no steadystate error
occurs. It is clear that for a good steadystate behavior in the presence of a reference
signal, it is necessary to design the predictive controller on the basis of an IIO model.
20
(2.14)
(2.15)
where x(k) is the state, e(k) is zeromean white noise (ZMWN), v(k) is the control signal,
w(k) is a vector containing all known external signals, such as the reference signal and
known (or measurable) disturbances, and y(k) is the measurement. The control signal
v(k) can be either u(k) or u(k).
2.2.2
In this course we will consider state space description (2.14)(2.15) as the model for the
system. In literature three other model descriptions are also often used, namely the impulse
response models, the step response models and the polynomial model. In this section we
will shortly describe their features, and discuss the choice of model description.
Impulse and step response models
MPC has part of its roots in the process industry, where the use of detailed dynamical
models is less common. To get reliable dynamical models of these processes on the basis
of physical laws is dicult and it is therefore not surprising that the rst models used in
MPC were impulse and step response models (Richalet [54]), (Cutler & Ramaker [17]).
The models are easily obtained by rather simple experiments and give good models for
suciently large length of the impulse/step response.
In appendix D.1 more detailed information is given on impulse and step response models.
Polynomial models
Sometimes good models of the process can be obtained on the basis of physical laws or
by parametric system identication. In that case a polynomial description of the model
is preferred (Clarke et al.[14],[15]). Less parameters are used than in impulse or step
response models and in the case of parametric system identication, the parameters can
be estimated more reliably (Ljung [41]).
In appendix D.2 more detailed information is given on polynomial models.
Choice of model description
Whatever model is used to describe the process behavior, the resulting predictive control
law will (at least approximately) give the same performance. Dierences in the use of a
model are in the eort of modeling, the model accuracy that can be obtained, and in the
computational power that is needed to realize the predictive control algorithm.
21
Finite step response and nite impulse response models (nonparametric models) need
more parameters than polynomial or state space models (parametric models). The larger
the number of parameters of the model the lower the average number of data points
per parameter that needs to be estimated during identication. As a consequence the
variance of the estimated parameters will be larger for the parameters of nonparametric
models than for semiparametric and parametric models. In turn, the variance of estimated
parameters of semiparametric models will be larger than the variance of the parameters
of parametric models. This does not necessarily imply that the quality of the predictions
with parametric models outperforms the predictions of nonparametric models! If the test
signals applied for process identication have excited all relevant dynamics of the process
persistently, the qualities of the predictions are generally the same despite the signicantly
larger variance of the estimated model parameters. On the other hand step response
models are intuitive, need less a priori information for identication and the predictions
are constructed in a natural way. The models can be understood without any knowledge
of dynamical systems. Step response models are often still preferred in industry, because
advanced process knowledge is usually scarce and additional experiments are expensive.
The polynomial description is very compact, gives good (physical) insight in the systems
properties and the resulting controller is compact as well. However, as we will see later,
making predictions may be cumbersome and the models are far from practical for multivariable systems.
State space models are especially suited for multivariable systems, still providing a compact
model description and controller. The computations are usually well conditioned and the
algorithms easy to implement.
2.3
Performance index
N
z1T (k + j 1k)
z1 (k + j 1k) +
j=Nm
N
z2T (k + j 1k)
z2 (k + j 1k)
(2.16)
j=1
22
These three performance indices can all be rewritten into the standard form of (2.16).
Generalized Predictive Control (GPC)
In the Generalized Predictive Control (GPC) method (Clarke et al., [14][15]) the performance index is based on control and output signals:
J(u, k) =
N
yp (k + jk) r(k + j)
T
yp (k + jk) r(k + j) +
j=Nm
N
uT (k + j 1k)u(k + j 1k)
(2.17)
j=1
yp (k)
r(k)
y(k)
u(k)
Nm
N
Nc
P (q)
N
j=Nm
xT (k+jk)Q
x(k+jk) +
N
j=1
(2.18)
is
is
is
is
the
the
the
the
23
where x(k +jk) is the prediction of x(k +j), based on knowledge until time k. Q and R
are positive semidenite matrices. In some papers the control increment signal u(k) is
used instead of the control signal u(k). Note that the LQPC performance index can be
translated into the form of (2.16) by choosing
1/2
z1 (k)
Q x(k + 1)
z(k) =
.
=
z2 (k)
R1/2 u(k)
Zone performance index
A performance index that is popular in industry is the zone performance index:
J(u, k) =
N
T
(k+jk)
(k+jk) +
N
(2.19)
j=1
j=Nm
where i (i = 1, . . . , m) contribute to the performance index only if yi(k) ri (k) > max,i .
To be more specic:
min
i (k)max,i
The relation between the tracking error y(k)r(k), the zonebounds and zone performance
signal (k) are visualized in gure 2.3. Note that the zone performance index can be
translated into the form of (2.16) by choosing
z1 (k)
r(k + 1) y(k + 1) + (k + 1)
z(k) =
=
z2 (k)
u(k)
and
v(k) =
u(k)
(k + 1)
24
3
2
1
0
1
2
3
10
12
14
16
18
20
10
12
14
16
18
20
1.5
(k)
1
0.5
0
0.5
1
1.5
Figure 2.3: Tracking error y(k) r(k) and zone performance signal (k)
Standard performance index
Most papers on predictive control deal with either the GPC or the LQPC performance
indices which are clearly weighted squared 2norms, which can be rewritten in the form of
(2.16). Now introduce the diagonal selection matrix
0
0
0 I
(j) =
I
0
0 I
for
0 j < Nm 1
,
for
(2.20)
Nm 1 j N 1
and dene
z(k) =
z1 (k)
.
z2 (k)
J(v, k) =
N
1
zT (k + jk)(j)
z (k + jk)
(2.21)
j=0
In this text, we will mainly deal with this 2norm standard performance index (2.21). Other
performance indices can also be used These are often based on other norms, for example
the 1norm or the norm (Genceli & Nikolaou [30], Zheng & Morari [80]).
2.4. CONSTRAINTS
2.4
25
Constraints
In practice, industrial processes are subject to constraints. Specic signals must not violate
specied bounds due to safety limitations, environmental regulations, consumer specications and physical restrictions such as minimum and/or maximum temperature, pressure,
level limits in reactor tanks, ows in pipes and slew rates of valves.
Careful tuning of the controller parameters may keep these values away from the bounds.
However, because of economical motives, the control system should drive the process towards the constraints as close as possible, without violating them: closer to limits in general
often means closer to maximum prot.
Therefore, predictive control employs a more direct approach by modifying the optimal
unconstrained solution in such a way that constraints are not violated. This can be done
using optimization techniques such as linear programming (LP) or quadratic programming
(QP) techniques.
In most cases the constraints can be translated in bounds on control, state or output
signals:
umin u(k) umax , k
ymin y(k) ymax , k
Under no circumstances the constraints may be violated, and the control action has to be
chosen such that the constraints are satised. However, when we implement the controller
we do not know the future state and replace the original constraints by constraints on the
predicted state.
Besides inequality constraints we can also use equality constraints. Equality constraints
are usually motivated by the control algorithm itself. An example is the control horizon,
which forces the control signal to become constant:
u(k+jk) = 0 for j Nc
This makes the control signal smooth and the controller more robust. A second example
is the state endpoint constraint
x(k + Nk) = xss
which is related to stability and forces the state at the end of the prediction horizon to its
steady state value xss .
To summarize, we see that there are two types of constraints (inequality and equality
constraints). In this course we look at the two types in a generalized framework:
Inequality constraints:
(k) (k) k
(2.22)
Since we generally do not know future values of , we often use in the optimization at time
k, constraints of the form:
+ jk) (k + j)
(k
(2.23)
26
(k) = 0 k
(2.24)
Again we generally do not know future values of but we also often want to use equality
constraints, as noted before, to force the control signal to become constant or to enforce
an endpoint penalty. Hence we use:
+ jk) = 0
Aj (k
(2.25)
1 (k)
2 (k)
1 (k)
1 (k)
..
(k) = ...
= (k)
.
4 (k)
4 (k)
=0
1 (k)
1 (k)
1,max
1,min
= (k)
2.4. CONSTRAINTS
27
for j Nc
Blocking:
Instead of making the input constant beyond the control horizon, we can force the input
to remain constant during some predened (nonuniform) intervals. In that way, there is
some freedom left beyond the control horizon. Dene nbl intervals with range (m , m+1 ).
The rst m1 control variables (u(k), . . . , u(k + m1 1)) are still free. The input beyond
m1 is restricted by:
u(k+jk) = u(k + m 1k)
Basis functions:
Parametrization of the input signal can also be done using a set of basis functions:
u(k+jk) =
M
Si (j) i (k)
i=0
where Si (j) are the basis functions, M are the degrees of freedom left, and the scalars
i (k), i = 1, . . . , M are the parameters, to be optimized at time k.
The already mentioned structuring of the input signal, with either a control horizon or
blocking, can be seen as a parametrization of the input signal with a set of basis functions.
For example, by introducing a control horizon, we choose M = Nc 1 and the basis
functions become:
Si (j) = (j i) for i = 0, . . . , Nc 2
SNc 1 (j) = E(j Nc + 1)
where (k i) is the discretetime impulse function, being 1 for j = i and zero elsewhere,
and E is the step function, i.e.
0 v<0
E[v] =
1 v0
28
for i = 0, . . . , Nc 1
for i = 0, . . . , m1 1
and
m1 + (k) = u(k + m k)
for = 1, . . . , nbl
which corresponds to the Nc 1 + nbl free parameters. Figure 2.4 gives the basis functions
in case of a control horizon approach and a blocking approach, respectively.
SNc1
SM
SNc2
SM1
SNc3
SM2
SNc4
SM3
SNc5
SM4
S2
S2
S1
S1
S0
0
Nc 1
S0
N 1
(b) blocking
Figure 2.4: Basis functions in case of a control horizon approach (a) and a blocking approach (b).
To obtain a tractable optimization problem, we can choose a set of basis functions Si (k)
that are orthogonal. These orthogonal basis functions will be discussed in section 4.3.
2.5
Optimization
29
with linear constraints and a quadratic (2norm) performance index the solution can be
found using quadratic programming algorithms. If a 1norm or  norm performance index
is used, linear programming algorithms will provide the solution (Genceli & Nikolaou [30],
Zheng & Morari [80]). Both types of algorithms are convex and show fast convergence.
In some cases, the optimization problem will have an empty solution set, so the problem
is not feasible. In that case we will have to relax one or more of the constraints to nd a
solution leading to an acceptable control signal. A prioritization among the constraints is
an elegant way to tackle this problem.
Procedures for dierent types of MPC problems will be presented in chapter 5.
2.6
Predictive control uses the receding horizon principle. This means that after computation of the optimal control sequence, only the rst control sample will be implemented,
subsequently the horizon is shifted one sample and the optimization is restarted with new
information of the measurements. Figure 2.5 explains the idea of receding horizon. At
PAST
FUTURE
r(k)
y(k)
u(k)
Nm
k
Nc
N
k+j
30
impulse response models or state space models. In this course we will consider a state
space representation of the model. In appendix D it is shown how other realizations can
be transformed in a state space model.
Chapter 3
Prediction
In this chapter we consider the concept of prediction. We consider the following standard
model:
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
p(k) = Cp x(k) + Dp1 e(k) + Dp2 w(k) + Dp3 v(k)
(3.1)
The prediction signal p(k) can be any signal which can be formulated as in expression (3.1)
(so the state x(k), the output signal y(k) or tracking error r(k) y(k) ). We will come
back on the particular choices of the signal p in Chapter 4.
The control law in predictive control is, no surprise, based on prediction. At each time
instant k, we consider all signals over the horizon N. To do so we make predictions at time
k of the signal p(k + j) for j = 1, 2, . . . , N. These predictions, denoted by p(k + jk), are
based on model (3.1), using information given at time k and future values of the control
signal v(k+jk). The predictions p(k+jk) of the signal p(k+j) are based on the knowledge
at time k and the future control signals v(kk), v(k+1k), . . . , v(k+N 1k).
At time instant k we dene the signal vector p(k) with the predicted signals p(k + jk),
the signal vector v(k) with the future control signals v(k + jk) and the signal vector w(k)
with the future reference signals w(k + jk) in the interval 0 j N 1 as follows
w(kk)
p(kk)
v(kk)
w(k+1k)
p(k+1k)
v(k+1k)
=
p(k) =
(3.2)
v(k) =
w(k)
..
..
..
.
.
.
w(k+N 1k)
p(k+N 1k)
v(k+N 1k)
The goal of the prediction is to nd an estimate for the prediction signal p(k), composed
p3 such that
of a freeresponse signal p0 (k) and a matrix D
p3 v(k)
p(k) = p0 (k) + D
p3 v(k) is the part of p(k), which is due to the selected control signal
where the term D
v(k + jk) for j 0 and p0 (k) is the so called freeresponse. This freeresponse p0 (k) is
31
32
CHAPTER 3. PREDICTION
the predicted output signal when the future input signal is put to zero (v(k + jk) = 0 for
j 0) and depends on the present state x(k), the future values of the known signal w,
and the characteristics of the noise signal e.
3.1
Noiseless case
The prediction mainly consists of two parts: Computation of the output signal as a result
of the selected control signal v(k + jk) and the known signal w(k + j) and a prediction of
the response due to the noise signal e(k + j). In this section we will assume the noiseless
case (so e(k + j) = 0) and we only have to concentrate on the response due to the present
and future values v(k + jk) of the input signal and known signal w(k + j).
Consider the model from (3.1) for the noiseless case (e(k) = 0):
x(k + 1) = Ax(k) + B2 w(k) + B3 v(k)
p(k) = Cp x(k) + Dp2 w(k) + Dp3 v(k)
For this state space model the prediction of the state is computed using successive substitution (Kinnaert [34]):
x(k + jk) = Ax(k + j 1k) + B2 w(k + j 1k) + B3 v(k + j 1k) =
A2 x(k + j 2k) + A B2 w(k + j 2k) + B2 w(k + j 1k) +
+A B3 v(k + j 2k) + B3 v(k + j 1k)
..
.
j
j
Ai1 B2 w(k + j ik) +
Ai1 B3 v(k + j ik)
= Aj x(k) +
i=1
i=1
j
i=1
Now dene
Cp =
Cp
Cp A
Cp A2
..
.
Cp AN1
p2
D
Dp2
Cp B2
Dp2
Cp AB2
..
.
Cp B2
Cp AN2 B2
0
0
..
..
..
.
.
.
..
.
0
0
..
. Dp2
0
Cp B2 Dp2
(3.3)
p3
D
33
Dp3
Cp B3
Dp3
Cp AB3
..
.
Cp B3
Cp AN2 B3
0
0
..
..
..
.
.
.
..
.
0
0
..
. Dp3
0
Cp B3 Dp3
(3.4)
then the vector p(k) with the predicted output values can be written as
p2 w(k)
p3 v(k)
p(k) = Cp x(k) + D
+D
p3 v(k)
= p0 (k) + D
p2 w(k)
In this equation p0 (k) = Cp x(k) + D
3.2
Noisy case
As was stated in the previous section, the second part of the prediction scheme is making
a prediction of the response due to the noise signal e. If we know the characteristics of
the noise signal, this information can be used for prediction of the expected noise signal
over the future horizon. In case we do not know anything about the characteristics of
the disturbances, the best assumption we can make is the assumption that e is zeromean
white noise. In these lecture notes we use this assumption of zeromean white noise e, so
the best prediction for future values of signal e(k + j), j > 0 will be zero (
e(k + j) = 0 for
j > 0).
Consider the model given in equation (3.1):
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
p(k) = Cp x(k) + Dp1 e(k) + Dp2 w(k) + Dp3 v(k)
the prediction of the state using successive substitution of the state equation is given by
34
CHAPTER 3. PREDICTION
(Kinnaert [34]):
j
j
i=1
j
i=1
j
i=1
p(k + jk) = Cp x(k + jk) + Dp1e(k + jk) + Dp2 w(k + jk) + Dp3 v(k + jk)
The term e(k + mk) for m > 0 is a prediction of e(k + m), based on measurements at time
k. If the signal e is assumed to be zero mean white noise, the best prediction of e(k + m),
made at time k, is
e(k + mk) = 0
for m > 0
j
i=1
j
i=1
p(k + jk) = Cp x(k + jk) + Dp2 w(k + jk) + Dp3 v(k + jk)
= Cp Aj x(k) + Cp Aj1B1 e(kk) +
j
+
Cp Ai1 B2 w(k + j ik) + Dp2 w(k + jk)
i=1
j
i=1
p1
D
Dp1
Cp B1
Cp AB1
..
.
(3.5)
Cp AN2 B1
then, the vector p with the predicted output values can be written as
p1 e(kk) + D
p2 w(k)
p3 v(k)
+D
p(k) = Cp x(k) + D
p3 v(k)
= p0 (k) + D
(3.6)
35
p0 (k) = Cp x(k) + D
(3.7)
Now, assuming D11 to be invertible, we derive an expression for e(k) from (3.1):
1
e(kk) = D11 y(k) C1 x(k) D12 w(k)
in which y(k) is the measured process output and x(k) is the estimate of state x(k) at
previous time instant k 1.
p1 e(k) + D
p2 w(k)
=
p0 (k) = Cp x(k) + D
p1 D 1 y(k) C1 x(k) D12 w(k) + D
p2 w(k)
=
= Cp x(k) + D
11
(3.8)
(3.9)
p1 D 1 C1 ) x(k) + D
p1 D 1 y(k) + (D
p2 D
p1 D 1 D12 Ew ) w(k)
= (Cp D
(3.10)
11
11
11
where Ew = I 0 0 is a selection matrix such that
w(k) = Ew w(k)
1
0
C1
C1 A 0.7
1.0
C1 =
C1 A2 = 0.39 0.7
C1 A3
0.203 0.39
1
D11
11 = C1 B1 = 1
D
C1 AB1 0.5
C1 A2 B1
0.25
36
CHAPTER 3. PREDICTION
12
D
13
D
3.3
0
0
0
D12
C1 B2
D12
0
0
=
C1 AB2 C1 B2
D12
0
C1 A2 B2 C1 AB2 C1 B2 D12
0
0
0
D13
C1 B3
D13
0
0
=
C1 AB3 C1 B3
D13
0
C1 A2 B3 C1 AB3 C1 B3 D13
0
0
0.1
0
=
0.02 0.1
0.004 0.02
0
0 0
1
0 0
=
0.7 1 0
0.39 0.7 1
0
0
0
0.1
0
0
0
0
0
0
0
0
In this chapter we have discussed the prediction of the signal p(k), based on the vector
w(k)
w(k+1)
w(k)
...
w(k+N 1)
This means that we assume future values of the reference signal or the disturbance signals
to be known at time k. In practice this is often not the case, and estimations of these
signals have to be made.
w(kk)
w(k+1k)
w(k)
...
w(k+N
1k)
The estimation can be done in various ways. One can use heuristics to compute the
estimates w(k
+ jk), or one can use an extrapolation technique. With extrapolation we
can construct the estimates of w beyond the present time instant k. It is similar to the
process of interpolation, but its results are often subject to greater uncertainty. We will
discuss three types of extrapolation, zeroth order, rst order and polynomial extrapolation.
We assume the past values of w to be known, and compute the future values of w on the
basis of the past values of w.
Zeroth order extrapolation:
In an zeroth order extrapolation we assume w(k + j) to be constant for j 0, and so
w(k
+ j) = w(k 1) for j 0. This results in
w(k 1)
w(k 1)
w(k)
...
w(k 1)
37
2w(k 1) w(k 2)
3w(k 1) 2w(k 2)
w(k)
...
(N + 1)w(k 1) Nw(k 2)
Polynomial extrapolation:
A polynomial curve can be created through more than one or two past values of w. For an
nth order extrapolation we use n + 1 past values of w. Note that high order polynomial
extrapolation must be used with due care. The error estimate of the extrapolated value
will grow with the degree of the polynomial extrapolation. Note that the zeroth and rst
order extrapolations are special cases of polynomial extrapolation.
The quality of extrapolation
Typically, the quality of a particular method of extrapolation is limited by the assumptions about the signal w made by the method. If the method assumes the signal is smooth,
then a nonsmooth signal will be poorly extrapolated. Even for proper assumptions about
the signal, the extrapolation can diverge exponentially from the real signal. For particular problems, additional information about w may be available which may improve the
convergence.
In practice, the zeroth order extrapolation is mostly used, mainly because it never diverges,
but also due to lack of information about the future behavior of the signal w.
38
CHAPTER 3. PREDICTION
Chapter 4
Standard formulation
In this chapter we will standardize the predictive control setting and formulate the standard predictive control problem. First we consider the performance index, secondly the
constraints.
4.1
In predictive control various types of performance indices are presented, in which either
the input signal is measured together with the state (LQPC), or the input increment
signal is measured together with the tracking error (GPC). This leads to two dierent
types of performance indices. We will show that both can be combined into one standard
performance index:
The standard predictive control performance index:
In this course we will use the state space setting of LQPC as well as the use of a reference
signal as is done in GPC. Instead of the state x or the output y we will use a general
signal z in the performance index. We therefore adopt the standard predictive control
performance index:
J(v, k) =
N
1
zT (k + jk)(j)
z (k + jk)
(4.1)
j=0
where z(k + jk) is the prediction of z(k + j) at time k and (j) is a diagonal selection
matrix with ones and zeros on the diagonal. Finally, we use a generalized input signal
v(k), which can either be the input signal u(k) or the input increment signal u(k) and we
use a signal w(k), which contains all known external signals, such as the reference signal
r(k) and the known disturbance signal d(k). In most cases, the noise signal e is ZMWN
although in some cases we actually have integrated ZMWN. The state signal x, input signal
v, output signal y, noise signal e, external signal w and performance signal z are related
39
40
(4.2)
(4.3)
(4.4)
We will show how both the LQPC and GPC performance index can be rewritten as a
standard performance index.
The LQPC performance index:
In the LQPC performance index (see Section 2.3) the state is measured together with the
input signal. The fact that we use the input in the performance index means that we will
use an IOmodel. The LQPC performance index is now given by
J(u, k) =
N
xTo (k
+ jk)Q
xo (k + jk) +
N
j=1
j=Nm
(4.6)
(4.7)
v(k) = u(k)
B1 = Ko
w(k) = d(k)
B2 = Lo
e(k) = eo (k)
B3 = Bo
C1 = Co
D11 = DH
D12 = DF
1/2
1/2
1/2
Q Ao
Q Ko
Q Lo
C2 =
D21 =
D22 =
0
0
0
1 (j) 0
(j) =
0
I
0 for 0 j < Nm 1
1 (j) =
I for Nm 1 j N 1
D23 =
Q1/2 Bo
R1/2
41
N
T
yp (k + jk) r(k + j)
yp (k + jk) r(k + j)
j=Nm
+2
N
uT (k + j 1k)u(k + j 1k)
(4.8)
j=1
where N Nm 1 and R. The output y(k) and increment input u(k) are related
by the IIO model
xi (k + 1) = Ai xi (k) + Ki ei (k) + Li di(k) + Bi u(k)
y(k) = Ci xi (k) + DH ei (k)
(4.9)
(4.10)
and the weighted output yp (k) = P (q)y(k) is given in state space representation
xp (k + 1) = Ap xp (k) + Bp y(k)
yp (k) = Cp xp (k) + Dp y(k)
(4.11)
(4.12)
42
1 (j) 0
(j) =
0
I
0 for
1 (j) =
I for
0 j < Nm 1
Nm 1 j N 1
N
(k + jk)
(k + jk) +
j=Nm
N
2 uT (k + j 1k)u(k + j 1k)
(4.13)
j=1
where i (k), i = 1, . . . , m contribute to the performance index only if yi(k)ri (k) > max,i :
min
i (k)max,i
There holds N Nm 1 and R. The output y(k) and input u(k) are related by the
IO model
xo (k + 1) = Ao xo (k) + Ko eo (k) + Lo do (k) + Bo u(k)
y(k) = Co xo (k) + DH eo (k) + DF do (k)
(4.14)
(4.15)
do (k)
w(k) =
e(k) = eo (k)
r(k + 1)
A = Ao
B1 = Ko
B2 = Lo 0
C1 = Co D11 = DH D12 = DF 0
B3 =
Bo 0
43
I
0
Co Bo I
D23 =
I 0
0 I
max
=
0 I
max
then:
x(k + 1)
y(k)
z(k)
(k)
=
=
=
=
(4.16)
The proof is left to the reader. Note that we need a constraint on . This can be incorporated through the techniques presented in the next section.
Both the LQPC performance index and the GPC performance index are frequently used in
practice and in literature. The LQPC performance index resembles the LQ optimal control
performance index. The choices of the matrices Q and R can be done in a similar way as
in LQ optimal control. The GPC performance index originates from the adaptive control
community and is based on SISO polynomial models.
The standard predictive control performance index:
We have found that the GPC, LQPC and the zone performance indices can be rewritten
into the standard form:
J(v, k) =
N1
zT (k + jk)(j)
z (k + jk)
(4.17)
j=0
If we dene the signal vector z(k) with the predicted performance signals z(k + jk) in the
as follows
interval 0 j N 1 and the block diagonal matrix
(0)
0
...
0
z(kk)
0
z(k+1k)
(1)
0
=
(4.18)
z(k) =
..
..
..
..
.
.
.
.
0
0
. . . (N 1)
z(k+N 1k)
then the standard predictive control performance index (4.17) can be rewritten in the
following way:
J(v, k) =
N1
zT (k + jk)(j)
z (k + jk)
j=0
z (k)
= zT (k)
where z(k) can be computed using the formulas from Chapter 3:
21 e(k) + D
22 w(k)
23 v(k)
z(k) = C2 x(k) + D
+D
(4.19)
44
with
C2 =
C2
C2 A
C2 A2
..
.
22
D
C2 AN1
21
D
D21
C2 B1
C2 AB1
..
.
C2 B2
D22
C2 AB2
..
.
C2 B2
0
0
..
..
..
.
.
.
..
.
0
0
..
. D22
0
C2 B2 D22
C2 AN2 B2
C2 AN2 B1
4.2
D22
23
D
D23
C2 B3
D23
C2 AB3
..
.
C2 B3
C2 AN2 B3
0
0
..
..
..
.
.
.
..
.
0
0
..
. D23
0
C2 B3 D23
Handling constraints
As was already discussed in chapter 1, in practical situations there will be signal constraints. These constraints on control, state and output signals are motivated by safety
and environmental demands, economical perspectives and equipment limitations. There
are two types of constraints:
Equality constraints:
In general, we impose constraints on the actual inputs and states. Clearly, for future
constraints we often replace the actual input and state constraints by constraints
on the predicted state and input. However, in particular for equality constraints it
often occurs that we actually want to impose constraints directly on the predicted
and state and input. This is due to the fact, that equality constraints are usually
motivated by the control algorithm. For example, to decrease the degrees of freedom,
we introduce the control horizon Nc , and to guarantee stability we can add a state
endpoint constraint to the problem.
Hence we use:
+ jk) = 0
Aj (k
+ jk) is the prediction of (k + j) at time k. The matrix
for j = 1, . . . , N, where (k
Aj is used to indicate that certain equality constraints are only active for specic j.
For instance, in case of an endpoint penalty, we would choose AN = I and Aj = 0
for j = 0, . . . , N 1.
45
In this course we look at equality constraints in a generalized form. We can stack all
constraints at time k on predicted inputs or states in one vector:
+ 1k)
A1 (k
A (k
2 + 2k)
(k) =
..
AN (k + Nk)
Clearly, if some of the Ai are zero or do not have full rank we can reduce the size of
this vector by only looking at the active constraints.
Using this framework, we can express the equality constraints in the form:
31 e(k) + D
32 w(k)
33 v(k) = 0
(k)
= C3 x(k) + D
+D
(4.20)
where w(k)
is a vector consisting
of w(k), w(k + 1), . . . , w(k + N). Similarly, v(k) contains the predicted future input
signals, i.e. v(k) is a vector consisting of v(k), v(k + 1k), . . . , v(k + Nk).
Inequality constraints:
Inequality constraints are due to physical, economical or safety limitations. We
actually prefer to work with constraints of a (k) which consists of actual inputs and
states:
(k) (k) k
Since these are unknown we have to work with predictions, i.e. we have constraints
of the form:
+ jk) (k + j)
(k
for j = 0, . . . , N. Again, we stack all out inequality constraints:
(kk)
(k
+ 1k)
=
(k)
..
.
+ Nk)
(k
41 e(k) + D
42 w(k)
43 v(k) (k)
(k)
= C4 x(k) + D
+D
where (k)
does not depend on future input signals.
(4.21)
46
(k)
=
1 (k)
2 (k)
=0
1 (k)
(k)
= ...
4 (k)
1 (k)
..
4 (k)
We will show how the transformation is done for some common equality and inequality
constraints.
4.2.1
Equality constraints
0 0 I I
0 0 I 0
.. .. ..
..
. . .
.
0 0 I 0
0
. . ..
. .
I
..
..
. 0
.
0 I
0
v(kk)
..
.
v(k + Nc 2k)
v(k + Nc 1k)
v(k + Nc k)
v(k + Nc + 1k)
..
.
v(k + N 1k)
33 v(k)
= D
= 0
31 = 0 and D
32 = 0 , the control horizon constraint can be translated
By dening C3 = 0 , D
into the standard form of equation (4.20).
47
v(k + Nc k)
v(k + Nc + 1k)
..
.
0 0 I
0 0 0
= .
.. ..
..
. .
v(k + N 1k)
0 0 0
v(kk)
..
.
. . .. v(k + Nc 1k)
. .
I
v(k + Nc k)
..
..
. 0
.
v(k + Nc + 1k)
..
0 I
.
0
v(k + N 1k)
33 v(k)
= D
= 0
31 = 0 , D
32 = 0 , the control horizon constraint can be translated
By dening C3 = 0 , D
into the standard form of equation (4.20).
where the matrix Dssx is related to the steadystate properties of the system. The state
endpoint constraint now becomes
x(k + Nk) Dssx w(k)
=
N
= 0
By dening
C3 = AN
32 =
D
31 = AN 1 B1
D
33 =
D
AN 1 B2 AN 2 B2 B2
AN 1 B3 AN 2 B3 B3
Dssx
the state endpoint constraint can be translated into the standard form of equation (4.20).
48
4.2.2
Inequality constraints
(4.22)
(4.23)
Linear inequality constraints on output y(k) and state x(k) can easily be translated into
linear constraints on the input vector v(k). By using the results of chapter 3, values of
(k + j) can be predicted and we obtain the inequality constraint:
41 e(k) + D
42 w(k)
43 v(k) (k)
(k)
= C4 x(k) + D
+D
where
(k) =
(k)
+ 1)
(k
..
.
(k) =
C4 =
C4
C4 A
C4 A2
..
.
42
D
C4 AN1
41
D
D41
C4 B1
C4 AB1
..
.
(k + N 1)
+ N 1)
(k
(k)
(k + 1)
..
.
(4.24)
C4 AN2 B1
In this way we can formulate:
43
D
D42
C4 B2
D42
C4 AB2
..
.
C4 B2
C4 AN2 B2
D43
C4 B3
=
C4 AB3
..
.
C4 AN2 B3
0
0
..
..
..
.
.
.
..
.
0
0
..
. D42
0
C4 B2 D42
0
D43
C4 B3
0
0
..
.
..
..
.
.
..
.
0
0
..
. D43
0
C4 B3 D43
49
u(k 1) + umax
umax
D43 =
(k) =
..
umax
C4 = 0
41 = 0
D
Then by dening
0
..
..
.
I I
.
. . ..
..
..
. .
.
.
0
.. . .
..
..
. 0
.
.
.
0 0 I I
I
42 = 0
D
the input increment constraint can be translated into the standard form of equation (4.21).
Constraint on the input signal for IIO model:
Assume that we have the following constraint:
u(k + jk) umax for j = 0, . . . , N 1
Consider the case of an IIO model, so v(k) = u(k). We note:
u(k + j) = u(k 1) +
j
i=0
u(k + i)
50
Then by dening
u(k 1) + umax
u(k 1) + umax
(k)
=
..
.
u(k 1) + umax
43 =
D
I
I
..
.
I
0
. . ..
. .
I
..
..
. 0
.
I I
0
the input constraint can be translated into the standard form of equation (4.21).
4.3
On page 27 we introduced a parametrization of the input signal using a set of basis functions:
v(k+jk) =
M
Si (j) i (k)
i=0
where Si (j) are the basis functions. A special set of basis functions are orthogonal basis
functions [69], which have the property:
0 for i = j
Si (k)Sj (k) =
1 for i = j
k=
Ab Bb Cb Bb Db Cb Bb Dbnv 2 Cb
0
Ab
Bb Cb Bb Dbnv 3 Cb
.
..
..
..
.
.
Ab
.
Av = ..
.
.
.
..
..
..
Bb Cb
0
0
Ab
Cv = C Db Cb Db2 Cb Dbnv 1 Cb
and let
Si (k) = Cv Akv ei+1
51
where
ei =
0 0 1 0 0
T
with the 1 at the ith position. Then, the functions Si (k) i = 0, . . . , M form an orthogonal
basis.
T
When we dene the parameter vector = 1 2 M , the input vector is given
by the expression
v(k+jk) = Cv Ajv
(4.25)
The number of degrees of freedom in the input signal is now equal to dim() = M = nv n1 ,
the dimension of parameter vector. One of the main reasons to apply orthogonal basis
functions is to obtain a reduction of the optimization problem by reducing the degrees of
freedom, so usually M will be much smaller than horizon N.
Consider an input sequence of N future values:
Cv
v(kk)
v(k + 1k) Cv Av
v(k) =
=
= Sv
..
..
.
.
N
v(k + Nk)
Cv Av
Suppose N is such that Sv has full columnrank, and a leftcomplement is given by Sv
(so Sv Sv = 0, see appendix C for a denition). Now we nd that
Sv v(k) = 0
(4.26)
We observe that the orthogonal basis function can be described by equality constraint 4.26.
4.4
From the previous section we learned that the GPC and LQPC performance index can be
formulated in a standard form (equation 4.1). In fact most common quadratic performance
indices can be given in this standard form. The same holds for many kinds of linear equality
and inequality constraints, which can be formulated as (4.21) or (4.20). Summarizing we
come to the formulation of the standard predictive control problem:
Denition 6 Standard Predictive Control Problem (SPCP) Consider a system given by
the statespace realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
(4.27)
(4.28)
(4.29)
52
N1
zT (k + jk)(j)
z (k + jk)
j=0
z (k)
= zT (k)
is minimized subject to the constraints
31 e(k) + D
32 w(k)
33 v(k) = 0
+D
(k)
= C3 x(k) + D
(4.30)
41 e(k) + D
42 w(k)
43 v(k) (k)
+D
(k)
= C4 x(k) + D
(4.31)
where
z(k) =
z(kk)
z(k+1k)
..
.
v(k) =
z(k+N 1k)
v(kk)
v(k+1k)
..
.
w(k)
w(kk)
w(k+1k)
..
.
w(k+N 1k)
v(k+N 1k)
How the SPCP problem is solved will be discussed in the next chapter.
4.5
Examples
Cp =
1 0
Dp =
1.4
0 1 1.4 1.4 0
0.48
0 0 0.48 0.48 0
D
B
Ap Bp Ci
p
H
1
0 0
1
1
0
=
=
A=
B1 =
0
Ai
Ki
1
0 0
0
0.7 1
0.2
0 0
0
0.1 0
4.5. EXAMPLES
53
B2 =
C1 =
0 0
Li 0
0 Ci
0
0
0
0.1
0.05
0
0
0
0
0
0 0 1 1 0
B3 =
0
Bi
0
0
0
1
0
Cp Ap Cp Bp Ci + Dp Ci Ai
0 1 0.4 0.3 1
=
C2 =
0
0
0 0
0
0 0
0.6
Cp Bp + Dp Ci Ki
=
D21 =
0
0
Dp Ci Li I
Dp Ci Bi
1
0.1 1
=
=
D23 =
D22 =
I
0.001
0
0
0
0
The prediction matrices are constructed according to chapter 3. We obtain:
0 1 0.4
0.3
1
0.6
0 0
0
0
0
0
0 0 0.08 0.19
0.18
0.3
0 0
0
0
0
0
21 =
D
C2 =
0 0 0.08 0.183 0.19
0.21
0 0
0
0
0
0
22
D
23
D
0.1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0.02 0
0.1 1
0
0
0
0
0
0
0
0
0
0
0
0
0.004 0 0.02 0
0.1 1 0
0
0
0
0
0
0
0
0
0
0.0088 0 0.004 0 0.02 0 0.1 1
0
0
0
0
0
0
0
0
1
0
0
0
0.001
0
0
0
0.3
1
0
0
0
0.001
0
0
0.19
0.3
1
0
0
0
0.001
0
0.183 0.19
0.3 1.000
0
0
0
0.001
54
1 u(k 1)
u(k 1) + umax
u(k 1) + umax 1 u(k 1)
(k)
=
u(k 1) + umax = 1 u(k 1)
u(k 1) + umax
1 u(k 1)
43
D
1
1
=
1
1
0
1
1
1
0
0
1
1
0
0
0
1
41 = 0 and D
42 = 0.
Further C4 = 0, D
31 , D
32 and
Next we compute the matrices C3 , D
constraint (N = 4, Nc = 2):
0
0
0
0
0
31 =
C3 =
D
0 0 0 0 0
0
0
0
0
33 = 0
32 =
D
D
0 0 0 0
0
0 1 0
0 0 1
Chapter 5
Solving the standard predictive
control problem
In this chapter we solve the standard predictive control problem as dened in section 4.4.
We consider the system given by the statespace realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
(5.1)
(5.2)
(5.3)
Here y contains the measurements so that at time k we have available all past inputs
v(0), . . . , v(k1) and all past and current measurements y(0), . . . , y(k). z contains variables
that we want to control and hence appear in the performance index.
The performance index and the constraints are given by
J(v, k) =
N1
zT (k + jk)(j)
z (k + jk)
(5.4)
j=0
z (k)
= zT (k)
(5.5)
31 e(kk) + D
32 w(k)
33 v(k) = 0
+D
(k)
= C3 x(kk) + D
41 e(kk) + D
42 w(k)
43 v(k) (k)
+D
(k)
= C4 x(kk) + D
(5.6)
(5.7)
Note that the performance index at time k contains estimates of z(k + j) based on all
inputs v(0), . . . , v(k + j) as well as past and current measurements y(0), . . . y(k). Clearly
v(k), . . . , v(k + j) are not yet available and need to be chosen to minimize the performance
Assumptions
To be able to solve the standard predictive control problem, the following assumptions are
made:
33 has full row rank.
1. The matrix D
23
D
T has full rank, where
= diag((0), (1), . . . , (N 1))
2. The matrix D
23
nz N nz N
R
.
3. The pair (A, B3 ) is stabilizable.
Assumption 1 basically guarantees that we have only equality constraints on the input.
If the system is subject to noise it is only in rare possible to guarantee that equality
constraints on the state are satised. Assumption 2 makes sure that the optimization
problem is strictly convex which guarantees a unique solution to the optimization problem.
Assumption 3 is necessary to be able to control all unstable modes of the system.
5.1
5.1.1
N1
zT (k + jk)(j)
z (k + jk)
(5.8)
j=0
z (k)
= zT (k)
(5.9)
for the system as given in (5.1), (5.2) and (5.3), and without equality or inequality constraints. In this section we consider the case for nite N. The innite case will be treated
in section 5.2.
57
The performance index, given in (5.9), is minimized for each time instant k, and the optimal
future control sequence v(k) is computed. We use the prediction law
21 e(kk) + D
22 w(k)
23 v(k)
+D
z(k) = C2 x(kk) + D
23 v(k)
= z0 (k) + D
(5.10)
(5.11)
z0 (k) = C2 x(kk) + D
is the freeresponse signal given in equation (3.7). Without constraints, the problem becomes a standard least squares problem, and the solution of the predictive control problem
can be computed analytically. We assume the reference trajectory w(k + j) to be known
over the whole prediction horizon j = 0, . . . , N. Consider the performance index (5.9).
Now substitute (5.11) and dene matrix H, vector f (k) and scalar c(k) as
T
H = 2D
23 D23 ,
T
z0 (k)
f (k) = 2D
0T (k)
23 z0 (k) and c(k) = z
We obtain
(k) + 2
T
T
z0 (k) =
J(v, k) = vT (k)D
v T (k)D
0T (k)
23 D23 v
23 z0 (k) + z
1 T
=
v (k)H v(k) + vT (k)f (k) + c(k)
2
The minimization of J(v, k) has become a linear algebra problem. It can be solved, setting
the gradient of J to zero:
J
= H v(k) + f (k) = 0
v
By assumption 2, H is invertible and hence the solution v(k) is given by
v(k) = H 1 f (k)
1
T
T
D
= D23 D23
23 z0 (k)
1
D
23
T
C2 x(kk) + D
21 e(kk) + D
22 w(k)
T
D
= D
23
23
Because we use the receding horizon principle, only the rst computed change in the control
signal is implemented and at time k+1 the computation is repeated with the horizon moved
one time interval. This means that at time k the control signal is given by
v(kk) = I 0 . . . 0 v(k) = Ev v(k)
We obtain
v(kk) = Ev v(k)
1
21 e(kk) + D
22 w(k)
D
23
T
C2 x(kk) + D
T
= Ev D
D
23
23
= F x(kk) + De e(kk) + Dw w(k)
=
=
=
=
1 T
T
Ev ( D
23 D23 ) D23 C2
T
1 T
Ev ( D
23 D23 ) D23 D21
T
T
23
23
D23 )1 D
D22
Ev ( D
I 0 ... 0
N1
zT (k + jk)(j)
z (k + jk)
(5.12)
j=0
(5.13)
where
F
De
Dw
Ev
=
=
=
=
T
1 T
Ev ( D
23 D23 ) D23 C2
1 T
T
Ev ( D
23 D23 ) D23 D21
1 T
T
Ev ( D
23 D23 ) D23 D22
I 0 ... 0
(5.14)
(5.15)
(5.16)
(5.17)
Note that for the computation of optimal input signal v(k), an estimate x(kk) and e(kk),
for the state x(k) and the noise signal e(k) respectively, have to be available. This can
easily be done with an observer and will be discussed in section 5.3.
5.1.2
N1
zT (k + jk)(j)
z (k + jk)
(5.18)
j=0
for the system as given in (5.1), (5.2) and (5.3), and with equality constraints (but without
inequality constraints). In this section we consider the case for nite N. In section 5.2.4
we discuss the case where N = and a nite control horizon as the equality constraint.
59
31 e(kk) + D
32 w(k)
33 v(k) = 0
(k)
= C3 x(kk) + D
+D
We will solve this problem by elimination of the equality constraint. We therefore give the
following lemma:
Lemma 10
Consider the equality constraint
C =
(5.19)
C r = V2
V1
so
= V1 V2
=
V2T
= U =
and so we have to choose = 1 U T , while can be any vector (with the proper
dimension). Therefore, all satisfying (5.19) are given by
= V1 1 U T + V2 = C r + C r
(k)
= 0 are given by:
31 e(kk) + D
32 w(k)
r
r C3 x(kk) + D
+D
(k)
v(k) = D
33
33
r
= vE (k) + D
33 (k)
r
Note that by choosing v(k) = vE (k) + D
33 (k) the equality constraint is always satised
while all remaining freedom is still in the vector (k), which is of lower dimension. So
by this choice the problem has reduced in order while the equality constraint is always
satised for all
(k).
r
Substitution of v(k) = vE (k) + D
33 (k) in (5.10) gives:
z(k) =
=
+
=
21 e(kk) + D
22 w(k)
23 v(k)
C2 x(kk) + D
+D
r C3 )x(kk) + (D
r D
23 D
21 D
23 D
(C2 D
33
33 31 )e(kk)
r D
r
22 D
23 D
23 D
(D
+D
33 32 )w(k)
33 (k)
r
33
23 D
(k)
zE (k) + D
23 D
21 D
23 D
r
zE (k) = (C2 D
33
33 31 )e(kk) + (D22 D23 D33 D32 )w(k)
The performance index is given by
z (k) =
J(k) = zT (k)
T
r
r
23 D
23 D
zE (k) + D
= (
zE (k) + D
33 (k)) (
33 (k)) =
T
r (k)
r )T D
=
T (k) (D
23 D23 D33
33
r
D
23 D
zE (k) =
ET (k)
+2
zET (k)
33 (k) + z
1 T
=
+ f T (k)
(k) + c(k)
H
2
where the matrices H, f and c are given by
(5.20)
(5.21)
(5.22)
(5.23)
r )T D
T
r
H = 2 (D
23 D23 D33
33
T
r )T D
f (k) = 2 (D
23 zE (k)
33
zE
c(k) = zET
T
D23 is invertible. By construction, the right
Note that by assumption 2, the matrix D23
r
1 T
min
H
+ fT
61
which has an analytical solution. In the same way as in section 5.1, we derive:
J
= H
(k) + f (k) = 0
and so
(k) = H 1 f (k)
1
r T T
r
r T T
33
33
33
) D23 D23 D
(D
) D23 zE (k)
= (D
= zE (k)
r
r
33
33
23 D
21 )e(kk)
23 D
C3 C2 )x(kk) + (D
D31 D
= (D
r D
23 D
+(D
33 32 D22 )w(k)
(5.24)
(5.25)
where
1
T
D
23 D
r
T
r )T D
r )T D
(D
= (D
23
23
33
33
33
Substitution gives:
r C3 x(kk) + D
31 e(kk) + D
32 w(k)
r
v(k) = D
+D
33
33 (k)
r C3 C2 )x(kk) +
31 e(kk) + D
32 w(k)
r (D
23 D
r C3 x(kk) + D
+D
= D
33
33
33
r D
23 D
r r
r (D
+D
33 31 D21 )e(kk) + D33 (D23 D33 D32 D22 )w(k)
33
r C2 D
r C3 )x(kk)
23 D
r C3 D
r D
= (D
33
33
33
33
r
r
r
r
33
33
33
33
D31 )e(kk) +
D31 D
+(D
D23 D
D21 D
r
r
r
r
D
D
23 D
+(D
33 32 D33 D22 D33 D32 )w(k)
33
The control signal can now be written as:
v(kk) = Ev v(k)
r D
23 D
r C2 D
r C3 )x(kk) +
r C3 D
= Ev (D
33
33
33
33
r
r
r
r D
+Ev (D33 D23 D33 D31 D33 D21 D
33 31 )e(kk) +
r D
r D
23 D
r
r
+Ev (D
33 32 D33 D22 D33 D32 )w(k)
33
= F x(kk) + De e(kk) + Dw w(k)
where
r
r
r
r
33
33
33
33
C3 D
C3 )
D23 D
C2 D
F = Ev (D
r
r
r
r
r D
23 D
r
r
Dw = Ev (D
33
33 32 D33 D22 D33 D32 )
N1
zT (k + jk)(j)
z (k + jk)
(5.26)
j=0
31 e(kk) + D
32 w(k)
33 v(k) = 0
+D
(k)
= C3 x(kk) + D
is solved by control law
(5.27)
r
r
r
r
r )T D
r )T D
(D
(D
33
23
33
33
23
5.1.3
N1
zT (k + jk)(j)
z (k + jk)
(5.28)
j=0
for the system as given in (5.1), (5.2) and (5.3), and with equality and inequality constraints. In this section we consider the case for nite N. The innite horizon case is
treated in the Sections 5.2.4 and 5.2.5.
63
31 e(kk) + D
32 w(k)
33 v(k) = 0
+D
(k)
= C3 x(kk) + D
41 e(kk) + D
42 w(k)
43 v(k) (k)
+D
(k)
= C4 x(kk) + D
(5.29)
(5.30)
+D
v(k) = D33 C3 x(kk) + D31 e(kk) + D32 w(k)
33 (k)
r
= vE (k) + D
33 (k)
This results in
31 e(kk) + D
32 w(k)
33 v(k) =
+D
(k)
= C3 x(kk) + D
r
31 e(kk) + D
32 w(k)
33 vE (k) + D
33 D
= C3 x(kk) + D
+D
33 (k) = 0
and so the equality constraint is eliminated. The optimization vector
(k) can now be
written as
I (k)
(k) =
E (k) +
1
= H f (k) +
I (k)
(5.31)
(5.32)
(5.33)
where
E (k) = H 1 f (k) is the equality constrained solution given in the previous section, and
i (k) is an additional term to take the inequality constraints into account. Now
consider performance index (5.23) from the previous section. Substitution of (k) =
H 1 f (k) +
I (k) gives us:
1 T
(k) + f T (k)
(k) + c(k) =
(k) H
2
T
1
1
1
=
I (k) H H f (k) +
I (k)
H f (k) +
2
I (k) + c(k) =
+f T (k) H 1 f (k) +
1 T
(k) H
I (k) f T (k)H 1H
I (k)
2 I
1
I (k) + c(k)
+ f T (k)H 1 H H 1 f (k) f T (k)H 1 f (k) + f T (k)
2
1 T
1
=
I (k) f T (k)H 1 f (k) + c(k)
I (k) H
2
2
=
(k) =
=
=
+
I (k)
E (k) +
zE (k) +
I (k)
r
23 D
C3 C2 )x(kk) + (D
r D
23 D
(D
33
33 31 D21 )e(kk)
r
23 D
33 D
32 D
22 )w(k)
(D
+
I (k)
(5.34)
r
r
r I (k) in inequality
Substitution of v(k) = vE (k) + D
E (k) + D
33 (k) = v
33 E (k) + D33
constraint (5.30) gives:
43 D
41 D
43 D
r C3 )x(kk) + (D
r D
(k)
= (C4 D
33
33 31 )e(kk)
r
r
r
33
33
33
42 D
43 D
43 D
43 D
D32 )w(k)
+(D
+D
E (k) + D
I (k)
r
r
r
r
C3 + D
D
C3 D
C2 )x(kk)
43 D
43 D
23 D
43 D
= (C4 D
33
33
33
33
r
r
r D
43 D
21 )e(kk)
+(D41 D43 D33 D31 + D43 D33 D23 D33 D31 D
33
r
r
r
r
33
33
33
33
43 D
43 D
43 D
42 D
D32 + D
D32 D
D23 D
D22 )w(k)
+(D
r
+D43 D33
I (k)
r
43 D
= E (k) + D
33 I (k)
Now let
43 D
r
A = D
33
(k) 2
subject to:
A
I (k) + b (k) 0
where H is as in the previous section and we dropped the term 12 f T (k)H 1 f (k) + c(k),
because it is not dependent on I (k) and so it has no inuence on the minimization. The
above optimization problem, which has to be solved numerically and online, is a quadratic
programming problem and can be solved in a nite number of iterative steps using reliable
and fast algorithms (see appendix A).
The control law of the full SPCP is nonlinear, and cannot be expressed in a linear form as
the unconstrained SPCP or equality constrained SPCP.
Theorem 12
Consider system (5.1)  (5.3) and the constrained (nite horizon) standard predictive control problem (CSPCP) of minimizing performance index
J(v, k) =
N1
j=0
zT (k + jk)(j)
z (k + jk)
(5.35)
65
31 e(kk) + D
32 w(k)
33 v(k) = 0
(k)
= C3 x(kk) + D
+D
41 e(kk) + D
42 w(k)
43 v(k) (k)
+D
(k)
= C4 x(kk) + D
Dene:
43 D
r
A = D
33
r D
43 D
21 )e(kk)
+(D41 D43 D33 D31 + D43 D33 D23 D33 D31 D
33
r
r
r
r
33
33
33
33
43 D
43 D
43 D
42 D
D32 + D
D32 D
D23 D
D22 )w(k)
+(D
(k)
r )T D
T
r
H = 2 (D
23 D23 D33
33
r D
r C3 D
23 D
r C2 D
r C3
F = Ev D
33
33
33
33
23 D
r D
21 D
r D
r D
31 D
31
r D
De = Ev D
33
33
33
33
r D
r D
32 D
32
23 D
r D
22 D
r D
Dw = Ev D
33
33
33
33
r
D = Ev D
33
1
r
T T
r
r )T D
= (D33 ) D23 D23 D33
(D
23
33
I 0 ... 0
Ev =
The optimal control law that optimizes this CSPCP problem is given by:
+ D
I (k)
v(k) = F x(kk) + De e(kk) + Dw w(k)
where
is the solution for the quadratic programming problem
1 T
I (k) H
I (k)
min
I (k) 2
subject to:
I (k) + b (k) 0
A
5.2
In this section we consider the standard predictive control problem for the case where the
prediction horizon is innite (N = ) [59].
5.2.1
Steadystate behavior
In innite horizon predictive control it is important to know how the signals behave for
k . We therefore dene the steadystate of a system in the standard predictive
control formulation:
Denition 13
The quadruple (vss , xss , wss , zss ) is called a steady state if the following equations are
satised:
xss = A xss + B2 wss + B3 vss
zss = C2 xss + D22 wss + D23 vss
(5.36)
(5.37)
To be able to solve the predictive control problem it is important that for every possible wss ,
the quadruple (vss , xss , wss , zss ) exists. For a steady state error free control the performance
signal zss should be equal to zero for every possible wss (this is also necessary to be able to
solve the innite horizon predictive control problem). We therefore look at the existence
of a steady state
(vss , xss , wss , zss ) = (vss , xss , wss , 0)
for every possible wss .
Consider the matrix
I A B3
Mss =
C2 D23
(5.38)
(5.39)
(5.40)
67
is given by
then a direct relation between (xss , vss , wss , yss ) and w(k)
xss
Dssx
vss Dssv
(5.43)
(5.44)
Remark 14 : Sometimes it is interesting to consider cyclic or periodic steadystate behavior. This means that the signals x, w and v do not become constant for k , but are
equal to some cyclic behavior. For a cyclic steady state we assume there is a cycle period P
such that (vss (k + P ), xss (k + P ), wss(k + P ), zss (k + P )) = (vss (k), xss (k), wss (k), zss (k)).
For innite horizon predictive control, we again make the assumption that zss = 0. In this
way, we can isolate the steadystate behavior from the predictive control problem without
aecting the performance index.
5.2.2
For innite horizon predictive control we will encounter an optimization problem with an
innite degrees of freedom, parameterized by v(k + jk), j = 1, . . . , . For the unconstrained case this is not a problem, and we can nd the optimal predictive controller using
v(kk)
v(k + 1k)
v(k) =
..
.
v(k + Ns 1k)
which contains all degrees of freedom at sample time k.
We will consider two ways to expand the input signal beyond the switching horizon:
Control horizon: By taking Ns = Nc , we choose v(k + jk) = vss for j Ns . (Note that
vss = 0 for IIO models).
Basis functions: We can choose v(k + jk) for j Ns as a nite sum of (orthogonal)
basis functions. The future input v(k + jk) can be formulated using equation (4.25):
v(k + jk) = Cv Ajv (kk) + vss
where we have taken added an extra term vss to take the steadystate value into
account. Following the denitions from section 4.3, we derived the relation
Cv
vss
v(kk)
vss
v(k+1k) Cv Av
v(k) =
=
(kk) + .. = Sv (kk) + vss
..
..
.
.
.
s 1
v(k+Ns 1k)
vss
Cv AN
v
Suppose Ns is such that Sv has full columnrank, and a leftcomplement is given by
Sv (so Sv Sv = 0, see appendix C for a denition). Now we nd that
v (k) vss ) = 0
Sv (
or, by dening
vss
vss
vss = ..
.
vss
Dssv
Dssv
..
.
Dssv
(5.45)
ssv w(k)
=D
w(k)
69
we obtain
ssv w(k)
= 0
Sv v(k) Sv D
(5.46)
We observe that the orthogonal basis function can be described by equality constraint
5.46.
s
By dening Bv = AN
v Sv (where Sv is the left inverse of Sv , see appendix C for a
denition), we can express in v
s
AN
(k)
v (kk) = Bv v
and the future input signals beyond Ns can be expressed in terms of v(k):
s
Bv v(k) + vss
v(k + jk) = Cv Ajv (kk) + vss = Cv AjN
v
(5.47)
The description with (orthogonal) basis functions gives us that, for the constrained
case, the input signal beyond Ns can be described as follows:
s
Bv v(k) + Fv (
x(k + jk) xss ) + vss
v(k + jk) = Cv AjN
v
j Ns (5.48)
or
(5.49)
xv (k + Ns k) = Bv v(k)
j Ns (5.50)
xv (k+j +1k) = Av xv (k + jk)
v(k + jk) = Cv xv (k + jk) + Fv (x(k + jk) xss ) + vss j Ns (5.51)
where xv is an additional state, describing the dynamics of the (orthogonal) basis
function beyond the switching horizon. Finally, we should note that v(k) can not be
chosen arbitrarily but is constrained by the equality constraint (5.46).
5.2.3
In this section, we consider the unconstrained innite horizon standard predictive control
problem. To be able to tackle this problem, we will consider a constant external signal w,
a zero steadystate performance signal and the weighting matrix (j) is equal to identity,
so
w(k + j) = wss for all j 0
zss = 0
(j) = I for all j 0
Dene
x (k + jk) = x(k + jk) xss for all j 0
v (k + jk) = v(k + jk) vss for all j 0
zT (k + jk)(j)
z (k + jk)
J(v, k) =
=
j=0
T
xT (k + jk)C2T + e(k + jk)D21
C2 x (k + jk) + D21 e(kk) +
j=0
T
D23 v (k + jk)
2 xT (k + jk)C2T + e(k + jk)D21
T
T
+ v
D23 v (k + jk)
(k + jk)D23
Now dene
T
T
D23 )1 D23
C2 x (k + jk) + D21 e(k + jk) (5.52)
v(k + jk) = v (k + jk) + (D23
T
T
T
T
A B3 (D23
D23 )1 D23
C2 B1 B3 (D23
D23 )1 D23
D12
0
0
B3
=
0
T
T
= (I D23 (D23
D23 )1 D23
) C2 D21
= D23
A =
B
C
(5.53)
71
We obtain
x(k + jk) + B
v(k + jk)
x(k+j +1k) = A
v(k + jk)
z(k+jk) = C x(k + jk) + D
and the performance index becomes
J(v, k) =
zT (k+jk)
z(k+jk)
j=0
v(k+jk) + vT (k+jk)D
v(k+jk)
TD
=
xT (k+jk)C T C x(k+jk) + xT (k+jk)C T D
x(k+jk) + vT (k+jk)R
v(k+jk)
=
xT (k+jk)Q
j=0
where
= C T C = C2 D21 T (I D23 (D T D23 )1 D T ) C2 D21 ,
Q
23
23
=D
TD
= D T D23 .
R
(5.54)
23
T
T
1 T
T
and we used the
fact that D C = D23 (I D23 (D23 D23 ) D23 ) C2 D21 = (D23
D23 ) C2 D21 = 0.
Minimizing the performance index has now becomes a standard LQG problem (Strejc,
[63]), and the optimal control signal v is given by
T P A x(kk)
+ R)
1 B
T P B
v(k) = (B
(5.55)
where P 0 is the smallest positive semidenite solution of the discrete time Riccati
equation
T P A + Q
B
T P B
+ R)
1 B
P = AT P A AT P B(
B)
and invertibility of R.
T
1 T
(D
D
)
D
C
AB
3
23
2
T
1
T
T
T
23
23
P
PB
+ R)
B
+ (D23
F = (B
D23 )1 D23
C2
0
T
1 T
AB
(D
D
)
D
C
3
23
2
T
1
T
T
T
23
23
P
PB
+ R)
B
Dw = ( B
D23 )1 D23
C2 Dssx + Dssv
+ (D23
0
T
1 T
B
B
(D
D
)
D
D
1
3
23
12
T
1
T
T
T
23
23
P
PB
+ R)
B
De = ( B
D23 )1 D23
D21
+ (D23
0
= w(k).
and wss = w(k)
= D T D23 is invertible. This is a strengtened
Note that in the above we have used that R
23
version of assumption 2 as formulated in the beginning of this chapter. With some effort, the above can be derived solely based on assumptions 2 and 3 without relying on
invertibility of R.
The results are summarized in the following theorem:
Theorem 15
Consider system (5.1)  (5.3) with a steadystate (vss , xss , wss , zss ) = (Dssv wss , Dssx wss , wss , 0).
B,
C,
D,
Q,
and R
be dened as in
Further let w(k + j) = wss for j > 0 and let A,
equations (5.53) and (5.54). The unconstrained innite horizon standard predictive control
problem of minimizing performance index
J(v, k) =
zT (k + jk)
z (k + jk)
(5.56)
j=0
(5.57)
where
T
T
D23 )1 D23
C2
AB3 (D23
T
1 T
T
T
+ (D23
D23 )1 D23
C2
F = (B P B + R) B P
0
T
1 T
(D
D
)
D
C
AB
3
23
2
T
1
T
T
T
23
23
P
PB
+ R)
B
+ (D23
Dw = ( B
D23 )1 D23
C2 Dssx + Dssv
0
T
1 T
B
(D
D
)
D
D
B
1
3
23
12
T
1
T
T
T
23
23
P
PB
+ R)
B
+ (D23
De = ( B
D23 )1 D23
D21
0
and P is the solution of the discrete time Riccati equation
T P A + Q
B
T P B
+ R)
1 B
P = AT P A AT P B(
5.2.4
The innite Horizon Standard Predictive Control Problem with control horizon constraint
is dened as follows:
Denition 16 Consider a system given by the statespace realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
(5.58)
(5.59)
(5.60)
73
zT (k + jk)(j)
z (k + jk)
(5.61)
j=0
(5.62)
Nc ,41 e(kk) + D
Nc ,42 w(k)
Nc ,43 v(k) (k)
+D
(k)
= CNc ,4 x(kk) + D
(5.63)
(5.64)
where
w(k)
w(kk)
w(k + 1k)
..
.
v(k) =
w(k + Nc 1k)
v(kk)
v(k + 1k)
..
.
v(k + Nc 1k)
The above problem is denoted as the Innite Horizon Standard Predictive Control Problem
with control horizon constraint.
Theorem 17
Consider system (5.58)  (5.60) and let
Nc <
Nc = diag (0), (1), . . . , (Nc 1)
for j Nc
(j) = ss
w(k + j) = wss
for j Nc
vss
Dssv
=
w(k)
xss
Dssx
I 0 0 w(k)
w(k) =
= Ew w(k)
Let A be partioned into a stable part As and an unstable part Au as follows:
As 0
Ts
A = Ts Tu
Tu
0 Au
(5.65)
Ts
Ts
I 0
Tu
=
.
Tu
0 I
Rnc5 1
(5.66)
diagonal matrix
E = diag(1 , 2 , . . . , nc5) > 0,
(5.67)
E 1 C5 Ts As
E 1 C5 Ts A2
s
Wm =
..
.
1
m
E C5 Ts As
(5.68)
C2
C2 A
CNc ,2 = ..
.
C2 ANc 1
D22
C2 B2
Nc ,22 =
D
..
Nc ,23 , CNc ,3 , D
Nc ,31 , D
Nc ,32 and D
Nc ,33 be given by
D
D21
DNc ,21 = C2 B1
..
. C2 ANc 2 B1
0
D22
..
0
0
..
.
.
D22
(5.69)
C2 ANc2 B2 C2 ANc3 B2
0
0
D23
C2 B3
D23
0
DNc ,23 =
..
..
.
.
. .
.
C2 ANc2 B3 C2 ANc2 B3 D23
Nc ,31 = ANc 1 B1
D
CNc ,3 = ANc
(5.70)
(5.71)
(5.72)
[ ANc 1 B2 B2 ] Dssx
= [ ANc 1 B3 B3 ]
Nc ,32 =
D
Nc ,33
D
75
(5.73)
(5.74)
Let Y be given by
Nc ,33 ,
Y = Tu D
(5.75)
Let Y have full rowrank with a rightcomplement Y r and a rightinverse Y r . Further let
be the solution of the discretetime Lyapunov equation
M
As M
+ T T C T ss C2 Ts = 0
ATs M
s
2
Consider the innitehorizon model predictive control problem, dened as minimizing (5.61)
subject to an input signal v(k+jk), with v(k+jk) = 0 for j Nc and inequality constraints
(5.63) and (5.64).
Dene:
Nc ,43
D
r
A =
Nc ,33 Y
Wn D
r T T
T
T
w
Nc ,42 + D
Nc ,43 D
D
(k)
(5.76)
+
Nc ,32 + D
Nc ,33 D
w ) w(k)
1
Wn (D
where
F = Z1 CNc ,3 Z2 CNc ,2 Z3 Y r Tu CNc ,3
Nc ,31
e = Z1 D
Nc ,31 + Z2 D
Nc ,21 + Z3 Y r Tu D
D
w = Z1 D
Nc ,32 + Z2 D
Nc ,22 + Z3 Y r Tu D
Nc ,32
D
r
r 1
r T T
Ts
Z1 = Y Tu 2Y H (Y ) DNc ,33 TsT M
T
Nc
Z2 = 2Y r H 1(Y r )T D
Nc ,23
T T T M
Ts D
Nc ,33 + 2Y r H 1 (Y r )T D
T
Z3 = 2Y H (Y ) D
Nc ,33 s
Nc ,23 Nc DNc ,23
T
and 1 = 1 1 1 .
The optimal control law that optimizes the inequality constrained innitehorizon MPC
problem is given by:
r
r T
+ D
I (k)
v(k) = F x(kk) + De e(kk) + Dw w(k)
where
F = Ev F
e
De = Ev D
w,
Dw = Ev D
D = Ev Y r
(5.77)
I 2
subject to:
A
I + b (k) 0
and Ev = I 0 . . . 0 such that v(k) = Ev v(k).
In the absence of inequality constraints (5.63) and (5.64), the optimum
I = 0 is obtained
and so the control law is given analytically by:
(5.78)
5.2.5
The Innite Horizon Standard Predictive Control Problem with structured input signals
is dened as follows:
Denition 18 Consider a system given by the statespace realization
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
(5.79)
(5.80)
(5.81)
Consider the nite switching horizon Ns , then the goal is to nd a controller such that the
performance index
J(v, k) =
zT (k + jk)(j)
z (k + jk)
(5.82)
j=0
ssv w(k)
(k)
= Sv D
+ Sv v(k) = 0
Ns ,41 e(kk) + D
Ns ,42 w(k)
Ns ,43 v(k) (k)
+D
(k)
= CNs ,4 x(kk) + D
(5.83)
(5.84)
(5.85)
w(k)
w(kk)
w(k + 1k)
..
.
77
v(k) =
w(k + Ns 1k)
v(kk)
v(k + 1k)
..
.
v(k + Ns 1k)
j Ns
(5.86)
The above problem is denoted as the Innite Horizon Standard Predictive Control Problem
with structured input signals.
Remark 19 : The equality constraint in the above denition is due to the constraint imposed on the structure of the input signal in the equations (5.86) and (5.83) as discussed
in section 5.2.2.
Remark 20 : Equation (5.84) covers inequality constraints up to the switching horizon,
equation (5.85) covers inequality constraints beyond the switching horizon.
Theorem 21
Consider system (5.79)  (5.81) and let
Ns <
Ns = diag (0), (1), . . . , (Ns 1)
(j) = ss
for j Ns
w(k + jk) = wss
for j Ns
vss
Dssv
=
w(k)
xss
Dssx
I 0 0 w(k)
w(k) =
= Ew w(k)
Let AT , CT be given by
A B3 Cv
AT =
0
Av
CT = C2 D23 Cv
(5.87)
Ts
Ts
I 0
Tu
=
Tu
0 I
Rnc5 1
= (C5 Dssx + D52 Ew + D53 Dssv )w(k)
(5.89)
diagonal matrix
E = diag(1 , 2 , . . . , nc5) > 0,
(5.90)
and
C =
C5 D53 Cv
E 1 C Ts As
E 1 C Ts A2
s
Wm =
..
.
1
m
E C Ts As
(5.91)
(5.92)
C2
C2 A
CNs ,2 = ..
.
C2 ANs 1
D22
C2 B2
Ns ,22 =
D
..
Ns ,23 , CNs ,3 , D
Ns ,31 , D
Ns ,32 , and D
Ns ,33 be given by
D
D21
C2 B1
=
D
..
Ns ,21
.
Ns 2
C2 A
B1
0
0
D22
0
..
..
. .
Ns2
Ns3
C2 A
B2 C2 A
B2 D22
D23
0
0
C2 B3
D23
0
DNs ,23 =
..
..
.
.
. .
.
Ns2
Ns2
C2 A
B3 C2 A
B3 D23
N
N 1
A s
A s B1
DNs ,31 =
CNs ,3 =
0
0
(5.93)
(5.94)
(5.95)
(5.96)
[ ANs 1 B2 B2 ] Dssx
0
[ ANs 1 B3 B3 ]
=
Bv
79
Ns ,32 =
D
(5.97)
Ns ,33
D
(5.98)
Let Y be given by
Ns ,33
Tu D
Y =
,
Sv
(5.99)
Wn DNs ,33
Ns ,23 Y r
Ts D
Ns ,33 + D
Ns D
T T T M
T
H = 2(Y r )T D
Ns ,33 s
Ns ,23
Ns ,41 + D
Ns ,43 F
Ns ,43 D
e
D
CNs ,4 D
b (k) =
e ) e(kk)
Ns ,33 F ) x(kk)+ Wn (D
Ns ,31 + D
Ns ,33 D
Wn (CNs ,3 D
w
Ns ,42 + D
Ns ,43 D
D
(k)
(5.101)
+
w ) w(k)
Ns ,32 + D
Ns ,33 D
1
Wn (D
where
F = Z1 CNs ,3 Z2 CNs ,2 Z3 Y1r Tu CNs ,3
e = Z1 D
Ns ,31 + Z2 D
Ns ,21 + Z3 Y r Tu D
Ns ,31
D
1
w = Z1 D
Ns ,32 + Z2 D
Ns ,22 + Z3 Y r Tu D
Ns ,32 (Z3 I)Y r S D
ssv
D
1
2 v
r
r 1
r T T
T
Ts
Z1 = Y1 Tu 2Y H (Y ) DNs ,33 Ts M
T
Ns
Z2 = 2Y r H 1(Y r )T D
Ns ,23
Z3 = 2Y H
and 1 =
1 1 1
T T T M
Ts D
Ns ,33 + 2Y r H 1 (Y r )T D
T
(Y ) D
Ns ,33 s
Ns ,23 Ns DNs ,23
r T
T
+ D
I (k)
(5.102)
where
F = Ev F
e
De = Ev D
w,
Dw = Ev D
D = Ev Y r
and where
I is the solution for the quadratic programming problem
1 T
I
min
I H
I 2
subject to:
A
I + b (k) 0
and Ev =
I 0 ... 0
(5.103)
j=0
zT (k + jk)(j)
z (k + jk) = J1 (v, k) + J2 (v, k)
81
where
J1 (v, k) =
N
s 1
zT (k + jk)(j)
z (k + jk)
j=0
J2 (v, k) =
j=Ns
For technical reasons we will consider the derivation of criterion J2 before we derive criterion J1 .
Derivation of J2 :
Consider system (5.79)  (5.81) with structured input signal (5.49), (5.50) and (5.51) and
constraint (5.46). Dene for j Ns :
x (k + jk) = x(k + jk) xss
v (k + jk) = v(k + jk) vss
Then, using the fact that w(k + jk) = wss and e(k + jk) = 0 for j Ns , it follows for
j Ns :
x (k+j +1k) =
=
=
=
=
z(k + jk) =
=
=
=
Dene
xT (k + jk) =
x (k + jk)
xv (k + jk)
+
N 1
+ A s B3 B3 v(k)
together with xv (k + Ns k) = Bv v(k) we obtain
x(k + Ns k) xss
xT (k + Ns k) =
xv (k + Ns )
Ns ,31 e(kk) + D
Ns ,32 w(k)
Ns ,33 v(k)
= CNs ,3 x(kk) + D
+D
Ns ,31 , D
Ns ,32 and D
Ns ,33 as dened in (5.96)(5.98), and xss = Dssx w(k).
where CNs ,3 , D
The prediction z(k + jk) for j Ns is obtained by successive substitution, resulting in:
s
xT (k + Ns k)
z(k + jk) = CT AjN
T
(5.104)
Now a problem occurs when the matrix AT contains unstable eigenvalues (i  1). For
s
j , the matrix AjN
will become unbounded, and so will z. The only way to solve
T
this problem is to make sure that the part of the state xT (k + Ns ), related to the unstable
poles is zero (Rawlings & Muske, [51]). Make a partitioning of AT into a stable part As
and an unstable part Au as in (5.88), then (5.104) becomes:
s
AjN
Ts
0
s
xT (k + Ns k)
z(k + jk) = CT Ts Tu
s
Tu
0
AjN
u
(5.105)
= CT Ts AjNs Ts xT (k + Ns k) + CT Tu AjNs Tu xT (k + Ns k)
s
s
will grow unbounded for j . The prediction z(k + jk) for j
The matrix AjN
u
will only remain bounded if Tu xT (k + Ns k) = 0.
In order to guarantee closedloop stability for systems with unstable modes in AT , we need
to satisfy equality constraint
Ns,31 e(kk)+ D
Ns,32 w(k)+
Ns,33 v(k) = 0 (5.106)
Tu xT (k+Ns k) = Tu CNs ,3 x(kk)+ D
Together with the equality constraint (5.83) we obtain the nal constraint:
Ns ,32
Ns ,31
Ns ,33
Tu D
Tu D
Tu D
Tu CNs ,3
+
v(k) = 0
x(kk) +
e(kk) +
ssv w(k)
0
0
Sv
Sv D
Consider Y as given in (5.99) with rightinverse Y r = Y1r Y2r and rightcomplement
Y r , then the set of all signals v(k) satisfying the equality constraints is given by
(k)
v(k) = vE (k) + Y r
(5.107)
where vE is given by
Ns ,32
Ns ,31
Tu D
Tu CNs ,3
Tu D
r
vE (k) = Y
x(kk) +
e(kk) +
ssv w(k)
0
0
Sv D
r
ssv w(k)
+ Y2r Tu Sv D
83
and
(k) is a free vector with the appropriate dimensions.
If the equality Tu xT (k + Ns k) = 0 is satised, the prediction becomes:
s
AjN
Ts xT (k + Ns k)
0
s
=
z(k + jk) = CT Ts Tu
s
0
AjN
0
u
= CT Ts AjNs Ts xT (k + Ns k)
s
(5.108)
(5.109)
Dene
=
M
(5.110)
j=Ns
then
J2 (v, k) =
=
j=Ns
j=Ns
Ts xT (k + Ns k)
= xTT (k + Ns k)TsT M
1 T
=
(k) +
T (k)f2 (k) + c2 (k)
(k)H2
2
where
Ns ,33 Y r
T T T M
Ts D
H2 = 2(Y r )T D
Ns ,33 s
Ts xT,E (k+Ns k)
T T T M
f2 (k) = 2(Y r )T D
Ns ,33 s
Ts xT,E (k+Ns k)
c2 (k) = xT (k + Ns k)T T M
T,E
Ns ,31 e(kk) + D
Ns ,32 w(k)
Ns ,33 vE (k)
+D
xT,E (k+Ns k) = CNs ,3 x(kk) + D
can be computed as the solution of the discrete time Lyapunov equation
The matrix M
As M
+ T T C T ss CT Ts = 0
ATs M
s
T
Derivation of J1 :
The expression of J1 (v, k) is equivalent to the expression of the performance index J(v, k)
for a nite prediction horizon. The performance signal vector z(k) is dened for switching
horizon Ns :
z(kk)
z(k+1k)
z(k) =
..
.
z(k+Ns 1k)
+D
z(k) = CNs ,2 x(kk) + D
Ns,21 e(kk)+ D
Ns,22 w(k)+
Ns,23 vE (k)+Y r
D
(k)
= CNs ,2 x(kk)+ D
Ns ,21 , D
Ns ,22 and D
Ns ,23 are given in (5.93)(5.95). Now J1 becomes
where CNs ,2 , D
J1 (v, k) =
N
s 1
zT (k + jk)(j)
z (k + jk) =
j=0
Ns z(k) =
= zT (k)
1 T
=
(k) +
T (k)f1 (k) + c1 (k)
(k)H1
2
where
r
T
H1 = 2(Y r )T D
Ns ,23 Ns DNs ,23 Y
T
E (k)
f1 (k) = 2(Y r )T D
Ns ,23 Ns z
Ns zE (k)
c1 (k) = zET (k)
Ns ,21 e(kk) + D
Ns ,22 w(k)
Ns ,23 vE (k)
+D
zE (k) = CNs ,2 x(kk) + D
Minimization of J1 + J2 :
Combining the results we obtain the problem of minimizing
1 T
(k) +
T (k)(f1 (k)+f2 (k)) + c1 (k) + c2 (k)
J1 + J2 =
(k)(H1 +H2 )
2
1 T
(k) H
=
(k) +
T (k) f (k) + c(k)
2
Now dene:
Ns ,31 e(kk) + D
Ns ,32 w(k)
xT,0 (k + Ns k) = CNs ,3 x(kk) + D
Ns ,21 e(kk) + D
Ns ,22 w(k)
z0 (k) = CNs ,2 x(kk) + D
ssv w(k)
0 (k) = Sv D
or
Ns ,32
Ns ,31
D
xT,0 (k + Ns k)
D
CNs ,3
Ns ,22 w(k)
= CN ,2 x(kk)+ D
Ns ,21 e(kk)+ D
z0 (k)
s
0 (k)
0
0
Sv Dssv
(5.111)
(5.112)
(5.113)
(5.114)
(5.115)
Then:
vE (k) = Y1r Tu xT,0 (k) Y2r 0 (k)
Ns ,33 vE (k)
xT,E (k + Ns k) = xT,0 (k) + D
Ns ,33 Y1r Tu xT,0 (k) D
Ns ,33 Y2r 0 (k)
= xT,0 (k) D
Ns ,23 vE (k)
zE (k) = z0 (k) + D
Ns ,23 Y r Tu xT,0 (k) D
Ns ,23 Y r 0 (k)
= z0 (k) D
1
2
(5.116)
85
and thus:
Y1r Tu
0
Y2r
xT,0 (k + Ns k)
vE (k)
Ns ,33 Y r Tu ) 0 D
Ns ,33 Y r
xT,E (k + Ns k) = (I D
z0 (k)
1
2
r
r
Ns ,23 Y Tu
Ns ,23 Y
zE (k)
0 (k)
D
I D
1
2
(5.117)
vE (k)
T T T M
Ts 2(Y r )T D
T
xT,E (k + Ns k)
= 0 2(Y r )T D
Ns ,33 s
Ns ,23 Ns
zE (k)
In the absence of inequality constraints we obtain the unconstrained minimization of
(5.112). The minimum
=
E is found for
(k)
E
T T T M
Ts 2Y r H 1(Y r )T D
T
xT,E (k + Ns k)
= I 2Y r H 1 (Y r )T D
Ns ,33 s
Ns ,23 Ns
zE (k)
T TsT M
Ts 2Y r H 1(Y r )T D
T
= I 2Y r H 1 (Y r )T D
Ns ,33
Ns ,23 Ns
Y1r Tu
0
Y2r
xT,0 (k + Ns k)
r
r
z0 (k)
(I DNs ,33 Y1 Tu ) 0 DNs ,33 Y2
Ns ,23 Y r Tu
Ns ,23 Y r
0 (k)
D
I D
1
2
T TsT M
Ts 2Y r H 1(Y r )T D
T
= I 2Y r H 1 (Y r )T D
Ns ,33
Ns ,23 Ns
Y1r Tu
0
Y2r
Ns ,33 Y1r Tu ) 0 D
Ns ,33 Y2r
(I D
r
Ns ,23 Y Tu
Ns ,23 Y r
D
I D
1
2
Ns ,32
Ns ,31
D
D
CNs ,3
Ns ,22 w(k)
Ns ,21 e(kk)+ D
CNs ,2 x(kk)+ D
ssv
0
0
Sv D
= Z1 + Z3 Y1r Tu Z2 (Z3 I)Y2r
Ns ,32
Ns ,31
D
D
CNs ,3
Ns ,22 w(k)
Ns ,21 e(kk)+ D
CNs ,2 x(kk)+ D
0
0
S Dssv
v
e e(kk) + D
w w(k)
= F x(kk) + D
(k)
, j Ns
for
j = 1, . . . , n
(5.118)
87
(5.119)
This means that (5.118), together with the extra equality constraint (5.106) implies (5.119).
Equation (5.118) can be rewritten as
Wn xT (k + Ns k) 1
Together with constraint (5.84) this results in the overall inequality constraint
(k)
(k)
Wn xT (k + Ns k) 1
0
Consider performance index (5.112) in which the optimization vector (k) is written as
(k) =
E (k) +
I (k)
= (H1 +H2)1 (f1 (k)+f2(k)) +
I (k)
1
= H (f (k)) +
I (k)
(5.120)
(5.121)
(5.122)
where
E (k) is the equality constrained solution given in the previous section, and
i (k) is
an additional term to take the inequality constraints into account. The performance index
(k) H
(k) + f T (k)
(k) + c(k) =
2
T
1
H 1 f (k) +
=
I (k) H H 1 f (k) +
I (k)
2
T
1
I (k) + c(k) =
+f (k) H f (k) +
1 T
I (k) f T (k)H 1H
I (k)
(k) H
2 I
1
+ f T (k)H 1 H H 1 f (k) f T (k)H 1 f (k) + f T (k)
I (k) + c(k)
2
1 T
1
=
I (k) H
I (k) f T (k)H 1 f (k) + c(k)
2
2
1 T
I (k) + c (k)
=
(k) H
2 I
=
(5.123)
vI (k) = vE (k) + Y r
and dene the signals
Ns,43 vI (k)
I (k) = 0 (k)+ D
Ns,41 + D
e )e(kk)
Ns ,43 F )x(kk)+(D
Ns ,43 D
= (CNs ,4 D
Ns ,42 + D
w )w(k)
Ns ,43 D
+(D
(5.124)
(5.125)
then
Ns ,43 Y r
I (k)
(k)
= I (k) + D
Ns ,33 Y r
xT (k + Ns k) = xT,I (k + Ns k) + D
I (k)
and we can rewrite the constraint as:
(k)
(k)
= A
I (k) + b (k) 0
Wn xT (k + Ns k) 1
(5.126)
5.3. IMPLEMENTATION
where
I (k) (k)
b (k) =
Wn xT,I (k + Ns k) 1
Ns ,43
D
r
A =
Ns ,33 Y
Wn D
89
(5.127)
(5.128)
w
Ns ,42 + D
Ns ,43 D
D
(k)
+
w ) w(k)
Ns ,32 + D
Ns ,33 D
1
Wn (D
The constrained innite horizon MPC problem is now equivalent to a Quadratic Programming problem of minimizing (5.123) subject to (5.126). Note that the term c (k) in (5.123)
does not play any role in the optimization and can therefore be skipped.
2 End Proof
5.3
Implementation
5.3.1
In this chapter the predictive control problem was solved for various settings (unconstrained, equality/inequality constrained, nite/innite horizon). In the absence of inequality constraints, the resulting controller is given by the control law:
v(k) = F x(kk) + De e(kk) + Dw w(k).
(5.129)
Unfortunately, the state x(kk) of the plant and the true value of e(kk) are unknown. We
therefore introduce a controller state xc (k) and an estimate ec (k). By substitution of xc (k)
and ec (k) in system equations (5.1)(5.2) and control law (5.129) we obtain:
xc (k + 1) = A xc (k) + B1 ec (k) + B2 w(k) + B3 v(k)
1
ec (k) = D11 y(k) C1 xc (k) D12 w(k)
(5.130)
(5.131)
(5.132)
Elimination of ec (k) and v(k) from the equations results in the closed loop form for the
controller:
1
1
C1 B3 De D11
C1 ) xc (k)
xc (k + 1) = (A B3 F B1 D11
1
1
+(B1 D11 + B3 De D11 ) y(k)
1
1
D12 Ew + B2 Ew B1 D11
D12 Ew ) w(k)
+(B3 Dw B3 De D11
1
1
v(k) = (F De D11 C1 ) xc (k) + De D11 y(k) +
1
+(Dw De D11
D12 Ew ) w(k)
(5.133)
(5.134)
v

B1
B3
?
 q 1 I
dx(k+1)
6
B2
 d
x(k)

C1
?
d
D12
d
?
A
?
y
A
B2 Ew
D
12 Ew
 d
Dw
B3
w
d
6
D11
@
R?
 q 1 I
dxc (k+1)
6
 C
rxc (k)
1
?
d
?
1
D11
F
B1
De
ec
controller
Figure 5.1: Realization of the LTI SPCP controller
with Ew =
I 0 ... 0
5.3. IMPLEMENTATION
91
Computational aspects:
Consider the expression (5.14)(5.16). The matrices F , De and Dw can also be found by
solving
C2 D
T
T
21 D
22
(D
=D
(5.135)
23 D23 ) F De Dw
23
e and Dw = Ev D
w . Inversion of D
T
where we dene F = Ev F , De = Ev D
23 D23 may
be badly conditioned and should be avoided. As dened in chapter 4, (j) is a diagonal
=
2 . Dene
selection matrix with ones and zeros on the diagonal. This means that
D
23 , then we can do a singular value decomposition of M:
M =
M = U1 U2
VT
0
and the equation 5.135 becomes:
w = V U T C2 D
21 D
22
e D
( V 2 V T ) F D
(5.136)
F De Dw
= Ev V 1 U1T
22
21 D
C2 D
For singular value decomposition robust and reliable algorithms are available. Note that
the inversion of is simple, because it is a diagonal matrix and all elements are positive
23
D
T to have full rank).
(we assumes D
23
5.3.2
In this section we discuss the implementation of the full SPCP controller. For both the
nite and innite horizon the full SPCP can be solved using an optimal control law
v(k) = v0 (k) + D
I (k)
(5.137)
where
is the solution for the quadratic programming problem
1 T
I H
min
I
(5.138)
subject to:
A
I + b (k) 0
(5.139)
process
v

B1
B3
?
 q 1 I
dx(k+1)
6
B2
 d
D11
x(k)

C1
?
d
D12
?
d
A
?
y
A
B2 Ew

D
Dw
xc (k+1)
@
R d?
 q 1 I
6
B3
12 Ew
d
6

d
?
?
1
D11
 d
6
rxc (k)
 C
B1
De
ec
D
6
Nonlinear mapping
xc
ec
controller
Figure 5.2: Realization of the FULL SPCP controller
5.3. IMPLEMENTATION
93
This optimization problem (5.137)(5.139) can be seen as a nonlinear mapping with input
variables x(kk), e(kk) and w(k)
and signal
I (k):
(k))
and (k)
and output v(k). Again, when the
controller is stabilizing and there is no model error, the state xc and the noise estimate ec
will converge to the true state x and true noise signal e respectively. After a transient the
states of plant and controller will become the same.
An advantage of predictive control is that, by writing the control law as the result of a
constrained optimization problem (5.138)(5.139), it can eectively deal with constraints.
An important disadvantage is that every time step a computationally expensive optimization problem has to be solved [78]. The time required for the optimization makes model
predictive control not suitable for fast systems and/or complex problems.
In this section we discuss the work of Bemporad et al. [5], who formulate the linear
model predictive control problem as multiparametric quadratic programs. The control
variables are treated as optimization variables and the state variables as parameters. The
optimal control action is a continuous and piecewise ane function of the state under the
assumption that the active constraints are linearly independent. The key advantage of this
approach is that the control actions are computed oline: the online computation simply
reduces to a function evaluation problem.
The KuhnTucker conditions for the quadratic programming problem (5.138)(5.139) are
given by:
H
I + AT = 0
T A
I + b (k) = 0
(5.140)
0
A
I + b (k) 0
(5.142)
(5.143)
(5.141)
I = H 1 AT
substitution into equation (5.141) gives condition
T A H 1 AT + b (k) = 0
(5.144)
(5.145)
Let a and i denote the Lagrangian multipliers corresponding the active and inactive
constraints respectively. For inactive constraint there holds i = 0, for active constraint
=
Si
i
and
Sa
Si
T
Sa
Si
= SaT Sa + SiT Si = I
then
=
Sa
Si
T
a
i
= SaT a
I = H 1 AT SaT a
Ta Sa A H 1AT SaT a + b (k) = 0
(5.146)
(5.147)
(5.148)
(5.149)
1
= b (k) A H
Sa b (k)
Sa A H
1
=
Sa b (k)
I A H 1 AT SaT Sa A H 1 AT SaT
0
AT
SaT
AT
SaT
(5.150)
The solution
I of (5.149) for a given choice of Sa is the optimal solution of the quadratic
programming problem as long as (5.150) holds and a 0. Combining (5.148) and (5.150)
gives:
M1 (Sa )
(5.151)
b (k) 0
M2 (Sa )
5.3. IMPLEMENTATION
95
where
1
M1 (Sa ) = I A H 1AT SaT Sa A H 1 AT SaT
Sa
1
M2 (Sa ) = Sa A H 1AT SaT
Sa
Let
xc (k)
ec (k)
b (k) = N
w(k)
(k)
xc (k)
ec (k)
(k) = V1T
w(k)
(k)
Condition (5.151) now becomes
M1 (Sa )
U1 (k) 0
M2 (Sa )
(5.152)
(5.153)
Inequality (5.153) describes a polyhedra P(Sa ) in the RnN , and represent a set of all ,
that leads to the set of active constraints described by matrix Sa , and thus corresponding
to the solution
I , described by (5.149).
For all combinations of active constraints (for all Sa ) one can compute the set P(Sa ) and
nN
in this way
a part of the R will
be covered. For some values of (or in other words,
(k)
), there is no possible combination of active constraints,
and so no feasible solution to the QPproblem is found. In this case we have a feasibility
problem, for these values of we have to relax the predictive control problem in some way.
(this will be discussed in the next section).
All sets P(Sa ) can be computed oline. The online optimization has now become a
searchproblem. At time k we can compute (k) following equation (5.152), and determine
to which set P(Sa ) the vector (k) belongs.
I can be computed using the corresponding
Sa in equation (5.149).
Resuming, the method of Bemporad et al. [5], proposes an algorithm that partitions the
state space into polyhedral sets and computes the coecients of the ane function for
(k))
may be approximated by
a neural network with any desired accuracy. Contrary to [5], linear independency of the
active constraints is not required, and the approach is applicable to control problems with
many constraints.
5.4
Feasibility
At the end of this chapter, a remark has to be made concerning feasibility and stability.
In normal operating conditions, the optimization algorithm provides an optimal solution
within acceptable ranges and limits of the constraints. A drawback of using (hard) constraints is that it may lead to infeasibility: There is no possible control action without
violation of the constraints. This can happen when the prediction and control horizon are
chosen too small, around setpoint changes or when noise and disturbance levels are high.
When there is no feasibility guarantee, stability can not be proven.
If a solution does not exist within the predened ranges and limits of the constraints, the
optimizer should have the means to recover from the infeasibility. Three algorithms to
handle infeasibility are discussed in this section:
softconstraint approach
minimal time approach
constraint prioritization
Softconstraint algorithm
In contrast to hard constraints one can dene soft constraints [55],[79], given by an additional term to the performance index and only penalizing constraint violations. For
example, consider the problem with the hard constraint
min J
v
subject to
(5.154)
subject to
+R , 0 ,
(5.155)
5.5. EXAMPLES
97
subject to
v,jmt
(5.156)
Now we can calculate the optimal input sequence v(k) by optimizing the following problem:
min J
v
subject to
(5.157)
where jmt is the minimum value of problem (5.156). The minimal time algorithm is used if
our aim is to be back in a feasible operation as fast as possible. As a result, the magnitude
of the constraint violations is usually larger than with the soft constraint approach.
Prioritization
The feasibility recovery technique we will discuss now is based on the priorities of the
constraints. The constraints are ordered from lowest to highest priority. In the (nominal)
optimization problem becomes infeasible we start by dropping the lowest constraints and
see if the resulting reduced optimization problem becomes feasible. As long as the problem
is not feasible we continue by dropping more and more constraints until the optimization
is feasible again. This means we solve a sequence of quadratic programming problems in
the case of infeasibility. The algorithm minimizes the violations of the constraints which
cannot be fullled. Note that it may take several trials of dropping constraints and trying
to nd an optimal solution, which is not desirable in any real time application.
5.5
Examples
with constraints on y, y,
y and y . This is a model of a rigid body approximation of an
elevator for position control, subject to constraints on speed, acceleration and jerk (time
derivative of accelaration).
Dene the state
y(t)
x1 (t)
=
=
=
=
u(t) + 0.1e(t)
x1 (t) + e(t)
x2 (t) + 0.5 e(t)
x3 (t) + e(t)
...
where the noise is now acting on the states y (t) and output y(t). In matrix form we get:
x(t)
0 0 0
Ac = 1 0 0
0 1 0
1
Bc = 0
0
0.1
Kc = 1
0.5
Cc =
0 0 1
5.5. EXAMPLES
where
99
1
0 0
1 0
Ao = T
2
T /2 T 1
Co = 0 0 1
T
Bo = T 2 /2
T 3 /6
0.1 T
T + T 2 /20
Ko =
2
3
T /2 + T /6 + T /60
The prediction model is built up as given in section 3. Note that we use u(k) rather
than u(k). This because the system already consists of a triple integrator, and another
integrator in the controller is not desired. The aim of this example is to direct the elevator
from position y = 0 at time t = 0 to position y = 1 as fast as possible. For a comfortable
and safe operation, control is done subject to constraints on the jerk, acceleration, speed
and overshoot, given by
...
 y (t)

y (t)
y(t)
y(t)
(jerk)
(acceleration)
(speed)
(overshoot)
0.4
0.3
0.4
1.01
These constraints can be translated into linear constraints on the control signal u(k) and
prediction of the state x(k):
u(k + j 1)
u(k + j 1)
x1 (k + jk)
x1 (k + jk)
x2 (k + jk)
x2 (k + jk)
x3 (k + jk)
0.4
0.4
0.3
0.3
0.4
0.4
1.01
(positive jerk)
(negative jerk)
(positive acceleration)
(negative acceleration)
(positive speed)
(negative speed)
(overshoot)
for all j = 1, . . . , N.
We choose samplingtime T = 0.1, and prediction and control horizon N = Nc = 30, and
minimum costhorizon Nm = 1. Further we consider a constant reference signal r(k) = 1
for all k, and the weightings parameters P (q) = 1, = 0.1. The predictive control problem
is solved by minimizing the GPC performance index (for an IO model)
J(u, k) =
N
T
y(k + jk) r(k + jk) +
j=Nm
Nc
uT (k + j 1k)u(k + j 1k)
j=1
1.2
position
velocity
acceleration
jerk
y, x, u
0.8
0.6
0.4
0.2
0.2
0.4
0.6
10
20
30
40
50
60
70
80
Chapter 6
Stability
Predictive control design does not give an a priori guaranteed stabilizing controller. Methods like LQ, H , 1 control have builtin stability properties. In this chapter we analyze the
stability of a predictive controller. We distinguish two dierent cases, namely the unconstrained and constrained case. It is very important to notice that a predictive controller
that is stable for the unconstrained case is not automatically stable for the constrained
case.
In Section 6.4 the relation between MPC, the Internal Model Control (IMC) and the Youla
parametrization is discussed.
Section 6.5 nalizes this chapter by considering the concept of robustness.
6.1
In the case where inequality constraints are absent, the problem of stability is not too
dicult, because the controller is linear and timeinvariant (LTI).
Let the process be given by:
x(k + 1) = A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 w(k)
(6.1)
+(B3 Dw B3 De D11
1
1
1
v(k) = (F De D11 C1 ) xc (k) + De D11 y(k) + (Dw De D11 D12 Ew ) w(k)
with Ew =
I 0 ... 0
102
CHAPTER 6. STABILITY
1
1
(F + De D11 C1 ) xc (k) + De D11 C1 x(k) + De e(k) + Dw w(k)
+ B3 v(k)
1
(B3 F + B3 De D11
C1 ) xc (k)
A x(k) + B1 e(k) + B2 Ew w(k)
1
+B3 De D11 C1 x(k) + B3 De e(k) + B3 Dw w(k)
1
1
= (A + B3 De D11 C1 ) x(k) (B3 F + B3 De D11
C1 ) xc (k)
+(B1 + B3 De ) e(k) + (B3 Dw + B2 Ew ) w(k)
1
1
xc (k + 1) = (A B3 F B1 D11 C1 B3 De D11 C1 ) xc (k)
1
1
+(B1 D11
+ B3 De D11
) y(k)
1
1
+(B3 Dw B3 De D11 D12 Ew + B2 Ew B1 D11
D12 Ew )w(k)
1
1
= (A B3 F B1 D11 C1 B3 De D11 C1 ) xc (k)
1
1
+(B1 D11
C1 + B3 De D11
C1 ) x(k) + (B1 + B3 De ) e(k)
+(B3 Dw + B2 Ew ) w(k)
v(k) =
=
x(k + 1) =
=
or in matrix form:
1
1
C1
B3 F B3 De D11
C1
x(k + 1)
A+B3De D11
x(k)
=
1
1
1
1
xc (k + 1)
B1 D11
C1 +B3 De D11
C1 AB3 F B1 D11
C1 B3 De D11
C1
xc (k)
B1 + B3 De
B3 Dw + B2 Ew
+
e(k) +
w(k)
B1 + B3 De
B3 Dw + B2 Ew
x(k)
+ B1,cl e(k) + B2,cl w(k)
= Acl
xc (k)
1
1
C1 F De D11
C1
v(k)
De D11
x(k)
De
Dw
=
+
e(k) +
w(k)
y(k)
C1
0
xc (k)
D11
D12 Ew
x(k)
+ D1,cl e(k) + D2,cl w(k)
= Ccl
xc (k)
Next we choose a new state:
I 0
x(k)
x(k)
x(k)
=
=T
I I
xc (k)
xc (k)
xc (k) x(k)
This state transformation with
I 0
T =
and
I I
T
I 0
I I
= Acl
+B
xc (k) x(k)
xc (k + 1) x(k + 1)
x(k)
v(k)
2,cl w(k)
1,cl e(k) + D
+D
= Ccl
xc (k)
y(k)
103
Ccl = Ccl T
1
1
De D11
I 0
C1 F De D11
C1
=
C1
0
I I
1
F F De D11 C1
=
C1
0
D1,cl = D1,cl
2,cl = D2,cl
D
The eigenvalues of the matrix Acl are equal to eigenvalues of (AB3 F ) and the eigenvalues
1
of (A B1 D11
C1 ). This means that in the unconstrained case necessary and sucient
conditions for closedloop stability are:
1. the eigenvalues of (A B3 F ) are strictly inside the unit circle.
1
C1 ) are strictly inside the unit circle.
2. the eigenvalues of (A B1 D11
Condition (1) can be satised by choosing appropriate tuning parameters such that the
feedback matrix F makes (AB3 F )  < 1. We can obtain that by a careful tuning (chapter
8), by introducing the endpoint constraint (section 6.3) or by extending the prediction
horizon to innity (section 6.3).
Condition (2) is related with the choice of the noise model H(q), given by the input output
relation (compare 6.1)
x(k + 1) = A x(k) + B1 e(k)
y(k) = C1 x(k) + D11 e(k)
104
CHAPTER 6. STABILITY
e
w
?
v
y
plant
predictive
controller
w
Figure 6.1: Closed loop conguration
with transfer function:
H(q) = C1 (qI A)1 B1 + D11
From appendix B we learn that for
A
B1
H(q)
C1
D11
we nd
H
1
(q)
1
C1
A B1 D11
1
D11 C1
1
B1 D11
1
D11
so the inverse noise model is given (for v = 0 and w = 0) by the state space equations:
1
1
x(k + 1) = (A B1 D11
C1 ) x(k) + B1 D11
y(k)
1
1
e(k) = D11 C1 x(k) + D11 y(k)
If condition (2) is not satised, it means that the inverse of the noise model is not stable.
1
We will have to nd a new observergain B1,new such that A B1,new D11
C1 is stable,
105
without changing the stochastic properties of the noise model. This can be done by a
spectral factorization
T
H(q)H T (q 1 ) = Hnew (q) Hnew
(q 1 )
1
where Hnew and Hnew
are both stable. H(q) and Hnew (q) have the same stochastic properties and so replacing H by Hnew will not aect the properties of the process. The new
Hnew is given as follows:
where X is the positive denite solution of the discrete algebraic Riccati equation:
1
1
(A B1 D11
C1 )T X(I + C1T C1 X)(A B1 D11
C1 ) X = 0
6.2
The issue of stability becomes even more complicated in the case where constraints have
to be satised. When there are only constraints on the control signal, a stability guarantee
can be given if the process itself is stable (Sontag [62], Balakrishnan et al. [4]).
In the general case with constraints on input, output and states, the main problem is
feasibility. The existence of a stabilizing control law is not at all trivial. For stability in
the inequality constrained case we need to modify the predictive control problem (as will
be discussed in Section 6.3) or we have to solve the predictive control problem with a state
feedback law using Linear Matrix inequalities, based on the work of Kothare et al. [35].
(This will be discussed in section 7.3.)
6.3
In this section we will introduce some modications of the MPC design method that
will provide guaranteed stability of the closed loop. We discuss the following stabilizing
modications ([45]):
1. Innite prediction horizon
2. Endpoint constraint (or terminal constraint)
3. Terminal cost function
4. Terminal cost function with endpoint constraint set
106
CHAPTER 6. STABILITY
These modication uses the concept of monotonicity of the performance index to prove
stability ([12, 37]). In most modication we assume the steadystate (vss , xss , wss , zss ) =
(vss , xss , wss , 0) is unique, so the matrix Mss as dened in (5.38) has full column rank.
Furthermore we assume that the system with input v and output z is minimumphase, so
Assumption 26 The matrix
zI A B3
C2 D23
does not drop in rank for all z 1.
Note that if assumption 26 is satised, Mss will have full rank. In most of the modications we will use the KrasovskiiLaSalle principle [72] to prove that the steadystate is
asymptotically stable. If a function V (k) can be found such that
V (k) > 0 , for all x = xss
V (k) = V (k) V (k 1) 0 for all x
and V (k) = V (k) = 0 for x = xss , and if the set {V (k) = 0} contains no trajectory of
the system except the trivial trajectory x(k) = xss for k 0, then the KrasovskiiLaSalle
principle states that the the steadystate x(k) = xss is asymptotically stable.
(6.2)
(6.3)
v(kk)
..
v(k) =
,
.
v(k + Nc 1k)
where Nc ZZ+ {}. A performance index is dened as
T
z(k + jk) (j) z(k + jk) ,
min J(
v , k) = min
v(k)
v(k)
j=0
where (i) (i+j) > 0 for j 0. The predictive control law minimizing this performance
index results in a stabilizing controller.
107
Proof:
First let us dene the function
V (k) = min J(
v , k) 0,
(6.4)
v(k)
where
v (k) =
v (kk)
v (k + 1k)
..
.
v (k + Nc 1k)
for Nc < ,
or
v (kk)
z (k + jk)
T
(j) z (k + jk) .
j=0
Now based on v (k) and the chosen stabilizing modication method we construct a vector
vsub (k + 1). This vector can be seen as a suboptimal input sequence for the k + 1th
optimization. The idea is to construct vsub (k + 1) such that
J(
vsub , k + 1) V (k).
If we now compute
v , k + 1),
V (k + 1) = min J(
v(k+1)
we will nd that
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1) V (k).
v(k+1)
108
CHAPTER 6. STABILITY
For the innite horizon case we dene the suboptimal control sequence as follows:
v (k + 1k)
v (k + 2k)
.
.
vsub (k + 1) =
for Nc < ,
.
v (k + Nc 1k)
vss
or
v (k + 1k)
vsub (k + 1) = v (k + 2k) for Nc = ,
..
.
and compute
Vsub (k + 1) = J(
vsub , k + 1),
then this value is equal to
T
Vsub (k + 1) =
z (k + jk) (j) z (k + jk) ,
j=1
and so
Vsub (k + 1) V (k)
We proceed by observing that
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1),
v(k+1)
which means that V (k + 1) V (k). Using the fact that V (k) = V (k) V (k 1) 0 and
that, based on assumption 26, the set {V (k) = 0} contains no trajectory of the system
except the trivial trajectory x(k) = xss for k 0, the KrasovskiiLaSalle principle [72]
states that the steadystate is asymptotically stable.
2 End Proof
As is shown above, for stability it is not important whether Nc = or Nc < . For Nc =
the predictive controller becomes equal to the optimal LQsolution (Bitmead, Gevers &
Wertz [7], see section 5.2.3). The main reason not to choose an innite control horizon is the
fact that constrainthandling on an innite horizon is an extremely hard problem. Rawlings
& Muske ([51]) studied the case where the prediction horizon is equal to innity (N = ),
but the control horizon is nite (Nc < ), see the sections 5.2.4 and 5.2.5. Because of the
nite number of control actions that can be applied, the constrainthandling has become
a nitedimensional problem instead of an innitedimensional problem for Nc = .
109
v(k)
N
1
T
z(k + jk)
(j) z(k + jk) ,
(6.5)
j=0
(6.6)
Finally, let w(k) = wss for k 0. Then, the predictive control law, minimizing (6.5),
results in a stable closed loop.
proof:
Also in this modication the performance index will be used as a Lyapunov function for
the closed loop system. First let us dene the function
V (k) = min J(
v , k) 0,
(6.7)
v(k)
v (kk)
..
v
(k
+
N
c 2k)
= v (k + Nc 1k)
vss
..
.
vss
v , k),
= arg min J(
v(k)
110
CHAPTER 6. STABILITY
where v (k + jk) means the optimal input value to be applied a time k + j as calculated
at time k. Let z (k + jk) be the (optimal) performance signal at time k + j when the
optimal input sequence v (k) computed at time k is applied. Then
T
z (k + jk) (j) z (k + jk) .
V (k) =
j=0
Now based on v (k) and the chosen stabilizing modication method we construct the vector
vsub (k + 1) as follows:
v (k + 1k)
..
v (k + Nc 1k)
vss
vsub (k + 1) =
,
vss
.
..
vss
and we compute
vsub , k + 1).
Vsub (k + 1) = J(
Applying this above input sequence vsub (k + 1) to the system results in a zsub (k + jk + 1) =
z (k + jk) for j = 1, . . . , N and zsub (k + N + jk + 1) = 0 for j > 0. With this we derive
Vsub (k + 1) =
N
T
zsub (k+j +1k + 1) (j + 1) zsub (k+j +1k + 1)
j=1
N
T
z (kk)
= V (k) z (kk)T (1)
V (k).
Further, because of the receding horizon strategy, we do a new optimization
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1),
v(k+1)
111
C1
y(k + Nk)
C1
yss
C1 A
y(k + N + 1k)
.. C1 A
x(k + N)
xss = 0,
. =
..
..
..
.
.
.
yss
n1
n1
C1 A
C1 A
y(k + N + n 1k)
also results in a
C1
C1 A
..
C1 An1
has full rank. For SISO systems with pair (C1 , A) observable, we choose n = dim(A),
so equal to the number of states. For MIMO systems, there may be an integer value
n < dim(A) for which W has full columnrank.
The endpoint constraint was rst introduced by (Clarke & Scattolini [16]), (Mosca & Zhang
[47]) and forces the output y(k) to match the reference signal r(k) = rss at the end of the
prediction horizon:
y(k + N + j) = rss for j = 1, . . . , n.
With this additional constraint the closedloop system is asymptotically stable.
Although stability is guaranteed, the use of an endpoint constraint has some disadvantages:
The endpoint constraint is only a sucient, but not a necessary condition for stability.
The constraint may be too restrictive and can lead to infeasibility, especially if the horizon
N is chosen too small.
The tuning rules, as will discussed in the Chapter 8, change. This has to do with the fact
that because of the control horizon Nc the output is already forced to its steady state at
time Nc and so for a xed Nc a prediction horizon N = Nc will give the same stable closed
loop behavior as any choice of N > Nc . The choice of the prediction horizon N is overruled
by the choice of control horizon Nc and we loose a degree of freedom.
112
CHAPTER 6. STABILITY
(6.9)
where
T
1 T
R = D D23 .
23
Then the predictive control law, minimizing (6.8), results in a stable closed loop.
Proof:
Dene the signals
x (k + jk) = x(k + jk) xss for all j 0,
v (k + jk) = v(k + jk) vss for all j 0,
T
T
C2 x (k + jk) ,
D23 )1 D23
v(k + j) = v (k + j) + (D23
then, according to section 5.2.2, it follows that the system can be described by the state
space equation
x (k + jk) + B3 v(k + j),
x (k+j +1k) = A
and the performance index becomes
J(v, k) = xT (k + Nk)P0 x (k + Nk)
+
N
1
j=0
x (k + jk) + vT (k + j)R
v (k + j).
xT (k + jk)Q
113
J (v, k) =
x (k + jk) + vT (k + j)R
v(k + j),
xT (k + jk)Q
j=0
and we structure the input signal such that v(k + j) is free for j = 0, . . . , Ns 1 and
v(k + j) = F x (k + jk) where
1 B T P0 A.
F = (B T P0 B3 + R)
3
J a (v, k) =
x (k + jk) + vT (k + j)R
v (k + j)
xT (k + jk)Q
j=0
x (k + jk) + vT (k + j)R
v (k + j),
xT (k + jk)Q
j=N
Then for j N:
x (k + j)
xT (k + j + 1)P0 x (k + j + 1) xT (k + j)P0 x (k + j) + xT (k + j)Q
v (k + j) =
+ vT (k + j)R
T T
x (k + j)
AT P0 B3
A P0 A P0 + Q
x (k + j)
=
=
B3T P0 B3 + R
B T P0 A
v(k + j)
v(k + j)
3T
A P0 A P0 + Q
I
AT P0 B3
x (k + j)
=
xT (k + j) I F T
T
T
B3 P0 B3 + R
B3 P0 A
F
AT P0 A P0 + Q
AT P0 B3
T
T
T
1
=
x (k + j) I A P0 B3 (B3 P0 B3 + R)
B3T P0 B3 + R
B3T P0 A
I
1 B T P0 A x (k + j)
(B3T P0 B3 + R)
3
AT P0 B3 (B T P0 B3 + R)
1 B T P0 A x (k + j)
=
xT (k + j) AT P0 A P0 + Q
3
3
=0,
and so we nd
x (k + j) + vT (k + j)R
v (k + j) = x (k + N)T P0 x (k + N),
xT (k + j)Q
j=Ns
which means that J = J a . We already have proven that the innite horizon performance
index Ja is a Lyapunov function. Because the nite horizon predictive controller with a
terminal cost function gives the same performance index J = J a means that also J is a
Lyapunov function and so nite horizon predictive controller with a terminal cost function
stabilizes the system.
2 End Proof
114
CHAPTER 6. STABILITY
(6.10)
j=0
(6.11)
T
1 T
R = D D23 .
23
(6.12)
where is constant vector. Let w(k) = wss for k 0, and consider the linear control law
T
T
D23 )1 D23
C2 x(k + jk) xss + vss ,
(6.13)
v(k + jk) = ( F + (D23
Finally let W be the set of all states for which (6.12)
1 B T P0 A.
with F = (B3T P0 B3 + R)
3
holds under control law (6.13), and assume
D D+ W.
(6.14)
Then the predictive control law, minimizing (6.8), subject to (6.12) and terminal constraint
x(k + Nk) D,
results in a stable closed loop.
(6.15)
115
Proof:
Dene the signals
x (k + jk) = x(k + jk) xss for all j 0,
v (k + jk) = v(k + jk) vss for all j 0,
T
1 T
v(k + j) = v (k + j) + (D23 D23 ) D23 C2 x (k + jk) ,
then according to section 5.2.2, it follows that the system can be described by the state
space equation
x (k + jk) + B3 v(k + j),
x (k+j +1k) = A
and the performance index becomes
J(
v , k) = xT (k + Nk)P0 x (k + Nk)
+
N
1
x (k + jk) + vT (k + j)R
v (k + j).
xT (k + jk)Q
j=0
Note that because x(k + Nk) D and the set D D+ where D+ is invariant under
feedback law (6.13), we can rewrite this performance index as
v, k) =
J (
+
N
1
j=0
x (k + jk) + vT (k + j)R
v(k + j)
xT (k + jk)Q
x (k + jk) + vT (k + j)R
v(k + j).
xT (k + jk)Q
j=N
where
v (kk)
v (k + 1k)
..
.
v (k) = v (k + N 1k)
F x (k + Nk)
F x (k + N + 1k)
..
.
116
CHAPTER 6. STABILITY
where v (k + jk) means the optimal input value to be applied a time k + j as calculated
at time k. Now dene
v (k + 1k)
v (k + 2k)
..
vsub (k + 1) = v (k + N 1k) .
F x (k + Nk)
F x (k + N + 1k)
..
.
Note that this control law is feasible, because x(k +N +jk) D for all j 0. We compute
Vsub (k + 1) = J(
vsub , k + 1),
then this value is equal to
T
Vsub (k + 1) =
z (k + jk)
z (k + jk) ,
j=1
and we nd
Vsub (k + 1) V (k).
Further
V (k + 1) = min J(
v , k + 1) J(
vsub , k + 1),
v(k+1)
6.4
6.4.1
The Internal Model Control (IMC) scheme provides a practical structure to analyze properties of closedloop behaviour of control systems and is closely related to model predictive
control [26], [46]. In this section we will show that for stable processes, the solution of the
SPCP (as derived in chapter 5) can be rewritten in the IMC conguration.
117
e
?
v 
Q
e
?
e
1
H
y+
?
e
The IMC structure is given in gure 6.2 and consists of the process G(q), the model G(q),
1 and the twodegreeoffreedom controller Q.
the (stable) inverse of the noise model H
For simplicity reasons we will consider F (q) = 0 (known disturbances are not taken into
account), but the extension is straight forward. The blocks in the IMC scheme satisfy the
following relations:
Process : y(k) = G(q)v(k) + H(q)e(k)
+ Q2 (q) e(k)
Controller : v(k) = Q(q)
= Q1 (q)w(k)
e(k)
Proposition 32
(6.19), where G,
G(q)
= G(q) and
(6.16)
(6.17)
(6.18)
(6.19)
Consider the IMC scheme, given in gure 6.2, satisfying equations (6.16)H and H 1 are stable. If the process and noise models are perfect,so if
H(q)
= H(q), the closed loop description is given by:
+ Q2 (q)e(k)
v(k) = Q1 (q)w(k)
y(k) = G(q)Q1 (q)w(k)
(6.20)
(6.21)
118
CHAPTER 6. STABILITY
and (6.19):
1
1
1
Q1 (q)w(k)+Q
(q)H(q)e(k)
v(k) = I Q2 (q)H (q)(G(q) G(q))
2 (q)H
1
1(q)(G(q) G(q))
Q1 (q)w(k)
+ C2 (q)y(k)
From equations (6.17)(6.19) we derive:
+ Q2 (q) e(k)
v(k) = Q1 (q)w(k)
1(q)(y(k) y(k))
+ Q2 (q) H
= Q1 (q)w(k)
1(q)y(k) Q2 (q) H
1(q)G(q)v(k)
= Q1 (q)w(k)
+ Q2 (q) H
1(q)G(q)
1(q)y(k)
v(k) = Q1 (q)w(k)
+ Q2 (q) H
I + Q2 (q) H
so
1
1
Q1 (q)w(k)
v(k) = I + Q2 H (q)G(q)
1
1 (q)G(q)
1(q)y(k)
Q2 (q) H
+ I + Q2 H
I + Q2 (q) H
Q1 (q)
1
1(q)G(q)
1(q)
Q2 (q) H
C2 (q) = I + Q2 (q) H
1
C1 (q) =
(6.22)
119
C1 (q)
Q1 (q) = I C2 (q)G(q)
1
C2 (q)H(q)
Q2 (q) = I C2 (q)G(q)
H
1 and Q(q) as follows:
In state space we can describe the systems G,
Controller Q(q):
+ (B1 + B3 De ) e(k)
x3 (k + 1) = (A B3 F ) x3 (k) + B3 Dw w(k)
+ De e(k)
v(k) = F x3 (k) + Dw w(k)
(6.23)
(6.24)
The state equations of x1 and x2 follow from chapter 2. We will now show that the above
choice of Q(q) will indeed lead to the same control action as the controller derived in
chapter 5.
Combining the three state equations in one gives:
1
1
C1
B3 F
x1 (k+1)
C1
B3 De D11
A B3 De D11
x1 (k)
1
1
x2 (k+1) =
x2 (k)
C1
A B1 D11
C1
0
B1 D11
1
1
1
1
B1 D11 C1 B3 De D11 C1 B1 D11 C1 B3 De D11 C1 A B3 F
x3 (k)
x3 (k+1)
1
B3 Dw
B3 De D11
1
B1 D11
0 w(k)
y(k) +
+
1
1
B1 D11 + B3 De D11
B3 Dw
x1 (k)
1
1
1
C1 De D11
C1 F x2 (k) + De D11
y(k) + Dw w(k)
v(k) = De D11
x3 (k)
We apply a state transformation:
x1
x1
I 0 0
x1
x2 = x1 + x2 x3 = I I I x2
0 0 I
x3
x3
x3
120
CHAPTER 6. STABILITY
This results in
1
1
C1
B3 F B3 De D11
C1
A
B3 De D11
x1 (k)
x1 (k+1)
x2 (k)
x2 (k+1) = 0
A
0
1
1
1
1
x3 (k+1)
0 B1 D11
x3 (k)
C1 B3 De D11
C1 AB3 F B1 D11
C1 B3 De D11
C1
1
B3 Dw
B3 De D11
0
0 w(k)
y(k) +
+
1
1
B1 D11 + B3 De D11
B3 Dw
x1 (k)
1
1
C1 F De D11
C1 x2 (k) + De y(k) + Dw w(k)
v(k) = 0 De D11
x3 (k)
It is clear that state x1 is not observable and state x2 is not controlable. Since A has all
eigenvalues inside the unit disc, we can do model reduction by deleting these states and
we obtain the reduced controller:
1
1
x3 (k + 1) = (AB3 F B1 D11
C1 B3 De D11
C1 ) x3 (k)
1
1
+(B1 D11
+B3 De D11
) y(k) + B3 Dw w(k)
1
1
v(k) = (F De D11 C1 ) x3 (k) + De D11 y(k) + Dw w(k)
This is exactly equal to the optimal predictive controller, derived in the equations (5.133)
and (5.134).
Remark 33 : Note that a condition of a stable Q is equivalent to the condition that all
eigenvalues of (A B3 F ) are strictly inside the unit circle. (The condition a stable inverse
1
H 1 is equivalent to the condition that all eigenvalues of (A B1 D11
C1 ) are strictly inside
the unit circle.)
Example 34 : computation of function Q1 (q) and Q2 (q) for a given MPC controller
Consider the system of example 7 on page 52 with the MPC controller computed in example
24 on page 97. Using equations 6.23 and 6.24 we can give the functions Q1 (q) and Q2 (q)
as follows:
A B3 F
B3 Dw
Q1 (q)
F
Dw
Q2 (q)
A B3 F
F
Note that
1 (q) Dw
Q1 (q) = Q
(B1 + B3 De )
De
1 (q)
Q
A B3 F
F
B3
I
121
6.4.2
The IMC scheme provides a nice framework for the case where the process is stable. For
the unstable case we can use the Youla parametrization [48], [73], [65]. We consider process
G(q), noise lter H(q), satisfying
y(k) = G(q) v(k) + H(q) e(k)
(6.25)
For simplicity reasons we will consider F (q) = 0 (known disturbances are not taken into
account). The extension for F (q) = 0 is straight forward.
Consider square transfer matrices
M(q) N(q)
(q) =
X(q) Y (q)
1
=
(q) =
(q)
Y (q) N
(q)
X(q)
M
(6.26)
+ Y (q) + Q2 (q)N(q)
122
CHAPTER 6. STABILITY
where M(q), N(q), X(q) and Y (q) are found from a doubly coprime factorization of plant
model G(q) and Q1 (q) and Q2 (q) are transfer functions with the appropriate dimensions.
For this controller we can make the following proposition:
Proposition 35 Consider the process (6.25) and the Youla parametrization (6.26). The
closed loop description is then given by:
+ Y (q) + N(q)Q
(6.28)
y(k) = N(q)Q
1 (q)w(k)
2 (q) R(q)e(k)
The closed loop is stable if and only if both Q1 and Q2 are stable.
The closed loop of plant model G(q), noise model H(q) together with the Youlabased
controller is visualised in gure 6.3.
Controller
e
?
e
6
?

Y 1
Q2

Process
Q1
6
e
v N
?
e
y 
The Youla parametrization (6.26) gives all controllers stabilizing G(q) for stable Q1 (q) and
Q2 (q). For a stabilizing MPC controller we can compute the corresponding functions Q1 (q)
and Q2 (q):
The MPC controller in a Youla conguration
The decomposition of G(q) in two stable functions M(q) and N(q) (or coprime factorization) is not unique. We will consider two possible choices for the factorization:
Factorization, based on the inverse noise model:
A possible statespace realization of (q) is the following
ij (q) = C,i (qI A )1 B,j + D,ij
M(q) N(q)
X(q) Y (q)
=
H (q) H (q)G(q)
X(q)
Y (q)
1
C1
A B1 D11
1
D11 C1
F
123
1
B1 D11
B3
1
D11
0
0
I
and we nd
R(q) = H(q) M(q) = I
And we choose
Q1 = Dw
Q2 = De
1
The poles of (q) are equal to the eigenvalues of the matrix A B1 D11
C1 , so equal to the
poles of the inverse noise model H 1(q). For 1 (q) we nd
A
B
F
B
B
3
1
3
= Y (q) N (q)
1 (q) =
C1
D11 0
X(q) M (q)
0
I
F
and so the poles of 1 (q) are equal to the eigenvalues of A B3 F . So, if the eigenvalues of
1
C1 ) and (A B3 F ) are inside the unit circle, the controller will be stabilizing
(A B1 D11
for any stable Q1 and Q2 . In our case Q1 = Dw and Q2 = De are constant matrices and
thus stable.
Factorization, based on a stable plant model:
If G, H and H 1 are stable, we can also make another choice for :
1
M(q) N(q)
H (q) H 1 (q)G(q)
(q) =
=
0
I
X(q) Y (q)
(q) is stable, and so is 1 (q), given by:
H(q) G(q)
1
(q) =
0
I
By comparing Figure 6.2 and Figure 6.3 we nd that for the above choices the Youla
parametrization boils down to the IMC scheme by choosing QY OU LA = QIM C . We nd
that the IMC scheme is a special case of the Youla parametrization.
Example 36 : computation of the functions , 1 , Q1 (q) and Q2 (q)
Given a system with process model G and noise model H given by
q 1
1 2 q 1
1 0.4 q 1
H =
1 2 q 1
G = 0.2
124
CHAPTER 6. STABILITY
A
C,1
C,2
1
B,1 B,2
C1
A B1 D11
1
D,11 D,12 =
D11 C1
F
D,21 D,22
A1
B1 ,1 B1 ,2
A B3 F
C1 ,1
D1 ,11 D1 ,12 =
C1
1
1
1
C ,2
D ,21 D ,22
F
1
B1 D11
B3
0.4
1
= 1
D11
0
6
0
I
1.6 0.2
1
0
0
1
B1 B3
0.8
=
D11 0
1
0
I
6
1.6 0.2
1
0
0
1
We see that the poles of both and 1 are stable (0.4 and 0.8 respectively). Finally
Q1 = Dw = 3 and Q2 = De = 2.
6.5
Robustness
In the last decades the focus of control theory is shifting more and more towards control of
uncertain systems. A disadvantage of common nite horizon predictive control techniques
(as presented in the chapters 2 to 5) is their inability to deal explicitly with uncertainty
in the process model. In real life engineering problems, system parameters are uncertain,
because they are dicult to estimate or because they vary in time. That means that we
dont know the exact values of the parameters, but only a set in which the system moves.
A controller is called robustly stable if a small change in the system parameters does
not destabilize the system. It is said to give robust performance if the performance does
not deteriorate signicantly for small changes parameters in the system. The design and
analysis of linear timeinvariant (LTI) robust controllers for linear systems has been studied
extensively in the last decade.
In practice a predictive controller usually can be tuned quite easily to give a stable closedloop and to be robust with respect to model mismatch. However, in order to be able to
guarantee stability, feasibility and/or robustness and to obtain better and easier tuning
rules by increased insight and better algorithms, the development of a general stability
and robustness theory for predictive control has become an important research topic.
One way to obtain robustness in predictive control is by careful tuning (Clarke & Mothadi
[13], Soeterboek [61], Lee & Yu [39]). This method gives quite satisfactory results in the
unconstrained case.
6.5. ROBUSTNESS
125
In the constrained case robustness analysis is much more dicult, resulting in more complex
and/or conservative tuning rules (Zariou [77], Gencelli & Nikolaou [30]). One approach to
guarantee robust stability is the use of an explicit contraction constraint (Zheng & Morari
[80], De Vries & van den Boom [74]). The two main disadvantages of this approach are the
resulting nonlinear optimization problem and the large but unclear inuence of the choice
of the contraction constraint on the controlled system.
An approach to guarantee robust performance is to guarantee that the criterion function
is a contraction by optimizing the maximum of the criterion function over all possible
models (Zheng & Morari [80]). The main disadvantages of this method are the need to use
polytopic model uncertainty descriptions, the use of less general criterion functions and,
especially, the dicult min max optimization. In the midnineties Kothare et al. ([35])
derived an LMIbased MPC method (see chapter 7), that circumvented all of these disadvantages, however, it may become quite conservative. This method, and its extensions,
will be discussed in section 7.4 and 7.5, respectively.
Robust stability for the IMC scheme
and
Next we will study what happens if there is a mismatch between the plant model G
the true plant G in the Internal Model Control (IMC) scheme of Section 6.4.1. We will
elaborate on the additive model error case, so the model error is given by:
1
126
CHAPTER 6. STABILITY
(q)H(q)e(k)
Q1 (q)w(k)+Q
v(k) = I Q2 (q)H (q)(G(q) G(q))
2 (q)H
1
1(q)(G(q) G(q))
I Q2 (q)H
is stable. A sucient condition for stability is that for all q 1:
1(q)(G(q) G(q))
= I
Q2 (q)H
This leads to the sucient condition for stability:
1 (G G)
<1
Q2 H
which can be relaxed to
1 Wa W 1 (G G)
<1
Q2 H
a
< 1, and so a sucient condition is
We know that Wa1 (G G)
1 Wa < 1
Q2 H
1 Wa . This function T depends on the integer design paDene the function T = Q2 H
rameters N, Nc , Nm , but also on the choice of performance index matrices C2 , D21 , D22 ,
and D23 . A tuning procedure for robust stability can follow the next steps: Design the
controller for some initial settings (see also chapter 8). Compute T . If T < 1, we
already have robust stability and we can use the initial setting as nal setting. If T 1,
the controller has to be retuned to obtain robust stability:
We can perform a mixedinteger optimization to nd a better parameter setting and terminate as soon as T < 1, or proceeded to obtain a even better robustness margin
(T 1).
Tuning of the noise model in the IMC conguration
Until now we assumed that the noise model
H(q) = C1 (qI A)1 B1 + D11
1
was correct with a stable inverse (eigenvalues of AB1 D11
C1 are all strict inside the unit
circle). In the case of model uncertainty however, the state observer B1 , based on the above
6.5. ROBUSTNESS
127
noise model may not work satisfactory and may even destabilize the uncertain system. In
many papers, the observer matrix B1 is therefore considered as a design variable and is
tuned to obtain good robustness properties. By changing B1 , we can change the dynamics
of T (z) and thus improve the robustness of the predictive controller.
We make the following assumption:
D21 = D21,o B1
which means that matrix De is linear in B1 , and so there exist a matrix De,o such that
De = De,o B1 . This will make the computations much easier.
We already had the equation
1 Wa < 1
Q2 H
Note that Q2 (q) is given by
Q2 (q) = F (q I A + B2 F )1 (B1 + B3 De ) + De
= F (q I A + B2 F )1 (I + B3 De,o ) + De,o B1
= Q2,o (q) B1
So a sucient condition for stability is
1 Wa < 1
Q2,o B1 H
where we now use B1 as a tuning variable. Note that for B1 0 we have that
1 Wa 0 and so by reducing the magnitude of matrix B1 we can alQ2,o B1 H
ways nd a value that will robustly stabilize the closedloop system. A disadvantage of
this procedure is that the noise model is disturbed and noise rejection is not optimal any
more. To retain some disturbance rejection, we should detune the matrix B1 not more
than necessary to obtain robustness.
Robust stability in the case of an IIO model
In this section we will consider robust stability in the case where we use an IIO model for
o and the true
the MPC design with an additive model error between the IO plant model G
plant Go . Note that we cannot use the IMC scheme in this case, because the plant is not
stable. We therefore use the Youla conguration. We make the following assumptions:
i (q) = 1 (q)G
o (q).
1. Gi (q) = 1 (q)Go(q) and G
i (q) = 1 (q)H
o(q).
2. Hi (q) = 1 (q)Ho (q) and H
o (q), H 1 (q) and H
1(q) are stable.
3. Go (q), G
o
o
i (q) is stabilizing.
i (q), H
4. The MPC controller for the nominal model G
128
CHAPTER 6. STABILITY
5. The true IIO plant is given by Gi (q) = 1 (q) Go (q) + Wa (q)(q) where the
uncertainty bound is given by 1, and Wa (q) is a given stable weighting lter
with stable inverse Wa1 (q).
M
1 (q)
o (q) = M 1 (q)N(q) = N(q)
We introduce a coprime factorization on the plant G
i (q) and S(q) = 1 (q)M(q). Now R(q), S(q), R1 (q) and
and dene R(q) = M(q)H
1
R (q) are all stable. (If we use the coprime factorization based on the inverse noise model
1 (q).)
on page 122, we will obtain R(q) = I and S(q) = H
o
The closed loop conguration is now given in Figure (6.4). The controller is given by
e
? e
6
6
e
Y 1
Q2

Process
Controller
Q1
Wa 
v
o
G
R
?
e ?
e
M 1
(Y + Q2 N) v(k) = (X + Q2 M) y + Q1 r(k)
and the plant
M y = N v + R e(k) + S
= XM
we obtain
substitution and using the fact that MX
(Y + Q2 N) v(k) = (X + Q2 M) y + Q1 r(k)
= (XM 1 + Q2 ) (N v + R e(k) + S ) + Q1 r(k)
+ Q2 ) (N v + R e(k) + S ) + Q1 r(k)
1 X
= (M
Q2 N) v = (M
+ Q2 ) (R e(k) + S ) + Q1 r(k)
1 XN
1 X
(Y + Q2 N M
XN)
+M
Q2 ) (R e(k) + S ) + M
Q1 r(k)
(MY
v = (X
y
6.5. ROBUSTNESS
129
Y XN
= I we obtain
By using the fact that M
+ MQ
2 ) (R e(k) + S ) + MQ
1 r(k)
v = (X
Q1 r(k)
+M
Q2 ) (R e(k) + S ) + Wa M
= Wa (X
A necessary and sucient condition for robust stability is now given by
+M
Q2 S) 1
Wa (X
+M
Q2 S). Similar to the IMC robust analysis case this
Dene the function T = Wa (X
function T depends on the integer design parameters N, Nc , Nm , but also on the choice of
performance index matrices C2 , D21 , D22 , and D23 . We can use the same tuning procedure
as for the IMC case to achieve robust stability.
130
CHAPTER 6. STABILITY
Chapter 7
MPC using a feedback law, based on
linear matrix inequalities
In the chapters 2 to 6 the basic framework of model predictive control was elaborated.
In this chapter a dierent approach to predictive control is presented, using linear matrix
inequalities (LMIs), based on the work of Kothare et al. [35]. The control strategy is
focussed on computing an optimal feedback law in the receding horizon framework. A
controller can be derived by solving convex optimization problems using fast and reliable
tecniques. LMI problems can be solved in polynomial time, which means that they have
low computational complexity.
In fact, the eld of application of Linear Matrix Inequalities (LMIs) in system and control
engineering is enormous, and more and more engineering problems are solved using LMIs.
Some examples of application are stability theory, model and controller reduction, robust
control, system identication and (last but not least) predictive control.
The main reasons for using LMIs in predictive control are the following:
Stability is easily guaranteed.
Feasibility results are easy to obtain.
Extension to systems with model uncertainty can be made.
Convex properties are preserved.
Of course there are also some drawbacks:
Although solving LMIs can be done using convex techniques, the optimization problem is more complex than a quadratic programming problem, as discussed in chapter 5.
Feasibility in the noisy case still can not be guaranteed.
131
132
In the section 7.1 we will discuss the main features of linear matrix inequalities, in section
7.2 we will show how LMIs can be used to solve the unconstrained MPC problem, and in
section 7.3 we will look at the inequality constrained case. Robustness issues, concerning
stability in the case of model uncertainty, are discussed in section 7.4. Finally section 7.5
will discuss some extensions of the work of Kothare et al. [35].
7.1
m
i Fi
(7.1)
i=1
where Rm1 , with elements i , is the variable and the symmetric matrices Fi = FiT
Rnn , i = 0, . . . , m are given. A strict Linear Matrix Inequality (LMI) has the form
F () > 0
(7.2)
The inequality symbol in (7.2) means that F () is positive denite (i.e. xT F ()x > 0 for
all nonzero x Rn1 ). A nonstrict LMI has the form
F () 0
Properties of LMIs:
Convexity:
The set C = {F () > 0} is convex in , which means that for each pair 1 , 2 C
and for all [0, 1] the next property holds
1 + (1 )2 C
Multiple LMIs:
Multiple LMIs can be expressed as a single LMI, since
F (1) > 0 , F (2) > 0 , . . . , F (p) > 0
is equaivalent to:
0
F (1) 0
0 F (2)
0
..
..
.
.
.
.
.
0
0 F (p)
>0
133
Schur complements:
Consider the following LMI:
Q() S()
>0
S T () R()
where Q() = QT (), R() = RT () and S() depend anely on . This is equaivalent to
R() > 0 , Q() S()R1 ()S T () > 0
n(n+1)/2
M=
Mi mi
i=1
where mi are the scalar entries of the upperright triangle part of M, and Mi are symmetric
matrices with entries 0 or 1. For example:
1 0
m1 m2
0 1
0 0
=
m1 +
M=
m2 +
m3
m2 m3
0 0
1 0
0 1
This means that a matrix inequality, that is linear in a symmetric matrix M, can easily
be transformed into a linear matrix inequality with a parameter vector
T
= m1 m2 . . . mn(n+1)/2
Before we look at MPC using LMIs, we consider the more simple cases of Lyapunov stability
of an autonomous system and the properties of a stabilizing statefeedback.
Lyapunov theory:
Probably the most elementary LMI is related to the Lyapunov theory. The dierence state
equation
x(k + 1) = A x(k)
is stable (i.e. all state trajectories x(k) go to zero) if and only if there exists a positivedenite matrix P such that
AT P A P < 0
For this system the Lyapunov function V (k) = xT (k)P x(k) is positive denite for P > 0,
and the increment V (k) = V (k + 1) V (k) is negative denite because
V (k + 1) V (k) =
=
=
<
for (AT P A P ) < 0.
134
Stabilizing statefeedback:
Consider the dierence state equation:
x(k + 1) = A x(k) + B v(k)
where x(k) is the state and v(k) is the input. When we apply a state feedback v(k) =
F x(k), we obtain:
x(k + 1) = A x(k) + B F x(k) = (A + BF ) x(k)
and so the corresponding Lyapunov equation becomes:
(AT + F T B T )P (A + BF ) P < 0 ,
P >0
S>0
(7.3)
which are LMIs in S and Y . So any S and Y satisfying the above LMI result in a stabilizing
statefeedback F = Y S 1 .
7.2
(7.4)
(7.5)
(7.6)
135
For simplicity reasons, in this chapter, we consider e(k) = 0. Like in chapter 5, we consider
constant external signal w, a zero steadystate performance signal zss and a weighting
matrix (j) which is equal to identity, so
w(k + j) = wss for all j 0
zss = 0
(j) = I for all j 0
Let the system have a steadystate, given by (vss , xss , wss , zss ) = (Dssv wss , Dssxwss , wss, 0).
We dene the shifted versions of the input and state which are zero in steadystate:
v (k) = v(k) vss
x (k) = x(k) xss
Now we consider the problem of nding at time instant k an optimal state feedback v (k +
jk) = F (k) x (k + jk) to minimize the quadratic objective
min(k)J(k)
F
zT (k + jk)
z (k + jk)
j=0
in which z(k + jk) is the prediction of z(k + j), based on knowledge up to time k.
J(k) =
=
j=0
z(k + jk)T
z (k + jk)
(C2 x(k + jk) + D22 w(k + j) + D23 v(k + jk))T (C2 x(k + jk)
j=0
=
(C2 x (k + jk) + D22 v (k + jk))T (C2 x (k + jk) + D22 v (k + jk))
=
j=0
T
x T (k + jk)(C2T + F T (k)D22
)(C2 + D22 F (k))x (k + jk)
j=0
(7.7)
136
for every trajectory. Note that because V (k) < 0 and V (k) > 0, it follows that V () =
0. This implies that x () = 0 and therefore V () = 0. Now there holds:
V (k) = V (k) V ()
V (k + j)
=
j=0
>
T
x T (k + jk)(C2T + F T (k)D22
)(C2 + D22 F (k))x (k + jk)
j=0
= J(k)
so x T (k)P x (k) is a lyapunov function and at the same time an upper bound on J(k).
Now derive
V (k) = V (k + 1) V (k) =
= x T (k + 1k)P x (k + 1k) x T (k)P x (k) =
= x T (k) (A + B3 F )T P (A + B3 F ) x (k) x T (k)P x (k) =
= x T (k) (A + B3 F )T P (A + B3 F ) P x (k)
And so condition (7.7) is the same as
(A + B3 F )T P (A + B3 F ) P + (C2 + D22 F )T (C2 + D22 F ) < 0
By setting P = S 1 and F = Y S 1 we obtain the following condition:
T
(AT + S 1 Y T B3T )S 1 (A + B3 Y S 1 ) S 1 + (C2T + S 1 Y T D22
)(C2 + D22 Y S 1 ) < 0
or
S
SA + Y
B3T
(SC2T
+Y
T
D22
)1/2
S 1
AS + B3 Y
0
>0
0 1 I
1/2 (C2 S + D22 Y )
T
S
SAT + Y T B3T (SC2T + Y T D22
)1/2
> 0 ,S > 0
AS + B3 Y
S
0
1/2
0
I
(C2 S + D22 Y )
So any , S and Y satisfying the above LMI gives an upper bound
V (k) = x (k)T S 1 x (k) > J(k)
(7.9)
137
(7.10)
(7.11)
Now
>J
(7.12)
7.3
In the previous section an unconstrained predictive controller in the LMIsetting was derived. In this section we incorporate constraints. The most important tool we use is the
notion of invariant ellipsoid:
Invariant ellipsoid:
Lemma 37 Consider the system (7.4) and (7.6). At sampling time k, consider
x (kk)T S 1 x (kk) 1
and let S also satisfy (7.9), then there will hold for all j > 0:
x (k + jk)T S 1 x (k + jk) 1
Proof:
From the previous section we know that when (7.9) is satised, there holds:
V (k + j) < V (k) for j > 0
Using V (k) = x T (k)P x (k) = x T (k)S 1 x (k) we derive:
x (k + jk)T S 1 x (k + jk) = 1 V (k + j) < 1 V (k) = x (kk)T S 1 x (kk) 1
2 End Proof
138
E =  T S 1 1 =  T P
of the system, and so
x (kk) E
x (k + jk) E
j>0
The condition x (kk)T S 1 x (kk) 1 of the above lemma is equivalent to the LMI
1
x T (kk)
0
(7.13)
x (kk)
S
Signal constraints
Lemma 38 Consider the scalar signal
(k) = C4 x(k) + D42 w(k) + D43 v(k)
(7.14)
(7.15)
where D43 = 0 and max > 0. Further let condition (7.11) hold and assume that ss  <
max where
ss = C4 xss + D42 wss + D43 vss .
Then condition (7.15) will be satised if
S
(C4 S + D43 Y )T
0
C4 S + D43 Y (max ss )2
(7.16)
Proof::
Note that we can write
(k) = (k) + ss
where
(k) = C4 x (k)(k) + D43 v (k)(k) .
Equation (7.16) together with lemma 37 gives that
 (k + jk)2 (max ss )2
or
 (k + jk) max ss 
and so
(k) (k) ss  + ss  max ss  + ss  = max
2 End Proof
139
=
=
=
=
A x(k) + B3 v(k)
C1 x(k) + D12 w(k)
C2 x(k) + D22 w(k) + D23 v(k)
C4 x(k) + D42 w(k) + D43 v(k)
(7.17)
(7.18)
(7.19)
(7.20)
with a steadystate (vss , xss , wss , zss ) = (Dssv wss , Dssx wss , wss , 0) The LMIMPC control
problem is to nd, at each timeinstant k, a statefeedback law
v(k) = F (x(k) xss ) + vss
such that the criterion
min J(k)
F
where
J(k) =
z T (k + j)z(k + j)
(7.21)
j=0
j 0
(7.22)
where i,max > 0 for i = 1, . . . , n . The optimal solution is found by solving the following
LMI problem:
140
Theorem 40 Given a system with state space description (7.17)(7.19) and a control
problem of minimizing (7.21) subject to constraint (7.22), and an external signal
w(k + j) = wss
for j > 0
,S,Y
(7.23)
subject to
(LMI for lyapunov stability:)
S>0
(LMI for ellipsoid boundary:)
1
x T (kk)
0
S
x (kk)
(LMIs for worst case MPC performance index:)
T
S
SAT +Y T B3T (SC2T +Y T D22
)1/2
>0
AS +B3 Y
S
0
1/2
0
I
(C2 S + D22 Y )
(LMIs for constraints on :)
S
(C4 S + D43 Y )T EiT
0,
i = 1, . . . , n
Ei (C4 S + D43 Y ) (i,max i,ss )2
where Ei = 0 . . . 0 1 0 . . . 0 with the 1 on the ith position.
(7.24)
(7.25)
(7.26)
(7.27)
LMI (7.24) guarantees Lyapunov stability, LMI (7.25) describes an invariant ellipsoid for
the state, (7.26) gives the LMI for the MPCcriterion and (7.27) corresponds to the state
constraint on i .
7.4
In this section we consider the problem of robust performance in LMI based MPC. The
signals w(k) and e(k) are assumed to be zero. How these signals can be incorporated in
the design is in Kothare et al. [35].
We consider both unstructured and structured model uncertainty. Given the fact that w
and e are zero, we get the following model uncertainty descriptions for the LMIbased case:
141
Unstructured uncertainty
The uncertainty is given in the following description:
x(k + 1)
z(k)
(k)
(k)
(k)
=
=
=
=
=
(7.28)
(7.29)
(7.30)
(7.31)
(7.32)
..
.
r
(7.33)
= 1, . . . , r
(7.34)
Structured uncertainty
The uncertainty is given in the following description:
x(k + 1) = A x(k) + B3 v(k)
z(k) = C2 x(k) + D23 v(k)
(k) = C4 x(k) + D43 v(k)
(7.35)
(7.36)
(7.37)
where
A B3
P = C2 D23 Co
C4 D43
3,1
A1 B
23,1 , . . . ,
C2,1 D
43,1
C4,1 D
3,L
AL B
23,L
C2,L D
43,L
C4,L D
(7.38)
142
GG
where
J(k) =
z T (k + j)z(k + j)
(7.39)
j=0
j 0
(7.40)
In equation (7.39) inputsignal v(k) is choosen such that the performance index J is minimized for the worstcase choice of G in G, subject to constraints (7.40) for the same
worstcase G. As was introduced in the previous section, the uncertainty in G can be
either structured or unstructured, resulting in the theorems 41 and 42.
Theorem 41 Robust performance for unstructured model uncertainty
Given a system with uncertainty description (7.28)(7.34) and a control problem of minimizing (7.39) subject to constraints (7.40). The statefeedback F = Y S 1 minimizing the
worstcase J(k) can be found by solving:
min
,S,Y,T,V,
(7.41)
subject to
(LMI for lyapunov stability:)
S>0
(7.42)
S
(AS + B3 Y )T (C2 S + D23 Y )T (C5 S + D53 Y )T
T
(AS + B3 Y )
S B4 B4T
B4 D24
T
(C2 S + D23 Y ) D24 B4T I D24 D24
0
0
0
(C5 S + D53 Y )
(7.43)
(7.44)
143
>0
(7.45)
where is partitioned as :
= diag(1 In1 , 2 In2 , . . . , r Inr ) > 0
(7.46)
2
i,max
S
(C5 S + D53 Y )T (C4 S + D43 Y )T EiT
C5 S + D53 Y
0,
V 1
0
1 T
T
Ei (C4 S + D43 Y )
0
I Ei D44 V D11 Ei
i = 1, . . . , n
V 1 > 0
(7.47)
(7.48)
where V is partitioned as :
V = diag(v1 In1 , v2 In2 , . . . , vr Inr ) > 0
and Ei = 0 . . . 0 1 0 . . . 0 with the 1 on the ith position.
(7.49)
LMI (7.42) guarantees Lyapunov stability, LMI (7.43) describes an invariant ellipsoid for
the state, (7.44), (7.45) are the LMIs for the worstcase MPCcriterion and (7.47), (7.49)
and (7.48) correspond to the state constraint on .
(7.50)
(7.51)
j0
Further the derivation of (7.47)(7.48) is analoguous to the derivation of the LMI for an
output constraint in Kothare et al.[35].
In the notation of the same paper (7.44) can be derived as follows:
V (x(k+j +1k)) =
144
T
(A + B3 F )T P (A + B3 F ) (A + B3 F )T P B4
=
B4T P (A + B3 F )
B4T P B4
T
P 0
x(k + jk)
x(k + jk)
V (x(k + jk)) =
0 0
(k + jk)
(k + jk)
x(k + jk)
(k + jk)
x(k + jk)
(k + jk)
J(k + j) =
= z(k + j)T z(k + j)
T
= (C2 + D23 F )x(k + j) + D24 (k + j) R (C2 + D23 F )x(k + j) + D24 (k + j)
T
(C2 + D23 F )T (C2 + D23 F ) (C2 + D23 F )T D24
x(k + jk)
x(k + jk)
=
T
T
D24
(C2 + D23 F )
D24
D24
(k + jk)
(k + jk)
And so the condition
V (x(k+j +1k)) V (x(k + jk)) J(k + j)
result in the inequality
T
(A+B3 F )T P B4 + (C2 +D23 F )T D24
(A+B3 F )T P (A+B3F ) P +
x(k+jk)
0
(k+jk)
This condition together with condition
(k + jk)T (k + jk)
x(k + jk)(C22, + D53, F )T (C22, + D53, F )x(k + jk) , = 1, 2, . . . r
is satised if there exists a matrix Z 0 such that
(AS + B3 Y )T S 1 B4 +
(AS + B3 Y )T S 1 (AS + B3 Y ) S+
T
+(C2 S + D23 Y ) (C2 S + D23 Y ) + SQS+
+(C2 S + D23 Y )T D24
145
,S,Y
(7.52)
subject to
(LMI for lyapunov stability:)
S>0
(LMI for ellipsoid boundary:)
1
xT (kk)
0
x(kk)
S
(7.53)
(7.54)
T (S C T +Y T D
T )1/2
S
S AT +Y T B
3,
2,
23,
3, Y
0
S
0
A S + B
1/2
23, Y )
0
I
(C2, S + D
, = 1, . . . , L (7.55)
LMI (7.53) guarantees Lyapunov stability, LMI (7.54) describes an invariant ellipsoid for
the state, (7.55) gives the LMIs for the worstcase MPCcriterion and (7.56) correspond to
the state constraint on .
(7.57)
(7.58)
146
C4 D43
j0
1
=
Ei (C4 + D43 F )S 2 ,
j0
Further the derivation of (7.56) is analogous to the derivation of the LMI for an output
constraint in Kothare et al.[35].
In the notation of the same paper (7.55) can be derived as follows. The equation
(A + B3 F )T P (A + B3 F ) P + Q + F T RF 0
is replaced by:
(A + B3 F )T P (A + B3 F ) P + (C2 + D23 F )T (C2 + D23 F ) 0
For P = S 1 and F = Y S 1 we nd
T
S
SAT + Y T B3T (SC2T + Y T D23
)1/2
0
AS + B3 Y
S
0
1/2
0
I
(C2 S + D23 Y )
(7.59)
7.5
Although the LMI based MPC method, described in the previous sections, is a useful for
robustly controlling systems with signal constraints, there are still a few disadvantages.
Computational complexity: Despite the fact that solving an LMI problem is convex,
it is computationally more complex than a quadratic program. For larger systems
the time needed to solve the problem may be too long.
Conservatism: The use of invariant sets (see Section 7.3) may lead to very conservative approximations of the signal constraints. The assumption that constraints are
symmetric around the steady state value is very restrictive.
Approximation of the performance index: Equation (7.12) shows that in LMIMPC
we do not minimize the performance index J itself, but only an upper bound J.
This means that we may be far away from the real optimal control input.
Since Kothare et al. [35] introduced the LMIbased MPC approach, many researchers have
been looking for ways to adapt and improve the method such that at least one of the above
disadvantages is relaxed.
147
(7.60)
In this way only a simple optimization is needed online to calculate the N extra control
moves. Then the closedloop system is derived as follow.
x(k + 1) = Ac (k)x(k) + B(k)c(k)
(7.61)
where Ac (k) = A(k) + B(k)F . Equation can be expressed as the following statespace
model.
(k + 1) = (k)(k)
B(k) 0 . . . 0
Ac (k)
(k) =
0
M
c(k)
c(k + 1)
x(k)
(k) =
, f (k) =
..
f (k)
.
c(k + N 1)
0 IN 0 . . . 0
0 0 IN
0
..
.
.. ..
M = .
. ..
.
0 0
0 IN
0 0 ... 0 0
(7.62)
(7.63)
If there exists an invariant set Ex = {xxT Q1 x 1} for the control law v(k + ik) =
F x(k + ik), i 0, there must exist ar least one invariant set E for system (7.62), where
Ex = { T Q1
1}
(7.64)
Matrix Q1
can be partioned as
Q1
T
11 Q
Q
21
21 Q
22
Q
11 = Q1
withQ
148
can aord highly tuned control law without the reduction of the invariant set. Hence, the
control law F can be designed oine, using the any of the robust control methods without
considering the constraints. When the optimal F is decided, the cost function can be
replaced by Jf = f T f . One can see that the computational burden involved in the online
tuning of F is removed that shows the main advantage of this method.
Related to this method, Bloemen et al. [8] extends the hybrid approach by introducing a
switching horizon Ns . Before Ns the feedback law is designed oine, but after the Ns the
control law F is a variable in the online optimization problem. Here the uncertainties are
not considered. The performance index is dened as
J(k) =
(7.65)
j=0
where Q and R are positivedenite weighting matrices. If F is given, the innite sum
beyond Ns in (7.65) can be replaced by quadratic terminal cost function xT (k+Ns k)P x(k+
Ns k), the weighting matrix P can be obtained via a Lyapunov equation. Based on this
terminal cost function the upper bound of the performance index beyond Ns is given as
follow
N
s 1
J(k)
=
y T (k + j + 1k)Qy(k + j + 1k) + uT (k + jk)Ru(k + jk)
j=0
(7.66)
The the entire MPC optimization problem can be given by the semidenite programming
149
150
Chapter 8
Tuning
As was already discussed in the introduction, one of the main reasons for the popularity
of predictive control is the easy tuning. The purpose of tuning the parameters is to acquire good signal tracking, sucient disturbance rejection and robustness against model
mismatch.
Consider the LQPCperformance index
J(u, k) =
N
x (k + jk)Q
x(k + jk) +
N
uT (k + j 1)Ru(k + j 1)
j=1
j=Nm
N
T
yp (k + jk) r(k + j)
yp (k + jk) r(k + j) +
j=Nm
Nc
uT (k + j 1)u(k + j 1)
j=1
P (q)
Q
R
=
=
=
=
=
=
=
minimumcost horizon
prediction horizon
control horizon
weighting factor (GPC)
tracking lter (GPC)
state weighting matrix (LQPC)
control weighting matrix (LQPC)
151
152
CHAPTER 8. TUNING
This chapter discusses the tuning of a predictive controller. In section 8.1 some rules of
thumb are given for the initial parameter settings. In section 8.2 we look at the case
where the initial controller does not meet the desired specications. An advanced tuning
procedure may provide some tools that lead to a better and suitable controller. The nal
netuning of a predictive controller is usually done online when the controller is already
running. Section 8.3 discusses some special settings of the tuning parameters, leading to
well known controllers.
In this chapter we will assume that the model is perfect. Robustness against model mismatch was already discussed in section 6.5.
8.1
For GPC the parameters N, Nc and are the three basic tuning parameters, and most
papers on GPC consider the tuning of these parameters. The rulesofthumb for the settings
of a GPCcontroller were rst discussed by Clarke & Mothadi ([13]) and later reformulated
for the more extended UPCcontroller in Soeterboek ([61]). In LQPC the parameters N
and Nc are present, but instead of and P (q), we have the weighting matrices R and Q.
In Lee & Yu ([39]), the tuning of the LQPC parameters is considered.
In the next sections the initial setting for the summation parameters (Nm , N, Nc ) and the
signal weighting parameters (P (q), / Q, R) are discussed.
8.1.1
In both the LQPCperformance index and the GPCperformance index, three summation
parameters, Nm , N and Nc are recognized, which can be used to tune the predictive
controller. Often we choose Nm = 1, which is the best choice most of the time, but may
be chosen larger in the case of a deadtime or an inverse response.
The parameter N is mostly related to the length of the step response of the process, and
the prediction interval N should contain the crucial dynamics of x(k) due to the response
on u(k) (in the LQPCcase) or the crucial dynamics of y(k) due to the response on u(k)
(in the GPCcase).
The number Nc N is called the control horizon which forces the control signal to become
constant
u(k + j) = constant for j Nc
or equivalently the increment input signal to become zero
u(k + j) = 0 for j Nc
An important eect of a small control horizon (Nc N) is the smoothing of the control
signal (which can become very wild if Nc = N) and the control signal is forced towards
153
its steadystate value, which is important for stabilityproperties. Another important consequence of decreasing Nc is the reduction in computational eort. Let the control signal
v(k) be a p 1 vector. Then the control vector v(k) is a pN 1 vector, which means
we have pN degrees of freedom in the optimization. By introducing the control horizon
Nc < N, these degrees of freedom decrease because of the equality constraint, as was shown
in chapter 4 and 5. If the control horizon is the only equality constraint, the new optimization parameter is
, which is a pNc 1 vector. Thus the degrees of freedom reduces
is an pN pNc matrix, and so
to pNc . This can be seen by observing that the matrix D
33
T
D
23 D
becomes an pNc pNc matrix. For the case without
)T D
the matrix H = 2 (D
33
23
33
further inequality constraints, this matrix H has to be inverted, which is easier to invert
for smaller Nc .
If we add inequality constraints we have to minimize
1 T
min
H
2
subject to these constraints. The number of parameters in the optimization reduces from
pN to pNc , which will speed up the optimization.
Consider a process where
d = the
n = the
= the
= the
= the
= the
ts = the
s = the
b = the
where Gc,o (s) is the original IO process model in continuoustime (corresponding to the
discretetime IO process model Go (q)). Now the following initial setting are recommended
(Soeterboek [[61]]):
Nm
Nc
N
N
=
=
=
=
1+d
n
int(N ts ) for welldamped systems
int(N s /b ) for badlydamped and unstable systems
where N [1.5, 2] and N [4, 25]. Resuming, Nm is equal to 1 + the process deadtime
estimate and N should be chosen larger than the length of the step response of the openloop process. When we use an IIO model, the value of N should be based on the original
IOmodel (Go (q) or Gc,o(s)).
154
CHAPTER 8. TUNING
For badlydamped and unstable systems, the above choice of N = 2 int(2s /b) is a lower
bound. Often increasing N may improve the stability and the performance of the closed
loop.
In the constrained case Nc will usually be chosen bigger, to introduce extra degrees of
freedom in the optimization. A reasonable choice is
n Nc N/2
If signal constraints get very tight, Nc can be even bigger than N/2. (see elevator example
in chapter 5).
Remark 43 : Alamir and Bornard [1] show there exists a nite value N for which the
closedloop system is asymptotically stable. Unfortunately, only the existence of such a
nite prediction horizon is shown, its computation remains dicult.
8.1.2
and
R = 2 I
155
which makes the LQPC performance index equivalent to the GPC index for an IO model,
r(k) = 0 and P (q) = 1. This can be seen by substitution:
J(u, k) =
N
xT (k + jk)Q
x(k + jk) +
Nc
j=1
j=Nm
N
uT (k + j 1)Ru(k + j 1)
x (k +
jk)C1T C1 x(k
+ jk) +
N
Nc
uT (k + j 1)u(k + j 1)
j=1
j=Nm
yT (k + jk)
y (k + jk) + 2
j=Nm
Nc
uT (k + j 1)u(k + j 1)
j=1
The rules for the tuning of are the same as for GPC. So, = 0 is an obvious choice for
minimum phase systems, is small for nonminimum phase systems. By adding an term
Q1 to the weighting matrix
Q = C1T C1 + Q1
we can introduce an additional weighting xT (k+jk)Q1x(k+jk) on the states.
In the MIMO case it is important to scale the input and output signals. To make this
clear, we take an example from the paper industry where we want to produce a paper roll
with a specic thickness (in the order of 80 m) and width (for A1paper in the order of 84
cm), Let y1 (k) represent the thicknesserror (dened in meters) and y2 (k) the widtherror
(also dened in meters). If we dene an error like
J=
y1(k)2 + y2(k)2
it is clear that the contribution of the thicknesserror to the performance index can be
neglected in comparison to the contribution due to the widtherror. This means that the
error in paper thickness will be neglected in the control procedure and all the eort will be
put in controlling the width of the paper. We should not be surprised that using the above
criterion could result in paper with variation in thickness from, say 0 to 200 m. The
introduction of scaling gives a better balance in the eort of minimizing both widtherror
and thicknesserror. In this example let us require a precision of 1% for both outputs, then
we can introduce the performance index
J=
y1(k)/8 107 2 + y2 (k)/8.4 103 2
Now the relative error of both thickness and width are measured in the same way.
In general, let the required precision for the ith output be given by di, so  yi  di , for
i = 1, . . . , m. Then a scaling matrix can be given by
S = diag(d1 , d2 , . . . , dm )
156
CHAPTER 8. TUNING
By choosing
Q = C1T S 2 C1
the rst term of the LQPC performance index will consist of
T
x (k + jk)Q
x(k + jk) = x (k +
jk)C1T S 2 C1 x(k
+ jk) =
m

yi (k + jk)2 /d2i
i=1
We can do the same for the input, where we can choose a matrix
R = 2 diag(r12 , r22 , . . . , rp2)
The variables ri2 > 0 are a measure how much the costs should increase if the ith input
ui (k) increases by 1.
Example 44 : Tuning of GPC controller
In this example we tune a GPC controller for the following system:
ao (q) y(k) = bo (q) u(k) + fo (q) do(k) + co (q) eo(k)
where
ao (q)
bo (q)
co (q)
fo (q)
= (1 0.9q 1)
= 0.03 q 1
= (1 0.7q 1)
=0
For the design we consider eo (k) to be integrated ZMWN (so ei (k) = eo (k) is ZMWN).
First we transform the above polynomial IO model into a polynomial IIO model and we
obtain the following system:
ai (q) y(k) = bi (q) u(k) + fi (q) di(k) + ci (q) ei (k)
where
ai (q)
bi (q)
ci (q)
fi (q)
= (1 q 1 )ao (q)
= bo (q)
= co (q)
= fo (q)
= (1 q 1 )(1 0.9q 1 )
= 0.03 q 1
= (1 0.7q 1 )
=0
Now we want to nd the initial values of the tuning parameters following the tuning rules
in section 8.1. To start with we plot the step response of the IOmodel (!!!) in Figure 8.1.
We see in the gure that system is welldamped. The 5% settling time is equal to ts = 30.
The order of the IIO model is n = 2 and the dead time d = 0 .
157
0.35
0.3
s(k) >
0.25
0.2
0.15
0.1
0.05
0
0
stepresponse
5% bounds
settling time
10
20
30
k >
40
50
60
158
CHAPTER 8. TUNING
solid: y , dotted: r , dashdot: eo
1.5
y(k),r(k) >
0.5
0.5
1.5
0
y(k)
r(k)
eo(k)
10
20
30
k >
40
50
60
8.2
In the ideal case the performance index and constraints reect the mathematical formulation of the specications from the original engineering problem. However, in practice the
159
10
10
10 2
10
10
10
10
10
y(k) for =4
y(k) for =10
y(k) for =25
e (k)
1.5
0.5
0
0.5
1
0
10
15
k >
20
25
30
160
CHAPTER 8. TUNING
desired behaviour of the closed loop system is usually expressed in performance specications, such as stepresponse criteria (overshoot, rise time and settling time) and response
criteria on noise and disturbances. If the specications are not too tight, the initial tuning,
using the rules of thumb of the previous section, may result in a feasible controller. If the
desired specications are not met, an advanced tuning is necessary. In that case the relation between the performance specications and the predictive control tuning parameters
(Nm , N, Nc , for GPC: , P (q), and for LQPC: Q, R) should be understood.
In this section we will study this relation and consider various techniques to improve or
optimize our predictive controller.
We start with the closed loop system as derived in chapter 6:
v(k)
M11 (q) M12 (q)
e(k)
=
M21 (q) M22 (q)
y(k)
w(k)
where w(k)
consists of the present and future values of reference and disturbance signal.
Let w(k)
161
The rise time ko = rt is an integer and the derivative cannot be computed directly.
Therefore we introduce the interpolated rise time
rt :
rt =
The interpolated rise time is the (realvalued) timeinstant where the rstorder interpolated stepresponse reach the value 0.8.
3. Settling time of output signal on step reference signal:
The settling time is dened as the time the output signal y(k) needs to settle within
5% of its nal value.
&
%
'
&
st = min k ko &  syr (k) 1  < 0.05
ko
st =
The interpolated settling time is the (realvalued) timeinstant where the rstorder
interpolated trackingerror on a step reference signal is settled within its 5% region.
4. Peak value of input signal on step reference signal
of the output signal v(k) for r(k) is a step, is given by
pvr = max  svr (k) 
k
=
wd (k), is given by
pyd = max  syd (k) 
k
162
CHAPTER 8. TUNING
vec(Ap )
Nm
vec(Q1/2 )
N
for LQPC: =
=
for GPC: = vec(Bp )
vec(R1/2 )
vec(Cp )
Nc
vec(Dp )
It is clear that the criteria depend on the chosen values of and , so:
= (, )
where consists of integer values and consists of real values. The tuning of and
can be done sequentially or simultaneously. In a sequential procedure we rst choose
the integer vector and tune the parameters for these xed . In a simultaneous procedure the vectors and are tuned at the same time, using mixedinteger search algorithms.
The tuning of the integer values is not too dicult, if we can limit the size of the search
space S . A reasonable choice for bounds on Nm , N and Nc is as follows:
1 Nm
N
1 N N2,o
1 Nc
N
where N2,o is the prediction horizon from the initial tuning, and = 2 if N2,o is big and
= 3 or = 4 for small N2,o . By varying ZZ we can increase or decrease the search
space S .
For the tuning of it is important to know the sensitivity of the criteria to changes in
163
Parameter optimization
Tuning the parameters and can be formulated in two ways:
1. Feasibility problem: Find parameters and such that some selected criteria
1 , . . . , p satisfy specic predescribed bounds:
1 c1
..
.
p cp
For example: nd parameters and such that the overshoot is smaller than 1.05,
the rise time is smaller than 10 samples and the bandwidth of the closed loop system
is larger than /2 (this can be written as bw /2).
2. Optimization problem: Find parameters and such that criterion o is minimized, subject to some selected criteria 1 , . . . , p satisfying specic predescribed
bounds:
1 c1
..
.
p cp
For example: nd parameters and such that the settling time is minimized,
subject to an overshoot smaller than 1.1 and actuator eort smaller than 10.
Since has integer values, an integer feasibility/optimization problem arises. The criteria
i (x) are nonlinear functions of and , and a general solution technique does not exist.
Methods like Mixed Integer Linear Programming (MILP), dynamic programming, Constraint Logic Propagation (CLP), genetic algorithms, randomized algorithms and heuristic
search algorithms can yield a solution. In general, these methods require large amounts of
calculation time, because of the high complexity and they are mathematically classied as
NPhard problems. This means that the search space (the number of potential solutions)
grows exponentially as a function of the problem size.
8.3
Predictive control is an open method, that means that many wellknown controllers can
be found by solving a specic predictive control problem with special parameter settings
(Soeterboek [61]). In this section we will look at some of these special cases. We consider
only the SISO case in polynomial setting, and
y(k) =
c(q)
q d b(q)
u(k) +
e(k)
a(q)(q)
a(q)(q)
164
CHAPTER 8. TUNING
where
a(q)
b(q)
c(q)
(q)
=
=
=
=
1 + a1 q 1 + . . . + ana q na
b1 q 1 + . . . + bnb q nb
1 + c1 q 1 + . . . + cnc q nc
1 q 1
Deadbeat control:
In a deadbeat control setting, the following performance index is minimized:
J(u, k) =
na
+nb +d

y (k + jk) r(k + j)2
j=nb +d
u(k + j) = 0 for j na + 1
This means that if we use the GPC performance, with setting Nm = nb + d, Nc = na + 1,
P (q) = 1 and N = na + nb + d and = 0 we obtain the deadbeat controller.
165
Meanlevel control:
In a meanlevel control (MLC) setting, the following performance index is minimized:
J(u, k) =

y (k + jk) r(k + j)2
j=1
u(k + j) = 0 for j 1
This means that if we use the GPC performance with setting Nm = 1, Nc = 1, P (q) = 1
and N . we obtain the meanlevel controller.
Poleplacement control:
In a poleplacement control setting, the following performance index is minimized:
J(u, k) =
na
+nb +d

yp (k + jk) r(k + j)2
j=nb +d
u(k + j) = 0 for j na + 1
where yp (k) = P (q)y(k) is the weighted output signal. This means that if we use the GPC
performance with setting Nm = nb + d, Nc = na + 1, P (q) = Pd (q) and N = na + nb + d.
In state space representation, the eigenvalues of (A B3 F ) will become equal to the roots
of polynomial Pd (q).
166
CHAPTER 8. TUNING
Appendix A: Quadratic
Programming
In this appendix we will consider optimization problems with a quadratic object function
and linear constraints, denoted as the quadratic programming problem.
Denition 46 The quadratic programming problem:
Minimize the object function
F () =
1 T
H + fT
2
over variable , where H is a semipositive denite matrix, subject to the linear inequality
constraint
A b
2 End Denition
This is the type of quadratic programming problem we have encountered in chapter 5,
where we solved the standard predictive control problem.
Remark 47 : Throughout this appendix we assume H to be a symmetric matrix. If H is
a nonsymmetric matrix, we can dene
Hnew =
1
(H + H T )
2
Substitution of this Hnew for H does not change the object function.
Remark 48 : The unconstrained quadratic programming problem:
If the constraints are omitted and only the quadratic object function is considered, an
analytic solution exists: An extremum of the object function
F () =
1 T
H + fT
2
168
Appendices
1 T
H + fT
2
over variable , where H is a semipositive denite matrix, subject to the linear equality
constraint
A = b
and the nonnegative constraint
0
2 End Denition
Lemma 50 The quadratic programming problem of denition 46 in a quadratic programming problem of the form of denition 49.
169
Proof : Dene
= + , with + , 0
Note that is now uniquely decomposed into + and . Further introduce a slackvariable y
such that
A + y = b where y 0
We dene
H H 0
f
+
= H
H
H 0 , f = f , A = A A I
, =
y
0
0 0
0
1 T T
H+f
2
A = b
0
2
Consider the quadratic programming problem of denition 49 with the object function
F () =
1 T
H + f T ,
2
h()
g()
=
=
0
0
0
0
0
170
Appendices
The KuhnTucker conditions for this quadratic programming problem are given by:
H + AT
T
A
,
f
0
b
0
=
=
=
(8.1)
(8.2)
(8.3)
(8.4)
We recognize equality constraints (8.3),(8.1) and a nonnegative constraint (8.4), two ingredients of a general linear programming problem. On the other hand, we miss the object
function, and we have an additional nonlinear equality constraint (8.2). (Note that the
multiplication T is nonlinear in the new parameter vector (, , )T .) An object function
can be obtained by introducing two slack variables u1 and u2 , and dening the problem:
A + u1
H + A + u2
, , u1 , u2
T
T
b
f
0
0
=
=
(8.5)
(8.6)
(8.7)
(8.8)
while minimizing
min
,,,u1 ,u2
u1 + u2
(8.9)
A0 =
A 0
0 I 0
T
H A I 0 I
b0 =
b
f
f0 =
0
0
0
1
1
0 =
u1
u2
(8.10)
where
A0 0 = b0
and
0 0
171
means that the nonlinear constraint (8.2) can be rewritten as:
i i = 0 for i = 1, . . . , n
and thus either i = 0 or i = 0. The extra feasibility condition on a basic solution now
becomes i i = 0 for i = 1, . . . , n.
Remark 51 : In the above derivations we assumed b 0 and f 0. If this is not the
case, we have to change the corresponding signs of u1 and u2 in the equations (8.5) and
(8.6).
We will sketch the modied simplex algorithm on the basis of an example.
Modied simplex algorithm:
Consider the following type 2 quadratic programming problem:
min
1 T
H + fT
2
A = b
0
where
2 1 0 0
1 1 1 0
H=
0
1 4 2
0
0 2 4
0
3
f =
2
0
A=
2 1 0 0
0 2 1 2
b=
4
4
Before we start the modied simplex algorithm, we check if a simple solution can be found.
If we minimize the unconstrained function
1
min T H + f T
2
T
we obtain = 7 14 4 2 . This solution is not feasible, because neither the equality constraint A = b, nor the nonnegative constraint 0 are satised.
We construct the matrix A0 , the vectors b0 , f0 and the parameter vector 0 according to
equation (8.10). Note that 0 is a 16 1vector with the elements is a 4 1vector, is
a 2 1vector, is a 4 1vector, u1 is a 2 1vector and u2 is a 4 1vector.
The modied simplex method starts with nding a rst feasible solution. We choose B to
contain the six mostright columns of A0 . We obtain the solution
0
0
0
0
0
3
0
4
=
=
=
u1 =
u2 =
0
0
2
0
4
0
0
0
172
Appendices
0
0
0
0
=
u2 =
=
=
u1 =
0
0
1
1
0
0
0
2
and the KuhnTuker conditions (8.38.2) are satised for these values.
Algorithms that use a modied version of the simplex method are Wolfes algorithm [76],
and the pivoting algorithm of Lemke [40].
and is based on the barrier function method. Consider the nonempty strictly feasible set
G = {  gi () < 0, i = 1, . . . , m }
and dene the function:
)m
i=1 log(gi ())
() =
G
otherwise
Note that is convex on G and, as approaches the boundary of G, the function will go
to innity. Now consider the unconstrained optimization problem
*
min t f () + ()
for some t 0. Then the value (t) that minimizes this function is denoted by
*
(t) = arg min t f () + () .
It is clear that the curve (t), which is called the central path, is always located inside the
set G. The variable weight t gives the relative weight between the objective function f and
the barrier function , which is a measure for the constraint violation. Intuition suggests
that (t) will converge to the optimal value of the original problem as t . Note
173
that in a minimum (t) the gradient of (, t) = t f () + () with respect to is zero,
so:
( , t) = t f ( (t)) +
m
i=1
1
gi ( (t)) = 0 .
gi ( (t))
f ( (t)) +
m
i (t) gi ( (t)) = 0
i=1
where
i (t) =
1
.
gi ( (t)) t
(8.11)
Then from the above we can derive that at the optimum the following conditions hold:
f ( (t)) +
m
g((t)) 0
i (t) 0
(8.12)
(8.13)
i (t) gi ( (t)) = 0
(8.14)
i=1
(8.15)
which proves that for t the righthand side of the last condition will go to zero and so
the KuhnTucker conditions will be satised. This means that (t) and (t)
for t .
From the above we can see that the central path can be regarded as a continuous deformation of the KuhnTucker conditions.
Now dene the dual function [58, 9]:
+
d() = min f () +
m
,
i gi ()
i=1
for any 0
for Rm
174
Appendices
and so we have
f d( (t))
+
min f () +
f ( (t)) +
m
,
i (t)gi ()
i=1
m
i (t)gi ( (t))
i=1
= f ( (t)) m/t
and thus
f ( (t)) f f ( (t)) m/t
It is clear that m/t is an upper bound for the distance between the computed f ( (t)) and
the optimum f . If we want a desired accuracy > 0, dened by
 f f ( (t)) 
then choosing t = m/ gives us one unconstrained minimization problem, with a solution
(m/) that satises:
 f f ( (m/))  .
Algorithms to solve unconstrained minimization problems are discussed in [58].
The approach presented above works, but can be very slow. A better way is to increase
t in steps to the desired value m/ and use each intermediate solution as a starting value
for the next. This algorithm is called the sequential unconstrained minimization technique
(SUMT) and is described by the following steps
Given: G, t > 0 and a tolerance
*
Step 1: Compute (t) starting from : (t) = arg min t f () + () .
175
methods). A simple update rule for t is used, so t(k + 1) = t(k), where is typically
between 10 and 100.
The steps 14 are called the outer iteration and step 1 involves an inner iteration (e.g., the
Newton steps). A tradeo has to be made: a smaller means fewer inner iterations to
compute k+1 from k , but more outer iterations.
In Nesterov and Nemirovsky [49] an upper bound is derived on the number of iterations
that are needed for an optimization using the interiorpoint method. Also tuning rules for
the choice of are given in their book.
176
Appendices
G(q) G(q)
= G1 (q) G2 (q)
G1
G
A1
C1
B1
D1
A1
C1
B1
D1
G2
A2
C2
B2
D2
B1 D2
B2
D1 D2
B2
B1 D2
D1 D2
A1 B1 C2
0
A2
=
C 1 D1 C 2
A2
0
B1 C2 A1
=
D1 C 2 C 1
Parallel:
G(q) G(q)
= G1 (q) + G2 (q)
177
A2
C2
B2
D2
178
Appendices
G1
G
A1
C1
B1
D1
A1
C1
B1
D1
G2
+
A2
C2
B2
D2
A2
C2
B2
D2
B1
B2
D 1 + D2
A1 0
0 A2
=
C1 C2
Change of variables:
x x = T x
u u = P u
y y = R y
G
G
A
C
B
D
A
C
T is invertible
P is invertible
=
T AT 1
RCT 1
T BP 1
RDP 1
State feedback:
u u + F x
G
G
A
C
B
D
A
C
=
A + BF
C + DF
B
D
179
Output injection:
x(k + 1) = Ax(k) + Bu(k) x(k + 1) = A
x(k) + Bu(k) + Hy(k)
G
G
A
C
B
D
A
C
=
A + HC
C
B + HD
D
Transpose:
G(q) G(q)
= GT (q)
G
G
A
C
AT
BT
B
D
CT
DT
G(q) G(q)
= G+ (q)
G+ is a left (right) inverse of G.
B
A
G
C
D
G
A BD+ C
D + C
BD+
D+
180
Appendices
182
Appendices
v1 vn
Rnn
such that
U T AV =
where
..
0 0
.. for m n
..
.
.
0 0
.
m
..
0
..
.
n
0
..
.
for m n
m
gj
gm = sm sm1
and
, m {1, 2, . . .}
j=1
gm q
m=1
sm q m (1 q 1 )
(8.16)
m=1
In the same way, let fm and tm denote the impulse response parameters and the step
response parameters of the stable disturbance model Fo (q), respectively. Then
tm =
m
fj
and
fm = tm tm1
, m {0, 1, 2, . . .}
j=0
m=0
fm q
tm q m (1 q 1 )
(8.17)
m=0
n
m=1
gm u(k m) +
n
fm do (k m) + eo (k)
m=0
183
184
Appendices
n1
sm u(k m) + sn u(k n) +
m=1
n1
tm di (k m) + tn do (k n) + eo (k)
m=0
Ao =
Co =
I 0 ...
0 I
..
..
.
.
.
0 0 0 ..
0 0 0 ...
0
0
..
.
0
0
..
.
I
0
I 0 0 ... 0
Ko =
0
0
..
.
Lo =
Bo =
g1
g2
..
.
gn
fn
DH = I
f1
f2
..
.
DF = f0
This IO system can be translated into an IIO model using the formulas from section 2.1:
I
0
..
.
I 0 ...
0 I
..
..
.
.
Ai =
0 0 0
0 0 0 ...
0 0 0 ...
Ci =
0 0
0 0
..
.
I 0
0 I
0 0
I I 0 ... 0
Ki =
I
0
..
.
0
DH = I
Li =
f0
f1
f2
..
.
fn
DF = f0
Bi =
0
g1
g2
..
.
gn
185
q 1
?
ex3 6
6

g1
q 1
f2
?
ex2 6
?
e x1 6 y
q 1
f1
eo
f0
?
e6
gn1
gn
g2
q 1 x  e
n+1 6
fn
di
e?
xn 6
g2
fn1
fn
do
gn1
gn
?
e 6
fn1
6

 e 1
x4 6 q
f2
g1
?
 e 1
x3 6 q
f1
6
eo
x1
@
@ ?
Re
@
x2 6
q 1
y
f0
6
186
Appendices
where si are the step response parameters of the system, then the state space matrices are
given by:
0 I 0 ... 0 0
0 0 I
0 0
I
t1
s1
.. ..
I
t2
s2
..
..
.
.
K
Ai = . .
=
=
=
L
B
..
..
..
i
i
i
0 0 0
.
.
I 0
.
0 0 0 ... 0 I
tn
sn
I
0 0 0 ... 0 I
Ci =
I 0 0 ... 0
DH = I
DF = t0 = f0
sn1
sn2
sn
ei
?
e6
q 1
xn ?
e6
q 1
xn1
?
e6
tn
tn1
di
s1
x2 ?
e
6
x1 e y(k)6
tn2
6

q 1
t1
6
m=1
sm u(k m) +
m=0
tm di (k m) +
ei (k m)
(8.18)
m=0
where y is the output signal, u is the increment of the input signal u, di is the increment
of the (known) disturbance signal do , and ei , which is the increment of the output noise
signal eo , is assumed to be zeromean white noise.
Note that the output signal y(k) in equation (8.18) is build up with all past values of the
input, disturbance and noise signals. We introduce a signal y0 (kk ), > 0, which gives
187
the value of y(k) based on the inputs, disturbances and noise signals up to time k ,
assuming future values to be zero, so u(t) = 0, di(t) = 0 and ei (t) = 0 for t > k :
y0 (k + jk ) =
sj++m u(km) +
m=0
tj++m di (km) +
m=0
ei (km)
(8.19)
m=0
m=0
sj+m u(km) +
sj+m u(km) +
m=1
m=0
tj+m di (km) +
ei (km)
m=0
tj+m di (km)
m=1
m=1
m=1
sm u(k + j m) +
tm di (k + j m) +
m=0
sj+m u(k m) +
m=1
j
sm u(k + j m) +
ei (k + j m)
m=0
tj+m di (k m) +
m=1
ei (k m)
m=1
j
tm di(k + j m) +
m=0
m=1
j
ei (k + j m)
m=0
= y0 (k + jk 1)
j
j
j
+
sm u(k + j m) +
tm di(k + j m) +
ei (k + j m)
m=1
m=0
m=0
Suppose we are at time k, then we only have knowlede up to time k, so ei (k + j), j > 0 is
unknown. The best estimate we can make of the zeromean white noise signal for future
time instants is (see also previous section):
ei (k + jk) = 0 , for j > 0
If we assume u() and di () to be known in the interval k k + j we obtain the
prediction of y(k + j), based on knowledge until time k, and denoted as y(k + jk):
y(k + jk) = y0 (k + jk 1) +
j
m=1
sm u(k + j m) +
j
m=0
tm di (k + j m) + ei (k)
188
Appendices
m=0
In section 2.2.2 the truncated step response model was introduced. For a model with
stepresponse length n there holds:
sm = sn
for
m>n
m=1
sn+m+j u(k m) +
tn+m+j di (km) +
m=1
sn u(km) +
m=1
ei (km)
m=1
tn di (km) +
m=1
ei (km)
m=1
= y0 (k+n1k 1)
and so y0 (k +n+jk 1) = y0 (k +n1k 1) for j 0. This means that for computation of
y(k +n+jk) where j 0, so when the prediction is beyond the length of the stepresponse
we can use the equation:
y(k + n + jk) = y0 (k + n 1k 1) +
n+j
sm u(k + n + j m)
m=1
n+j
tm di (k + n + j m) + ei (k)
m=0
Matrix notation
From the above derivation we found that computation of y0 (k + jk 1) for j = 0, . . . , n 1
is sucient to make predictions for all j 0. We obtain the equations:
y0 (k+jk) = y0 (k+jk!1) + sj u(k) + tj di (k) + ei (k)
for 0 j n1
for j n
(8.20)
189
In matrix form:
t1 di (k)
y0 (k+1k1)
s1 u(k)
y0 (k+1k)
ei (k)
y0 (k+2k) y0 (k+2k1) s2 u(k) t2 di (k) ei (k)
ei (k)
..
..
..
..
+
+
+
=
.
.
.
.
..
y0 (k+n1k) y0 (k+n1k1) sn1 u(k) tn1 di (k) .
y0 (k+nk)
ei (k)
y0 (k+n1k1)
sn u(k)
tn di (k)
y(kk) = y0 (kk 1) + t0 di (k) + ei (k)
Dene the following signal vector for j 0, 0:
y0 (k + j + 1k )
y0 (k + j + 2k )
..
y0 (k + jk ) =
y0 (k + j + n 1k )
y0 (k + j + nk )
and matrices
M =
C=
0 0
0 0
..
.
I 0
0 I
0 I
0
I
.. . .
.
.
0 0 0
0 0 0
0 0 0
0
0
..
.
I
0
..
.
I 0 0
s1
s2
..
.
S=
sn1
sn
T =
tn1
tn
t1
t2
..
.
(8.21)
(8.22)
To make predictions y(k + jk), we can extend (8.22) over the prediction horizon N. In
matrix form:
y0 (k+1k1)
s1 u(k)
y(k+1k)
y(k+2k) y0 (k+2k1)
s2 u(k)+s1 u(k+1)
.
.
.
..
..
..
..
..
..
.
.
.
y(k+Nk)
sN u(k)+ +s1 u(k+N 1)
y0 (k+n1k1)
190
Appendices
t1 di (k)
t2 di (k)+t1 di (k+1)
..
.
ei (k)
ei (k)
..
.
+ ei (k)
ei (k)
.
..
tN di (k)+ +t1 di (k+N 1)
ei (k)
..
u(k)
y(k+1k)
u(k+1)
y(k+2k)
..
..
u
(k)
=
z(k) =
.
.
u(k+N 2)
y(k+N 1k)
u(k+N 1)
y(k+Nk)
and matrices
M =
0
0
..
.
I
0
..
.
0
0
..
.
0
0
..
.
0 0
0 0
0 0
..
I 0
0 I
.. ..
. .
0 0 I
0
I
.. . .
.
.
0
0
..
.
T =
t1
t2
..
.
..
S=
s1
s2
..
.
0
s1
sN sN 1
di (k)
di (k+1)
..
.
w(k)
di (k+N 2)
di (k+N 1)
..
.
..
0
0
..
.
s1
..
.
0
t1
tN tN 1
0
0
..
.
t1
+ IN e(k)
(8.23)
z(k) = y(k)
B1 = In
e(k) = ei (k)
B2 = T
w(k) = di (k)
B3 = S
v(k) = u(k)
191
C1 = C
D11 = I
D12 = t0
C2 = M
21 = IN
D
22 = T
D
23 = S
D
+D
Go (q) =
Fo (q) =
fo (q)
ao (q)
Ho (q) =
co (q)
ao (q)
=
=
=
=
1 + ao,1 q 1 + . . . + ao,na q na
bo,1 q 1 + . . . + bo,nb q nb
1 + co,1 q 1 + . . . + co,nc q nc
fo,0 + fo,1 q 1 + . . . + fo,nc q nf
(8.24)
The dierence equation corresponding to the IOmodel is given by the controlled autoregressive moving average (CARMA) model:
ao (q) y(k) = bo (q) u(k) + fo (q) do(k) + co (q) eo(k)
in which eo (k) is chosen as ZMWN.
Now consider the SISO IIO polynomial model (compare eq. 2.13):
Gi (q) =
bi (q)
ai (q)
Fi (q) =
fi (q)
ai (q)
Hi (q) =
ci (q)
ai (q)
where ai (q) = ao (q)(q) = ao (q)(1 q 1 ), bi (q) = bo (q), ci (q) = co (q), and fi (q) = fo (q),
192
Appendices
so
ai (q) =
=
bi (q) =
=
ci (q) =
=
fi (q) =
=
1 + ai,1 q 1 + . . . + ai,n+1 q n1
(1 q 1 )(1 + ao,1 q 1 + . . . + ao,n q n )
bi,1 q 1 + . . . + bi,n q n
bo,1 q 1 + . . . + bo,n q n
1 + ci,1 q 1 + . . . + ci,n q n
1 + co,1 q 1 + . . . + co,n q n
fi,0 + fi,1 q 1 + . . . + fi,n q n
fo,0 + fo,1 q 1 + . . . + fo,n q n
(8.25)
The dierence equation corresponding to the IIOmodel is given by the controlled autoregressive integrated moving average (CARIMA) model:
ai (q) y(k) = bi (q) u(k) + fi (q) di(k) + ci (q) ei (k)
in which di (k) = (q)do (k) and ei (k) is chosen as ZMWN.
bo (q)
ao (q)
Fo (q) =
fo (q)
ao (q)
Ho (q) =
co (q)
ao (q)
where ao (q), bo (q), fo (q) and co (q) are given in (8.24). Then the state space matrices are
given by:
ao,1
ao,2
..
.
1 0 ...
0 1
..
..
.
.
..
.
0 0
0
0
..
.
Ao =
ao,n1
1
ao,n 0 0 . . . 0
Co =
1 0 0 ... 0
..
Lo =
.
Bo =
bo,n1
bo,n
bo,1
bo,2
..
.
DF = fo,0
co,1 ao,1
co,2 ao,2
..
Ko =
.
co,n1 ao,n1
co,n ao,n
(8.26)
DH = I
(8.27)
(8.28)
193
u
bo,n
co,n
@
R e
@
I
@
@
ao,n
6
q 1
xn
eo
?
bo,2
co,2
bo,1
co,1
@
R e
@

x3 @
I
q 1
@
R e
@

x2 @
I
q 1
fo,n
ao,2
fo,2
ao,1
fo,1
6
? e
x1
do
1
1
0 0 ... 0
0
b0,1
0 ao,1 1 0 . . . 0
bo,2
0 ao,2 0 1
0
..
..
Bi = ..
(8.29)
Ai = ..
. . ..
. .
.
.
.
.
bo,n1
0 ao,n1 0 0 . . . 1
bo,n
0 ao,n 0 0 . . . 0
Ci = 1 1 0 0 . . . 0
DH = I DF = fo,0
(8.30)
0
1
fo,1 f0,o ao,1
co,1 ao,1
Li =
Ki =
(8.31)
..
..
.
.
194
Appendices
where y(k) is the proces output, u(k) is the proces input, do (k) is a known disturbance
signal and eo (k) is assumed to be ZMWN and
Go (q) =
q 1
(1 0.5q 1 )(1 0.2q 1)
Fo (q) =
0.1q 1
1 0.2q 1
Ho (q) =
1 + 0.5q 1
1 0.5q 1
To use the derived formulas, we have to give Go (q), Ho (q) and Fo (q) the common denominator (1 0.5q 1 )(1 0.2q 1 ) = (1 0.7q 1 + 0.1q 2 ):
Fo (q) =
Ho (q) =
1
1
0
0
Bi = 1
Ci = 1 1 0
Ai = 0 0.7 1
0 0.1 0
0
0
1
Ki = 1
DH = 1
DF = 0
Li = 0.1
0.05
0.2
and so
xi1 (k + 1)
xi2 (k + 1)
xi3 (k + 1)
y(k)
=
=
=
=
195
From IIO polynomial model to state space model
(Kailath [33]) Let
Gi (q) =
bi (q)
ai (q)
Fi (q) =
fi (q)
ai (q)
Hi (q) =
ai,1 1 0 . . . 0
bi,1
ai,2 0 1
bi,2
0
..
..
. . ..
. .
.
.
Bi = ...
Ai =
ai,n 0 0 . . . 1
bi,n
0
ai,n+1 0 0 . . . 0
Ci = 1 0 0 . . . 0
ci,1 ai,1
fi,1
ci,2 ai,2
fi,2
..
Ki =
Li = ...
ci,n ai,n
fi,n
0
ai,n+1
ci (q)
ai (q)
(8.32)
(8.33)
(8.34)
q 1
(1 q 1 )(1 0.5q 1 )(1 0.2q 1)
q 1
(1 1.7q 1 + 0.8q 2 0.1q 3 )
0.1q 1 (1 0.5q 1 )
(1 q 1 )(1 0.5q 1 )(1 0.2q 1)
0.1q 1 0.05q 2
(1 1.7q 1 + 0.8q 2 0.1q 3 )
(1 + 0.5q 1 )(1 0.2q 1 )
(1 q 1 )(1 0.5q 1 )(1 0.2q 1)
1 + 0.3q 1 0.1q 2
(1 1.7q 1 + 0.8q 2 0.1q 3 )
196
Appendices
1.7 1 0
1
Bi = 0
Ci = 1 0 0
Ai = 0.8 0 1
0.1 0 0
0
0.1
Li = 0.05
0
2.0
Ki = 0.9
0.1
DH = 1
DF = 0
and so
xi1 (k + 1)
xi2 (k + 1)
xi3 (k + 1)
y(k)
=
=
=
=
Clearly, the IIO state space representation in example 3 is dierent from the IIO state space
representation in example 2. By denoting the state space matrices of example 2 with a hat
i , Ci , L
i, K
i ,D
H ,D
F ), the relation between the two state space representations
( so Ai , B
is given by:
Ai = T 1 Ai T
i = T 1 Bi
B
Ci = Ci T
i = T 1 Li
L
F = DF
H = DH
D
D
i
K
= T 1 Ki
1
1 0
T = 0.7 0 1
0.1 0 0
c(q)
u2 (k)
a(q)
where
a(q) = 1 + a1 q 1 + . . . + ana q na
c(q) = c0 + c1 q 1 + . . . + cnc q nc
197
Each set of polynomial functions a(q), c(q) together with a positive integer j saties a set
of Diophantine equations
c(q) = Mj (q) a(q) + q j Lj (q)
(8.35)
where
Mj (q) = mj,0 + mj,1 q 1 + . . . + mj,j1 q j+1
Lj (q) = j,0 + j,1 q 1 + . . . + j,na q na
Remark 55 : Diophantine equations are standard in the theory of prediction with polynomial models. An algorithm for solving these equations is given in appendix B of Soeterboek
[61].
When making a prediction of u1 (k + j) for j > 0 we are interested how u1 (k + j) depends
on future values of u2 (k + i), 0 < i j and present and past values of u2 (k i), i 0.
Using Diophantine equation (8.35) we derive (Clarke et al.[14][15], Bitmead et al.[6]):
u1 (k + j) =
c(q)
u2 (k + j)
a(q)
Lj (q)
u2 (k + j)
a(q)
Lj (q)
= Mj (q)u2(k + j) +
u2 (k)
a(q)
= Mj (q)u2(k + j) + q j
Lj (q)
u2 (k)
a(q)
(8.36)
(8.37)
198
Appendices
where w1 and w2 usually corresponds to the reference signal r(k) and the known distrubance
signal d(k), respectively. In the sequel of this section we will use
w1 (k)
f1 (q) f2 (q)
f (q)w(q) =
(8.38)
w2 (k)
w1 (k)
m1 (q) m2 (q)
(8.39)
m(q)w(q) =
w2 (k)
as an easier notation.
To make predictions of future output signals of CARIMA models one rst needs to solve
the following Diophantine equation
h(q) = Ej (q) n(q) + q j Fj (q)
(8.40)
z(k + j) =
a(q)
f (q)
b(q)
y(k)
w(k)
v(k)
c(q)
c(q)
c(q)
w(k + j) +
n(q)c(q)
n(q)
n(q)c(q)
g(q) q j Fj (q)b(q)
v(k + j)
+
n(q)
n(q)c(q)
m(q)
g(q)
Fj (q)a(q)
y(k) +
w(k + j) +
v(k + j)
= Ej (q) e(k + j) +
n(q)c(q)
n(q)c(q)
n(q)c(q)
199
where
m(q)
(8.42)
Then:
m(q)
g(q)
Fj (q)a(q)
y(k) +
w(k + j) +
v(k + j)
n(q)c(q)
n(q)c(q)
n(q)c(q)
Fj (q)a(q)
m(q)
= Ej (q) e(k + j) +
y(k) +
w(k + j) +
n(q)c(q)
n(q)c(q)
q 1 (q)
+
v(k) + q 1 j (q) v(k + j)
n(q)c(q)
Fj (q)a(q)
m(q)
= Ej (q) e(k + j) +
y(k) +
w(k + j) +
n(q)c(q)
n(q)c(q)
(q)
v(k 1) + j (q) v(k + j 1)
+
n(q)c(q)
Using the fact that the prediction e(k + jk) = 0 for j > 0 the rst term vanishes and the
optimal prediction z(k + j) for j > 0 is given by:
m(q)
Fj (q)a(q)
z(k + j) =
y(k) +
w(k + j) +
n(q)c(q)
n(q)c(q)
(q)
v(k 1) + j (q) v(k + j 1)
+
n(q)c(q)
w f (k + j) + j (q) v f (k 1).
which is the output response if all future input signals are taken zero. The last term
j v(k + j 1) accounts for the inuence of the future input signals v(k), . . . , v(k + j 1).
The prediction of the performance signal can now be given as:
200
Appendices
z(k) =
F0 (q) y f (k) + m
0 (q) w f (k) + 0 (q) v f (k1) + 0 (q) v(k)
1 (q) w f (k+1) + 1 (q) v f (k1) + 1 (q) v(k+1)
F1 (q) y f (k) + m
..
.
1
a1 a2 an1 an
0
1
0
0
0
1
0
0
B= 0
A= 0
..
..
.
..
..
..
.
.
.
.
0
0
0
1
0
C = c1 co a1 c2 co a2 . . . cn co an
D = co
Then based on the previous sections we know that the prediction of u1 (k + j) is given by:
j
u1 (k + j) = CA x(k) +
j
i=1
Lj (q)
u2 (k)
a(q)
j
i=1
= Mj (q)u2 (k + j)
gives the response of the future inputs u2 (k + i), i > 0.
Q1/2 xo (k + 1)
R1/2 u(k)
(8.43)
N1
zT (k + jk)(j)
z (k + jk)
j=0
N
1
j=0
N
1
j=0
N
j=1
N
uT (k + j 1k)Ru(k + j 1k)
j=1
N
j=Nm
xT (k + jk)Q
x(k + jk) +
N
uT (k + j 1k)Ru(k + j 1k)
j=1
and so performance index (4.1) is equivalent to (4.5) for the above given z, N and . From
the IO state space equation
xo (k + 1) = Ao xo (k) + Bo u(k) + Lo d(k) + Ko eo (k)
201
202
Appendices
The GPC performance index as given in (4.8) can be transformed into the standard performance index (4.1). First choose the performance signal
yp (k + 1k) r(k + 1)
(8.44)
z(k) =
u(k)
Then it easy to see that
J(v, k) =
N1
zT (k + jk)(j)
z (k + jk) =
j=0
N
1
j=0
yp (k + 1 + jk)r(k + 1 + jk)
=
(j)
u(k + jk)
N
1
T
=
yp (k + 1 + jk) r(k + 1 + j) 1 (j) yp (k + 1 + jk) r(k + 1 + j)
j=0
N
1
uT (k + jk)u(k + jk)
j=0
N
T
yp (k + jk) r(k + j)
yp (k + jk) r(k + j)
j=Nm
N
uT (k + j 1k)u(k + j 1k)
j=1
and so performance index (4.1) is equivalent to (4.8) for the above given z(k), N and (j).
Substitution of the IIOoutput equation
y(k) = Ci xi (k) + DH ei (k)
203
in (4.11),(4.12) results in:
and so
x(k + 1) =
=
y(k) =
=
yp (k) =
=
Bp DH
xp (k)
Ap Bp Ci
ei (k) +
+
Ki
0
Ai
xi (k)
di (k)
0
0 0
v(k) =
+
r(k + 1)
Bi
Li 0
A x(k) + B1 e(k) + B2 w(k) + B3 v(k)
di (k)
C1 x(k) + DH ei (k) + 0 0
r(k + 1)
C1 x(k) + D11 ei (k) + D12 w(k)
Cp Dp Ci x(k) + Dp DH ei (k) + 0 0
xp (k + 1)
xi (k + 1)
di (k)
r(k + 1)
204
Appendices
Appendix F:
205
Contents
Contents
Introduction ............................................................................................. 207
206
Reference
.............................................................................................
209
gpc
.............................................................................................
211
gpc ss
.............................................................................................
215
lqpc
.............................................................................................
218
tf2syst
.............................................................................................
221
imp2syst
.............................................................................................
222
ss2syst
.............................................................................................
224
syst2ss
.............................................................................................
226
io2iio
.............................................................................................
228
gpc2spc
.............................................................................................
229
lqpc2spc
.............................................................................................
231
add du
.............................................................................................
232
add u
.............................................................................................
234
add y
.............................................................................................
238
add x
.............................................................................................
240
add nc
.............................................................................................
242
add end
.............................................................................................
243
external
.............................................................................................
244
dgamma
.............................................................................................
246
pred
.............................................................................................
248
contr
.............................................................................................
250
lticll
.............................................................................................
252
lticon
.............................................................................................
253
simul
.............................................................................................
254
rhc
.............................................................................................
255
Introduction
Introduction
This toolbox is a collection of a few Matlab functions for the design and analysis of
predictive controllers. This toolbox has been developed by Ton van den Boom at the Delft
Center for Systems and Control (DCSC), Delft University of Technology, the Netherlands.
The Standard Predictive Control toolbox is an independent tool, which only requires the
control system toolbox and the signal processing toolbox.
Ao Ko Lo Bo
C o DH D F 0
dim =
na nk nl nb nc 0 0 0
Ai Ki Li Bi
C i DH DF 0
dim =
na nk nl nb nc 0 0 0
207
Introduction
=
=
=
=
=
G=
A
C1
C2
C3
C4
B1
D11
D21
D31
D41
B2 B3
D12 0
D22 D23
D32 D33
D42 D43
dim =
(k)
(k)
=
=
=
=
=
1 e(k) + B
2 w(k)
3 v(k)
A x(k) + B
+B
11 e(k) + D
12 w(k)
13 v(k)
C1 x(k) + D
+D
21 e(k) + D
22 w(k)
23 v(k)
+D
C2 x(k) + D
31 e(k) + D
32 w(k)
33 v(k)
+D
C3 x(k) + D
41 e(k) + D
42 w(k)
43 v(k)
+D
C4 x(k) + D
G=
208
A
C1
C2
C3
C4
1
B
D11
21
D
31
D
41
D
2
B
D12
22
D
32
D
42
D
3
B
D13
23
D
33
D
43
D
dim =
Reference
Reference
Common predictive control problems
gpc
gpc ss
lqpc
209
Reference
210
gpc
gpc
Purpose
Solves the Generalized Predictive Control (GPC) problem
Synopsis
[y,du]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,...
r,di,ei,dumax,apr,rhc);
[y,du,sys,dim,vv]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,...
r,di,ei,dumax,apr,rhc);
Description
The function solves the GPC method (Clarke et al.,[14],[15]) for a controlled
autoregressive integrated moving average (CARIMA) model, given by
ai (q) y(k) = bi (q)u(k) + ci (q) ei (k) + fi (q) di (k)
where the increment input signal is given by u(k) = u(k) u(k 1).
A performance index, based on control and output signals, is minimized:
min J(u, k) = min
u(k)
u(k)
N

yp(k+jk) r(k+j)2 + 2
Nc
u(k+j 1)2
j=1
j=Nm
where
yp (k)
r(k)
y(k)
di (k)
ei (k)
u(k)
Nm
N
Nc
P (q)
for
j Nc
211
gpc
for
0 j < Nc
The parameter determines the tradeo between tracking accuracy (rst term)
and control eort (second term). The polynomial P (q) can be chosen by the
designer and broaden the class of control objectives. np of the closedloop poles
will be placed at the location of the roots of polynomial P (q).
The function gpc returns the output signal y and increment input signal du.
The input and output variables are the following:
y
du
sys
dim
vv
ai,bi,ci,fi
P
lambda
Nm,N,Nc
r
di
ei
dumax
lensim
apr
rhc
gpc
Example
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Example for gpc
%
randn(state,4);
% set initial value of random generator
%%%
ai =
bi =
ci =
fi =
G(q)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
conv([1 1.787 0.798],[1 1]); % polynomial ai(q)
0.029*[0 1 0.928 0];
% polynomial bi(q)
[1 1.787 0.798 0];
% polynomial ci(q)
[0 1 1.787 0.798];
% polynomial fi(q)
%%% P(q)
P = 1;
lambda=0;
Nm = 8;
Nc = 2;
N = 25;
lensim = 61;
dumax = 0;
apr = [0 0];
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% No poleplacement
% increment input is not weighted
% Mimimum cost horizon
% Control horizon
% Prediction horizon
% length simulation interval
% dumax=0 means that no constraint is added !!
% no apriori knowledge about w(k)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
r = [zeros(1,10) ones(1,100)];
% step on k=11
di = [zeros(1,20) 0.5 zeros(1,89)];
% step on k=21
ei = [zeros(1,30) 0.1*randn(1,31)];
% noise starting from k=31
[y,du]=gpc(ai,bi,ci,fi,P,lambda,Nm,N,Nc,lensim,r,di,ei,dumax,apr);
u = cumsum(du);
do = cumsum(di(1:61));
eo = cumsum(ei(1:61));
% compute
% compute
% compute
u(k)
do(k)
eo(k)
from
from
from
du(k)
di(k)
ei(k)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% plot figures %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
t=0:60;
[tt,rr]=stairs(t,r(:,1:61));
[tt,uu]=stairs(t,u);
Standard Predictive Control Toolbox
213
gpc
[tt,duu]=stairs(t,du);
subplot(211);
plot(t,y,b,tt,rr,g,t,do,r,t,eo,c);
title(green: r , blue: y , red: do , cyan: eo);
grid;
subplot(212);
plot(tt,duu,b,tt,uu,g);
title(blue: du , green: u);
grid;
disp(
);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
See Also
tf2io, io2iio, gpc2spc, external, pred, add nc, add du, add y, contr,
simul
214
gpc ss
gpc ss
Purpose
Solves the Generalized Predictive Control (GPC) problem for a state space IIO
model
Synopsis
[y,du]=gpc ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);
[y,du,sys,dim,vv]=gpc ss(Ai,Ki,Li,Bi,Ci,DH,DF,Ap,Bp,Cp,Dp,Wy,Wu,...
Nm,N,Nc,lensim,r,di,ei,dumax,apr,rhc);
Description
The function solves the GPC method (Clarke et al., [14],[15]) for a state space
IIO model, given by
x(k + 1) = Ai x(k) + Ki ei (k) + Li di (k) + Bi u(k)
y(k) = Ci x(k) + DH ei (k) + DH di (k)
where the increment input signal is given by u(k) = u(k) u(k 1).
A performance index, based on control and output signals, is minimized:
min J(u, k) = min
u(k)
u(k)
N
j=Nm
Nc
Wu u(k+j1)2
j=1
where (k) stands for the euclidean norm of the vector (k), and the variables
are given by:
yp (k)
r(k)
y(k)
di (k)
ei (k)
u(k)
Nm
N
Nc
Wy
Wu
P (q)
215
gpc ss
for
j Nc
for
0 j < Nc
For MIMO systems umax can be choosen as a scalar value or a vector value
(with the same dimension as vector u(k)). In the case of a scalar the bound
umax holds for each element of u(k). In the case of a vector umax , the
bound is taken elementwise.
The weighting matrices Wy and Wu determines the tradeo between tracking
accuracy (rst term) and control eort (second term). The reference system
P (q) can be chosen by the designer and broaden the class of control objectives.
The function gpc ss returns the output signal y and increment input signal du.
The input and output variables are the following:
y
du
sys
dim
vv
Ai,Ki,Li,Bi,
Ci,DH,DF
Ap,Bp,Cp,Dp
Wy
Wu
Nm,N,Nc
r
di
ei
dumax
lensim
apr
rhc
216
gpc ss
See Also
gpc, tf2io, io2iio, gpc2spc, external, pred, add nc, add du, add y,
contr, simul
217
lqpc
lqpc
Purpose
Solves the Linear Quadratic Predictive Control (LQPC) problem
Synopsis
[y,u,x]=lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc, ...
lensim,xo,do,eo,umax,apr,rhc);
[y,u,x,sys,dim,vv]=lqpc(Ao,Ko,Lo,Bo,Co,DH,DF,Q,R,Nm,N,Nc, ...
lensim,xo,do,eo,umax,apr,rhc);
Description
The function solves the Linear Quadratic Predictive Control (LQPC) method
(Garca et al., [28]) for a state space model, given by
x(k + 1) = Ao x(k) + Ko eo (k) + Lo do (k) + Bo u(k)
y(k) = Co x(k) + DH eo (k) + DF d0 (k)
A performance index, based on state and input signals, is minimized:
min J(u, k) = min
u(k)
u(k)
N
x(k+j)T Q
x(k+j) +
N
j=1
j=Nm
where
x(k)
u(k)
y(k)
do (k)
eo (k)
Nm
N
Nc
Q
R
is
is
is
is
is
is
is
is
is
is
the
the
the
the
the
the
the
the
the
the
The signal x(k + jk) is the prediction of state x(k + j), based on knowledge up
to time k. The input signal u(k + j) is forced to become constant for j Nc .
Further the input signal is assumed to be bounded:
 u(k + j)  umax
218
for
0 j < Nc
Standard Predictive Control Toolbox
lqpc
The matrices Q and R determines the tradeo between tracking accuracy (rst
term) and control eort (second term). For MIMO systems the matrices can
scale the states and inputs.
The function lqpc returns the output signal y, input signal u and the state
signal x. The input and output variables are the following:
y
u
x
sys
dim
vv
Ao,Ko,Lo,Bo,Co,DH,DF
Q,R
Nm,N,Nc
xo
do
eo
umax
lensim
apr
rhc
The signal eo must have as many rows as there are noise inputs. The signal do
must have as many rows as there are disturbance inputs. Each column of eo
and do corresponds to a new time point.
The signal y will have as many rows as there are outputs. The signal u will
have as many rows as there are inputs. The signal x must have as many rows
as there are states. Each column of y, u and x corresponds to a new time point.
Setting umax=0 means no input constraint will be added.
The (scalar) variable apr determines if the disturbance signal di (k+j) is a priori
known or not. The signal di (k + j) is a priori known for apr=1 and unknown
for apr=0.
For rhc=0 the simulation is done in closed loop mode. For rhc=1 the simulation
is done in receding horizon mode, which means that the simulation is paused
every sample and can be continued by pressing the returnkey. Past signals and
prediction of future signals are given within the prediction range.
The output variables sys, dim and vv can be used as input variables for the
Standard Predictive Control Toolbox
219
lqpc
See Also
lqpc2spc, pred, add nc, add u, add y, contr, simul
220
tf2syst
tf2syst
Purpose
Transforms a polynomial into a state space model in SYSTEM format
Synopsis
[G,dim]=tf2syst(a,b,c,f);
Description
The function transforms a polynominal model into a state space model in SYSTEM format (For SISO system only). The input system is:
a(q)y(k) = b(q)v(k) + c(q)e(k) + f (q)d(k)
The state space equations become
x(k + 1) = Ax(k) + B1 e(k) + B2 d(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D12 d(k)
A Rnana ,
B3 Rnanb3 ,
B1 Rnanb1 , B2 Rnanb2
C1 Rnc1na ,
See Also
io2iio, imp2syst, ss2syst
221
imp2syst
imp2syst
Purpose
Transforms an impulse reponse model into a state space model in SYSTEM
format
Synopsis
[G,dim]=imp2syst(g,f,nimp);
[G,dim]=imp2syst(g,f);
Description
The function transforms an impulse response model into a state space model
in SYSTEM format (For SISO system only) The input system is:
y(k) =
ng
g(i)u(k i) +
i=1
nf
f (i)d(k i) + e(k)
i=1
where the impulse response parameters g(i) and f (i) are given in the vectors
0 g(1) g(2) . . . g(ng )
g =
0 f (1) f (2) . . . f (nf )
f =
The state space equations become
x(k + 1) = Ax(k) + B1 e(k) + B2 d(k) + B3 v(k)
y(k) = C1 x(k) + e(k)
A Rnimpnimp ,
B1 Rnimpnb1 ,
B2 Rnimpnb2
B3 Rnimpnb3 , C1 Rnc1nimp
The state space representation is given in a SYSTEM format, so
A
B1 B2 B3
G=
C1
I
0 0
with dimension vector
dim = nimp nb1 nb2 nb3 nc1 0 0 0
The function imp2syst returns the state space model in Go and its dimension
dimo, given in SYSTEM format. The input and output variables are the following:
222
imp2syst
G
dim
g
f
nimp
See Also
io2iio, tf2syst, ss2syst
223
ss2syst
ss2syst
Purpose
Transforms a state space model into the SYSTEM format
Synopsis
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13);
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23);
[G,dim]=ss2syst(A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23,...
C3,D31,D32,D33,C4,D41,D42,D43);
Description
The function transforms a state space model into the SYSTEM format The
state space equations are
x(k + 1)
y(k)
z(k)
(k)
(k)
=
=
=
=
=
A Rnana ,
C1 Rnc1na ,
A
B1 B2 B3
C1
D11 D12 D13
G=
C
D21 D22 D23
2
C3
D31 D32 D33
C4
D41 D42 D43
C3 Rnc3na
SYSTEM format, so
ss2syst
G
The state space model in SYSTEM format.
dim
the dimension of the system.
A,B1,. . .,D43 system matrices of state space model.
See Also
io2iio, imp2syst, tf2syst, syst2ss
225
syst2ss
syst2ss
Purpose
Transforms SYSTEM format into a state space model
Synopsis
[A,B1,B2,B3,C1,D11,D12,D13]=syst2ss(sys,dim);
[A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23]=syst2ss(sys,dim);
[A,B1,B2,B3,C1,D11,D12,D13,C2,D21,D22,D23,C3,D31,D32,D33,...
C4,D41,D42,D43]=syst2ss(sys,dim);
Description
The function transforms a model in SYSTEM format into its state space representation.
Consider a system in the SYSTEM format, so
sys =
A
C1
B1 B2 B3
D11 D12 D13
sys = C1
C2
B1 B2 B3
D11 D12 D13
D21 D22 D23
syst2ss
1 B
2 B
3
B
A
11 D
12 D
13
D
C1
21 D
22 D
23
sysp = C2
D
C3
D31 D32 D33
42 D
43
41 D
C4
D
with dimension vector dimp. Using
[A,tB1,tB2,tB3,tC1,D11,D12,D13,tC2,tD21,tD22,tD23,...
tC3,tD31,tD32,tD33,tC4,tD41,tD42,tD43]= syst2ss(sysp,dimp);
the state space equations become:
x(k + 1)
y(k)
z(k)
(k)
(k)
=
=
=
=
=
2 w(k)
3 v(k)
1 e(k) + B
+B
Ax(k) + B
11 e(k) + D
12 w(k)
13 v(k)
C1 x(k) + D
+D
21 e(k) + D
22 w(k)
23 v(k)
C2 x(k) + D
+D
31 e(k) + D
32 w(k)
33 v(k)
+D
C3 x(k) + D
41 e(k) + D
42 w(k)
43 v(k)
+D
C4 x(k) + D
See Also
io2iio, imp2syst, tf2syst, ss2syst
227
io2iio
io2iio
Purpose
Transformes a IO system (Go) into a IIO system (Gi)
Synopsis
[Gi,dimi]=io2iio(Go,dimo);
Description
The function transforms an IO system into an IIO system
The state space representation of the input system is:
Ao
Ko Lo Bo
Go =
Co
DH DF 0
and the dimension vector
dimo = na nk nl nb nc
The state space representation of the output system is:
Ki Li Bi
Ai
Gi =
Ci
DH DF 0
and a dimension vector
dimi = na + nc nk nl nb nc
The function io2iio returns the state space IIOmodel Gi and its dimension
dimi, The input and output variables are the following:
Gi
dimi
Go
dimo
228
gpc2spc
gpc2spc
Purpose
Transforms GPC problem into a SPC problem
Synopsis
[sys,dim]=gpc2spc(Gi,dimi,P,lambda);
Description
The function transforms the GPC problem into a standard predictive control
problem. For an IIO model Gi :
xi (k + 1) = Ai xi (k) + Ki ei (k) + Lo w(k) + Bo u(k)
y(k) = Ci xi (k) + DH ei (k) + DF w(k)
The weighting P (q) is given in state space (!!) representation:
xp (k + 1) = Ap xp (k) + Bp y(k)
yp (k) = Cp xp (k) + Dp y(k)
Where xp is the state of the realization describing yp (k) = P (q)y(k).
After the transformation, the system becomes
x(k + 1) = Ax(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D21 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
yp (k + 1) r(k + 1)
z(k) =
u(k)
d(k)
w(k) =
r(k + 1)
The IO model Gi, the weighting lter P and the transformed system sys are
given in the SYSTEM format. The input and output variables are the following:
sys
dim
Gi
P
lambda
229
gpc2spc
See Also
lqpc2spc, gpc
230
lqpc2spc
lqpc2spc
Purpose
Transforms LQPC problem into a SPC problem
Synopsis
[sys,dim]=lqpc2spc(Go,dimo,Q,R);
Description
The function transforms the LQPC problem into a standard predictive control
problem. Consider the IO model Go :
xo (k + 1) = Ao xo (k) + Ko eo (k) + Lo w(k) + Bo u(k)
y(k) = Co xo (k) + DH eo (k) + DF w(k)
After the transformation, the system becomes
x(k + 1) = Ax(k) + B1 e(k) + B2 w(k) + B3 v(k)
y(k) = C1 x(k) + D11 e(k) + D21 w(k)
z(k) = C2 x(k) + D21 e(k) + D22 w(k) + D23 v(k)
1/2
Q x(k + 1)
z(k) =
R1/2 u(k)
w(k) = d(k)
where
Q
R
The IO model Gi and the transformed system sys are given in the SYSTEM
format. The input and output variables are the following:
sys
dim
Go
Q
R
See Also
gpc2spc, lqpc
Standard Predictive Control Toolbox
231
add du
add du
Purpose
Function to add increment input constraint to standard problem
Synopsis
[sys,dim] = add du(sys,dim,dumax,sgn,descr);
[sysp,dimp]= add du(sysp,dimp,dumax,sgn,descr);
Description
The function adds an increment input constraint to the standard problem (sys)
or the prediction model (sysp). For sgn=1 , the constraint is given by:
u(k + j) umax  for j = 0, , N 1
For sgn=1 , the constraint is given by:
u(k + j) umax  for j = 0, , N 1
For sgn=0 , the constraint is given by:
umax  u(k + j) umax  for j = 0, , N 1
where
umax
u(k + j)
For MIMO systems umax can be choosen as a scalar value or a vector value
(with the same dimension as vector u(k)). In the case of a scalar the bound
umax holds for each element of u(k). In the case of a vector umax , the
bound is taken elementwise.
The function add du returns the system, subjected to the increment input constraint. The input and output variables are the following:
sys
dim
sysp
dimp
dumax
sgn
descr
232
Standard model
The dimension of the standard model.
Prediction model
The dimension of the prediction model.
The maximum bound on the increment input.
Scalar with value of +1/0/1.
Flag for model type (descr=IO for input output model.
descr=IIO for an increment input output model).
Standard Predictive Control Toolbox
add du
See Also
add u, add v, add y, add x, gpc
233
add u
add u
Purpose
Function to add input constraint to standard problem
Synopsis
[sys,dim]=add u(sys,dim,umax,sgn,descr);
[sysp,dimp]=add u(sysp,dimp,umax,sgn,descr);
Description
The function adds input constraint to the standard problem (sys) or the prediction model (sysp). For sgn=1 , the constraint is given by:
u(k + j) umax  for j = 0, , N 1
For sgn=1 , the constraint is given by:
u(k + j) umax  for j = 0, , N 1
For sgn=0 , the constraint is given by:
umax  u(k + j) umax  for j = 0, , N 1
Where
umax
u(k + j)
For MIMO systems umax can be choosen as a scalar value or a vector value
(with the same dimension as vector u(k)). In the case of a scalar the bound
umax holds for each element of u(k). In the case of a vector umax , the bound is
taken elementwise.
The function add u returns the system, subjected to the input constraint. The
input and output variables are the following:
234
add u
sys
dim
sysp
dimp
umax
sgn
descr
Standard model
The dimension of the standard model.
Prediction model
The dimension of the prediction model.
The maximum bound on the input signal.
Scalar with value of +1/0/1.
Flag for model type (descr=IO for input output model.
descr=IIO for an increment input output model).
See Also
add du, add v, add y, add x, lqpc
235
add v
add v
Purpose
Function to add input constraint to standard problem
Synopsis
[sys,dim] = add v(sys,dim,vmax,sgn);
[sysp,dimp]= add v(sysp,dimp,vmax,sgn);
Description
The function adds an increment input constraint to the standard problem (sys)
or the prediction model (sysp). For sgn=1 , the constraint is given by:
v(k + j) vmax  for j = 0, , N 1
For sgn=1 , the constraint is given by:
v(k + j) vmax  for j = 0, , N 1
For sgn=0 , the constraint is given by:
vmax  v(k + j) vmax  for j = 0, , N 1
where
vmax
v(k + j)
For MIMO systems vmax can be choosen as a scalar value or a vector value
(with the same dimension as vector v(k)). In the case of a scalar the bound
vmax holds for each element of v(k). In the case of a vector vmax , the bound is
taken elementwise.
The function add v returns the system, subjected to the increment input constraint. The input and output variables are the following:
sys
dim
sysp
dimp
vmax
sgn
236
Standard model
The dimension of the standard model.
Prediction model
The dimension of the prediction model.
The maximum bound on the input.
Scalar with value of +1/0/1.
Standard Predictive Control Toolbox
add v
See Also
add u, add du, add y, add x, gpc
237
add y
add y
Purpose
Function to add output constraint to to the standard problem (sys) or the
prediction model (sysp).
Synopsis
[sys,dim] =add y(sys,dim,ymax,sgn);
[sysp,dimp]=add y(sysp,dimp,ymax,sgn);
Description
The function adds output constraint to the standard problem (sys) or the
prediction model (sysp). For sgn=1 , the constraint is given by:
y(k + j) ymax  for j = 0, , N 1
For sgn=1 , the constraint is given by:
y(k + j) ymax  for j = 0, , N 1
For sgn=0 , the constraint is given by:
ymax  y(k + j) ymax  for j = 1, , N 1
Where
ymax
y(k + j)
For MIMO systems ymax can be choosen as a scalar value or a vector value
(with the same dimension as vector y(k)). In the case of a scalar the bound
ymax holds for each element of y(k). In the case of a vector ymax , the bound is
taken elementwise.
The function add y returns the system subjected to the output constraint. The
input and output variables are the following:
238
add y
sys
dim
sysp
dimp
ymax
sgn
Standard model
The dimension of the standard model.
Prediction model
the dimension of the prediction model.
The maximum bound on the output signal.
Scalar with value of +1/0/1.
See Also
add u, add du, add v, add x
239
add x
add x
Purpose
Function to add state constraint to to the standard problem.
Synopsis
[sys,dim]=add x(sys,dim,E);
[sys,dim]=add x(sys,dim,E,sgn);
Description
The function adds state constraint to the standard problem.
NOTE: The function add x can NOT be applied to the prediction model !!!!
(i.e. add x must be used before pred).
For sgn=1 , the constraint is given by:
Ex(k + j) 1 for j = 0, , N 1
For sgn=1 , the constraint is given by:
Ex(k + j) 1 for j = 0, , N 1
For sgn=0 , the constraint is given by:
1 Ex(k + j) 1 for j = 1, , N 1
Where
E
x(k + j)
is a matrix
is the state of the system
The function add x returns the standard problem, subjected to the state constraint. The input and output variables are the following:
sys
dim
E
sgn
add x
See Also
add u, add du, add v, add y
241
add nc
add nc
Purpose
Function to add a control horizon to prediction problem
Synopsis
[sysp,dimp]=add nc(sysp,dimp,dim,Nc);
[sysp,dimp]=add nc(sysp,dimp,dim,Nc,descr);
Description
The function add nc returns the system with a control horizon sysp and the
dimension dimp. The input and output variables are the following:
sysp
dimp
dim
Nc
descr
See Also
gpc, lqpc
242
add end
add end
Purpose
Function to add a state endpoint constraint to prediction problem.
Synopsis
[sysp,dimp] = add end(sys,dim,sysp,dimp,N);
Description
The function adds a state endpoint constraint to the standard problem:
x(k + N) = xss = Dssx w(k + N)
The function add end returns the system with a state endpoint constraint in
sysp and the dimension dimp. The input and output variables are the following:
sys
dim
sysp
dimp
N
The
The
The
The
The
standard model.
dimension of the standard model.
system of prediction problem.
dimension of the prediction problem.
prediction horizon.
243
external
external
Purpose
Function makes the external signal w from d and r for simulation purpose
Synopsis
[tw]=external(apr,lensim,N,w1);
[tw]=external(apr,lensim,N,w1,w2);
[tw]=external(apr,lensim,N,w1,w2,w3);
[tw]=external(apr,lensim,N,w1,w2,w3,w4);
[tw]=external(apr,lensim,N,w1,w2,w3,w4,w5);
Description
The function computes the vector w with with predictions of the external signal
w:
w = w(k) w(k + 1) w(k + N 1)
where
w(k) =
w1 (k) w5 (k)
244
tw
apr
lensim
N
w1
..
.
w5
external
The vector apr determine if the external signal w(k + j) is a priori known. For
i = 1, . . . , 5 there holds:
signal wi (k) is a priori known for apr(i) = 1
signal wi (k) is not a priori known for apr(i) = 0
See Also
gpc, lqpc
245
dgamma
dgamma
Purpose
For the innite horizon case the funtion
Function makes the selection matrix .
also creates the steady state matrix ss .
Synopsis
[dGamma] = dgamma(Nm,N,dim);
[dGamma,Gammass] = dgamma(Nm,N,dim);
Description
The selection matrix is dened as
= diag((0), (1), . . . , (N 1))
where
0
0
0 I
(j) =
I
0
0 I
for
0 j Nm 1
for
Nm j N 1
Where
Nm
N
dim
The function dgamma returns the diagonal elements of the selection matrix ,
The input and output variables are the following:
so dGamma = diag().
dGamma
Gammass
Nm
N
Nc
dim
246
dgamma
See Also
gpc, lqpc, blockmat
247
pred
pred
Purpose
Function computes the prediction model from the standard model
Synopsis
[sysp,dimp]=pred(sys,dim,N);
Description
The function makes a prediction model for the system
x(k + 1)
y(k)
z(k)
(k)
(k)
=
=
=
=
=
for
sys =
dim =
A
C1
C2
C3
C4
B1
D11
D21
D31
D41
B2 B3
D12 0
D22 D23
D32 D33
D42 D43
Where
x(k)
e(k)
w(k)
v(k)
z(k)
(k)
(k)
A, Bi , Cj , Dij
248
pred
(k)
(k)
where
p =
=
=
=
=
=
1 e(k) + B
2 w(k)
3 v(k)
Ax(k) + B
+B
11 e(k) + D
12 w(k)
13 v(k)
C1 x(k) + D
+D
21 e(k) + D
22 w(k)
23 v(k)
+D
C2 x(k) + D
31 e(k) + D
32 w(k)
33 v(k)
+D
C3 x(k) + D
41 e(k) + D
42 w(k)
43 v(k)
+D
C4 x(k) + D
p(kk)
p(k + 1k)
..
.
p(k + Nk)
stands for the predicted signal for p = y, z, , and for future values for p =
v, w.
The function pred returns the prediction model sysp:
sysp =
dimp =
A
C1
C2
C3
C4
1
B
11
D
21
D
31
D
41
D
2
B
12
D
22
D
32
D
42
D
3
B
13
D
23
D
33
D
43
D
sysp
dimp
sys
dim
N
The
The
The
The
The
prediction model.
dimension of the prediction model.
system to be controlled.
dimension of the system.
prediction horizon.
See Also
gpc, lqpc, blockmat
249
contr
contr
Purpose
Function computes controller matrices for predictive controller
Synopsis
[vv,HH,AA,bb]=contr(sysp,dimp,dim,dGam);
Description
The function computes the matrices for predictive controller for use in the
function simul.
sysp is the prediction model, dimp is the dimensionvector of the prediction
model, dim is the dimensionvector of the standard model, and dGam is a row
vector with the diagonal elements of the matrix .
If inequality constraints are present, the control problem is transformed into a
quadratic programming problem, where the objective function is chosen as
1 T
(k)H
I (k)
J(I , k) =
2 I
subject to
A
I b 0
which can be solved at each time instant k in the function simul. There holds
The variable vv is given by:
vv = F De Dw D
so that
xc (k)
e(k)
v(k) = vv
w(k)
I (k)
= F xc (k) + De e(k) + Dw w(k)
+ D
I (k)
1
xc (k)
b (k) = bb
e(k)
w(k)
250
contr
xc (k)
v(k) = vv e(k)
w(k)
sysp
dimp
dim
dGam
vv,HH,AA,bb
See Also
simul, gpc, lqpc
251
lticll
lticll
Purpose
Function computes the state space matrices of the closed loop in the LTI case
(no inequality constraints).
Synopsis
[Acl,B1cl,B2cl,Ccl,D1cl,D2cl]=lticll(sys,dim,vv,N);
Description
Function lticll computes the state space matrices of the LTI closed loop,
given by
xcl (k + 1) = Acl xcl (k) + B1cl e(k) + B2cl w(k)
v(k)
the model.
dimensions sys.
the prediction horizon.
controller parameters.
system matrices of closed loop
See Also
gpc, lqpc, gpc ss, contr, simul, lticon
252
lticon
lticon
Purpose
Function computes the state space matrices of the optimal predictive controller
in the LTI case (no inequality constraints).
Synopsis
[Ac,B1c,B2c,Cc,D1c,D2c]=lticon(sys,dim,vv,N);
Description
Function lticon computes the state space matrices of the optimal predictive
controller, given by
xc (k + 1) = Ac xc (k) + B1c y(k) + B2c w(k)
the model.
dimensions sys.
the prediction horizon.
controller parameters.
system matrices of LTI controller
See Also
gpc, lqpc, gpc ss, contr, simul, lticll
253
simul
simul
Purpose
Function makes a closed loop simulation with predictive controller
Synopsis
[x,xc,y,v]=simul(syst,sys,dim,N,x,xc,y,v,tw,e,...
k1,k2,vv,HH,AA,bb);
Description
The function makes a closed loop simulation with predictive controller, the input and output variables are the following:
syst
sys
dim
N
x
xc
y
v
tw
k1
k2
vv,HH,AA,bb
See Also
contr, gpc, lqpc
254
rhc
Purpose
Function makes a closed loop simulation in receding horizon mode
Synopsis
[x,xc,y,v]=rhc(syst,sys,sysp,dim,dimp,dGam,N,x,xc,y,v,tw,e,k1,k2);
Description
The function makes a closed loop simulation with predictive controller in the
receding horizon mode. Past signals and prediction of future signals are given
within the prediction range. The simulation is paused every sample and can be
continued by pressing the returnkey. The input and output variables are the
following:
syst
sys
sysp
dim
dimp
dGamma
N
x
xc
y
v
tw
k1
k2
See Also
simul, contr, gpc, lqpc
255
256
Index
a priori knowledge, 36
add du.m, 232
add end.m, 243
add nc.m, 242
add u.m, 234
add v.m, 236
add x.m, 240
add y.m, 238
full SPCP, 62
generalized minimum variance control, 164
Generalized predictive control (GPC), 11
GPC, 154
gpc.m, 211
gpc2spc.m, 229
gpc ss.m, 215
imp2syst.m, 222
implementation, 89
implementation, full SPCP case, 91
implementation, LTI case, 89
constraints, 25, 44
Incrementinputoutput (IIO) models, 16
control horizon constraint, 46
Innite horizon SPCP, 65
equality constraints, 25, 44, 46
innite horizon SPCP with control horizon ,
inequality constraints, 25, 45, 48
72
stateend point constraint, 47
InputOutput (IO) models, 15
contr.m, 250
internal model control (IMC) scheme, 116
control horizon, 27, 46
Controlled AutoRegressive Integrated Mov invariant ellipsoid, 137
io2iio.m, 228
ing Average (CARIMA), 11
Controlled AutoRegressive Moving Average
linear matrix inequalities (LMI), 131
(CARMA), 11
LQPC, 154
coprime factorization, 121
lqpc.m, 218
lqpc2spc.m, 231
deadbeat control, 164
lticll.m, 252
dgamma.m, 246
lticon.m, 253
Diophantine equation, 196
basis functions, 27
blocking, 27
Bibliography
[1] M. Alamir and G. Bornard. Stability of a truncated innite constrained receding
horizon scheme: The general discrete nonlinear case. Automatica, 31(9):13531356,
1995.
[2] F. Allgower, T.A. Badgwell, J.S. Qin, and J.B. Rawlings. Nonlinear predictive control
and moving horizon estrimation an introductory overview. In Advances in Control,
Highlights of ECC99, Edited by F. Frank, SpringerVerlag, London, UK, 1999.
[3] K.J.
Astrom and B. Wittenmark. On selftuning regulators. Automatica, 9:185199,
1973.
[4] V. Balakrishnan, A. Zheng, and M. Morari. Constrained stabilization of discretetime
systems. In Advances in MBPC, Oxford University Press, 1994.
[5] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos. The expicit linear
quadratic regulator for constrained systems. Automatica, 38(1):320, 2002.
[6] B.W. Bequette. Nonlinear control of chemical processes: A review. Ind. Eng. Chem.
Res., 30(7):13911413, 1991.
[7] R.R. Bitmead, M. Gevers, and V. Wertz. Adaptive Optimal Control. The Thinking
Mans GPC. Prentice Hall, Upper Saddle River, New Jersey, 1990.
[8] H. H. J. Bloemen, T. J. J. van den Boom, and H. B. Verbruggen. Optimizing the end
point state weighting in Modelbased Predictive Control. Automatica, 38(6):1061
1068, June 2002.
[9] S. Boyd and Y. Nesterov. Nonlinear convex optimization: New methods and new
applications. Minicourse on the European Control Conference, Brussels, Belgium,
July 14, 1997.
[10] S.P. Boyd and C.H. Barratt. Linear Controller Design, Limits of performance. Prentice Hall, Information and System Sciences Series, Englewood Clis, New Jersey, 1991.
[11] E.F. Camacho and C. Bordons. Model Predictive Control in the Process Industry,
Advances in Industrial Control. Springer, London, 1995.
259
[12] C.C. Chen and L. Shaw. On receding horizon feedback control. Automatica, 18:349
352, 1982.
[13] D.W. Clarke and C. Mohtadi. Properties of generalized predictive control. Automatica,
25(6):859875, 1989.
[14] D.W. Clarke, C. Mohtadi, and P.S. Tus. Generalized predictive control  part 1. the
basic algorithm. Automatica, 23(2):137148, 1987.
[15] D.W. Clarke, C. Mohtadi, and P.S. Tus. Generalized predictive control  part 2.
extensions and interpretations. Automatica, 23(2):149160, 1987.
[16] D.W. Clarke and R. Scattolini. Constrained receding horizon predictive control. IEE
ProcD, 138(4), 1991.
[17] C.R. Cutler and B.L. Ramaker. Dynamic matrix control  a computer control algorithm. In AIChE Nat. Mtg, 1979.
[18] C.R. Cutler and B.L. Ramaker. Dynamic matrix control  a computer control algorithm. In Proceeding Joint American Control Conference, San Francisco, CA, USA,
1980.
[19] F.C. Cuzzola, J.C. Geromel, and M. Morari. An improved approach for constrained
robust model predictive control. Automatica, 38(7):11831189, 2002.
[20] R.M.C. De Keyser and A.R. van Cauwenberghe. Typical application possibilities for
selftuning predictive control. In IFAC Symp.Ident.& Syst.Par.Est., 1982.
[21] R.A.J. De Vries and H.B Verbruggen. Multivariable process and prediction models in
predictive control a unied approach. Int. J. Adapt. Contr.& Sign. Proc., 8:261278,
1994.
[22] V.F. Demyanov and L.V. Vasilev. Nondierentiable Optimization, Optimization Software. SpringerVerlag, 1985.
[23] B. Ding, Y. Xi, and S. Li. A synthesis approach of online constraint robust model
predictive control. Automatica, 40(1):163167, 2004.
[24] J.C. Doyle, B.A. Francis, and A.R. Tannenbaum. Feedback control systems. MacMillan
Publishing Company, New York, USA, 1992.
[25] J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis. Statespace solutions to
standard H2 and H control problems. IEEE AC, 34:pp831847, 1989.
[26] C.E. Garcia and M. Morari. Internal model control. 1. a unifying review and some
new results. Ind.Eng.Chem.Proc.Des.Dev., 21(2):308323, 1982.
260
[27] C.E. Garcia and A.M. Morshedi. Quadratic programming solution of dynamic matrix
control (QDMC). Chem.Eng.Comm., 46(11):7387, 1986.
[28] C.E. Garcia, D.M. Prett, and M. Morari. Model predictive control: Theory and
practice  a survey. Automatica, 25(3):335348, 1989.
[29] C.E. Garcia, D.M. Prett, and B.L. Ramaker. Fundamental Process Control. Butterworths, Stoneham, MA, 1988.
[30] H. Genceli and M. Nikolaou. Robust stability analysis of constrained 1 norm model
predictive control. AIChE Journ., 39(12):19541965, 1993.
[31] E.G. Gilbert and K.T. Tan. Linear systems with state and control constraints: the
theory and application of maximal output admissible sets. IEEE AC, 36:10081020,
1991.
[32] M. Grotschel, L. Lovasz, and A. Schrijver. Geometric Algorithms and Combinatorial
Optimization. SpringerVerlag, Berlin, Germany, 1988.
[33] T. Kailath. Linear Systems. Prentice Hall, New York, 1980.
[34] M. Kinnaert. Adaptive generalized predictive controller for mimo systems. Int.J.
Contr., 50(1):161172, 1989.
[35] M.V. Kothare, V. Balakrishnan, and M. Morari. Robust contrained predictive control
using linear matrix inequalities. Automatica, 32(10):13611379, 1996.
[36] B. Kouvaritakis, J.A. Rossiter, and J. Schuurmans. Ecient robuts predictive control.
IEEE AC, 45(8):15451549, 2000.
[37] W.H. Kwon and A.E. Pearson. On feedback stabilization of timevarying discrete
systems. IEEE AC, 23:479481, 1979.
[38] J.H. Lee, M. Morari, and C.E. Garcia. Statespace interpretation of model predictive
control. Automatica, 30(4):707717, 1994.
[39] J.H. Lee and Z.H.Yu. Tuning of model predictive controllers for robust performance.
Comp.Chem.Eng., 18(1):1537, 1994.
[40] C.E. Lemke. On complementary pivot theory. In Mathematics of the Decision Sciences, G.B.Dantzig and A.F.Veinott (Eds.), 1968.
[41] L. Ljung. System Identication: Theory for the User. Prentice Hall, Englewood Clis,
NJ, 1987.
[42] Y. Lu and Y. Arkun. Quasiminmax mpc algorithms for LPV systems. Automatica,
36(4):527540, 2000.
261
[58] L.E. Scales. Introduction to NonLinear Optimization. MacMillan, London, UK, 1985.
[59] P.O.M. Scokaert. Innite horizon genralized predictive control. Int. J. Control,
66(1):161175, 1997.
[60] P.O.M. Scokaert and J.B. Rawlings. Constrained linear quadratic regulation. volume 43, pages 11631169, 1998.
[61] A.R.M. Soeterboek. Predictive control  A unied approach. Prentice Hall, Englewood
Clis, NJ, 1992.
[62] E.D. Sontag. An algebraic approach to bounded controllability of linear systems.
Int.J.Contr., pages 181188, 1984.
[63] V. Strejc. State Space Theory of Discrete Linear Control. Academia, Prague, 1981.
[64] M. Sznaier and M.J. Damborg. Suboptimal control of linear systems with state and
control inequality constraints. In Proceedings of the 26th IEEE conference on decision
and control, pages 761762, Los Angeles, 1987.
[65] T.J.J. van den Boom. Model based predictive control, status and perspectives. In Tutorial Paper on the CESA IMACS Conference. Symposium on Control, Optimization
and Supervision, volume 1, pages 112, Lille, France, 1996.
[66] T.J.J. van den Boom, M. Ayala Botto, and P. Hoekstra. Design of an analytic constrained predictive controller using neural networks. International Journal of System
Science, 36(10):639650, 2005.
[67] T.J.J. van den Boom and R.A.J. de Vries. Robust predictive control using a timevarying Youla parameter. Journal of applied mathematics and computer science,
9(1):101128, 1999.
[68] E.T. van Donkelaar. Improvement of eciency in system identication and model
predictive control of industrial processes. PhD Thesis, Delft University of Technology,
Delft, The Netherlands, 2000.
[69] E.T. van Donkelaar, O.H. Bosgra, and P.M.J. Van den Hof. Model predictive control
with generalized input parametrization. In ECC, 1999.
[70] P. Van Overschee and B. de Moor. Subspace algorithms for the stochastic identication
problem. Automatica, 29(3):649660, 1993.
[71] M. Verhaegen and P. Dewilde. Subspace model identication. part i: The outputerror
state space model identication class of algorithms / part ii: Analysis of the elementary
outputerror state space model identication algorithm. Int.J.Contr., 56(5):1187
1241, 1992.
263
264