
Proceedings of the American Control Conference
San Diego, California, June 1999

Tutorial: Model Predictive Control Technology

James B. Rawlings
Department of Chemical Engineering
University of Wisconsin-Madison
Madison, WI 53706-1691

Abstract

This paper provides a brief tutorial on model predictive control (MPC) theory for linear and nonlinear models. The discussion of MPC with linear models covers the topics of steady-state target calculation, infinite horizon receding horizon regulation, resolving infeasibilities caused by constraints, and state estimation and disturbance models. The section on nonlinear models briefly discusses what is desirable from a theoretical perspective and what is possible from an implementation perspective and focuses on some current efforts to bridge this gap. The paper concludes with a brief outlook for future developments in the areas of nonlinear MPC, robustness, moving horizon state estimation and MPC of hybrid systems.

1 Introduction

The purpose of this paper is to provide a reasonably accessible and self-contained tutorial exposition on model predictive control (MPC). The target audience is people with control expertise, particularly practitioners, who wish to broaden their perspective in the MPC area of control technology. This paper mainly introduces the concepts, provides a framework in which the critical issues can be expressed and analyzed, and points out how MPC allows practitioners to address the tradeoffs that must be considered in implementing a control technology. The subsequent papers and presentations by respected and experienced control practitioners provide the wider perspective on the costs and benefits of implementing MPC in a variety of industrial settings. It is hoped that this combination offers a balanced viewpoint and may stimulate further interest both in industrial application and fundamental research.

The MPC research literature is by now large, but review articles have appeared at regular intervals. We should point these out before narrowing the focus in the interest of presenting a reasonably self-contained tutorial for the non-expert. The three MPC papers presented at the CPC V conference in 1996 are an excellent starting point [28, 33, 49]. Qin and Badgwell present comparisons of industrial MPC algorithms that practitioners may find particularly useful. Morari and Lee provide another recent review [37]. Kwon provides a very extensive list of references [26]. Moreover, several excellent books have appeared recently [38, 65, 6]. For those interested in the status of MPC for nonlinear plants, the proceedings of the 1998 conference [1] would be of strong interest.

2 Models

The essence of MPC is to optimize, over the manipulatable inputs, forecasts of process behavior. The forecasting is accomplished with a process model, and therefore the model is the essential element of an MPC controller. As discussed subsequently, models are not perfect forecasters, and feedback can overcome some effects of poor models, but starting with a poor process model is akin to driving a car at night without headlights; the feedback may be a bit late to be truly effective.

Linear models. Historically, the models of choice in early industrial MPC applications were time domain, input/output, step or impulse response models [54, 11, 47]. Part of the early appeal of MPC for practitioners in the process industries was undoubtedly the ease of understanding provided by this model form. It has become more common for MPC researchers, however, to discuss linear models in state-space form

    dx/dt = Ax + Bu        x_{j+1} = A x_j + B u_j        (1)
    y = Cx                 y_j = C x_j                    (2)

in which x is the n-vector of states, y is the p-vector of (measurable) outputs and u is the m-vector of (manipulatable) inputs, t is continuous time and j is the discrete time sample number.
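The continuous and discrete forms in (1)-(2) are linked by zero-order-hold sampling, the one-line `c2d` conversion available in Octave or MATLAB. Below is a minimal Python sketch of that conversion; the double-integrator matrices and sample time are illustrative, and a crude Taylor-series matrix exponential stands in for a library `expm`:

```python
import numpy as np

def expm_taylor(M, terms=30, squarings=8):
    """Crude matrix exponential: scale M down, sum the Taylor series,
    then square the result back up. Adequate for a small demo."""
    Ms = M / (2 ** squarings)
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ Ms / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

def c2d_zoh(A, B, dt):
    """Zero-order-hold discretization: exp([[A, B], [0, 0]] * dt)
    contains Ad in the upper-left block and Bd in the upper-right."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A * dt
    M[:n, n:] = B * dt
    E = expm_taylor(M)
    return E[:n, :n], E[:n, n:]

# Double integrator, sampled at dt = 0.1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = c2d_zoh(A, B, 0.1)
# For this example, Ad = [[1, 0.1], [0, 1]] and Bd = [[0.005], [0.1]]
```

The augmented-matrix trick works because the exact zero-order-hold discretization is Ad = e^{A dt} and Bd = (integral of e^{As} ds from 0 to dt) B, both of which appear as blocks of a single matrix exponential.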

0-7803-4990-6/99 $10.00 © 1999 AACC


Continuous time models may be more familiar to those with a classical control background in transfer functions, but discrete time models are very convenient for digital computer implementation. With abuse of notation, we use the same system matrices (A, B, C) for either model, but most of the subsequent discussion focuses on discrete time. The transformations from continuous time to discrete time models are available as one line commands in a language like Octave or MATLAB. Linear models in the process industries are, by their nature, empirical models, identified from input/output data. The ideal model form for identification purposes is perhaps best left to the experts in identification theory, but a survey of that literature indicates no disadvantage to using state-space models inside the MPC controller.

The discussion of MPC in state-space form has several advantages including easy generalization to multi-variable systems, ease of analysis of closed-loop properties, and on-line computation. Furthermore, the wealth of linear systems theory: the linear quadratic regulator theory, Kalman filtering theory, internal model principle, etc., is immediately accessible for use in MPC starting with this model form. We demonstrate the usefulness of those tools subsequently. A word of caution is also in order. Categories, frameworks and viewpoints, while indispensable for clear thinking and communication, may blind us to other possibilities. We should resist the easy temptation to formulate all control issues from an LQ, state-space framework. The tendency is to focus on those issues that are easily imported into the dominant framework while neglecting other issues, of possibly equal or greater import to practice, which are difficult to analyze, awkward, and inconvenient.

From a theoretical perspective, the significant shift in problem formulation came from the MPC practitioners who insisted on maintaining constraints, particularly input constraints, in the problem formulation,

    dx/dt = Ax + Bu        x_{j+1} = A x_j + B u_j        (3)
    y = Cx                 y_j = C x_j                    (4)
    Du ≤ d                 D u_j ≤ d                      (5)
    Hx ≤ h                 H x_j ≤ h                      (6)

in which D, H are the constraint matrices and d, h are positive vectors. The constraint region boundaries are straight lines as shown in Figure 1.

Figure 1: Example input and state constraint regions defined by Equations 5-6.

At this point we are assuming that x = 0, u = 0 is the steady state to which we are controlling the process, but we treat the more general case subsequently. Optimization over inputs subject to hard constraints leads immediately to nonlinear control, and that departure from the well understood and well tested linear control theory provided practitioners with an important, new control technology and motivated researchers to understand better this new framework. Certainly optimal control with constraints was not a new concept in the 1970s, but the implementation in a moving horizon fashion of these open-loop optimal control solutions subject to constraints at each sample time was the new twist that had not been fully investigated.

Nonlinear models. The use of nonlinear models in MPC is motivated by the possibility to improve control by improving the quality of the forecasting. The basic fundamentals in any process control problem - conservation of mass, momentum and energy, considerations of phase equilibria, relationships of chemical kinetics, and properties of final products - all introduce nonlinearity into the process description. In which settings use of nonlinear models for forecasting delivers improved control performance is an open issue, however. For continuous processes maintained at nominal operating conditions and subject to small disturbances, the potential improvement would appear small. For processes operated over large regions of the state space - semi-batch reactors, frequent product grade changes, processes subject to large disturbances, for example - the advantages of nonlinear models appear larger.

Identification of nonlinear models runs the entire range from models based on fundamental principles with only parameter estimation from data, to completely empirical nonlinear models with all coefficients identified from data.
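The polyhedral constraint regions D u ≤ d and H x ≤ h of Equations 5-6 are straightforward to represent and test numerically. Here is a small sketch with invented box bounds (not values from the paper):

```python
import numpy as np

# Input region: |u| <= 1, written as D u <= d (two rows per input)
D = np.array([[1.0], [-1.0]])
d = np.array([1.0, 1.0])

# State region: -0.5 <= x_i <= 2.0, written as H x <= h
H = np.vstack([np.eye(2), -np.eye(2)])
h = np.array([2.0, 2.0, 0.5, 0.5])

def in_region(M, b, v):
    """Membership test for the polyhedron {v : M v <= b}."""
    return bool(np.all(M @ v <= b + 1e-12))

u_ok = in_region(D, d, np.array([0.3]))        # inside the input box
x_bad = in_region(H, h, np.array([3.0, 0.0]))  # violates x_1 <= 2
```

Each row of D (or H) defines one of the straight-line boundaries drawn in Figure 1, which is why box bounds need two rows per variable.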
We will not stray into the issues of identification of nonlinear models, which is a large topic by itself. The interested reader may consult [44, 27] and the references therein for an entry point into this literature.

Regardless of the identification method, we represent the nonlinear model inside the MPC controller also in state space form

    dx/dt = f(x, u)        x_{j+1} = f(x_j, u_j)        (7)
    y = g(x)               y_j = g(x_j)                 (8)
    u ∈ U                  u_j ∈ U                      (9)
    x ∈ X                  x_j ∈ X                      (10)

If the model is nonlinear, there is no advantage in keeping the constraints as linear inequalities, so we consider the constraints as membership in more general regions U, X shown in Figure 2.

Figure 2: Example input and state constraint regions defined by Equations 9-10.

3 MPC with Linear Models

In this paper we focus on formulating MPC as an infinite horizon optimal control strategy with a quadratic performance criterion. We use the following discrete time model of the plant

    x_{j+1} = A x_j + B(u_j + d)        (11)
    y_j = C x_j + p                     (12)

The affine terms d and p serve the purpose of adding integral control. They may be interpreted as modeling the effect of constant disturbances influencing the input and output, respectively. Assuming that the state of the plant is perfectly measured, we define model predictive control as the feedback law u_j = μ(x_j) that minimizes

    (13)

in which Δu_j = u_j − u_{j−1}. The matrices Q, R, and S are assumed to be symmetric positive definite. When the complete state of the plant is not measured, as is almost always the case, the addition of a state estimator is necessary (Section 3.4).

The vector ȳ is the desired output target and ū is the desired input target, assumed for simplicity to be time invariant. In many industrial implementations the desired targets are calculated as a steady-state economic optimization at the plant level. In these cases, the desired targets are normally constant between plant optimizations, which are performed at a slower time scale than the MPC controller operates. In batch and semi-batch reactor operation, on the other hand, a final time objective may be optimized instead, which produces a time-varying trajectory for the system states. Even in continuous operations, some recommend tuning MPC controllers by specifying the setpoint trajectory, often a first order response with adjustable time constant. As discussed by Bitmead et al. [5] in the context of GPC, one can pose these types of tracking problems within the LQ framework by augmenting the state of the system to describe the evolution of the reference signal and posing an LQ problem for the combined system.

For a time invariant setpoint, the steady-state aspect of the control problem is to determine appropriate values of (y_s, x_s, u_s). Ideally, y_s = ȳ and u_s = ū. However, process limitations and constraints may prevent the system from reaching the desired steady state. The goal of the target calculation is to find the feasible triple (y_s, x_s, u_s) such that y_s and u_s are as close as possible to ȳ and ū. We address the target calculation in Section 3.1.

To simplify the analysis and formulation, we transform (13) using deviation variables to the generic infinite horizon quadratic criterion

    Φ = (1/2) Σ_{j=0}^∞  z_j^T Q z_j + v_j^T R v_j + Δv_j^T S Δv_j.        (14)

The original criterion (13) can be recovered from (14) by making the following substitutions:

    (15)

in which y_s, x_s and u_s are the steady states satisfying the relation

    x_s = A x_s + B(u_s + d),        y_s = C x_s + p.
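The deviation-variable criterion (14) can be evaluated numerically for any stabilizing feedback by truncating the infinite sum. In the sketch below the system, gain, weights, and input target are invented for illustration, with the steady state chosen consistently with the model so the deviations decay:

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.2, -0.3]])              # a stabilizing feedback gain
Q, R, S = np.eye(2), np.eye(1), np.eye(1)

u_s = np.array([0.5])                     # input target (illustrative)
x_s = np.linalg.solve(np.eye(2) - A, B @ u_s)   # matching steady state

x = np.array([2.0, -1.0])
u_prev = u_s.copy()
phi = 0.0
for _ in range(200):                      # truncates the infinite horizon
    z = x - x_s                           # state deviation variable
    u = u_s + K @ z                       # control law in deviation form
    v, dv = u - u_s, u - u_prev
    phi += 0.5 * float(z @ Q @ z + v @ R @ v + dv @ S @ dv)
    u_prev = u
    x = A @ x + B @ u
```

Because A + BK is stable here, z decays to zero and the truncated sum converges to the infinite horizon cost.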
By using deviation variables we treat separately the steady-state and the dynamic elements of the control problem, thereby simplifying the overall analysis of the controller.

The dynamic aspect of the control problem is to control (y, x, u) to the steady-state values (y_s, x_s, u_s) in the face of constraints, which are assumed not to be active at steady state, i.e. the origin is in the strict interior of the regions X, U. See [51] for a preliminary treatment of the case in which the constraints are active at the steady-state operating point. This part of the problem is discussed in Section 3.2. In particular we determine the state feedback law v_j = μ(w_j) that minimizes (14). When there are no inequality constraints, the feedback law is the linear quadratic regulator. However, with the addition of inequality constraints, there may not exist an analytic form for μ(w_j). For cases in which an analytic solution is unavailable, the feedback law is obtained by repetitively solving the open-loop optimal control problem. This strategy allows us to consider only the encountered sequence of measured states rather than the entire state space. For a further discussion, see Mayne [32].

If we consider only linear constraints on the input, input velocity, and outputs of the form

    u_min ≤ D u_k ≤ u_max,        −Δu_max ≤ Δu_k ≤ Δu_max        (16)
    y_min ≤ C x_k ≤ y_max        (17)

we formulate the regulator as the solution of the following infinite horizon optimal control problem

    (18)

subject to the constraints (16)-(17) expressed in deviation variables. If we denote the solution of this optimization by {v_k*(x_j)}, then the control law is

    μ(x_j) = v_0*(x_j).

We address the regulation problem in Section 3.2.

Combining the solution of the target tracking problem and the constrained regulator, we define the MPC algorithm as follows:

1. Obtain an estimate of the state and disturbances, (x̂_j, p̂, d̂)

2. Determine the steady-state target, (y_s, x_s, u_s)

3. Solve the regulation problem for v_j

4. Let u_j = v_j + u_s

5. Repeat for j ← j + 1

3.1 Target Calculation

When the number of the inputs equals the number of outputs, the solution to the unconstrained target problem is obtained using the steady-state gain matrix, assuming such a matrix exists (i.e. the system has no integrators). However, for systems with unequal numbers of inputs and outputs, integrators, or inequality constraints, the target calculation is formulated as a mathematical program [40, 39]. When there are at least as many inputs as outputs, multiple combinations of inputs may yield the desired output target at steady state. For such systems, a mathematical program with a least squares objective is formulated to determine the best combination of inputs. When the number of outputs is greater than the number of inputs, situations exist in which no combination of inputs satisfies the output target at steady state. For such cases, we formulate a mathematical program that determines the steady-state output y_s ≠ ȳ that is closest to ȳ in a least squares sense.

Instead of solving separate problems to establish the target, we prefer to solve one problem for both situations. Through the use of an exact penalty [17], we formulate the target tracking problem as a single quadratic program that achieves the output target if possible, and relaxes the problem in an l1/l2^2 optimal sense if the target is infeasible. We formulate the soft constraint

    ȳ − C x_s − p ≤ η,
    ȳ − C x_s − p ≥ −η,        (24)
    η ≥ 0,

by relaxing the constraint C x_s = ȳ using the slack variable η. By suitably penalizing η, we guarantee that the relaxed constraint is binding when it is feasible. We formulate the exact soft constraint by adding an l1/l2^2 penalty to the objective function.
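The square, unconstrained case of the target calculation described in Section 3.1 reduces to a single linear solve built from the steady-state relations x_s = A x_s + B(u_s + d) and C x_s + p = ȳ. A sketch with invented numbers:

```python
import numpy as np

A = np.array([[0.8, 0.1], [0.0, 0.9]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
d = np.array([0.2])     # input disturbance estimate (illustrative)
p = np.array([-0.1])    # output disturbance estimate (illustrative)
ybar = np.array([1.0])  # desired output target

n, m = 2, 1
# Stack (I - A) x_s - B u_s = B d  and  C x_s = ybar - p
M = np.block([[np.eye(n) - A, -B], [C, np.zeros((1, m))]])
rhs = np.concatenate([B @ d, ybar - p])
sol = np.linalg.solve(M, rhs)
x_s, u_s = sol[:n], sol[n:]
```

When inputs and outputs are not square, or inequality constraints are present, the same two relations become the equality rows of the quadratic program described in the text.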
The l1/l2^2 penalty is simply the combination of a linear penalty q_s^T η and a quadratic penalty η^T Q_s η, in which the elements of q_s are strictly non-negative and Q_s is a symmetric positive definite matrix. By choosing the linear penalty sufficiently large, the soft constraint is guaranteed to be exact. A lower bound on the elements of q_s to ensure that the original hard constraints are satisfied by the solution cannot be calculated explicitly without knowing the solution to the original problem, because the lower bound depends on the optimal Lagrange multipliers for the original problem. In theory, a conservative state-dependent upper bound for these multipliers may be obtained by exploiting the Lipschitz continuity of the quadratic program [30]. However, in practice, we rarely need to guarantee that the l1/l2^2 penalty is exact. Rather, we use approximate values for q_s obtained by computational experience. In terms of constructing an exact penalty, the quadratic term is superfluous. However, the quadratic term adds an extra degree of freedom for tuning and is necessary to guarantee uniqueness.

We now formulate the target tracking optimization as the following quadratic program

    min_{x_s, u_s, η}  (1/2)( η^T Q_s η + (u_s − ū)^T R_s (u_s − ū) ) + q_s^T η        (25)

subject to the constraints

    [ I−A   −B    0 ] [ x_s ]   { = }   [   B d    ]
    [  C     0   −I ] [ u_s ]   { ≤ }   [  ȳ − p   ]        (26a)
    [ −C     0   −I ] [  η  ]   { ≤ }   [ −(ȳ − p) ]

    η ≥ 0,        (26b)

    u_min ≤ D u_s ≤ u_max,        y_min ≤ C x_s + p ≤ y_max        (26c)

in which R_s and Q_s are assumed to be symmetric positive definite.

Because x_s is not explicitly in the objective function, the question arises as to whether the solution to Equation 25 is unique. If the feasible region is nonempty, the solution exists because the quadratic program is bounded below on the feasible region. If Q_s and R_s are symmetric positive definite, y_s and u_s are uniquely determined by the solution of the quadratic program. However, without a quadratic penalty on x_s, there is no guarantee that the resulting solution for x_s is unique. Non-uniqueness in the steady-state value of x_s presents potential problems for the controller, because the origin of the regulator is not fixed at each sample time. Consider, for example, a tank in which the level is unmeasured (i.e. an unobservable integrator). The steady-state solution is to set u_s = 0 (i.e. balance the flows). However, any level x_s, within bounds, is an optimal alternative. Likewise, at the next time instant, a different level also would be a suitably optimal steady-state target. The resulting closed-loop performance for the system could be erratic, because the controller may constantly adjust the level of the tank, never letting the system settle to a steady state.

In order to avoid such situations, we restrict our discussion to detectable systems, and recommend redesign if a system does not meet this assumption. For detectable systems, the solution to the quadratic program is unique assuming the feasible region is nonempty. The details of the proof are given in [51]. Uniqueness is also guaranteed when only the integrators are observable. For the practitioner this condition translates into the requirement that all levels are measured. The reason we choose the stronger condition of detectability is that if good control is desired, then the unstable modes of the system should be observable. Detectability is also required to guarantee the stability of the regulator.

Empty feasible regions are a result of the inequality constraints (26c). Without the inequality constraints (26c) the feasible region is nonempty, thereby guaranteeing the existence of a feasible and unique solution under the condition of detectability. For example, the solution (u_s, x_s, η) = (−d, 0, |ȳ − p|) is feasible. However, the addition of the inequality constraints (26c) presents the possibility of infeasibility. Even with well-defined constraints, u_min < u_max and y_min < y_max, disturbances may render the feasible region empty. Since the constraints on the input usually result from physical limitations such as valve saturation, relaxing only the output constraints is one possibility to circumvent infeasibilities. Assuming that u_min ≤ −d ≤ u_max, the feasible region is always nonempty. However, we contend that the output constraints should not be relaxed in the target calculation. Rather, an infeasible solution, readily determined during the initial phase in the solution of the quadratic program, should be used as an indicator of a process exception. While relaxing the output constraints in the dynamic regulator is common practice [55, 18, 14, 70, 61, 59], the output constraint violations are transient. By relaxing output constraints in the target calculation, on the other hand, the controller seeks a steady-state target that continuously violates the output constraints.
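The feasibility argument above, that the triple (u_s, x_s, η) = (−d, 0, |ȳ − p|) always satisfies the steady-state equality and the soft output constraint (24), is easy to verify numerically; the matrices below are invented for illustration:

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.3]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
d = np.array([0.4])
p = np.array([0.2])
ybar = np.array([1.5])

# Candidate triple from the text: u_s = -d, x_s = 0, eta = |ybar - p|
u_s = -d
x_s = np.zeros(2)
eta = np.abs(ybar - p)

# x_s = A x_s + B(u_s + d) holds because u_s + d = 0
steady_ok = np.allclose(x_s, A @ x_s + B @ (u_s + d))
# Soft constraint (24): |ybar - C x_s - p| <= eta, eta >= 0
resid = ybar - C @ x_s - p
soft_ok = bool(np.all(resid <= eta) and np.all(resid >= -eta)
               and np.all(eta >= 0))
```

This is exactly why only the inequality constraints (26c) can render the target problem infeasible: without them a feasible point always exists.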
The steady violation indicates that the controller is unable to compensate adequately for the disturbance and, therefore, should indicate a process exception.

3.2 Receding Horizon Regulator

Given the calculated steady state, we formulate the regulator as the following infinite horizon optimal control problem

    min_{{v_k}}  (1/2) Σ_{k=0}^∞  w_k^T Q w_k + v_k^T R v_k + Δv_k^T S Δv_k        (27)

subject to the constraints

    w_0 = x_j − x_s,        v_{−1} = u_{j−1} − u_s        (28a)
    w_{k+1} = A w_k + B v_k        (28b)
    u_min − u_s ≤ D v_k ≤ u_max − u_s,        −Δu_max ≤ Δv_k ≤ Δu_max        (28c)
    y_min − y_s ≤ C w_k ≤ y_max − y_s        (28d)

We assume that Q and R are symmetric positive definite matrices. We also assume that the origin, (w_j, v_j) = (0, 0), is an element of the feasible region W × V. If the pair (A, B) is constrained stabilizable and the pair (A, Q^{1/2}C) is detectable, then x_j = 0 is an exponentially stable fixed point of the closed-loop system [61].

For unstable state transition matrices, the direct solution of (27)-(28) is ill-conditioned, because the system dynamics are propagated through the unstable A matrix. To improve the conditioning of the optimization, we re-parameterize the input as v_k = L w_k + r_k, in which L is a linear stabilizing feedback gain for (A, B) [22, 57]. The system model becomes

    w_{k+1} = (A + BL) w_k + B r_k        (29)

in which r is the new input. By initially specifying a stabilizing, potentially infeasible, trajectory, we can improve the numerical conditioning of the optimization by propagating the system dynamics through the stable (A + BL) matrix.

By expanding Δv_k and substituting in for v_k, we transform (27)-(28) into the following more tractable form:

    min_{{v_k}}  (1/2) Σ_{k=0}^∞  w_k^T Q̄ w_k + 2 w_k^T M v_k + v_k^T R̄ v_k        (30)

subject to the following constraints:

    w_0 = x_j,        w_{k+1} = Ā w_k + B̄ v_k        (31a)
    d_min ≤ D̄ v_k − Ḡ w_k ≤ d_max        (31b)
    y_min − y_s ≤ C̄ w_k ≤ y_max − y_s        (31c)

The original formulation (27)-(28) can be recovered from (30)-(31) by making the following substitutions into the second formulation:

    w_0 = (x_j − x_s, u_{j−1} − u_s),        C̄ = [ C  0 ],        R̄ = R + S,

    Q̄ = [ Q + L^T(R+S)L   −L^T S ]        M = [ L^T(R+S) ]
        [      −SL             S ]            [    −S     ]

in which the barred matrices collect the model and constraint matrices written in terms of the augmented state (w_k, v_{k−1}) and the re-parameterized input.

While the formulation (30)-(31) is theoretically appealing, the solution is intractable in its current form, because it is necessary to consider an infinite number of decision variables. In order to obtain a computationally tractable formulation, we reformulate the optimization in a finite dimensional decision space.

Several authors have considered this problem in various forms. In this paper, we concentrate on the constrained linear quadratic methods proposed in the literature [22, 66, 10, 61, 60]. The key concept behind these methods is to recognize that the inequality constraints remain active only for a finite number of sample steps along the prediction horizon. We demonstrate informally this concept as follows: if we assume that there exists a feasible solution to (30)-(31), then the state and input trajectories {w_k, v_k}_{k=0}^∞ approach the origin exponentially. Furthermore, if we assume the origin is contained within the interior of the feasible region W × V (we address the case in which the origin lies on the boundary of the feasible region in the next section), then there exists a positively invariant convex set [19]

    O_∞ = { w | (A + BK)^j w ∈ W_K,  ∀ j ≥ 0 }        (32)

such that the optimal unconstrained feedback law v = Kw is feasible for all future time.
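Any stabilizing L serves in the re-parameterization v_k = L w_k + r_k of (29); one convenient choice is the unconstrained LQ gain, which can be computed by iterating the Riccati recursion. A minimal sketch, with an invented unstable system and weights:

```python
import numpy as np

A = np.array([[1.2, 0.5], [0.0, 1.1]])   # unstable transition matrix
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

# Iterate the Riccati recursion until (numerical) convergence
P = Q.copy()
for _ in range(500):
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # v = K w
    P = Q + A.T @ P @ A + A.T @ P @ B @ K

L_gain = K                                # stabilizing feedback for (A, B)
rho = float(np.max(np.abs(np.linalg.eigvals(A + B @ L_gain))))
```

Propagating the dynamics through the stable A + BL instead of the unstable A is what restores the numerical conditioning of the transformed problem (30)-(31).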
The set W_K is the feasible region projected onto the state space by the linear control K (i.e. W_K = { w | (w, Kw) ∈ W × V }). Because the state and input trajectories approach the origin exponentially, there exists a finite N* such that the state trajectory {w_k}_{k=N*}^∞ is contained in O_∞.

In order to guarantee that the inequality constraints (31b) are satisfied on the infinite horizon, N* must be chosen such that w_{N*} ∈ O_∞. Since the value of N* depends on x_j, we need to account for the variable decision horizon length in the optimization. We formulate the variable horizon length regulator as the following optimization

    min_{N, {v_k}}  (1/2) Σ_{k=0}^{N−1} ( w_k^T Q̄ w_k + 2 w_k^T M v_k + v_k^T R̄ v_k ) + (1/2) w_N^T Π w_N        (33)

subject to the constraints

    w_0 = x_j,        w_{k+1} = Ā w_k + B̄ v_k,        w_N ∈ O_∞        (34a)
    d_min ≤ D̄ v_k − Ḡ w_k ≤ d_max        (34b)
    y_min − y_s ≤ C̄ w_k ≤ y_max − y_s        (34c)

The cost to go Π is determined from the discrete-time algebraic Riccati equation

    Π = A^T Π A + Q − (A^T Π B + M)(R + B^T Π B)^{−1}(B^T Π A + M^T),        (35)

for which many reliable solution algorithms exist. The variable horizon formulation is similar to the dual-mode receding horizon controller [35] for nonlinear systems with the linear quadratic regulator chosen as the stabilizing linear controller.

While the problem (33)-(34) is formulated on a finite horizon, the solution cannot, in general, be obtained in real-time since the problem is a mixed-integer program. Rather than try to solve (33)-(34) directly, we address the problem of determining N* from a variety of semi-implicit schemes while maintaining the quadratic programming structure in the subsequent optimizations.

Gilbert and Tan [19] show that there exists a finite number t* such that O_{t*} is equivalent to the maximal O_∞, in which

    O_t = { w | (A + BK)^j w ∈ W_K,  for j = 0, ..., t }.        (36)

They also present an algorithm for determining t* that is formulated efficiently as a finite number of linear programs. Their method provides an easy check whether, for a fixed N, the solution to (33)-(34) is feasible (i.e. w_N ∈ O_∞). The check consists of determining whether the state and input trajectories generated by the unconstrained control law v_k = K w_k from the initial condition w_N are feasible with respect to the inequality constraints for t* time steps in the future. If the check fails, then the optimization (33)-(34) needs to be re-solved with a longer control horizon N' > N since w_N ∉ O_∞. The process is repeated until w_{N'} ∈ O_∞.

When the set of initial conditions {w_0} is compact, Chmielewski and Manousiouthakis [10] present a method for calculating an upper bound N̄ on N* using bounding arguments on the optimal cost function Φ*. Given a set P = {x_1, ..., x_m} of initial conditions, the optimal cost function Φ*(x) is a convex function defined on the convex hull co(P). An upper bound Φ̄(x) on the optimal cost Φ*(x) for x ∈ co(P) is obtained by the corresponding convex combinations of the optimal cost functions Φ*(x_j) for x_j ∈ P. The upper bound on N* is obtained by recognizing that the state trajectory w_j only remains outside of O_∞ for a finite number of stages. A lower bound q on the cost w_j^T Q w_j can be generated for x_j ∉ O_∞ (see [10] for explicit details). It then follows that N* ≤ Φ̄(x)/q. Further refinement of the upper bound can be obtained by including the terminal stage penalty Π in the analysis.

Finally, efficient solution of the quadratic program generated by the MPC regulator is discussed in [68, 53].

3.3 Feasibility

In the implementation of model predictive control, process conditions arise where there is no solution to the optimization problem (33) that satisfies the constraints (34). Rather than declaring such situations process exceptions, we sometimes prefer a solution that enforces some of the inequality constraints while relaxing others to retain feasibility. Often the input constraints represent physical limitations such as valve saturation that cannot be violated. Output constraints, however, frequently do not represent hard physical bounds. Rather, they often represent desired ranges of operations that can be violated if necessary. To avoid infeasibilities, we relax the output constraints by treating them as "soft" constraints.
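The feasibility check described above, propagating the unconstrained law v_k = K w_k from the candidate terminal state w_N and testing the inequality constraints for t* steps, takes only a few lines; the gain, constraints, and t* below are invented for illustration:

```python
import numpy as np

A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.4, -0.6]])     # assumed stabilizing unconstrained gain
C = np.array([[1.0, 0.0]])
u_max, y_max = 1.0, 2.0          # |u| <= u_max and |y| <= y_max
t_star = 40                      # horizon from the Gilbert-Tan construction

def terminal_state_admissible(w, t_star):
    """Check w_N in O_infinity by simulating v = K w for t* steps and
    testing the input and output inequality constraints at each step."""
    w = np.asarray(w, dtype=float)
    for _ in range(t_star + 1):
        v = K @ w
        if abs(v[0]) > u_max or abs((C @ w)[0]) > y_max:
            return False
        w = A @ w + B @ v
    return True

ok_small = terminal_state_admissible([0.1, 0.1], t_star)   # deep inside
ok_large = terminal_state_admissible([5.0, 0.0], t_star)   # violates y
```

If the check fails for the terminal state of a candidate solution, the horizon N is lengthened and the quadratic program re-solved, exactly as described in the text.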
Various authors have considered formulating output constraints as soft constraints to avoid potential infeasibilities [55, 18, 14, 70, 61, 59]. We focus on the l1/l2 exact soft constraint strategy first advocated by de Oliveira and Biegler [14]. The attractive features of the l1/l2 formulation are that the quadratic programming structure is retained and the resulting solution is exact if a feasible solution exists.

Multi-objective nature of infeasibility problems. In many plants, the simultaneous minimization of the size and duration of the state constraint violations is not a conflicting objective. The optimal way to handle infeasibility is then simply to minimize both size and duration; regulator performance may then be optimized, subject to the "optimally" relaxed state constraints. Unfortunately, not all infeasibilities are as easily resolved. In some cases, such as nonminimum-phase plants, a reduction in size of violation can only be obtained at the cost of a large increase in duration of violation, and vice versa. The optimization of constraint violations then becomes a multi-objective problem. In Figure 3 we show two different controllers' resolution of an infeasibility problem. The 2-state, SISO system model is

    x_{k+1} = [ 1.6  −0.64 ] x_k + [ 1 ] u_k
              [  1     0   ]       [ 0 ]

    y_k = [ −1  2 ] x_k

with constraints and initial condition

Solution 1 corresponds to a controller minimizing the duration of constraint violation, which leads to a large peak violation, and solution 2 corresponds to a controller minimizing the peak constraint violation, which leads to a long duration of violation.

Figure 3: Two controllers' resolution of output infeasibility. Output versus time. Solution 1 minimizes duration of constraint violation; Solution 2 minimizes peak size of constraint violation.

This behavior is a system property caused by the unstable zero and cannot be avoided by clever controller design. For a given system and horizon N, the Pareto optimal size/duration curves can be plotted for different initial conditions, as in Figure 4. The user must then decide where in the size/duration plane the plant should operate at times of infeasibility. Desired operation may lie on the Pareto optimal curve, because points below this curve cannot be attained and points above it are inferior, in the sense that they correspond to larger size and/or duration than are required.

Figure 4: Pareto optimal curves for size versus duration of constraint violation as a function of initial condition, x_0.

We construct next soft output inequality constraints by introducing the slack variable ε_k into the optimization. We reformulate the variable horizon regulator with soft constraints as the following optimization

    (37)

subject to the constraints

    (38)

We assume Z is a symmetric positive definite matrix and z is a vector with positive elements chosen such that the output constraints can be made exact if desired.
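The size-versus-duration tradeoff for the 2-state example is tied to its unstable zero, which can be recovered numerically: the numerator of C(zI − A)^{-1}B is first order here, so two evaluations determine its root:

```python
import numpy as np

A = np.array([[1.6, -0.64], [1.0, 0.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[-1.0, 2.0]])

def num(z):
    """Numerator of the transfer function at z:
    C (zI - A)^{-1} B multiplied by det(zI - A)."""
    zIA = z * np.eye(2) - A
    return float((C @ np.linalg.solve(zIA, B))[0, 0] * np.linalg.det(zIA))

# num(z) is degree 1 in z, so interpolate from two points to find its root
n0, n1 = num(0.0), num(1.0)
slope = n1 - n0
zero = -n0 / slope
# zero evaluates to 2.0, outside the unit circle: a nonminimum-phase plant
```

Because the zero lies outside the unit circle, any inversion-like control action excites the inverse response, which is why small peak violations force long violation durations and vice versa.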
As a second example, consider the third-order nonminimum phase system

    A = [ 2  −1.45  0.35 ]        [ 1 ]
        [ 1    0     0   ]    B = [ 0 ]        (39)
        [ 0    1     0   ]        [ 0 ]

    C = [ −1  0  2 ]        (40)

for which the output displays inverse response. The controller tuning parameters are Q = C^T C, R = 1 and N = 20. The input is unconstrained, the output is constrained between −1 and 1, and we perform simulations from the initial condition x_0 = [1.5 1.5 1.5]^T. Figure 5 shows the possible trade-offs that can be achieved by adjusting the quadratic soft-constraint penalty, Z. We also see that the open-loop predictions and nominal closed-loop response are in close agreement for all choices of tuning parameter.

Figure 5: Least squares soft constraint solution. Z = 1, 10, 50 and 100. Solid lines: closed-loop; dashed lines: open-loop predictions at time 0; dotted line: output upper constraint.

3.4 State estimation

We now turn to reconstruction of the state from output measurements. In the model of Equation 12, the nonzero disturbances d and p are employed to give offset free control in the face of nonzero disturbances. The original industrial MPC formulations (GPC, QDMC, IDCOM) were designed for offset free control by using an integrating output disturbance model. The integrating output disturbance model is a standard device in LQG design [12, 25]. Similarly, to track nonzero targets or desired trajectories that asymptotically approach nonzero values, one augments the plant model dynamics with integrators. The disturbances may be modelled at the input, output, or both. Lee, Morari and Garcia [29] discuss the equivalence between the original industrial MPC algorithms and different disturbance model choices. Shinskey [62] provides a nice discussion of the disadvantages of output disturbance models, in the original DMC formulation, compared to input disturbance models.

We set d = 0 and focus for simplicity on the output disturbance model. We augment the state of the system so the estimator produces estimates of both state, x̂, and modelled disturbance, p̂, with the standard Kalman filtering equations. The disturbance may be modelled by passing white noise, ξ_k, through an integrator, or by passing white noise through some other stable linear system (filter) and then through an integrator. The disturbance shaping filter enables the designer to attenuate disturbances with selected frequency content. Bitmead et al. [6] provide a tutorial discussion of these issues in the unconstrained predictive control context.

In the simplest case, the state estimator model takes the form

    x_{j+1} = A x_j + B u_j + w_j        (41)
    p_{j+1} = p_j + ξ_j        (42)
    y_j = C x_j + p_j + v_j        (43)

in which w_j, ξ_j, v_j are the noises driving the process, integrated disturbance, and output measurement, respectively. As shown in Figure 6, we specify the covariances of the zero-mean, normally distributed noise terms, Q_w, Q_ξ and R_v. The optimal state estimate for this model is given by the classic Kalman filter equations [2]. As in standard LQG design, one can tune the estimator by choosing the relative magnitudes of the noises driving the state, integrated disturbance and measured output. Practitioners certainly would prefer tuning parameters more closely tied to closed-loop performance objectives, and more guidance on MPC tuning, in general, remains a valid research objective.

Assembling the components of Sections 3.1-3.4 produces the structure shown in Figure 6. This structure is certainly not the simplest available that accounts for output feedback, nonzero setpoints
put, or some combination. These disturbance mod- and disturbances, and offset free control. Nor is
els are not used in the regulator; the disturbances it the structure found in the dominant commercial
are obviously uncontrollable, and are only required vendor products. It is presented here mainly as a
in the state estimator. The effects of the distur- prototype to display a reasonably flexible means for
bance estimates is to shift the steady-state target of h a n d k g these critical issues. Somethmg similar to
the regulator. Bitmead et al. [6] provide a nice dis- this structure has been implemented by industrial
cussion of the disturbance models popular in GPC. practitioners with success, however, and that imple-

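To make the augmented estimator of Equations 41-43 concrete, the following sketch forms the augmented state (x, p) with an integrating output disturbance and computes the steady-state Kalman gain by iterating the Riccati difference equation. The matrices A, B, C and the noise covariances are illustrative values, not taken from the paper.

```python
import numpy as np

# Illustrative plant (not from the paper)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, m = A.shape[0], C.shape[0]

# Augmented model of Equations 41-43: p_{j+1} = p_j + xi_j, y = Cx + p + v
Aaug = np.block([[A, np.zeros((n, m))],
                 [np.zeros((m, n)), np.eye(m)]])
Caug = np.block([[C, np.eye(m)]])

Qw = np.diag([1e-3, 1e-3, 1e-2])  # cov of (w_j, xi_j); last entry tunes the disturbance
Rv = np.array([[1e-2]])           # measurement noise covariance

# Iterate the Riccati difference equation to the steady-state prior covariance
P = np.eye(n + m)
for _ in range(500):
    P = Aaug @ (P - P @ Caug.T @ np.linalg.solve(
        Caug @ P @ Caug.T + Rv, Caug @ P)) @ Aaug.T + Qw
L = P @ Caug.T @ np.linalg.inv(Caug @ P @ Caug.T + Rv)  # filter gain

def estimate(xhat, u, y):
    """One predict/correct step for the augmented state estimate (x, p)."""
    xpred = Aaug @ xhat + np.vstack([B, np.zeros((m, 1))]) @ u
    return xpred + L @ (y - Caug @ xpred)
```

Tuning the last diagonal entry of Qw against Rv adjusts how aggressively the integrated disturbance estimate, and hence the shifted steady-state target, responds to measured offset.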
Figure 6: MPC controller consisting of receding horizon regulator, state estimator, and target calculator.

4 MPC with Nonlinear Models

What is desirable and what is possible. From the practical side, industrial implementation of MPC with nonlinear models has already been reported [48], so it is certainly possible. But the implementations are largely without any established closed-loop properties, even nominal properties. A lack of supporting theory should not and does not, examining the historical record, discourage experiments in practice with promising new technologies. But if nonlinear MPC is to become widespread in the harsh environment of applications, it must eventually become reasonably reliable, predictable, efficient, and robust against on-line failure.

From the theoretical side, it would be desirable to solve in real time infinite horizon nonlinear optimal control problems of the type

    min_{x_k, u_k}  Φ(x_j) = Σ_{k=0}^{∞} L(x_k, u_k)        (44)

subject to the constraints

    x_{k+1} = f(x_k, u_k),  x_0 = x_j        (45)
    u_k ∈ U,  x_k ∈ X                        (46)

Nonlinear MPC based on this optimal control problem would have the strongest provable closed-loop properties. The concomitant theoretical and computational difficulties associated with this optimal control problem, either off-line but especially on-line, are well known and formidable [33]. The current view of Problem 44 is: desirable, but not possible. In the next two sections, we evince one viewpoint on the current status of bringing these two sides closer together.

4.1 State feedback

As an attempt to solve, approximately, Problem 44, it is natural to try to extend to the nonlinear case the ideas of Section 3.2. In the linear case, we define a region in state space, W, with the following properties:

    W ⊂ X,   KW ⊂ U
    x ∈ W  ⇒  (A + BK)x ∈ W

which tells us that in W the state constraints are satisfied, the input constraints are satisfied under the unconstrained, linear feedback law u = Kx, and once a state enters W, it remains in W under this control law. We can compute the cost to go for x ∈ W; it is x'Πx, in which Π is given in Equation 35. For the linear case, the Gilbert and Tan algorithm provides in many cases the largest set W with these properties, O∞. These are the ingredients of the open-loop optimal control problem.

In the simplest extension to the nonlinear case, consider a region W with the analogous properties:

    W ⊂ X,   KW ⊂ U
    x ∈ W  ⇒  f(x, Kx) ∈ W

The essential difference is that we must ensure that, under the nonlinear model, the state remains in W with the linear control law. Again, for this simplest version, we determine the linear control law by considering the linearization of f at the setpoint.

For the nonlinear case, we cannot easily compute the largest region with these properties, but we can find a finite-sized region with these properties. The main assumption required is that f's partial derivatives are Lipschitz continuous [58] in order to bound the size of the nonlinear effects, and that the linearized system is controllable. Most chemical processes satisfy these assumptions, and with them we can construct the region W as shown in Figure 7. Consider the quadratic function V and level sets of this function, W_α:

    V(x) = x'Πx,   W_α = { x | x'Πx ≤ α }

for α a positive scalar. Define e(x) to be the difference between the state propagation under the nonlinear model and the linearized model, e(x) = f(x, Kx) − (A + BK)x, and e_1(x) to be the difference in V at these two states, e_1(x) = V(f(x, Kx)) − V((A + BK)x). We can show that near the setpoint (origin)

    ‖e(x)‖ ≤ c‖x‖²,   |e_1(x)| ≤ c̃‖x‖³

which bounds the effect of the nonlinearity.
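One simple numerical way to exploit these bounds is to shrink the level α until V decreases along the nonlinear closed-loop map on the boundary of W_α, which (since V decreases) also keeps the state inside W_α. The closed-loop map, the quadratic perturbation, and Π below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative stable linearization Acl = A + BK and a mildly nonlinear map
Acl = np.array([[0.7, 0.2], [0.0, 0.6]])
Q = np.eye(2)

def f_cl(x):
    # nonlinear closed-loop map: linear part plus a quadratic perturbation
    return Acl @ x + 0.1 * np.array([x[0] ** 2, x[0] * x[1]])

# Lyapunov sum Pi = sum_k (Acl')^k Q Acl^k gives the quadratic cost to go
Pi = np.zeros((2, 2))
M = Q.copy()
for _ in range(200):
    Pi += M
    M = Acl.T @ M @ Acl

def V(x):
    return float(x @ Pi @ x)

def alpha_ok(alpha, nsamp=200):
    """Check V(f(x)) < V(x) for samples on the boundary of W_alpha."""
    ok = True
    for th in np.linspace(0.0, 2 * np.pi, nsamp, endpoint=False):
        d = np.array([np.cos(th), np.sin(th)])
        x = d * np.sqrt(alpha / V(d))  # scale direction onto {V = alpha}
        ok &= V(f_cl(x)) < V(x)
    return ok

# Shrink alpha until the decrease condition holds on the sampled boundary
alpha = 10.0
while not alpha_ok(alpha):
    alpha /= 2.0
```

Sampling only verifies the condition at finitely many points; a rigorous choice of α would use the Lipschitz constants above to bound the behavior between samples.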
We can therefore find an α such that the finite horizon control law with terminal constraint and approximate cost to go penalty is stabilizing

    min_{x_k, u_k}  Σ_{k=0}^{N-1} L(x_k, u_k) + V(x_N)        (47)

subject to the constraints

    x_{k+1} = f(x_k, u_k),  x_0 = x_j        (48)
    u_k ∈ U,  x_k ∈ X,  x_N ∈ W_α            (49)

We choose α such that

    max_{x ∈ W_α} { V(F_K(x)) − V(x) + (1/4) x'Qx } ≤ 0,   F_K(x) = f(x, Kx)        (50)

Figure 7: Region W, level sets of x'Πx, and the effect of nonlinearity in a neighborhood of the setpoint.

Figure 8: W regions surrounding the locus of steady-state operating points. Solid line: steady states. Dashed lines: W regions.

Figure 9: Closed-loop response. C_A versus t.

It has also been established that global optimality in Problem 47 is not required for closed-loop stability [58]. Calculation of the W_α region in Problem 50 remains a challenge, particularly when the target calculation and state estimation and disturbance models are added to the problem as in Sections 3.1-3.4. Under those circumstances, W_α, which depends on the current steady target, changes at each sample. It may be possible that some of this computation can be performed off-line, but resolving this computational issue remains a research challenge.

We present a brief example to illustrate these ideas. Consider the simple model presented by Henson and Seborg [21] for a continuously stirred tank reactor (CSTR) undergoing reaction A → B at an unstable steady state:

    dC_A/dt = (q/V)(C_Af − C_A) − k C_A
    dT/dt = (q/V)(T_f − T) + (−ΔH/(ρ C_p)) k C_A + (UA/(V ρ C_p))(T_c − T)
    k = k_0 exp(−E/RT)

Figure 8 displays the W regions computed by solving Problem 50 along the locus of steady-state operating points. For this steady-state operating point, the closed-loop behavior of the states with the MPC control law, Problem 47, is shown in Figures 9-10, and the manipulated variable is displayed in Figure 11. Figure 12 displays a phase-portrait of the two states converging to the setpoint and the terminal region W.

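The CSTR balances can be written down directly. The parameter values below are the ones commonly quoted for the Henson and Seborg example and should be checked against [21] before reuse; with them, the operating point (C_A, T) = (0.5 mol/L, 350 K) at coolant temperature T_c = 300 K is approximately a steady state.

```python
import numpy as np

# Henson-Seborg CSTR, reaction A -> B.  Parameter values are the ones
# commonly quoted for this example (assumed here; verify against [21]).
q, Vr = 100.0, 100.0           # flow (L/min), reactor volume (L)
CAf, Tf = 1.0, 350.0           # feed concentration (mol/L), feed temp (K)
k0, E_over_R = 7.2e10, 8750.0  # rate constant (1/min), activation (K)
dH = -5.0e4                    # heat of reaction (J/mol)
rho, Cp = 1000.0, 0.239        # density (g/L), heat capacity (J/(g K))
UA = 5.0e4                     # heat transfer coefficient (J/(min K))

def cstr_rhs(CA, T, Tc):
    """Right-hand side of the material and energy balances."""
    k = k0 * np.exp(-E_over_R / T)
    dCA = q / Vr * (CAf - CA) - k * CA
    dT = (q / Vr * (Tf - T)
          + (-dH) / (rho * Cp) * k * CA
          + UA / (Vr * rho * Cp) * (Tc - T))
    return dCA, dT

# Residuals at the candidate unstable steady state are near zero
dCA, dT = cstr_rhs(0.5, 350.0, 300.0)
```

Linearizing cstr_rhs at this point gives the (A, B) pair used to compute K and Π for the terminal region calculation.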
Figure 10: Closed-loop response. T versus t.

Figure 12: Closed-loop phase portrait under nonlinear MPC. Region W and closed-loop response T versus C_A.
5 Future Developments

Although this paper is intended as a tutorial, brief consideration of areas of future development may prove useful. The theory for nominal MPC with linear models and constraints is reasonably mature in that nominal properties are established, and efficient computational procedures are available. The role of constraints is reasonably well understood. Applications in the process industries are ubiquitous.
Figure 11: Manipulated variable. T_c versus t.

MPC with nonlinear models. In MPC for nonlinear models, the territory is much less explored. The nonconvexity of the optimal control problems presents theoretical and computational difficulties. The research covered in this tutorial on quasi-infinite horizons and suboptimal MPC provides one avenue for future development [43, 8]. Contractive MPC [45, 46, 69, 36] and exact linearization MPC [15, 24] are two other alternatives that show promise. Mayne et al. [34] and De Nicolao et al. [13] provide recent reviews of this field for further reading.

It is expected, as in the case of linear MPC of the 1970s and 1980s, that these theoretical hurdles will not impede practitioners from experimenting with nonlinear MPC [48, 31]. If the practical benefits of the nonlinear models are demonstrated, research efforts will redouble and likely produce continued improvements in the available theory and faster and more robust computational algorithms for implementation.

Robustness. Robustness to various types of uncertainty and model error is of course an active research area in MPC as well as other areas of automatic control. The difficulty that MPC introduces into the robustness question is the open-loop nature of the optimal control problem and the implicit feedback produced by the receding horizon implementation. Several robust versions of MPC have been introduced that address this issue [71, 18, 42]. Lee and Yu [30] define a dynamic programming problem for the worst case cost. Badgwell [4] appends a set of robustness constraints to the open-loop problem, which ensures robustness for a finite set of plants. Kothare et al. [23] address the feedback issue by optimization over the state feedback gain rather than the open-loop control sequence subject to constraints.

The rapid development of time domain worst case controller design problems as dynamic games (see [3] for an excellent summary) has led to further proposals for robust MPC exploiting this connection to H∞ theory [7, 9, 13]. At this early juncture, on-line computation of many of the robust MPC control laws appears to be a major hurdle for practical application.

Moving horizon estimation. The use of optimization subject to a dynamic model is the underpinning for much of state estimation theory. A moving horizon approximation to a full infinite horizon state estimation problem has been proposed by several researchers [41, 56, 67]. The theoretical properties of this framework are only now emerging [50, 52]. Again, attention should be focused on what key issues of practice can be addressed in this framework that are out of reach with previous approaches. Because moving horizon estimation with linear models produces simple, positive definite quadratic programs, on-line implementation is possible today for many process applications. The use of constraints on states or state disturbances presents intriguing opportunities, but it is not clear what applications benefit from using the extra physical knowledge in the form of constraints. Nonlinear, fundamental models coupled with moving horizon state estimation may start to play a larger role in process operations. State estimation for unmeasured product properties based on fundamental, nonlinear models may have more impact in the short term than closed-loop regulation with these models. State estimation using empirical, nonlinear models is already being used in commercial process monitoring software. Moreover, state estimation is a wide ranging technique for addressing many issues of process operations besides feedback control, such as process monitoring, fault detection and diagnosis.

MPC for hybrid systems. Essentially all processes contain discrete as well as continuous components: on/off valves, switches, logical overrides, and so forth. Slupphaug and Foss [64, 63] and Bemporad and Morari [5] have considered application of MPC in this environment. This problem class is rich with possibilities: for example, the rank ordering rather than softening of output constraints to handle infeasibility. Some of the intriguing questions at this early juncture are: how far can this framework be pushed, are implementation bottlenecks expected from the system modelling or on-line computations, what benefits can be obtained compared to traditional heuristics, and what new problem types can be tackled.

Acknowledgments

I wish to acknowledge Professor Thomas A. Badgwell for numerous helpful discussions of this material during our co-teaching of industrial MPC short courses.

The financial support of the National Science Foundation, through grant CTS-9708497, and the industrial members of the Texas-Wisconsin Modeling and Control Consortium is gratefully acknowledged.

References

[1] F. Allgöwer and A. Zheng, editors. International Symposium on Nonlinear Model Predictive Control, Ascona, Switzerland, 1998.

[2] B. D. O. Anderson and J. B. Moore. Optimal Filtering. Prentice-Hall, Englewood Cliffs, N.J., 1979.

[3] T. Başar and P. Bernhard. H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Birkhäuser, Boston, 1995.

[4] T. A. Badgwell. Robust model predictive control of stable linear systems. Int. J. Control, 68(4):797-818, 1997.

[5] A. Bemporad and M. Morari. Predictive control of constrained hybrid systems. In International Symposium on Nonlinear Model Predictive Control, Ascona, Switzerland, 1998.

[6] R. R. Bitmead, M. Gevers, and V. Wertz. Adaptive Optimal Control, The Thinking Man's GPC. Prentice-Hall, Englewood Cliffs, New Jersey, 1990.

[7] H. Chen. Stability and Robustness Considerations in Nonlinear Model Predictive Control. PhD thesis, University of Stuttgart, 1997.

[8] H. Chen and F. Allgöwer. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica, 34(10):1205-1217, 1998.

[9] H. Chen, C. W. Scherer, and F. Allgöwer. A game theoretic approach to nonlinear robust receding horizon control of constrained systems. In Proceedings of American Control Conference, Albuquerque, NM, pages 3073-3077, 1997.

[10] D. Chmielewski and V. Manousiouthakis. On constrained infinite-time linear quadratic optimal control. Sys. Cont. Let., 29:121-129, 1996.

[11] C. R. Cutler and B. L. Ramaker. Dynamic matrix control - a computer control algorithm. In Proceedings of the Joint Automatic Control Conference, 1980.

[12] E. J. Davison and H. W. Smith. Pole assignment in linear time-invariant multivariable systems with constant disturbances. Automatica, 7:489-498, 1971.

[13] G. De Nicolao, L. Magni, and R. Scattolini. Stability and robustness of nonlinear receding horizon control. In International Symposium on Nonlinear Model Predictive Control, Ascona, Switzerland, 1998.

[14] N. M. C. de Oliveira and L. T. Biegler. Constraint handling and stability properties of model-predictive control. AIChE J., 40(7):1138-1155, 1994.

[15] S. L. de Oliveira, V. Nevistić, and M. Morari. Control of nonlinear systems subject to input constraints. In IFAC Symposium on Nonlinear Control System Design, Tahoe City, California, pages 15-20, 1995.

[16] J. J. Downs. Control strategy design using model predictive control. In Proceedings of the American Control Conference, San Diego, CA, 1999.

[17] R. Fletcher. Practical Methods of Optimization. John Wiley & Sons, New York, 1987.

[18] H. Genceli and M. Nikolaou. Robust stability analysis of constrained l1-norm model predictive control. AIChE J., 39(12):1954-1965, 1993.

[19] E. G. Gilbert and K. T. Tan. Linear systems with state and control constraints: The theory and application of maximal output admissible sets. IEEE Trans. Auto. Cont., 36(9):1008-1020, September 1991.

[20] W. W. Hager. Lipschitz continuity for constrained processes. SIAM J. Cont. Opt., 17(3):321-338, 1979.

[21] M. A. Henson and D. E. Seborg. Nonlinear Process Control. Prentice Hall PTR, Upper Saddle River, New Jersey, 1997.

[22] S. S. Keerthi. Optimal Feedback Control of Discrete-Time Systems with State-Control Constraints and General Cost Functions. PhD thesis, University of Michigan, 1986.

[23] M. V. Kothare, V. Balakrishnan, and M. Morari. Robust constrained model predictive control using linear matrix inequalities. Automatica, 32(10):1361-1379, 1996.

[24] M. J. Kurtz and M. A. Henson. Input-output linearizing control of constrained nonlinear processes. J. Proc. Cont., 7(1):3-17, 1997.

[25] H. Kwakernaak and R. Sivan. Linear Optimal Control Systems. John Wiley and Sons, New York, 1972.

[26] W. H. Kwon. Advances in predictive control: theory and application. In Proceedings of First Asian Control Conference, Tokyo, 1994.

[27] J. H. Lee. Modeling and identification for nonlinear model predictive control: Requirements, current status and future needs. In International Symposium on Nonlinear Model Predictive Control, Ascona, Switzerland, 1998.

[28] J. H. Lee and B. Cooley. Recent advances in model predictive control and other related areas. In J. C. Kantor, C. E. Garcia, and B. Carnahan, editors, Proceedings of Chemical Process Control - V, pages 201-216. CACHE, AIChE, 1997.

[29] J. H. Lee, M. Morari, and C. E. Garcia. State-space interpretation of model predictive control. Automatica, 30(4):707-717, 1994.

[30] J. H. Lee and Z. Yu. Worst-case formulations of model predictive control for systems with bounded parameters. Automatica, 33(5):763-781, 1997.

[31] G. Martin. Nonlinear model predictive control. In Proceedings of the American Control Conference, San Diego, CA, 1999.

[32] D. Q. Mayne. Optimization in model based control. In Proceedings of the IFAC Symposium Dynamics and Control of Chemical Reactors, Distillation Columns and Batch Processes, pages 229-242, Helsingor, Denmark, June 1995.

[33] D. Q. Mayne. Nonlinear model predictive control: An assessment. In J. C. Kantor, C. E. Garcia, and B. Carnahan, editors, Proceedings of Chemical Process Control - V, pages 217-231. CACHE, AIChE, 1997.

[34] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert. Model predictive control: a review. Submitted for publication in Automatica, May 1998.

[35] H. Michalska and D. Q. Mayne. Robust receding horizon control of constrained nonlinear systems. IEEE Trans. Auto. Cont., 38(11):1623-1633, 1993.

[36] M. Morari and S. de Oliveira. Contractive model predictive control for constrained nonlinear systems. IEEE Trans. Auto. Cont., 1998. In press.

[37] M. Morari and J. H. Lee. Model predictive control: past, present and future. In Proceedings of Joint 6th International Symposium on Process Systems Engineering (PSE '97) and 30th European Symposium on Computer Aided Process Systems Engineering (ESCAPE 7), Trondheim, Norway, 1997.

[38] E. Mosca. Optimal, Predictive and Adaptive Control. Information and System Science Series. Prentice Hall, Englewood Cliffs, New Jersey, 1994.

[39] K. R. Muske. Steady-state target optimization in linear model predictive control. In Proceedings of the American Control Conference, pages 3597-3601, Albuquerque, NM, June 1997.

[40] K. R. Muske and J. B. Rawlings. Model predictive control with linear models. AIChE J., 39(2):262-287, 1993.

[41] K. R. Muske, J. B. Rawlings, and J. H. Lee. Receding horizon recursive state estimation. In Proceedings of the 1993 American Control Conference, pages 900-904, June 1993.

[42] G. D. Nicolao, L. Magni, and R. Scattolini. Robust predictive control of systems with uncertain impulse response. Automatica, 32(10):1475-1479, 1996.

[43] T. Parisini and R. Zoppoli. A receding-horizon regulator for nonlinear systems and a neural approximation. Automatica, 31(10):1443-1451, 1995.

[44] R. K. Pearson and B. A. Ogunnaike. Nonlinear process identification. In M. A. Henson and D. E. Seborg, editors, Nonlinear Process Control, pages 11-110. Prentice Hall, 1997.

[45] E. Polak and T. H. Yang. Moving horizon control of linear systems with input saturation and plant uncertainty - part 1: Robustness. Int. J. Control, 58(3):613-638, 1993.

[46] E. Polak and T. H. Yang. Moving horizon control of linear systems with input saturation and plant uncertainty - part 2: Disturbance rejection and tracking. Int. J. Control, 58(3):639-663, 1993.

[47] D. M. Prett and R. D. Gillette. Optimization and constrained multivariable control of a catalytic cracking unit. In Proceedings of the Joint Automatic Control Conference, 1980.

[48] S. J. Qin and T. A. Badgwell. An overview of nonlinear model predictive control applications. In International Symposium on Nonlinear Model Predictive Control, Ascona, Switzerland, 1998.

[49] S. J. Qin and T. A. Badgwell. An overview of industrial model predictive control technology. In J. C. Kantor, C. E. Garcia, and B. Carnahan, editors, Chemical Process Control - V, pages 232-256. CACHE, AIChE, 1997.

[50] C. V. Rao and J. B. Rawlings. Nonlinear moving horizon estimation. In International Symposium on Nonlinear Model Predictive Control, Ascona, Switzerland, 1998.

[51] C. V. Rao and J. B. Rawlings. Steady states and constraints in model predictive control. Submitted for publication in AIChE J., August 1998.

[52] C. V. Rao, J. B. Rawlings, and J. H. Lee. Stability of constrained linear moving horizon estimation. In Proceedings of American Control Conference, San Diego, CA, June 1999.

[53] C. V. Rao, S. J. Wright, and J. B. Rawlings. Efficient interior point methods for model predictive control. Accepted for publication in J. Optim. Theory Appl., June 1997.

[54] J. Richalet, A. Rault, J. L. Testud, and J. Papon. Algorithmic control of industrial processes. In Proceedings of the 4th IFAC Symposium on Identification and System Parameter Estimation, pages 1119-1167, 1976.

[55] N. L. Ricker, T. Subrahmanian, and T. Sim. Case studies of model-predictive control in pulp and paper production. In T. J. McAvoy, Y. Arkun, and E. Zafiriou, editors, Proceedings of the 1988 IFAC Workshop on Model Based Process Control, pages 13-22, Oxford, 1988. Pergamon Press.

[56] D. G. Robertson, J. H. Lee, and J. B. Rawlings. A moving horizon-based approach for least-squares state estimation. AIChE J., 42(8):2209-2224, August 1996.

[57] J. A. Rossiter, M. J. Rice, and B. Kouvaritakis. A robust stable state-space approach to stable predictive control strategies. In Proceedings of the American Control Conference, pages 1640-1641, Albuquerque, NM, June 1997.

[58] P. O. Scokaert, D. Q. Mayne, and J. B. Rawlings. Suboptimal model predictive control. Accepted for publication in IEEE Trans. Auto. Cont., April 1996.

[59] P. O. Scokaert and J. B. Rawlings. On infeasibilities in model predictive control. In J. C. Kantor, C. E. Garcia, and B. Carnahan, editors, Chemical Process Control - V, pages 331-334. CACHE, AIChE, 1997.

[60] P. O. Scokaert and J. B. Rawlings. Constrained linear quadratic regulation. IEEE Trans. Auto. Cont., 43(8):1163-1169, August 1998.

[61] P. O. M. Scokaert and J. B. Rawlings. Infinite horizon linear quadratic control with constraints. In Proceedings of the 13th IFAC World Congress, San Francisco, California, July 1996.

[62] F. G. Shinskey. Feedback Controllers for the Process Industries. McGraw-Hill, New York, 1994.

[63] O. Slupphaug. On robust constrained nonlinear control and hybrid control: BMI and MPC based state-feedback schemes. PhD thesis, Norwegian University of Science and Technology, 1998.

[64] O. Slupphaug and B. A. Foss. Model predictive control for a class of hybrid systems. In Proceedings of European Control Conference, Brussels, Belgium, 1997.

[65] R. Soeterboek. Predictive Control - A Unified Approach. Prentice Hall, 1992.

[66] M. Sznaier and M. J. Damborg. Suboptimal control of linear systems with state and control inequality constraints. In Proceedings of the 26th Conference on Decision and Control, pages 761-762, 1987.

[67] M. L. Tyler and M. Morari. Stability of constrained moving horizon estimation schemes. Preprint AUT96-18, Automatic Control Laboratory, Swiss Federal Institute of Technology, 1996.

[68] S. J. Wright. Applying new optimization algorithms to model predictive control. In J. C. Kantor, C. E. Garcia, and B. Carnahan, editors, Chemical Process Control - V, pages 147-155. CACHE, AIChE, 1997.

[69] Z. Yang and E. Polak. Moving horizon control of nonlinear systems with input saturation, disturbances and plant uncertainty. Int. J. Control, 58:875-903, 1993.

[70] A. Zheng and M. Morari. Stability of model predictive control with mixed constraints. IEEE Trans. Auto. Cont., 40(10):1818-1823, October 1995.

[71] Z. Q. Zheng and M. Morari. Robust stability of constrained model predictive control. In Proceedings of the 1993 American Control Conference, pages 379-383, 1993.

