© 2003 EUCA
The purpose of this paper is twofold. In the first part, we give a review on the current state of nonlinear model predictive control (NMPC). After a brief presentation of the basic principle of predictive control we outline some of the theoretical, computational, and implementational aspects of this control strategy. Most of the theoretical developments in the area of NMPC are based on the assumption that the full state is available for measurement, an assumption that does not hold in the typical practical case. Thus, in the second part of this paper we focus on the output feedback problem in NMPC. After a brief overview on existing output feedback NMPC approaches we derive conditions that guarantee stability of the closed-loop if an NMPC state feedback controller is used together with a full state observer for the recovery of the system state.

Keywords: Nonlinear model predictive control; Output feedback; Performance; Stability

1. Introduction

In many control problems it is desired to design a stabilizing feedback such that a performance criterion is minimized while satisfying constraints on the controls and the states. Ideally one would look for a closed solution for the feedback law satisfying the constraints while optimizing the performance. However, typically the optimal feedback law cannot be found analytically, even in the unconstrained case, since it involves the solution of the corresponding Hamilton-Jacobi-Bellman partial differential equations. One approach to circumvent this problem is the repeated solution of an open-loop optimal control problem for a given state. The first part of the resulting open-loop input signal is implemented and the whole process is repeated (see Section 2). Control approaches using this strategy are referred to as model predictive control (MPC), moving horizon control or receding horizon control.

In general one distinguishes between linear and nonlinear model predictive control (NMPC). Linear MPC refers to a family of MPC schemes in which linear models are used to predict the system dynamics and which consider linear constraints on the states and inputs and a quadratic cost function. Even if the system is linear, the closed-loop dynamics are in general nonlinear due to the presence of constraints. NMPC refers to MPC schemes that are based on nonlinear models and/or consider non-quadratic cost functionals and general nonlinear constraints on the states and inputs.
Correspondence and offprint requests to: R. Findeisen, Institute for Systems Theory in Engineering, University of Stuttgart, 70550 Stuttgart, Germany. E-mail: findeise@ist.uni-stuttgart.de; allgower@ist.uni-stuttgart.de
† E-mail: Lars.Imsland@itk.ntnu.no
‡ E-mail: Bjarne.Foss@itk.ntnu.no
Received 16 April; Accepted 17 April 2003.
Recommended by I. Postlethwaite, J. M. Maciejowski, K. Glover and P. J. Fleming.
State and Output Feedback NMPC 191
Since its first invention in the 70s of the last century, linear MPC has crystallized as one of the key advanced control strategies. By now linear MPC is widely used in industrial applications, especially in the process industry, see for example [38,39,73,77,78]. The practical success is mainly based on the possibility to take constraints on the states and inputs systematically into account while operating the process optimally. Overviews on industrial linear MPC techniques can be found in [77] and [78]. In [78], more than 4500 applications spanning a wide range from chemicals to aerospace industries are reported. By now, linear MPC theory can be considered as quite mature. Important issues such as online computation, the interplay between modeling/identification and control, and system theoretic issues like stability are well addressed [53,69,73,80].

Many systems are, however, inherently nonlinear. Higher product quality specifications and increasing productivity demands, tighter environmental regulations and demanding economical considerations require operating systems over a wide range of operating conditions and often near the boundary of the admissible region. Under these conditions linear models are often not sufficient to describe the process dynamics adequately and nonlinear models must be used. This inadequacy of linear models is one of the motivations for the increasing interest in NMPC.

The purpose of this paper is twofold. In the first part we provide a review on the current state of NMPC. After a presentation of the basic principle of predictive control we present some of the key theoretical, computational and implementational aspects of this control strategy. We furthermore discuss the inherent advantages and disadvantages of NMPC. Note that this part is not intended to provide a complete review of existing NMPC approaches. We mainly focus on NMPC for continuous time systems using sampled measurement information and do not go into details on discrete time NMPC strategies. For further reviews on NMPC we refer to [3,17,22,69,80].

In the second part of the paper the output feedback problem for NMPC is considered. One of the key obstacles of NMPC is that it is inherently a state feedback control scheme using the current state and the system model for prediction. Thus, for an application of predictive control in general the full state information is necessary and must be reconstructed from the available measurement information. However, even if the state feedback NMPC controller and the observer used for the state reconstruction are both stable, there is no guarantee that the overall closed-loop is stable with a reasonable region of attraction, since no general separation principle for nonlinear systems exists. After a review of existing solutions of the output feedback NMPC problem in Section 3.1 we present in Sections 3.2-3.4 a unifying approach for output feedback NMPC that is based on separation principle ideas.

In the following, ‖·‖ denotes the Euclidean vector norm in R^n (where the dimension n follows from context) or the associated induced matrix norm. Vectors are denoted by boldface symbols. Whenever a semicolon ";" occurs in a function argument, the following symbols should be viewed as additional parameters, i.e. f(x; ·) denotes the value of the function f at x for a given value of the additional parameter.

2. State Feedback NMPC

In this section, we provide an overview on the area of state feedback NMPC. Note, however, that we limit the discussion to NMPC for continuous time systems using sampled measurement information. We briefly refer to this as sampled-data NMPC. We do not go into details on NMPC for discrete time systems. However, most of the outlined approaches have dual discrete time counterparts, see for example [3,22,69,80,81].

2.1. The Principle of Predictive Control

Model predictive control is formulated as the repeated solution of a (finite) horizon open-loop optimal control problem subject to system dynamics and input and state constraints. Fig. 1 depicts the basic idea behind MPC. Based on measurements obtained at time t, the controller predicts the dynamic behavior of the system over a prediction horizon Tp in the future

Fig. 1. Principle of MPC.
and determines (typically over a control horizon Tc ≤ Tp) the input such that a predetermined open-loop performance objective is minimized. If there were no disturbances and no model-plant mismatch, and if the optimization problem could be solved over an infinite horizon, then the input signal found at t = 0 could be applied open-loop to the system for all t ≥ 0. However, due to disturbances, model-plant mismatch, and the finite prediction horizon the actual system behavior is different from the predicted one. To incorporate feedback, the optimal open-loop input is implemented only until the next sampling instant. In principle the time between each new optimization, the sampling time, can vary. We assume for simplicity of presentation that it is fixed, i.e. the optimal control problem is re-evaluated after the constant sampling time δ. Using the new system state at time t + δ, the whole procedure, prediction and optimization, is repeated, moving the control and prediction horizon forward.

In Fig. 1, the open-loop optimal input is depicted as a continuous function of time. To allow a numerical solution of the open-loop optimal control problem the input is often parametrized by a finite number of "basis" functions, leading to a finite dimensional optimization problem. In practice, often a piecewise constant input is used, leading to Tc/δ decisions for the input over the control horizon.

Summarizing, a standard NMPC scheme works as follows:

1. Obtain measurements/estimates of the system state at the sampling instant.
2. Solve the open-loop optimal control problem over the prediction horizon, using the system model and the current state for prediction.
3. Implement the first part of the resulting optimal input until the next sampling instant.
4. Continue with 1.

We consider the stabilization of nonlinear systems of the form

dx/dt = f(x(t), u(t)), x(0) = x0,  (1)

subject to the input and state constraints

u(t) ∈ U, ∀t ≥ 0,  (2)
x(t) ∈ X, ∀t ≥ 0.  (3)

With respect to the vector field f : R^n × R^m → R^n we assume that it is locally Lipschitz continuous in the region of interest (typically the region of attraction) and satisfies f(0, 0) = 0. Furthermore, the set U ⊂ R^m is compact, X ⊆ R^n is connected, and (0, 0) ∈ X × U. Typically U and X are (convex) box constraints of the form

U := {u ∈ R^m | umin ≤ u ≤ umax},  (4)
X := {x ∈ R^n | xmin ≤ x ≤ xmax},  (5)

with the constant vectors umin, umax and xmin, xmax.

In sampled-data NMPC an open-loop optimal control problem is solved at discrete sampling instants ti based on the current state information x(ti). Since we consider a constant sampling time δ, the sampling instants ti are given by ti = iδ, i = 0, 1, 2, .... When the time t and ti occur in the same setting, ti should be taken as the closest previous sampling instant ti < t. The open-loop input signal applied in between the sampling instants is given by the solution of the following optimal control problem:

Problem 1. Find

min over ū(·) of J(x(ti), ū(·))

with

J(x(ti), ū(·)) := ∫_{ti}^{ti+Tp} F(x̄(τ), ū(τ)) dτ + E(x̄(ti + Tp)),  (7)

subject to

dx̄/dτ = f(x̄(τ), ū(τ)), x̄(ti) = x(ti),  (6a)
ū(τ) ∈ U, x̄(τ) ∈ X, τ ∈ [ti, ti + Tp],  (6b)
x̄(ti + Tp) ∈ 𝓔,  (6c)
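As a purely illustrative sketch (not from the paper), Problem 1 with a piecewise constant input parametrization and the receding-horizon update of steps 1-4 above can be put together in a few lines. The double-integrator model, horizons, weights and the crude Euler/single-shooting discretization are all assumptions made for the example:

```python
import numpy as np
from scipy.optimize import minimize

def simulate(x0, u_seq, delta):
    """Predict the state trajectory for piecewise constant inputs (Euler steps)."""
    xs, x = [np.array(x0, float)], np.array(x0, float)
    for u in u_seq:
        x = x + delta * np.array([x[1], u])   # assumed model: double integrator
        xs.append(x.copy())
    return xs

def cost(u_seq, x0, delta, Q=np.diag([1.0, 0.1]), R=0.01):
    """Discretized cost ~ integral of F(x,u) = x'Qx + u'Ru plus a crude terminal penalty."""
    xs = simulate(x0, u_seq, delta)
    stage = sum(x @ Q @ x + R * u * u for x, u in zip(xs[:-1], u_seq)) * delta
    return stage + 10.0 * (xs[-1] @ xs[-1])

def solve_ocp(x0, N=10, delta=0.2):
    """Direct single shooting: Tc/delta = N input decisions, box constraint |u| <= 1."""
    res = minimize(cost, np.zeros(N), args=(np.asarray(x0, float), delta),
                   bounds=[(-1.0, 1.0)] * N, method="SLSQP")
    return res.x

def nmpc_closed_loop(x0, n_steps=15, delta=0.2):
    """Receding horizon: apply only the first input, then re-solve at the next instant."""
    x = np.asarray(x0, float)
    for _ in range(n_steps):
        u_openloop = solve_ocp(x, delta=delta)
        x = x + delta * np.array([x[1], u_openloop[0]])  # plant = model (nominal case)
        # at the next sampling instant the whole prediction/optimization is repeated
    return x
```

Running `nmpc_closed_loop([1.0, 0.0])` drives the state toward the origin while respecting the input bound; practical NMPC implementations use multiple shooting or collocation rather than this naive single shooting.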
where the stage cost F : X × U → R is often assumed to be continuous, to satisfy F(0, 0) = 0, and to be lower bounded by a class K function² αF: αF(‖x‖) ≤ F(x, u), ∀(x, u) ∈ X × U. The terminal penalty term E and the so-called terminal region constraint x̄(ti + Tp) ∈ 𝓔 might or might not be present. These are often used to enforce nominal stability (see Section 2.3).

The stage cost can for example arise from economical and ecological considerations. Often, a quadratic form for F is used:

F(x, u) = xᵀQx + uᵀRu,  (8)

with Q ≥ 0 and R > 0.

The state measurement enters the system via the initial condition in (6a) at the sampling instant, i.e. the system model used to predict the future system behavior is initialized by the actual system state. Since all state information is necessary for the prediction, the full state must be either measured or estimated.

The solution of the optimal control problem (6) is denoted by ū*(·; x(ti)). It defines the open-loop input that is applied to the system until the next sampling instant ti+1:

u(t; x(ti)) = ū*(t; x(ti)), t ∈ [ti, ti+1).  (9)

The control u(t; x(ti)) is a feedback, since it is recalculated at each sampling instant using the new state information. In comparison to sampled-data NMPC for continuous time systems, in instantaneous NMPC (also often referred to as receding horizon control) the input is defined by the solution of Problem 1 at all times: u(x(t)) = ū*(t; x(t)), i.e. no open-loop input is applied, see e.g. [68,69].

The solution of (1) starting at time t1 from an initial state x(t1), applying an input u : [t1, t2] → R^m, is denoted by x(τ; u(·), x(t1)), τ ∈ [t1, t2]. We will refer to an admissible input as follows:

Definition 2.1 (Admissible input). An input u : [0, Tp] → R^m for a state x0 is called admissible, if it is: (a) piecewise continuous, (b) u(τ) ∈ U ∀τ ∈ [0, Tp], (c) x(τ; u(·), x0) ∈ X ∀τ ∈ [0, Tp], (d) x(Tp; u(·), x0) ∈ 𝓔.

Furthermore, one refers to the so-called value function as follows:

Definition 2.2 (Value function). The value function V(x) of the open-loop optimal control Problem 1 is defined as the minimal value of the cost for the state x: V(x) = J(x, ū*(·; x)).

The value function plays a central role in the stability analysis of NMPC, since it often serves as a Lyapunov function candidate [3,69].

Typically, no explicit controllability assumption on the system is considered in NMPC. Instead, the stability results are based on the assumption of initial feasibility of the optimal control problem, i.e. the existence of an input trajectory u(·) s.t. all constraints are satisfied.

From a theoretical and practical point of view, one would like to use an infinite prediction and control horizon, i.e. Tp and Tc in Problem 1 are set to ∞. This would lead to a minimization of the cost up to infinity. However, normally the solution of a nonlinear infinite horizon optimal control problem cannot be calculated (sufficiently fast). For this reason finite prediction and control horizons are considered. In this case, the actual closed-loop input and states will differ from the predicted open-loop ones, even if no model-plant mismatch and no disturbances are present. At the sampling instants the future is only predicted over the prediction horizon. At the next sampling instant the prediction horizon moves forward, allowing to obtain more information and thus leading to a mismatch of the trajectories.

The discrepancy between the predicted and the closed-loop trajectories has two immediate consequences. Firstly, the actual goal to compute a feedback such that the performance objective over the infinite horizon of the closed loop is minimized is not achieved. In general it is by no means true that the repeated minimization over a moving finite horizon leads to an optimal solution for the infinite horizon problem. The solutions will often differ significantly if a short finite horizon is chosen. Secondly, there is in general no guarantee that the closed-loop system will be stable. Hence, when using finite prediction horizons special attention is required to guarantee stability (see Section 2.3).

To summarize, the key characteristics and properties of NMPC are:

- NMPC allows the direct use of nonlinear models for prediction.
- NMPC allows the explicit consideration of state and input constraints.
- In NMPC a specified time domain performance criterion is minimized on-line.
- In NMPC the predicted behavior is in general different from the closed-loop behavior.
- For the application of NMPC typically a real-time solution of an open-loop optimal control problem is necessary.
- To perform the prediction the system states must be measured or estimated.

² A continuous function α : [0, ∞) → [0, ∞) is a class K function, if it is strictly increasing and α(0) = 0.
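As a small numerical illustration of the quadratic stage cost (8) and its class K lower bound (the matrices below are arbitrary assumptions, with Q chosen positive definite so that the bound holds):

```python
import numpy as np

def stage_cost(x, u, Q, R):
    """Quadratic stage cost F(x, u) = x'Qx + u'Ru, cf. (8)."""
    x, u = np.atleast_1d(x), np.atleast_1d(u)
    return float(x @ Q @ x + u @ R @ u)

def alpha_F(s, Q):
    """Class K lower bound: with Q > 0, F(x, u) >= lambda_min(Q) * ||x||^2."""
    return float(np.linalg.eigvalsh(Q)[0]) * s**2

Q = np.diag([2.0, 1.0])        # assumed state weights
R = np.array([[0.5]])          # assumed input weight
x = np.array([1.0, -2.0])
u = np.array([0.3])
F = stage_cost(x, u, Q, R)     # 2*1 + 1*4 + 0.5*0.09 = 6.045
```

Here alpha_F(s) = lambda_min(Q)·s² is strictly increasing with alpha_F(0) = 0, hence a class K function, and F(x, u) ≥ alpha_F(‖x‖) holds (6.045 ≥ 5.0 for the values above).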
Basing the applied input on the solution of an optimal control problem that must be solved on-line is advantageous and disadvantageous at the same time. First and foremost, this allows one to directly consider constraints on states and inputs which are often difficult to handle otherwise. Furthermore, the desired cost objective, the constraints and even the system model can in principle be adjusted on-line without making a complete redesign of the controller necessary. However, solving the open-loop optimal control problem, if attacked blindly, can be difficult or even impossible for large systems.

2.3. State Feedback NMPC and Nominal Stability

One of the most important questions in NMPC is whether a finite horizon NMPC strategy does guarantee stability of the closed-loop or not. The key problem with a finite prediction and control horizon is due to the difference between the predicted open-loop and the resulting closed-loop behavior. Ideally, one would seek an NMPC strategy which achieves closed-loop stability independently of the choice of the parameters and, if possible, approximates the infinite horizon NMPC scheme as well as possible. An NMPC strategy that achieves closed-loop stability independently of the choice of the performance parameters is often referred to as an NMPC approach with guaranteed stability. Different approaches to achieve closed-loop stability using finite horizon lengths exist. Here, only some central ideas are reviewed and no detailed proofs are given. Moreover, no attempt is made to cover all existing methods.

Without loss of generality it is assumed that the origin (x = 0 and u = 0) is the steady state to be stabilized.

Infinite Horizon NMPC: Probably the most intuitive way to achieve stability is to use an infinite horizon cost, i.e. Tp in Problem 1 is set to ∞. In this case, the open-loop input and state trajectories computed as the solution of the NMPC optimization Problem 1 at a specific sampling instant are in fact equal to the closed-loop trajectories of the nonlinear system due to Bellman's principle of optimality [7]. Thus, the remaining parts of the trajectories at the next sampling instant are still optimal (end pieces of optimal trajectories are optimal). This also implies convergence of the closed-loop. Detailed derivations can be found in [46,47,68,69].

Finite Horizon NMPC Schemes with Guaranteed Stability: Different possibilities to achieve closed-loop stability using a finite horizon length exist. Most of these approaches modify the standard NMPC formulation such that stability of the closed-loop can be guaranteed independently of the plant and performance specifications. This is usually achieved by adding suitable equality or inequality constraints and suitable additional penalty terms to the standard setup. The additional terms are generally not motivated by physical restrictions or performance requirements but have the sole purpose to enforce stability. Therefore, they are usually called stability constraints.

One possibility to enforce stability with a finite prediction horizon is to add a so-called zero terminal equality constraint at the end of the prediction horizon, i.e.

x̄(ti + Tp) = 0  (10)

is added to Problem 1 [15,47,68]. This leads to stability of the closed-loop, if the optimal control problem has a solution at t = 0. Similar to the infinite horizon case, feasibility at one sampling instant implies feasibility at the following sampling instants and a decrease in the value function. One disadvantage of a zero terminal constraint is that the predicted system state is forced to reach the origin in finite time. This leads to feasibility problems for short prediction/control horizon lengths, i.e. to small regions of attraction. From a computational point of view, an exact satisfaction of a zero terminal equality constraint does require in general an infinite number of iterations in the optimization and is thus not desirable. The main advantages of a zero terminal constraint are the straightforward application and the conceptual simplicity.

Many schemes exist that try to overcome the use of a zero terminal constraint of the form (10). Most of them use the terminal region constraint x̄(ti + Tp) ∈ 𝓔 and/or a terminal penalty term E(x̄(ti + Tp)) to enforce stability and feasibility. Typically the terminal penalty E and the terminal region 𝓔 are determined off-line such that the cost function

J(x(ti), ū(·)) = ∫_{ti}^{ti+Tp} F(x̄(τ), ū(τ)) dτ + E(x̄(ti + Tp))  (11)

gives an upper bound on the infinite horizon cost and guarantees a decrease in the value function as the horizon recedes in time.

We do not go into details about the different approaches. Instead, we state the following theorem, which gives conditions for the convergence of the closed-loop states to the origin. It is a slight modification of the results given in [36] and [16,17].
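One common off-line construction of such a terminal penalty (in the spirit of quasi-infinite horizon NMPC; the matrices below are assumptions for illustration, not taken from this paper) obtains a quadratic E(x) = x'Px from the linearization of (1) together with a local linear control law:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Assumed linearization xdot = A x + B u of the system around the origin.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])        # stage cost weights, cf. (8)
R = np.array([[0.01]])

# Local LQR controller u = -K x, assumed to be used inside the terminal region.
P_are = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P_are)
Acl = A - B @ K                # Hurwitz closed-loop matrix

# P solves Acl' P + P Acl = -(Q + K' R K), so E(x) = x' P x satisfies
# dE/dt + F(x, -K x) = 0 along the linearized closed loop, matching (12).
P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
```

For the linear-quadratic case this P coincides with the Riccati solution; for the nonlinear system a terminal region of the form {x | x'Px ≤ β} is then chosen small enough that the local controller remains admissible and (12) holds.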
Theorem 2.1 (Stability of sampled-data NMPC). Suppose that

(a) the terminal region 𝓔 ⊆ X is closed with 0 ∈ 𝓔 and the terminal penalty E(x) ∈ C¹ is positive semi-definite;

(b) the terminal region and terminal penalty term are chosen such that ∀x ∈ 𝓔 there exists an (admissible) input u𝓔 : [0, δ] → U such that x̄(τ) ∈ 𝓔 for all τ ∈ [0, δ] and

(∂E/∂x) f(x̄(τ), u𝓔(τ)) + F(x̄(τ), u𝓔(τ)) ≤ 0, ∀τ ∈ [0, δ];  (12)

(c) the NMPC open-loop optimal control problem is feasible for t = 0.

Then, in the closed-loop system (1) with (9), x(t) converges to the origin for t → ∞, and the region of attraction R consists of the states for which an admissible input exists.

Proof. The proof is given here for the sake of completeness. It is based on using the value function as a decreasing Lyapunov-like function. As usual in predictive control the proof consists of two parts: in the first part it is established that initial feasibility implies feasibility afterwards. Based on this result it is then shown that the state converges to the origin.

Feasibility: Consider any sampling instant ti for which a solution exists (e.g. t0). In between ti and ti+1 the optimal input ū*(·; x(ti)) is implemented. Since no model-plant mismatch nor disturbances are present, x(ti+1) = x̄(ti+1; ū*(·; x(ti)), x(ti)). The remaining piece of the optimal input, ū*(τ; x(ti)), τ ∈ [ti+1, ti + Tp], satisfies the state and input constraints. Furthermore, x̄(ti + Tp; ū*(·; x(ti)), x(ti)) ∈ 𝓔 and we know from Assumption (b) of the theorem that for all x ∈ 𝓔 there exists at least one input u𝓔(·) that renders 𝓔 invariant over δ. Picking any such input, we obtain as admissible input for any time ti + σ, σ ∈ (0, ti+1 − ti]:

ũ(τ; x(ti + σ)) = ū*(τ; x(ti)) for τ ∈ [ti + σ, ti + Tp],
ũ(τ; x(ti + σ)) = u𝓔(τ − ti − Tp) for τ ∈ (ti + Tp, ti + σ + Tp].  (13)

Specifically, we have for the next sampling time (σ = ti+1 − ti) that ũ(·; x(ti+1)) is a feasible input, hence feasibility at time ti implies feasibility at ti+1. Thus, if (6) is feasible for t = 0, it is feasible for all t ≥ 0. Furthermore, if the states for which an admissible input exists converge to the origin, it follows that every such state belongs to the region of attraction.

Convergence: We first show that the value function is decreasing starting from a sampling instant. Remember that the value of V at x(ti) is given by:

V(x(ti)) = ∫_{ti}^{ti+Tp} F(x̄(τ; ū*(·; x(ti)), x(ti)), ū*(τ; x(ti))) dτ + E(x̄(ti + Tp; ū*(·; x(ti)), x(ti))),  (14)

and the cost resulting from (13), starting from any x(ti + σ; ū*(·; x(ti)), x(ti)), σ ∈ (0, ti+1 − ti], using the input ũ(·; x(ti + σ)), is given by

J(x(ti + σ), ũ(·; x(ti + σ))) = ∫_{ti+σ}^{ti+σ+Tp} F(x̄(τ; ũ(·; x(ti + σ)), x(ti + σ)), ũ(τ; x(ti + σ))) dτ + E(x̄(ti + σ + Tp; ũ(·; x(ti + σ)), x(ti + σ))).  (15)

Reformulation yields

J(x(ti + σ), ũ(·; x(ti + σ))) = V(x(ti)) − ∫_{ti}^{ti+σ} F(x̄(τ; ū*(·; x(ti)), x(ti)), ū*(τ; x(ti))) dτ − E(x̄(ti + Tp; ū*(·; x(ti)), x(ti))) + ∫_{ti+Tp}^{ti+σ+Tp} F(x̄(τ; ũ(·; x(ti + σ)), x(ti + σ)), ũ(τ; x(ti + σ))) dτ + E(x̄(ti + σ + Tp; ũ(·; x(ti + σ)), x(ti + σ))).  (16)

Integrating inequality (12) from ti + Tp to ti + σ + Tp, starting from x̄(ti + Tp), we obtain zero as an upper bound for the last three terms on the right side. Thus,

J(x(ti + σ), ũ(·; x(ti + σ))) ≤ V(x(ti)) − ∫_{ti}^{ti+σ} F(x̄(τ; ū*(·; x(ti)), x(ti)), ū*(τ; x(ti))) dτ.  (17)

Since ũ is feasible but not necessarily the optimal input for x(ti + σ), it follows that

V(x(ti + σ)) ≤ V(x(ti)) − ∫_{ti}^{ti+σ} F(x̄(τ; ū*(·; x(ti)), x(ti)), ū*(τ; x(ti))) dτ,  (18)

i.e. the value function decreases along solution trajectories starting at a sampling instant ti.
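The feasibility argument above has a simple computational counterpart: the candidate (13) is obtained by shifting the previous optimal input sequence and appending the terminal-region controller. A sketch for a piecewise constant parametrization (the array layout and names are assumptions for illustration):

```python
import numpy as np

def shifted_candidate(u_opt, u_terminal, shift):
    """Candidate input in the spirit of (13): drop the first `shift` samples of
    the previous optimal input u* (already applied to the plant) and append
    `shift` samples of the terminal controller u_E, which keeps the state in
    the terminal region. Being feasible, the candidate upper-bounds the new
    optimal cost, which yields the value function decrease (18)."""
    return np.concatenate([u_opt[shift:], u_terminal[:shift]])
```

In warm-started NMPC implementations this shifted sequence also serves as the initial guess for the optimizer at the next sampling instant.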
decrease in the value function is necessary. This can be utilized to decrease the necessary on-line solution time and makes the practical application more robust.

Summarizing, the nominal stability question of NMPC in the state feedback case is well understood. Various NMPC approaches that guarantee stability exist. We do not go into further details here and instead refer to [12,25,67].

Recent studies have shown that using special dynamic optimizers and tailored NMPC schemes allows applying NMPC to practically relevant problems (see e.g. [6,27,32,66,86,35]) even with today's computational power.
controllers, conditions on the observer that guarantee that the closed-loop is semi-globally practically stable. The result is based on the results presented in [34,43], where high-gain observers are used for state recovery. Basically, we exploit that sampled-data predictive controllers that possess a continuous value function are inherently robust to small disturbances, i.e. we will consider the estimation error as a disturbance acting on the closed-loop. Before we derive the approach, we give a brief review of the existing output feedback NMPC approaches.

3.1. Existing Output-Feedback Results

Various researchers have addressed the question of output feedback NMPC using observers for state recovery. We restrict the discussion to output feedback MPC schemes relying on state space models for prediction and differentiate between the two output feedback design approaches as outlined above. The "certainty-equivalence" method is often used in a somewhat ad-hoc manner in industry [78], e.g. based on the (extended) Kalman filter as a state observer. In the presence of a separation principle, this would be a theoretically sound way to achieve a stabilizing output feedback scheme. Unfortunately, a general separation principle does not exist for MPC; even in the case of linear models, the separation principle for linear systems is void due to the presence of constraints. Thus, at the outset, nothing can be said about closed-loop stability in this case, and it seems natural that one has to restrict the class of systems to obtain results. As an example, [91] shows global asymptotic stability for the special case of discrete-time linear open-loop stable systems.

For a more general class of nonlinear systems, it can be shown that the properties of the value function as a Lyapunov function give some robustness of NMPC to "small" estimation errors. For "weakly detectable" discrete-time systems, this was first pointed out in [83] (see also [57,59], and an early version in [74]). However, these results must be interpreted as "local", in the sense that even though an approximated region of attraction can be calculated in principle, it is not clear how parameters in the controller or observer must be tuned to influence the size of the region of attraction.

In [24], local uniform asymptotic stability of contractive NMPC in combination with a "sampled" EKF state estimator is established. Non-local results are obtained in [72], where an optimization based moving horizon observer combined with the NMPC scheme proposed in [71] is shown to lead to (semi-global) closed-loop stability. For the results to hold, however, a global optimization problem for the moving horizon observer with an imposed contraction constraint must be solved.

More recently, "regional" separation principle-based approaches have appeared for a wide class of NMPC schemes. In [43,44], it was shown that, based on the results of [5,85], semi-global practical stability results could be obtained for instantaneous NMPC based on a special class of continuous-time models, using high-gain observers for state estimation. In this context, semi-global practical stability means that for any compact region inside the state feedback NMPC region of attraction, there exists an observer gain such that for system states starting in this region, the closed loop takes the state into any small region containing the origin. The results of [43] are developed further to the more realistic sampled-data case in [33,34], still considering a class (albeit a larger one) of continuous-time systems. In [30], it is pointed out how these results can be seen as a consequence of NMPC state feedback robustness. In [42], conditions are given on the system and the observer for the state to actually converge to the origin.

Related results appeared recently in [1], where for the same system class as considered in [43], semi-global practical stability results are given for sampled-data systems using sampled high-gain observers.

In [89], a scheduled state feedback NMPC scheme is combined with an exponentially convergent observer, and regional stability results are established. On a related note, the same authors show in [88] how an NMPC controller can be combined with a convergent observer to obtain stability, where stability is taken care of off-line.

In the robust design approach the errors in the state estimate are directly accounted for in the state feedback predictive controller. For linear systems, [8] introduces a set membership estimator to obtain quantifiable bounds on the estimation error, which are used in a robust constraint-handling predictive controller. The setup of [8] is taken further in [21], using a more general observer and considering more effective computational methods. For the same class of systems, [56] does joint estimation and control calculation based on a minimax formulation, however without obtaining stability guarantees.

For linear systems with input constraints, the method in [54] obtains stability guarantees through computation of invariant sets for the state vector augmented with the estimation error. In a similar fashion, by constructing invariant sets for the observer error, [50] adapts the NMPC controller in [14] such that the total closed loop is asymptotically stable.
3.2. Output Feedback NMPC with Stability±Setup the state. This in general also implies a discontinuous
value function. Many NMPC schemes, however,
In the following, we present one specific approach to satisfy this assumption at least locally around the
output feedback NMPC. It is based on the fact that origin [18, 20, 69]. Furthermore, for example NMPC
sampled-data predictive controllers that possess a schemes that are based on control Lyapunov func-
continuous value function are inherently robust to tions [45] and that are not subject to constraints on the
small disturbances, i.e. we will consider the estimation states and inputs satisfy Assumption 1.2 and 1.3.
error as a disturbance acting on the closed-loop. This
inherent robustness property of NMPC is closely Remark 3.1. Note that the uniform continuity
connected to recent results on the robustness proper- assumption on V(x) implies that for any compact
ties of discontinuous feedback via sample and hold subset t r there exists a k-function V such that
[48]. However, here we consider the specific case of a for any x1 , x2 2 tkV
x1 V
x2 k V
kx1 x2 k.
sampled-data NMPC controller and we do not We do not state any explicit observability assump-
assume that the applied input is realized via a hold tions, since they depend on the observer used.
element. Concerning the observer used, we assume that after
an initial phase, the observer error at the sampling
Setup: Instead of assuming that the real system state x(t_i) is available at every sampling instant, only a state estimate x̂(t_i) is available. Thus, instead of the optimal feedback (9) the following "disturbed" feedback is applied:

    u(t; x̂(t_i)) = u*(t; x̂(t_i)),   t ∈ [t_i, t_{i+1}).   (25)

Note that the estimated state x̂(t_i) can be outside the region of attraction R of the state feedback NMPC controller. To avoid feasibility problems we assume that in this case the input is fixed to an arbitrary, but bounded value.

The NMPC scheme used for feedback is assumed to fit the setup of Theorem 2.1. Additionally, we make the following assumptions:

Assumption 1. In the nominal region of attraction R ⊆ X ⊆ R^n the following holds:

1. Along solution trajectories starting at a sampling instant t_i at x(t_i) ∈ R, the value function satisfies for all positive τ:

    V(x(t_i + τ)) − V(x(t_i)) ≤ − ∫_{t_i}^{t_i+τ} F(x(s), u(s; x(t_i))) ds.   (26)

2. The value function V(x) is uniformly continuous.
3. For all compact subsets S ⊂ R there is at least one level set Ω_c = {x ∈ R | V(x) ≤ c} s.t. S ⊂ Ω_c.

Following Theorem 2.1, Assumption 1.1 implies stability of the state feedback NMPC scheme (compare with (18) in the proof of Theorem 2.1), and is typically satisfied for stabilizing NMPC schemes. However, in general there is no guarantee that a stabilizing NMPC scheme satisfies Assumptions 1.2 and 1.3, especially if state constraints are present. As is well known [36,40,70], NMPC can also stabilize systems that cannot be stabilized by feedback that is continuous in the state.

Furthermore, we assume that the observer error at the sampling instants can be made sufficiently small, i.e. we assume that:

Assumption 2 (Observer error convergence). For any desired maximum estimation error e_max > 0 there exist observer parameters such that

    ||x(t_i) − x̂(t_i)|| ≤ e_max,   ∀ t_i ≥ k_conv δ.   (27)

Here, k_conv > 0 is a freely chosen, but fixed number of sampling instants after which the observer error has to satisfy (27).

Remark 3.2. Depending on the observer used, further conditions on the system (e.g. observability assumptions) may be necessary. Note that the observer does not have to operate continuously, since the state information is only necessary at the sampling times. Note also that there exists a series of observers which satisfy Assumption 2, see Section 3.4. Examples are high-gain observers and moving horizon observers with contraction constraint.

Since we do not assume that the observer error converges to zero, we can certainly not achieve asymptotic stability of the origin, nor can we render the complete region of attraction of the state feedback controller invariant. Thus, we consider in the following the question whether the system state in the closed loop can be rendered semi-globally practically stable, under the assumption that for any maximum error e_max there exist observer parameters such that (27) holds. In this context, semi-globally practically stable means that for arbitrary level sets Ω_α ⊂ Ω_{c0} ⊂ Ω_c ⊂ R with 0 < α < c0 < c, there exist observer parameters and a maximum sampling time such that for all x(0) ∈ Ω_{c0}: 1. x(t) ∈ Ω_c, ∀ t > 0; 2. ∃ T > 0 s.t. x(t) ∈ Ω_α, ∀ t ≥ T. For clarification see Fig. 2.

Note that in the following we only consider level sets for the desired set of initial conditions (Ω_{c0}), the …
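The receding-horizon mechanism behind the disturbed feedback (25) can be sketched in a few lines: at every sampling instant the open-loop problem is solved for the current state estimate, and only the first input of the optimal sequence is applied. The following toy example is purely illustrative (the scalar model, cost weights, coarse input grid, and the geometrically decaying estimation error are assumptions of this sketch, not constructions from the paper); the "OCP solver" is a crude exhaustive search.

```python
# Illustrative sketch (not from the paper): the disturbed sampled-data
# feedback (25) applies u*(t; x_hat(t_i)) computed from the state ESTIMATE.
# Toy model, cost, input grid and observer-error decay are all assumptions.
import itertools

DT = 0.1  # sampling time delta

def step(x, u):
    """One Euler step of the toy dynamics x_dot = x + u (open-loop unstable)."""
    return x + DT * (x + u)

def stage_cost(x, u):
    return x * x + 0.01 * u * u  # plays the role of F(x, u)

def nmpc_first_input(x_hat, horizon=3, grid=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Crude open-loop 'solver': exhaustive search over an input grid.
    Only the first element of the optimal sequence is returned."""
    best_u0, best_cost = 0.0, float("inf")
    for seq in itertools.product(grid, repeat=horizon):
        x, cost = x_hat, 0.0
        for u in seq:
            cost += stage_cost(x, u)
            x = step(x, u)
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

x, est_error = 1.0, 0.3
for _ in range(30):
    x_hat = x + est_error          # only an estimate is available
    u = nmpc_first_input(x_hat)    # u(t; x_hat(t_i)) = u*(t; x_hat(t_i)), eq. (25)
    x = step(x, u)                 # apply first input over [t_i, t_{i+1})
    est_error *= 0.5               # Assumption 2: error shrinks below any e_max
print(round(x, 3))
```

Consistent with the practical-stability discussion above, the state does not converge to the origin exactly, but it is driven into and kept inside a small neighborhood despite the estimation error.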
200 R. Findeisen et al.
Second part (decrease of the value function after observer convergence and finite time convergence to Ω_{α/2}):

We assume that x(t_i) ∈ Ω_{c1}. For simplicity of notation, u_x̂ denotes the optimal input resulting from x̂(t_i) and u_x denotes the input that corresponds to the real state x(t_i). Furthermore, x_i = x(t_i) and x̂_i = x̂(t_i).

Note that since x_i ∈ Ω_{c1} we know from the derivations in the first part of the proof that x̂_i ∈ Ω_{c2} and that x(τ) ∈ Ω_c, x(τ; x̂_i, u_x̂) ∈ Ω_c, x(τ; x_i, u_x̂) ∈ Ω_c, ∀ τ ∈ [t_i, t_{i+1}). Under these conditions the following equality is valid:

    V(x(τ; x_i, u_x̂)) − V(x_i) = V(x(τ; x_i, u_x̂)) − V(x(τ; x̂_i, u_x̂))
        + V(x(τ; x̂_i, u_x̂)) − V(x̂_i)
        + V(x̂_i) − V(x_i).   (31)

We can bound the last two terms since V is uniformly continuous in compact subsets of R (here Ω_c). Also note that the third and fourth term start from the same x̂_i, and that the first term can be bounded via α_V:

    V(x(τ; x_i, u_x̂)) − V(x_i) ≤ α_V(e^{L_fx(τ−t_i)} ||x̂_i − x_i||)
        − ∫_{t_i}^{τ} F(x(s; x̂_i, u_x̂), u_x̂) ds + α_V(||x̂_i − x_i||).   (32)

Here, we used an upper bound for ||x(τ; x_i, u_x̂) − x(τ; x̂_i, u_x̂)|| based on the Gronwall–Bellman lemma. If we now assume that x_i ∉ Ω_{α/2} and that

    α_V(e_max) ≤ α/4,   (33)

then we know that x̂_i ∉ Ω_{α/4} (the values α/2 and α/4 are chosen for simplicity). Thus we obtain from (32) using Fact 1 that

    V(x(τ; x_i, u_x̂)) − V(x_i) ≤ −V_min(c, α/4, δ)
        + α_V(e^{L_fx δ} ||x̂_i − x_i||) + α_V(||x̂_i − x_i||).   (34)

To guarantee that x is decreasing from sampling instant to sampling instant along the level sets, and to achieve convergence to the set Ω_{α/2} in finite time, we need the right-hand side to be strictly less than zero. One possibility to obtain this is to require that the observer parameters are chosen such that:

    α_V(e^{L_fx δ} ||x̂_i − x_i||) + α_V(||x̂_i − x_i||) − V_min(c, α/4, δ)
        ≤ −(1/2) V_min(c, α/4, δ).   (35)

Thus, if we choose the observer parameters such that

    α_V(e^{L_fx δ} e_max) + α_V(e_max) ≤ (1/2) V_min(c, α/4, δ)  and  α_V(e_max) ≤ α/4,   (36)

we achieve finite time convergence from any point x(t_i) ∈ Ω_{c0} to the set Ω_{α/2}.

Third part (x(t_{i+1}) ∈ Ω_α, ∀ x(t_i) ∈ Ω_{α/2}):

If x(t_i) ∈ Ω_{α/2}, equation (32) is still valid. Skipping the integral contribution on the right we obtain:

    V(x(τ; x_i, u_x̂)) − V(x_i) ≤ α_V(e^{L_fx(τ−t_i)} ||x̂_i − x_i||) + α_V(||x̂_i − x_i||).   (37)

Thus, if we assume that

    α_V(e^{L_fx δ} e_max) + α_V(e_max) ≤ α/2,   (38)

then x(t_{i+1}) ∈ Ω_α, ∀ x(t_i) ∈ Ω_{α/2}. Combining all three steps, we obtain the theorem if

    δ_max ≤ min{T_{c0→c1}/k_conv, T_{c2→c}}   (39)

and if we choose the observer error e_max such that

    α_V(e^{L_fx δ} e_max) + α_V(e_max) ≤ min{(1/2) V_min(c, α/4, δ), α/4, α/2}.   (40)

Remark 3.3. Explicitly designing an observer based on (40) and (39) is in general not possible. However, the theorem underpins that, if the observer error can be decreased sufficiently fast, the closed-loop system state will be semi-globally practically stable.

Theorem 3.1 lays the basis for the design of observer-based output feedback NMPC controllers that achieve semi-global practical stability. While in principle Assumption 2 is difficult to satisfy, a number of observer designs achieve the desired properties, as shown in the next section.

3.4. Output Feedback NMPC with Stability – Possible Observer Designs

Various observers satisfy Assumption 2 and thus allow the design of semi-globally stable output feedback controllers following Theorem 3.1. We will go into some detail for standard high-gain observers [87] and optimization based moving horizon observers with contraction constraint [72]. Note that further observer designs that satisfy the assumptions are, for …
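To make Assumption 2 concrete, the following minimal sketch simulates a textbook two-state high-gain observer in the spirit of [87]. Everything numeric here is an assumption of this example (the toy plant in observability normal form, the gain polynomial coefficients a_1 = 2, a_2 = 1, and the tuning parameter eps): the point is only that shrinking eps drives the estimation error below any prescribed bound e_max within a fixed time, which is exactly what (27) requires.

```python
# Minimal high-gain observer sketch (illustrative assumptions throughout):
# plant in observability normal form x1' = x2, x2' = phi(x), output y = x1.
# The observer copies the model and injects (y - z1) with gains a_i/eps^i;
# decreasing eps pushes the estimation error below any bound (Assumption 2).
import math

def phi(x1, x2):
    return -x1 - x2 - 0.5 * math.sin(x1)   # toy drift term, known to the observer

def estimation_error(eps, t_end=1.0, dt=2e-4):
    x1, x2 = 1.0, 0.0          # true state
    z1, z2 = 0.0, 0.0          # observer state (bad initial guess)
    for _ in range(int(t_end / dt)):
        y = x1                                          # measured output
        dx1, dx2 = x2, phi(x1, x2)                      # plant
        dz1 = z2 + (2.0 / eps) * (y - z1)               # a1 = 2
        dz2 = phi(z1, z2) + (1.0 / eps ** 2) * (y - z1)  # a2 = 1
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2           # Euler steps
        z1, z2 = z1 + dt * dz1, z2 + dt * dz2
    return math.hypot(x1 - z1, x2 - z2)

err_slow = estimation_error(eps=0.2)    # larger eps: slower convergence
err_fast = estimation_error(eps=0.05)   # smaller eps: error far below e_max at t = 1
print(err_fast < err_slow, err_fast < 1e-3)
```

The well-known price of this tuning knob is transient peaking of the estimates for small eps, one reason why the theory above only asks for a bound after k_conv sampling instants rather than from t = 0.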
    B = blockdiag(B_1, . . . , B_p),   B_i = (0 · · · 0 1)^T ∈ R^{r_i × 1}.   (43b)

We use hatted variables for the observer states and variables.
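The block-diagonal structure in (43b), where each block B_i is a column of length r_i with a single 1 in its last entry, can be assembled directly. The helper below is a hypothetical illustration (the function name and the nested-list matrix representation are assumptions of this sketch, not notation from the paper):

```python
def blockdiag_B(r):
    """Build B = blockdiag(B_1, ..., B_p) as in (43b), where B_i = (0 ... 0 1)^T
    has length r_i. Returns an (r_1 + ... + r_p) x p matrix as nested lists."""
    n, p = sum(r), len(r)
    B = [[0.0] * p for _ in range(n)]
    row = 0
    for i, r_i in enumerate(r):
        B[row + r_i - 1][i] = 1.0   # single unit entry at the bottom of block i
        row += r_i
    return B

# Example: two measured outputs with indices r = (2, 3) give a 5 x 2 matrix
B = blockdiag_B((2, 3))
for line in B:
    print(line)
```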
State and Output Feedback NMPC 203
16. Chen H. Stability and Robustness Considerations in Nonlinear Model Predictive Control. Fortschr.-Ber. VDI Reihe 8 Nr. 674. VDI Verlag, Düsseldorf, 1997
17. Chen H, Allgöwer F. Nonlinear model predictive control schemes with guaranteed stability. In: Berber R and Kravaris C (eds). Nonlinear Model Based Process Control, Kluwer Academic Publishers, Dordrecht, 1998, pp 465–494
18. Chen H, Allgöwer F. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica 1998; 34(10): 1205–1218
19. Chen H, Scherer CW, Allgöwer F. A game theoretic approach to nonlinear robust receding horizon control of constrained systems. In: Proceedings of American Control Conference, Albuquerque, 1997, pp 3073–3077
20. Chen W, Ballance DJ, O'Reilly J. Model predictive control of nonlinear systems: Computational delay and stability. IEE Proceedings, Part D 2000; 147(4): 387–394
21. Chisci L, Zappa G. Feasibility in predictive control of constrained linear systems: the output feedback case. Int J Robust Nonlinear Control 2002; 12(5): 465–487
22. De Nicolao G, Magni L, Scattolini R. Stability and robustness of nonlinear receding horizon control. In: Allgöwer F and Zheng A (eds). Nonlinear Predictive Control, Birkhäuser, 2000, pp 3–23
23. de Oliveira NMC, Biegler LT. An extension of Newton-type algorithms for nonlinear process control. Automatica 1995; 31(2): 281–286
24. de Oliveira Kothare S, Morari M. Contractive model predictive control for constrained nonlinear systems. IEEE Trans Autom Control 2000; 45(6): 1053–1071
25. Diehl M, Findeisen R, Nagy Z, Bock HG, Schlöder JP, Allgöwer F. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J Proc Control 2002; 12(4): 577–585
26. Diehl M, Findeisen R, Schwarzkopf S, Uslu I, Allgöwer F, Bock HG, Schlöder JP. An efficient approach for nonlinear model predictive control of large-scale systems. Part I: Description of the methodology. Automatisierungstechnik 2002; 12: 557–567
27. Diehl M, Findeisen R, Schwarzkopf S, Uslu I, Allgöwer F, Bock HG, Schlöder JP. An efficient approach for nonlinear model predictive control of large-scale systems. Part II: Experimental evaluation considering the control of a distillation column. Automatisierungstechnik 2003; 1: 22–29
28. Drakunov S, Utkin V. Sliding mode observers. Tutorial. In: Proceedings of 34th IEEE Conference on Decision and Control, New Orleans, LA, December 1995, pp 3376–3378
29. Engel R, Kreisselmeier G. A continuous-time observer which converges in finite time. IEEE Trans Autom Control 2002; 47(7): 1202–1204
30. Findeisen R. Stability, Computational Efficiency, Robustness, and Output Feedback in Sampled-Data Nonlinear Model Predictive Control. PhD thesis, University of Stuttgart, 2003
31. Findeisen R, Allgöwer F. The quasi-infinite horizon approach to nonlinear model predictive control. In: Zinober A and Owens D (eds). Nonlinear and Adaptive Control, Lecture Notes in Control and Information Sciences, Springer-Verlag, Berlin, 2001, pp 89–105
32. Findeisen R, Diehl M, Uslu I, Schwarzkopf S, Allgöwer F, Bock HG, Schlöder JP, Gilles. Computation and performance assessment of nonlinear model predictive control. In: Proceedings of 41st IEEE Conference on Decision and Control, Las Vegas, USA, 2002
33. Findeisen R, Imsland L, Allgöwer F, Foss BA. Output feedback nonlinear predictive control – a separation principle approach. In: Proceedings of 15th IFAC World Congress, Barcelona, Spain, 2002
34. Findeisen R, Imsland L, Allgöwer F, Foss BA. Output feedback stabilization for constrained systems with nonlinear model predictive control. Int J Robust Nonlinear Control 2003; 13(3–4): 211–227
35. Findeisen R, Nagy Z, Diehl M, Allgöwer F, Bock HG, Schlöder JP. Computational feasibility and performance of nonlinear model predictive control. In: Proceedings of 6th European Control Conference ECC'01, Porto, Portugal, 2001, pp 957–961
36. Fontes FA. A general framework to design stabilizing nonlinear model predictive controllers. Syst Contr Lett 2000; 42(2): 127–143
37. Fontes FA. Discontinuous feedbacks, discontinuous optimal controls, and continuous-time model predictive control. Int J Robust Nonlinear Control 2003; 13(3–4): 191–209
38. Froisy JB. Model predictive control: Past, present and future. ISA Transactions 1994; 33: 235–243
39. García CE, Prett DM, Morari M. Model predictive control: Theory and practice – a survey. Automatica 1989; 25(3): 335–347
40. Grimm G, Messina MJ, Teel AR, Tuna S. Examples when model predictive control is nonrobust. Submitted to Automatica, 2002
41. Grimm G, Messina MJ, Teel AR, Tuna S. Model predictive control: For want of a local control Lyapunov function, all is not lost. Submitted to IEEE Trans Autom Control, 2002
42. Imsland L, Findeisen R, Allgöwer F, Foss BA. Output feedback stabilization with nonlinear predictive control – asymptotic properties. In: Proceedings of American Control Conference, Denver, 2003
43. Imsland L, Findeisen R, Bullinger E, Allgöwer F, Foss BA. A note on stability, robustness and performance of output feedback nonlinear model predictive control. J Proc Control 2003 (to appear)
44. Imsland L, Findeisen R, Bullinger E, Allgöwer F, Foss BA. On output feedback nonlinear model predictive control using high gain observers for a class of systems. In: 6th IFAC Symposium on Dynamics and Control of Process Systems, DYCOPS-6, Jejudo, Korea, 2001, pp 91–96
45. Jadbabaie A, Yu J, Hauser J. Unconstrained receding horizon control of nonlinear systems. IEEE Trans Autom Control 2001; 46(5): 776–783
46. Keerthi SS, Gilbert EG. An existence theorem for discrete-time infinite-horizon optimal control problems. IEEE Trans Autom Control 1985; 30(9): 907–909
47. Keerthi SS, Gilbert EG. Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations. J Opt Theory Appl 1988; 57(2): 265–293
48. Kellett C, Shim H, Teel A. Robustness of discontinuous feedback via sample and hold. In: Proceedings of American Control Conference, Anchorage, 2002, pp 3515–3516
49. Kothare MV, Balakrishnan V, Morari M. Robust constrained model predictive control using linear matrix inequalities. Automatica 1996; 32(10): 1361–1379
50. Kouvaritakis B, Wang W, Lee YI. Observers in nonlinear model-based predictive control. Int J Robust Nonlinear Control 2000; 10(10): 749–761
51. Krener AJ, Isidori A. Linearization by output injection and nonlinear observers. Syst Control Lett 1983; 3: 47–52
52. Lall S, Glover A. A game theoretic approach to moving horizon control. In: Clarke D (ed). Advances in Model-Based Predictive Control. Oxford University Press, 1994
53. Lee JH, Cooley B. Recent advances in model predictive control and other related areas. In: Kantor JC, Garcia CE, and Carnahan B (eds). Fifth International Conference on Chemical Process Control – CPC V, American Institute of Chemical Engineers, 1996, pp 201–216
54. Lee YI, Kouvaritakis B. Receding horizon output feedback control for linear systems with input saturation. IEE Control Theory Appl 2001; 148(2): 109–115
55. Li WC, Biegler LT. Multistep, Newton-type control strategies for constrained nonlinear processes. Chem Eng Res Des 1989; 67: 562–577
56. Löfberg J. Towards joint state estimation and control in minimax MPC. In: Proceedings of 15th IFAC World Congress, Barcelona, Spain, 2002
57. Magni L, De Nicolao G, Scattolini R. Output feedback receding-horizon control of discrete-time nonlinear systems. In: Preprints of the 4th Nonlinear Control Systems Design Symposium 1998 – NOLCOS'98, IFAC, July 1998, pp 422–427
58. Magni L, De Nicolao G, Scattolini R, Allgöwer F. Robust model predictive control for nonlinear discrete-time systems. Int J Robust Nonlinear Control 2003; 13(3–4): 229–246
59. Magni L, De Nicolao G, Scattolini R. Output feedback and tracking of nonlinear systems with model predictive control. Automatica 2001; 37(10): 1601–1607
60. Magni L, De Nicolao G, Scattolini R. A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica 2001; 37(10): 1351–1362
61. Magni L, De Nicolao G, Scattolini R, Allgöwer F. Robust receding horizon control for nonlinear discrete-time systems. In: Proceedings of 15th IFAC World Congress, Barcelona, Spain, 2002
62. Magni L, Nijmeijer H, van der Schaft AJ. A receding-horizon approach to the nonlinear H∞ control problem. Automatica 2001; 37(5): 429–435
63. Magni L, Scattolini R. State-feedback MPC with piecewise constant control for continuous-time systems. In: Proceedings of 41st IEEE Conference on Decision and Control, Las Vegas, USA, 2002
64. Magni L, Sepulchre R. Stability margins of nonlinear receding-horizon control via inverse optimality. Syst Control Lett 1997; 32(4): 241–245
65. Mahadevan R, Doyle III FJ. Efficient optimization approaches to nonlinear model predictive control. Int J Robust Nonlinear Control 2003; 13(3–4): 309–329
66. Martinsen F, Biegler LT, Foss BA. Application of optimization algorithms to nonlinear MPC. In: Proceedings of 15th IFAC World Congress, Barcelona, Spain, 2002
67. Mayne DQ. Optimization in model based control. In: Proceedings of IFAC Symposium on Dynamics and Control of Chemical Reactors, Distillation Columns and Batch Processes, Helsingor, 1995, pp 229–242
68. Mayne DQ, Michalska H. Receding horizon control of nonlinear systems. IEEE Trans Autom Control 1990; 35(7): 814–824
69. Mayne DQ, Rawlings JB, Rao CV, Scokaert POM. Constrained model predictive control: stability and optimality. Automatica 2000; 36(6): 789–814
70. Meadows ES, Henson MA, Eaton JW, Rawlings JB. Receding horizon control and discontinuous state feedback stabilization. Int J Control 1995; 62(5): 1217–1229
71. Michalska H, Mayne DQ. Robust receding horizon control of constrained nonlinear systems. IEEE Trans Autom Control 1993; AC-38(11): 1623–1633
72. Michalska H, Mayne DQ. Moving horizon observers and observer-based control. IEEE Trans Autom Control 1995; 40(6): 995–1006
73. Morari M, Lee JH. Model predictive control: Past, present and future. Comp Chem Eng 1999; 23(4/5): 667–682
74. Muske KR, Meadows ES, Rawlings JB. The stability of constrained receding horizon control with state estimation. In: Proceedings of American Control Conference, Baltimore, 1994
75. Primbs J, Nevistic V, Doyle J. Nonlinear optimal control: A control Lyapunov function and receding horizon perspective. Asian J Control 1999; 1(1): 14–24
76. Pytlak R. Numerical Methods for Optimal Control Problems with State Constraints. Lecture Notes in Mathematics. Springer, Berlin, 1999
77. Qin SJ, Badgwell TA. An overview of nonlinear model predictive control applications. In: Allgöwer F and Zheng A (eds). Nonlinear Predictive Control, Birkhäuser, 2000, pp 369–393
78. Qin SJ, Badgwell TA. A survey of industrial model predictive control technology. Control Engineering Practice 2003; 11(7): 733–764
79. Rao CV, Rawlings JB, Mayne DQ. Constrained state estimation for nonlinear discrete time systems: Stability and moving horizon approximations. IEEE Trans Autom Control 2003; 48(2): 246–258
80. Rawlings JB. Tutorial overview of model predictive control. IEEE Contr Syst Magazine 2000; 20(3): 38–52
81. Rawlings JB, Meadows ES, Muske KR. Nonlinear model predictive control: A tutorial and survey. In: Proceedings of International Symposium on Advances in Control of Chemical Processes, ADCHEM, Kyoto, Japan, 1994
82. Scokaert POM, Mayne DQ, Rawlings JB. Suboptimal model predictive control (feasibility implies stability). IEEE Trans Autom Control 1999; 44(3): 648–654
83. Scokaert POM, Rawlings JB, Meadows ES. Discrete-time stability with perturbations: Application to model predictive control. Automatica 1997; 33(3): 463–470
84. Sznaier M, Suarez R, Cloutier J. Suboptimal control of constrained nonlinear systems via receding horizon control Lyapunov functions. Int J Robust Nonlinear Control 2003; 13(3–4): 247–259
85. Teel A, Praly L. Tools for semiglobal stabilization by partial state and output feedback. SIAM J Control Optimization 1995; 33(5): 1443–1488
86. Tenny MJ, Rawlings JB. Feasible real-time nonlinear model predictive control. In: 6th International Conference on Chemical Process Control – CPC VI, AIChE Symposium Series, 2001
87. Tornambè A. Output feedback stabilization of a class of non-minimum phase nonlinear systems. Syst Control Lett 1992; 19(3): 193–204
88. Wan Z, Kothare MV. Robust output feedback model predictive control using offline linear matrix inequalities. J Process Control 2002; 12(7): 763–774
89. Wan Z, Kothare MV. Efficient stabilizing output feedback model predictive control for constrained nonlinear systems. In: Proceedings of American Control Conference, 2003
90. Wright SJ. Applying new optimization algorithms to model predictive control. In: Kantor JC, Garcia CE, and Carnahan B (eds). Fifth International Conference on Chemical Process Control – CPC V, American Institute of Chemical Engineers, 1996, pp 147–155
91. Zheng A, Morari M. Stability of model predictive control with mixed constraints. IEEE Trans Autom Control 1995; AC-40(10): 1818–1823
92. Zimmer G. State observation by on-line minimization. Int J Control 1994; 60(4): 595–606