

Unreachable Setpoints in Model Predictive Control
James B. Rawlings, Dennis Bonné, John B. Jørgensen,
Aswin N. Venkat, and Sten Bay Jørgensen

Abstract—In this work, a new model predictive controller is developed
that handles unreachable setpoints better than traditional model predictive
control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability
of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing
analysis methods for closed-loop properties of MPC are not applicable to
this new formulation, and a new analysis method is developed. It is shown
how to extend this analysis to terminal penalty MPC. Two examples are presented that demonstrate the advantages of the proposed setpoint-tracking
MPC over the current target-tracking MPC.
Index Terms—Asymptotic stability, constraints, model predictive control.

Manuscript received February 25, 2008; revised April 23, 2008. Current version published October 8, 2008. This work was supported in part by the industrial members of the Texas–Wisconsin Modeling and Control Consortium and the National Science Foundation under Grant CTS-0456694, by the Computer Aided Process and Products Center, Technical University of Denmark, and by the Danish Department of Energy under Grant EFP 273/01-0012. Recommended by Associate Editor M. Fujita.
J. B. Rawlings is with the Department of Chemical and Biological Engineering, University of Wisconsin, Madison, WI 53706-1691 USA (e-mail:
D. Bonné is with the CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, DK 2800 Lyngby, Denmark.
J. B. Jørgensen is with DTU Informatics, Technical University of Denmark, DK 2800 Lyngby, Denmark.
A. N. Venkat is with Shell Global Solutions (U.S.), Inc., Houston, TX 77084 USA.
S. B. Jørgensen is with the CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, DK 2800 Lyngby, Denmark.
Digital Object Identifier 10.1109/TAC.2008.928125

I. INTRODUCTION

Model predictive control (MPC) has been employed extensively for constrained, multivariable control in chemical plants [1]. The main motivation for choosing MPC is to handle constrained systems. Essentially all current MPC theory is predicated on the assumption that the
setpoint has been transformed to a desired and reachable steady-state
target in a separate steady-state optimization [2], [3]. This assumption
allows current theory to address only transient active constraints that
occur when solving the controller’s dynamic optimization to drive the
system to the desired and reachable steady state. In practice, the setpoint often becomes unreachable due to a disturbance and the choice
to transform the problem to one with a desired and reachable steady
state is a significant decision that affects controller performance. We
show in this paper that when the setpoint is unreachable, this two-level
optimization is suboptimal and does not minimize the tracking error.
To improve the performance of MPC when the setpoint is unreachable,
we propose defining system performance relative to the unreachable
setpoint rather than the reachable steady-state target. As we show in
the examples, a consequence of the unreachable setpoint formulation
is a desirable fast/slow asymmetry in the system tracking response that
depends on the system initial condition, speeding up approaches to the
unreachable setpoint, but slowing down departures from the unreachable setpoint. One technical difficulty that arises is that the proposed
MPC cost function is unbounded on the infinite horizon. A more significant problem is that the existing MPC theory for stability and convergence no longer applies because this theory is based on establishing
the controller cost function as a Lyapunov function for the closed-loop
system, but the controller cost function is not strictly decreasing in this case.
Optimal control problems with unbounded cost are not new to control theory. The first studies of this class of problems arose in the economics literature in the 1920s [4], in which the problem of interest was
to determine optimal savings rates to maximize capital accumulation.
Since this problem has no natural final time, it was considered on the
infinite horizon. A flurry of activity in the late 1950s and 1960s led
to generalizations regarding future uncertainty, scarce resources, expanding populations, multiple products and technologies, and many
other economic considerations. Much of this work focused on establishing existence results for optimal control with infinite horizon and
unbounded cost, and the famous "turnpike" theorems [5] that characterize the optimal trajectory. Refs. [6] and [7] provide a comprehensive and readable overview of this research.
This class of problems was transferred to and further generalized in
the control literature. For infinite horizon optimal control of continuous
time systems, [8] established the existence of overtaking optimal trajectories. Convergence of these trajectories to an optimal steady state
is also demonstrated. Ref. [9] extended the results of [8] to infinite
horizon control of discrete time systems. Reduction of the unbounded
cost, infinite horizon optimal control problem to an equivalent optimization problem with finite costs is established. Ref. [10] provides a
comprehensive overview of these infinite horizon results.
For feedback control subject to hard input constraints, we often
employ a receding horizon implementation of a suitably chosen finite
horizon optimal control problem. The controller is implemented by
solving the open-loop control problem at each sample time as the state
becomes available and injecting only the first move of the open-loop
optimal solution. Ref. [11] provides a review of the methods for analyzing the resulting closed-loop behavior of these controllers. In this
formulation, the existence of the optimal control is not at issue because
the horizon is finite and the cost therefore bounded, but the stability
and convergence of the resulting receding horizon implementation of
the control law is the interesting question. In this paper, we address
the closed-loop properties of this receding horizon implementation for
the case of unreachable setpoints. In the formulation introduced here,


the stage cost of the optimal control problem measures distance from the setpoint even if the setpoint is unreachable at steady state due to the input constraints.

The paper is organized as follows. In Section II, basic results required to establish properties of the new controllers are given. The new controller, termed setpoint tracking MPC, is defined in Section III, and convergence and nominal asymptotic stability are established for the closed-loop system. Two examples demonstrating the benefit of the proposed setpoint tracking MPC over current approaches are presented in Section IV. Extending the asymptotic stability result to terminal penalty MPC is discussed in Section V, and conclusions of this study are presented in Section VI.

II. PRELIMINARIES

Denote the non-negative reals by R+, and denote a closed ball of radius b centered at a ∈ R^n by

  B_b(a) = {x | ‖x − a‖ ≤ b}.

The system model is assumed linear and time invariant

  x(k+1) = Ax(k) + Bu(k)    (2)

in which x ∈ R^n and u ∈ R^m. We assume the input constraints define a nonempty polytope in R^m

  U = {u | Hu ≤ h}.

In this paper we employ the popular choice of a quadratic measure of distance from the setpoint

  L(x, u) = (1/2)[(x − x_sp)′Q(x − x_sp) + (u − u_sp)′R(u − u_sp)],  Q > 0, R > 0.

We assume the model and constraints admit a steady state, i.e., the steady-state problem is feasible. If A has no eigenvalues at unity, then any u ∈ U admits a steady state. If A has eigenvalues at unity, then U containing an element of the null space of B is sufficient for a steady state to exist.

Definition 1 (Optimal Steady State): The optimal steady state, denoted (x*, u*), is the solution to the following optimization problem

  min_{x,u} L(x, u)  subject to  x = Ax + Bu,  u ∈ U.

Remark 2: Because L is strictly convex, (x*, u*) is unique according to Lemma 2.

Remark 3: Positive definiteness of Q and R implies that L(x, u) is strictly convex and nonnegative and vanishes only at the setpoint, L(x_sp, u_sp) = 0.

Lemma 1 (An Equality for Quadratic Functions): Let L(x) = (1/2)x′Qx + q′x + c, Q > 0, be a strictly convex quadratic function of x ∈ R^n, and consider a sequence {x(i)} with mean x̄_P = (1/P) Σ_{i=1}^{P} x(i). Then the following holds:

  Σ_{i=1}^{P} L(x(i)) = P L(x̄_P) + (1/2) Σ_{i=1}^{P} ‖x(i) − x̄_P‖²_Q.    (1)

Remark 1: Since ‖x(i) − x̄_P‖²_Q ≥ 0, Jensen's inequality for this quadratic function follows from this result by changing the equality to an inequality and dividing by P; equality is achieved if and only if the x(i), i = 1, ..., P, are all equal. Knowing the positive difference (1/P) Σ_i L(x(i)) − L(x̄_P) (or a positive lower bound on this difference) proves useful in the sequel.

Lemma 2 (Basic Convex Optimization Result): Given L(x), a strictly convex quadratic function of x ∈ R^n, and the optimization problem

  min_x L(x)  subject to  x ∈ X

with X nonempty, convex, and closed (not necessarily bounded):
1) The solution x* exists and is unique.
2) If x(k) is a feasible sequence (x(k) ∈ X, L(x(k)) bounded for all k) with L(x(k)) → L(x*), then x(k) → x*.
A proof is provided in [12, pp. 72–73].

Lemma 3 (Lipschitz Continuity): Given L(x), a strictly convex quadratic function of x ∈ R^n, and the optimization problem

  min_x L(x)  subject to  Ax = p

in which the (convex) feasible set for the constraints Ax = p is assumed nonempty for p ∈ B_b(0), a set which contains the point p = 0. The optimal solution x*(p) and cost are uniquely defined and Lipschitz continuous on the set B_b(0)

  ‖L(x*(p)) − L(x*(0))‖ ≤ K_L ‖p‖,  ‖x*(p) − x*(0)‖ ≤ K_x ‖p‖

in which K_L, K_x ≥ 0 are the Lipschitz constants. A more general version of this lemma is stated and proved in [13].

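The identity in Lemma 1 is easy to confirm numerically. The sketch below uses made-up data (a random positive definite Q, random points x(i)); it only checks that the sum of quadratic costs equals P times the cost of the mean plus half the Q-weighted spread about the mean, as in (1).

```python
import numpy as np

# Numerical check of Lemma 1: for L(x) = 1/2 x'Qx + q'x + c with Q > 0,
# sum_i L(x(i)) = P*L(xbar) + 1/2 sum_i ||x(i) - xbar||_Q^2.
rng = np.random.default_rng(1)
n, P = 3, 50
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # positive definite weight
q = rng.standard_normal(n)
c = 0.7

def L(x):
    return 0.5 * x @ Q @ x + q @ x + c

X = rng.standard_normal((P, n))      # the sequence x(1), ..., x(P)
xbar = X.mean(axis=0)

lhs = sum(L(x) for x in X)
rhs = P * L(xbar) + 0.5 * sum((x - xbar) @ Q @ (x - xbar) for x in X)
print(abs(lhs - rhs) < 1e-8)
```

The linear and constant terms cancel exactly between the two sides, which is why only the Q-weighted spread survives; this is the "positive difference" exploited in the convergence proofs below.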
III. TERMINAL CONSTRAINT MPC

A. Terminal Constraint MPC

For simplicity of exposition, a terminal state equality constraint is employed. Define the controller cost function

  Φ(x, u) = Σ_{k=0}^{N−1} L(x(k), u(k)).

For notational simplicity, we define u = {u(i)}_{i=0}^{N−1}, and we express the MPC control problem as

  min_u Φ(x, u)    (3a)
  subject to
  x(0) = x    (3b)
  x(k+1) = Ax(k) + Bu(k),  k = 0, ..., N − 1    (3c)
  x(N) = x*    (3d)
  u(k) ∈ U,  k = 0, ..., N − 1.    (3e)

Definition 2 (Steerable Set): The steerable set S_N is the set of states steerable to x* in N steps

  S_N = {x | x* = A^N x + A^{N−1} B u(0) + ··· + B u(N−1),  u(j) ∈ U,  j = 0, ..., N − 1}.

We denote the optimal input sequence by u⁰(x) = {u⁰(0; x), u⁰(1; x), ..., u⁰(N−1; x)}, and we denote the optimal cost by Φ⁰(x). The MPC feedback law is the first move of this optimal sequence, which we denote as u⁰(x) = u⁰(0; x). The closed-loop system is given by

  x(k+1) = Ax(k) + Bu⁰(x(k))

with u(k) = u⁰(x(k)), so x(k+1) = f(x(k)), f(·) = A(·) + Bu⁰(·).

Lemma 4: The optimal steady state is a fixed point of the closed-loop system

  f(x*) = Ax* + Bu⁰(x*) = x*.

Proof: For x(0) = x* in the MPC control problem, both the initial and terminal states are equal to x*. Strict convexity of the cost function implies the entire trajectory remains at x*, so u⁰(x*) = u*.

Remark 4: For nonsingular A, the steerable set is a nonempty, bounded, convex subset of R^n (hence compact). The set is nonempty because it contains x*. The set is bounded because N is finite and U is bounded. All closed-loop sequences are therefore bounded because the set S_N is positive invariant under the MPC control law (this positive invariance is a well-known characteristic of terminal constraint MPC; for those unfamiliar with MPC, it is also established later in the proof of Theorem 1). For singular A, the steerable set is not bounded.

Lemma 5: For singular A, there exists 0 ≤ r ≤ n + N such that a closed-loop sequence x(k) under terminal constraint MPC starting in the steerable set remains in a nonempty, bounded, convex subset of R^n for times k ≥ r. The following lemma, however, shows that the closed-loop sequence starting in the (unbounded) steerable set evolves in a bounded set.

Lemma 6: Every closed-loop sequence has at least one accumulation point.

Proof: Remark 4 and Lemma 5 ensure that every closed-loop sequence x(k) is bounded after finite k, and the Bolzano–Weierstrass property ensures that every infinite sequence in a closed, bounded subset of R^n has at least one accumulation point [14].

Let {x(k)} be a closed-loop sequence with an accumulation point a. Because a is an accumulation point of the closed-loop sequence, for every ε > 0 we can choose E and P large enough so that w(1), ..., w(P) ∈ B_ε(a), as depicted in Fig. 1. Say the closed-loop sequence enters B_ε(a) at time index E + 1, and define the sequence {w(k)} to be a portion of the closed-loop sequence, w(k) = x(k + E), k = 1, ..., P.

Fig. 1. Closed-loop MPC sequence with accumulation point a. Two forecasts are shown: Trajectory 1 is the truncation of the optimal trajectory starting from point (1), and Trajectory 2 is the complete optimal trajectory from point (2).

Define the means over the closed-loop sequence to be

  w̄_P = (1/P) Σ_{i=1}^{P} w(i),  ū_P = (1/P) Σ_{i=1}^{P} u⁰(w(i)).

Note also that the mean state and input satisfy

  w̄_P = A w̄_P + B ū_P + (1/P)(w(1) − w(P+1)).    (4)

Now consider the MPC optimal open-loop trajectory from each w(k) for k = 1, ..., P. We wish to compare the optimal costs for two succeeding states w(k) and w(k+1). For w(k), the cost is Φ(w(k), u⁰(w(k))). For state w(k+1), consider first the feasible but possibly suboptimal sequence

  ũ(w) = {u⁰(1; w), u⁰(2; w), ..., u⁰(N−1; w), u*}

which is created by dropping the first input u⁰(0; w), shifting all the other inputs forward in the sequence, and appending u* for the final input. This input sequence evaluated at state w(k) is admissible for state w(k+1) because it satisfies the model, terminal constraint, and input constraints (this admissibility implies the positive invariance of the MPC control law mentioned previously). Its cost from state w(k+1) is directly related to the optimal cost from w(k) by

  Φ(w(k+1), ũ(w(k))) = Φ(w(k), u⁰(w(k))) − L(w(k), u⁰(w(k))) + L(x*, u*).

Using the optimal sequence u⁰(w(k+1)) in place of ũ(w(k)) in this equation produces the inequality

  Φ(w(k+1), u⁰(w(k+1))) ≤ Φ(w(k), u⁰(w(k))) − L(w(k), u⁰(w(k))) + L(x*, u*),  k = 1, ..., P.    (5)

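The terminal constraint controller (3) and its receding-horizon implementation can be sketched on a toy problem. Everything below is hypothetical: a scalar system and an input bound chosen so that the setpoint x_sp = 1 is unreachable at steady state, with a generic NLP solver standing in for the QP solver one would use in practice.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar system x+ = a*x + b*u with |u| <= u_max, chosen so
# that holding the setpoint x_sp = 1 would require u_sp = 0.1 > u_max.
a, b = 0.9, 1.0
x_sp, u_sp = 1.0, 0.1
Q, R = 1.0, 0.1
u_max = 0.05
N = 10

# Optimal steady state (Definition 1): steady states satisfy x = b*u/(1-a),
# and the cost decreases in u up to u_sp, so the input bound is active.
u_star = u_max
x_star = b * u_star / (1.0 - a)      # = 0.5

def stage_cost(x, u):
    # L(x, u): distance from the (unreachable) setpoint, as in sp-MPC
    return 0.5 * (Q * (x - x_sp) ** 2 + R * (u - u_sp) ** 2)

def rollout(x0, useq):
    xs = [x0]
    for u in useq:
        xs.append(a * xs[-1] + b * u)
    return np.array(xs)

def mpc_first_move(x0):
    # Problem (3): N-stage cost, terminal constraint x(N) = x*, input bounds.
    def cost(useq):
        xs = rollout(x0, useq)
        return sum(stage_cost(xs[k], useq[k]) for k in range(N))
    terminal = {"type": "eq",
                "fun": lambda useq: rollout(x0, useq)[-1] - x_star}
    sol = minimize(cost, np.zeros(N), method="SLSQP",
                   bounds=[(-u_max, u_max)] * N, constraints=[terminal])
    return sol.x[0]                  # inject only the first move

# Receding-horizon simulation starting above the setpoint, in the steerable set.
x = 1.0
for _ in range(60):
    x = a * x + b * mpc_first_move(x)
# x has converged near the optimal steady state x* = 0.5
```

Because the bound is active at the optimal steady state, the closed loop departs from the setpoint slowly (the input rides its limit), illustrating the fast/slow asymmetry discussed in the introduction.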
It should be noted that in standard MPC with a reachable steady-state target, the term L(x*, u*) is zero. That leads immediately to a cost decrease from w(k) to w(k+1), and the cost function is a Lyapunov function for the closed-loop system. With the unreachable setpoint, the situation is completely different. The term −L(w(k), u⁰(w(k))) + L(x*, u*) changes sign with k on typical closed-loop trajectories. The cost decrease is lost, and Φ⁰ is not a Lyapunov function for the closed-loop system. We next prove asymptotic stability for model predictive control with unreachable setpoints by other means.

Lemma 7: Every closed-loop sequence converges to the optimal steady state.

Proof: Any initial state in the steerable set defines an infinite closed-loop sequence in S_N through the iteration x(k+1) = f(x(k)). Let a be an accumulation point of the closed-loop sequence, and for every ε > 0 choose E and P large enough so that w(1), ..., w(P) ∈ B_ε(a). Denote the difference in the optimal costs between points w(1) and w(P) by d_P

  d_P = Φ⁰(w(1)) − Φ⁰(w(P)).    (6)

Because the optimal cost is continuous in the state, d_P goes to zero with ε. Add the inequalities in (5) to obtain

  Σ_{i=1}^{P} L(w(i), u⁰(w(i))) ≤ P L(x*, u*) + d_P + L(w(P), u⁰(w(P))) − L(x*, u*).

Next apply Lemma 1 to the variable (w(i), u⁰(w(i))) to obtain

  (1/2) Σ_{i=1}^{P} [‖w(i) − w̄_P‖²_Q + ‖u⁰(w(i)) − ū_P‖²_R]
    ≤ P (L(x*, u*) − L(w̄_P, ū_P)) + d_P + L(w(P), u⁰(w(P))) − L(x*, u*).    (7)

Next consider optimizing subject to the constraint given by (4)

  min_{x,u} L(x, u)  subject to  x = Ax + Bu + p,  p = (1/P)(w(1) − w(P+1))

and denote this solution as (x*(p), u*(p)). From its construction we know L(x*(p), u*(p)) ≤ L(w̄_P, ū_P). Applying Lipschitz continuity, Lemma 3, gives

  (1/2) Σ_{i=1}^{P} [‖w(i) − w̄_P‖²_Q + ‖u⁰(w(i)) − ū_P‖²_R]
    ≤ K_L ‖w(1) − w(P+1)‖ + d_P + L(w(P), u⁰(w(P))) − L(x*, u*).    (8)

We can choose an increasing sequence of P for which the right-hand side is bounded above. Therefore the sequence (w(i), u⁰(w(i))) converges to its mean (w̄_P, ū_P), and, from (4), (w̄_P, ū_P) converges to the set of steady states. Next, we show (w̄_P, ū_P) converges to (x*, u*). If it does not converge to (x*, u*), then, by optimality, L(w̄_P, ū_P) achieves values greater than L(x*, u*) for infinitely many P. Equation (7) then provides a contradiction because the right-hand side goes to −∞ for a suitable subsequence with increasing P and the left-hand side is nonnegative. Therefore (w(i), u⁰(w(i))) converges to (x*, u*), every accumulation point is the optimal steady state, and the closed-loop sequence converges to x*.

Theorem 1 (Asymptotic Stability of Terminal Constraint MPC): The optimal steady state is the asymptotically stable solution of the closed-loop system under terminal constraint MPC. Its region of attraction is the steerable set.

Proof: 1) Lyapunov Stability: Let S(k, x₀) denote the solution of x(k+1) = f(x(k)) at time k with initial condition x(0) = x₀ ∈ S_N. We must show that for every ε > 0 there exists δ > 0 such that S(k, x₀) ∈ B̄_ε(x*) for all k ≥ 0 for all x₀ ∈ B̄_δ(x*), in which B̄_δ(x*) = B_δ(x*) ∩ S_N, and we take the intersection in case S_N does not contain a full neighborhood of x*. Assume, contrary to what is to be proven, that there exists ε > 0 such that for every δ > 0, ‖S(k, x₀) − x*‖ ≥ ε for some finite k for some x₀ ∈ B̄_δ(x*). We establish a contradiction. Consider an ε′ > 0 and x₀ ∈ B̄_ε′(x*). Choose finite k̄ that depends on x₀ so that S(k, x₀) ∈ B̄_ε′(x*) for all k ≥ k̄(x₀). We know such a finite k̄ exists for every x₀ ∈ S_N since the sequence S(k, x₀) converges to x* (Lemma 7). By assumption, ‖S(i, x₀) − x*‖ ≥ ε for some 0 < i < k̄ (denote one of these i as ī). This setting is the same as depicted in Fig. 1 (with a = x*), and, with w(k) = S(k − 1, x₀) so that w(1) = x₀, we have the same inequality derived previously in (8)

  (1/2) Σ_{i=1}^{P} [‖w(i) − w̄_P‖²_Q + ‖u⁰(w(i)) − ū_P‖²_R]
    ≤ K_L ‖w(1) − w(P+1)‖ + d_P + L(w(P), u⁰(w(P))) − L(x*, u*).    (9)

Since ‖w(1) − x*‖ ≤ ε′ and ‖w(ī + 1) − x*‖ ≥ ε, we have the following lower bound on the sum of the two distances from the mean

  ‖w(1) − w̄_P‖² + ‖w(ī + 1) − w̄_P‖² ≥ (1/2)(ε − ε′)².

Since Q > 0, there exists b > 0 such that ‖x‖²_Q ≥ b‖x‖² for all x, and we have established a lower bound on the sum

  Σ_{i=1}^{P} [‖w(i) − w̄_P‖²_Q + ‖u⁰(w(i)) − ū_P‖²_R] ≥ (b/2)(ε − ε′)².

Substitution into (9) gives

  (b/4)(ε − ε′)² ≤ K_L ‖w(1) − w(P+1)‖ + d_P + L(w(P), u⁰(w(P))) − L(x*, u*).

Taking the limit as ε′ → 0, the left-hand side converges to bε²/4 > 0 and the right-hand side converges to zero, and we have established a contradiction. Therefore, for every ε > 0 there exists δ > 0 such that S(k, x₀) ∈ B̄_ε(x*) for all k ≥ 0 for all x₀ ∈ B̄_δ(x*), and we have established Lyapunov stability.

2) Convergence: Convergence of x(k) to x* for x₀ in the steerable set is established in Lemma 7. Convergence and Lyapunov stability imply asymptotic stability, and the theorem is proved.

IV. EXAMPLES

Two examples are presented to illustrate the advantages of the proposed setpoint tracking MPC framework (sp-MPC) compared to traditional target tracking MPC (targ-MPC). The regulator cost function for the new controller, sp-MPC, is

  Φ_sp-MPC = (1/2) Σ_{j=0}^{N−1} [‖x(j) − x_sp‖²_Q + ‖u(j) − u_sp‖²_R + ‖u(j+1) − u(j)‖²_S]    (10)

with Q > 0, R, S ≥ 0, and at least one of R, S > 0. The cost function in traditional target-tracking MPC (targ-MPC) is

  Φ_targ-MPC = (1/2) Σ_{j=0}^{N−1} [‖x(j) − x*‖²_Q + ‖u(j) − u*‖²_R + ‖u(j+1) − u(j)‖²_S].    (11)

The regulator can be put in the standard form defined for terminal constraint MPC by augmenting the state as x̃(k) = [x(k), u(k − 1)]′ [3]. In the examples, the controller performance is assessed using the following three closed-loop control performance measures

  Φ_u(k) = (1/(kT)) (1/2) Σ_{j=0}^{k} [‖u(j) − u_sp‖²_R + ‖u(j+1) − u(j)‖²_S]
  Φ_y(k) = (1/(kT)) (1/2) Σ_{j=0}^{k} ‖x(j) − x_sp‖²_Q
  Φ(k) = Φ_u(k) + Φ_y(k)

in which T is the process sample time. Since the control objective is to be close to the setpoint, we define the percentage improvement of sp-MPC compared to targ-MPC by

  ΔΦ% = (Φ_targ-MPC − Φ_sp-MPC) / Φ_targ-MPC × 100.

1) Example 1: The first example is a single input–single output system with a second-order transfer function G(s), sampled with sample time T; a horizon length of N is used. The input u is constrained between upper and lower saturation limits. The desired output setpoint y_sp corresponds to a steady-state input u_sp. The state and measurement evolution of the system are corrupted by random noise. Between times 50–130, 200–270, and 360–430, a zero-mean square-wave state disturbance d_x causes the input to saturate at its lower limit. The output setpoint is unreachable under the influence of this state disturbance d_x, which also affects the state evolution of the system. The closed-loop performance of sp-MPC and targ-MPC under the described disturbance scenario is shown in Fig. 2, and the closed-loop performance of the two control formulations is compared in Table I.

Fig. 2. Closed-loop performance of sp-MPC and targ-MPC (Example 1).

TABLE I. COMPARISON OF CONTROLLER PERFORMANCE (EXAMPLE 1)

2) Comments: In the targ-MPC framework, the controller tries to reject the state disturbance and minimize the deviation from the new steady-state target. This requires large, undesirable control action that forces the input to move between the upper and lower limits of operation. The sp-MPC framework, on the other hand, attempts to minimize the deviation from setpoint, and subsequently the input just rides the lower limit input constraint. The greater cost of control action in targ-MPC is shown by the cost index Φ_u in Table I; the cost of control action in targ-MPC exceeds that of sp-MPC by nearly 100%. The control in targ-MPC causes the output of the system to move away from the (unreachable) setpoint faster than the corresponding output of sp-MPC. The benefit here is that the sp-MPC controller slows down the departure from setpoint, but speeds up the approach to setpoint. The traditional targ-MPC can be tuned to be fast or slow through the relative choice of tuning parameters Q and R, but it is fast or slow from all initial conditions, some of which lead to an approach to setpoint but others of which lead to a departure from setpoint. This undesirable behavior is eliminated by sp-MPC.

3) Example 2: The second example is the 2 input–2 output system

  G(s) = [ 1.5/((s+2)(s+1))    0.5/((s+0.5)(s+1))
           0.75/((s+5)(s+2))   2/((s+2)(s+3)) ].    (13)

The system is sampled at the rate T = 0.25 s. The inputs u1 and u2 are constrained between lower and upper saturation limits. The state and measurement evolution of the system are corrupted by random noise; the state and measurement noise covariances are Q_x = 10⁻⁴ I₆ and R_v = 10⁻⁴ I₂. The desired output setpoint is y_sp, which corresponds to a steady-state input u_sp.
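The closed-loop performance indices used in these comparisons can be computed directly from logged trajectories. The sketch below is illustrative only: the trajectories, weights, and sample time are made up, and only the formulas for the input cost, output cost, and percentage improvement mirror the definitions above.

```python
import numpy as np

# Closed-loop performance indices: Phi_u penalizes input offset from u_sp
# plus input movement, Phi_y penalizes output offset from x_sp, and the
# percentage improvement compares two controllers' totals.
T = 1.0                       # process sample time (illustrative)
x_sp, u_sp = 1.0, 0.1
Qw, Rw, Sw = 1.0, 0.1, 0.1    # scalar weights for this sketch

def total_index(x, u):
    k = len(u) - 1
    phi_u = (0.5 / (k * T)) * (Rw * np.sum((u - u_sp) ** 2)
                               + Sw * np.sum(np.diff(u) ** 2))
    phi_y = (0.5 / (k * T)) * Qw * np.sum((x - x_sp) ** 2)
    return phi_u + phi_y

# Made-up closed-loop records: one controller rides the input limit while
# the other moves the input aggressively between its limits.
t = np.arange(200)
u_rides_limit = np.full(200, 0.05)
u_aggressive = 0.05 * np.sign(np.sin(0.3 * t))
x_rides_limit = 0.5 + 0.5 * 0.9 ** t
x_aggressive = 0.5 + 0.4 * np.cos(0.3 * t)

phi_sp = total_index(x_rides_limit, u_rides_limit)
phi_targ = total_index(x_aggressive, u_aggressive)
improvement = (phi_targ - phi_sp) / phi_targ * 100
print(round(improvement, 1))
```

With these made-up records the calmer controller scores a positive percentage improvement, in the same spirit as the table comparisons reported for the two examples.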

Fig. 3. Closed-loop outputs of sp-MPC and targ-MPC in Example 2.

Fig. 4. Closed-loop inputs of sp-MPC and targ-MPC in Example 2.

Fig. 5. Target and disturbance estimates in Example 2.

TABLE II. COMPARISON OF CONTROLLER PERFORMANCE (EXAMPLE 2)

For this example, the regulator parameters are Q_y = I₂, S = I₂, and Q = C′Q_y C (plus a small regularization term on the augmented state). The horizon length is N = 80. The state and disturbances are estimated using a steady-state Kalman filter.

4) Comments: In this example, the input u1 is near its upper limit at steady state. The presence of state and measurement noise causes input u1 to saturate and consequently the setpoint to become unreachable. The closed-loop performance of the sp-MPC and the targ-MPC formulations is shown in Figs. 3 and 4, and Fig. 5 depicts the target and disturbance estimates for the noisy system. Table II quantifies the relative performance of the sp-MPC and targ-MPC frameworks. We note from Table II that the total costs of control action used by targ-MPC and sp-MPC are nearly the same, but the system outputs behave quite differently. The overall controller cost of targ-MPC exceeds that of sp-MPC by nearly 70% (Table II).

V. TERMINAL PENALTY MPC

It is well known in the MPC literature that terminal constraint MPC with a short horizon is not competitive with terminal penalty MPC in

pp. and the undesirable difference between open-loop prediction and closed-loop behavior [11].” J. constraints to zero the unstable modes at stage N are added to (14) [15]. x(0) = x in which 5 satisfies the usual Lyapunov equation and  is given by 5 = A0 5A + Q + K 0 RK 0  = (I 0 A )01 Q(xsp 0 x3 ) + K 0 R(usp 0 u3 ) : Note that the rotated cost-to-go satisfies Lr1 (x) = Lr1 (Ax + Buc (x)) + Lr (x. If the system is unstable and feasible K does not exist to make (A + BK ) stable given the active constraints at u3 . [4] F. B. [2] K. [6] L. 789–814. pp. Opt. [8] W. Ramsey. VOL. 11. Muske and J. Computing the change in cost for this candidate sequence gives 8r (w(k + 1).” Econ. u) u subject to x(0) = x x(k + 1) = Ax(k) + Bu(k) k = 0. [7] L. “Turnpike theory. First define the rotated cost Lr [9] Lr (x. no. A procedure to choose feasible K that more closely approximates the infinite horizon solution is given in [3]. w(k))) + Lr1 Ax0 (N.” SIAM J. vol. . OCTOBER 2008 terms of the size of the set of admissible initial states. u0 (w(k))). a new MPC controller was presented that handles unreachable setpoints better than standard MPC methods. w) is the state value at stage N using the optimal input sequence. Capital. W. 1993. vol. PA: SIAM. vol. no. Leizarowitz. Dec. 7. vol. Infinite Horizon Optimal Control. W. These plus compactness of ensure system evolution in a compact set. w). Samuelson. Haurie. u0 (w(k)))  8r0 (w(k)) 0 L(w(k). The new controller was based on a cost function measuring distance from the unreachable setpoint. u3 ) and compute the infinite horizon rotated cost-to-go under control law uc (x) = K (x 0 x3 ) + u3 with K chosen so that A = A + BK is asymptotically stable. u3 ) for terminal penalty MPC. . Solow. V. u) = L(x. no. . w(k)) + Buc (x0 (N. fu(i)gN0 i=0 = N01 k=0 Notice this inequality is the same as (5) if we replace the cost function 8 in (5) by the rotated cost function 8r . A. 39. vol. 733–764. 
“A survey of industrial model predictive control technology. pp. 1. u(k)) x(k + 1) = Ax(k) + Bu(k). “Steady states and constraints in model predictive control. Muske and J. such as the simple choice K = 0. Two examples were presented that demonstrate the advantages of the proposed setpoint-tracking MPC over the existing target-tracking MPC. 38. This new unreachable setpoint approach should also prove useful in applications where optimization of a system’s economic performance is a more desirable goal than simple setpoint tracking. pp. .” Appl. pp. P. Res. B. Principles of Mathematical Analysis. 3rd ed. vol. J. 1928. vol. 543–559. 152. Q. Here we briefly outline how the previous analysis can be applied to prove asymptotic stability of terminal penalty MPC. B. 1968. The set of admissible initial states is chosen to ensure positive invariance and feasibility of control law uc . Wright for helpful discussion of the ideas contained in this paper. 1994. (14c) REFERENCES (14d) We compute the change in cost along the closed-loop trajectory. The user has some flexibility in choosing K . N 0 1: u (k ) 2 (14a) (14b) The authors would like to thank D. 5. u) 0 L(x3 . no. 3. pp. uc (x)). McKenzie. consider the candidate sequence u~ (w) = fu0 (1. N 0 1 k = 0 . J. New York: McGraw-Hill. u0 (w(k))) + L (x 3 . 321–338. x(0) = x In this work. J. u0 (w(k))) 0 Lr (w(k). 2nd ed. vol. “Linear model predictive control of unstable processes. . . Cont. we have the inequality 8r0 (w(k + 1))  8r0 (w(k)) 0 Lr (w(k). u(k)) + Lr1 (x(N )) x(k + 1) = Ax(k) + Bu(k). Rawlings. Rawlings. Wolfe. uc (x0 (N. R. Chicago. Germany: Springer–Verlag. “Infinite horizon autonomous systems with unbounded cost. [11] D. 841–865. 36. u 3 ): [1] S. . Because we use the control law uc (x) in place of the constant u3 as we did in terminal constraint MPC. 262–287. 6.” AIChE J. Philadelphia. u0 (N 0 1. A. and P. “Accumulation programs of maximum utility and the von Neumann facet. [9] A. 
Linear Programming and Economic Analysis. Brock and A. 6. A simple calculation gives L1 r (x ) = 1 k=0 Lr (x(k). Rao and J. no. Leizarowitz. 1999. Hager. N. pp. 1976. [5] R. . Math. . Rudin. 17. w). 1266–1278. w(k))) : The last three terms cancel and noting the optimization at stage k + 1.” Control Eng. and u3 is on the boundary of . 2003. 2. 45. 85–96. 19–43. pp. w). It was next shown how to extend the asymptotic stability result to the case of terminal penalty MPC by introducing rotated cost.” in Value. CONCLUSION 1 Lr1 (x) = (x 0 x3 )0 5(x 0 x3 ) 0 0 (x 0 x3 ) 2 function as 2215 Lr (x(k). Sep. and. u0 (2. Berlin. Rawlings. Carlson. 3. 353–383. 1991. IL: Edinburgh Univ. R. Proc. . 1976. 2000.IEEE TRANSACTIONS ON AUTOMATIC CONTROL. P. V. Badgwell.. Mangasarian. Mayne. Cont. Scokaert. and A. and controller ACKNOWLEDGMENT min 8r (x. Press/Aldine. [14] W. vol. . we must restrict the gain matrix K and initial state w such that the control law is feasible [3] and the system is positive invariant. [12] O. M. O. w(k))) + Lr x0 (N. Rawlings. A. B. VI. 337–346. no. the cost is 8r (w(k). “Constrained model predictive control: Stability and optimality. u0 (w(k))) 0 Lr1(x0 (N. “A mathematical theory of saving. Qin and T. and Growth. Mayne and S.” Automatica. 2. Oper. w))g in which x0 (N. Q. Ed. Nonlinear Programming. C. Nominal asymptotic stability of the optimal steady state was established for this terminal constraint MPC. [10] D. Next define the terminal penalty MPC cost 8r 1 x. For w(k). and R. pp. Dorfman. 1985. A. New York: McGraw-Hill. Cost improvements between 50% and 70% were obtained for the two examples. w(k)). P.” Econometrica. pp. “On existence of overtaking optimal trajectories over an infinite time horizon. uc (x0 (N. no.” Math. J. [3] C. Prac. “Model predictive control with linear models. J. . McKenzie. [13] W. vol. “Lipschitz continuity for constrained processes. pp. 53. a finite horizon with a terminal constraint. B.” AIChE J. For state w(k + 1). . 9. 1993. 
for simplicity of exposition. Rao. [15] K. 1976. W.. no. 1979. 1958.. u~ (w(k))) = 8r (w(k). Opt. 13. Hauie. NO.. 44. The remaining steps after (5) in the previous development can then be followed to establish asymptotic stability of (x3 .
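As a numerical check of the rotated-cost construction in Section V, the following sketch (a hypothetical two-state system with made-up A, B, K, setpoints, and a steady state) solves the Lyapunov equation for Π, forms π, and verifies the cost-to-go recursion; `solve_discrete_lyapunov` is the SciPy routine for equations of the form X = A′XA + Q.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical 2-state, 1-input system and a stabilizing gain K.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[0.5]])
K = np.array([[-0.2, -0.3]])
Abar = A + B @ K                      # closed-loop matrix under uc

x_sp = np.array([1.0, 0.5])
u_sp = np.array([0.2])
# Any steady state pair with x* = A x* + B u*; here u* is chosen and
# x* is solved from the model (illustrative values, not the optimum).
u_star = np.array([0.1])
x_star = np.linalg.solve(np.eye(2) - A, B @ u_star)

# Pi = Abar' Pi Abar + Q + K'RK, and pi from the formula in Section V.
Pi = solve_discrete_lyapunov(Abar.T, Q + K.T @ R @ K)
pi = np.linalg.solve(np.eye(2) - Abar.T,
                     Q @ (x_sp - x_star) + K.T @ R @ (u_sp - u_star))

def L(x, u):
    return 0.5 * ((x - x_sp) @ Q @ (x - x_sp) + (u - u_sp) @ R @ (u - u_sp))

def Lr_inf(x):
    e = x - x_star
    return 0.5 * e @ Pi @ e - pi @ e

# Check Lr_inf(x) = Lr_inf(Ax + B uc(x)) + L(x, uc(x)) - L(x*, u*).
x = np.array([0.3, -0.4])
uc = K @ (x - x_star) + u_star
lhs = Lr_inf(x)
rhs = Lr_inf(A @ x + B @ uc) + L(x, uc) - L(x_star, u_star)
print(abs(lhs - rhs) < 1e-8)
```

The recursion holding exactly is what lets the rotated cost Φ_r inherit inequality (5), which is the hinge of the terminal penalty stability argument.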