You are on page 1of 72

Journal of Control Science and Engineering

Model Predictive Control

Guest Editors: Baocang Ding, Marcin T. Cychowski,


Yugeng Xi, Wenjian Cai, and Biao Huang
Model Predictive Control
Journal of Control Science and Engineering

Model Predictive Control


Guest Editors: Baocang Ding, Marcin T. Cychowski,
Yugeng Xi, Wenjian Cai, and Biao Huang
Copyright © 2012 Hindawi Publishing Corporation. All rights reserved.

This is a special issue published in “Journal of Control Science and Engineering.” All articles are open access articles distributed under
the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided
the original work is properly cited.
Editorial Board
Zhiyong Chen, Australia Seul Jung, Republic of Korea Zoltan Szabo, Hungary
Edwin K. P. Chong, USA Vladimir Kharitonov, Russia Onur Toker, Turkey
Ricardo Dunia, USA Derong Liu, China Xiaofan Wang, China
Nicola Elia, USA Tomas McKelvey, Sweden Jianliang Wang, Singapore
Peilin Fu, USA Silviu-Iulian Niculescu, France Wen Yu, Mexico
Bijoy K. Ghosh, USA Yoshito Ohta, Japan L. Z. Yu, China
F. Gordillo, Spain Yang Shi, Canada Mohamed A. Zribi, Kuwait
Shinji Hara, Japan S. Skogestad, Norway
Contents
Model Predictive Control, Baocang Ding, Marcin T. Cychowski, Yugeng Xi, Wenjian Cai, and Biao Huang
Volume 2012, Article ID 240898, 2 pages

Distributed Model Predictive Control of the Multi-Agent Systems with Improving Control Performance,
Wei Shanbi, Chai Yi, and Li Penghua
Volume 2012, Article ID 313716, 8 pages

Model Predictive Control of Uncertain Constrained Linear System Based on Mixed H2 /H∞ Control
Approach, Patience E. Orukpe
Volume 2012, Article ID 402948, 12 pages

Experimental Application of Predictive Controllers, C. H. F. Silva, H. M. Henrique,


and L. C. Oliveira-Lopes
Volume 2012, Article ID 159072, 18 pages

Robust Model Predictive Control Using Linear Matrix Inequalities for the Treatment of Asymmetric
Output Constraints, Mariana Santos Matos Cavalca, Roberto Kawakami Harrop Galvão,
and Takashi Yoneyama
Volume 2012, Article ID 485784, 7 pages

Efficient Model Predictive Algorithms for Tracking of Periodic Signals, Yun-Chung Chu and
Michael Z. Q. Chen
Volume 2012, Article ID 729748, 13 pages

Pole Placement-Based NMPC of Hammerstein Systems and Its Application to Grade Transition Control
of Polypropylene, He De-Feng and Yu Li
Volume 2012, Article ID 630645, 6 pages
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 240898, 2 pages
doi:10.1155/2012/240898

Editorial
Model Predictive Control

Baocang Ding,1 Marcin T. Cychowski,2 Yugeng Xi,3 Wenjian Cai,4 and Biao Huang5
1 Ministry of Education Key Lab For Intelligent Networks and Network Security (MOE KLINNS Lab),
Department of Automation, School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2 NIMBUS Centre for Embedded Systems Research, Cork Institute of Technology, Rossa Avenue, Cork, Ireland
3 Department of Automation, Shanghai Jiaotong University, Shanghai 200240, China
4 School of Electronic and Electrical Engineering, Nanyang Technological University, BLK S2, Singapore 639798
5 Department of Chemical and Materials Engineering, University of Alberta, Edmonton, AB, Canada T6G 2G6

Correspondence should be addressed to Baocang Ding, baocang.ding@gmail.com

Received 9 February 2012; Accepted 9 February 2012

Copyright © 2012 Baocang Ding et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Model predictive control (MPC) has been a leading technol- et al. treat asymmetric output constraints in integrating SISO
ogy in the field of advanced process control for over 30 years. systems based on pseudoreferences.
At each sampling time, MPC optimizes a performance cost Usually, the steady-state target calculation, followed with
satisfying the physical constraints, which is initialized by the the dynamic move calculation, is implemented in each sam-
real measurements, to obtain a sequence of control moves pling time. This is not practical for fast-sampling systems.
or control laws. However, MPC only implements the control Y. C. Chu and M. Z. Q. Chen propose a method to tackle
move corresponding to the current sampling time. At the this issue. In their method, the steady-state target calculation
next sampling time, the same optimization problem is solved works in parallel to, with a period longer than and a scale of
with renewed initialization. In order to obtain offset-free optimization larger than, the dynamic move calculation. It is
control, the system model is augmented with a disturbance shown that their method is particularly suitable for tracking
model to capture the effect of model-pant mismatch. The the periodic references.
state and disturbance estimates are utilized to initialize the The nonlinear system represented by a Hammerstein
optimization problem. A steady-state target calculation unit model has always been a good platform for control algorithm
is constructed to compromise between what is desired by research. D. F. He and L. Yu revisit this topic by invoking
real-time optimization (RTO) and what is feasible for the the pole-placement method on the linear subsystem. They
dynamic control move optimization. As a result, MPC ex- propose the algorithm which consists of three online steps,
hibits innumerous research issues from both theoretical and instead of the two-step MPC. They also propose to use
practical aspects. Nowadays, there is big gap between the their algorithm in the grade transition control of industrial
theoretical investigations and industrial applications. The polypropylene plants, via a simulation study.
main focus of this special issue is on the new research ideas Usually, MPC has its own paradigm for robust control.
and results for MPC. However, combing MPC with H2 /H∞ control could be a
A total number of 14 papers were submitted for this spe- good topic for improving the robustness of MPC. P. E.
cial issue. Out of the submitted papers, 6 contributions have Orukpe applies the mixed H2 /H∞ method in MPC where the
been included in this special issue. The 6 papers consider 6 system uncertainties are modeled by perturbations in a linear
rather different, yet interesting topics. fractional transform (LFT) representation and unknown
A synthesis approach of MPC is the one with guaranteed bounded disturbances.
stability. The technique of linear matrix inequality (LMI) is Interests in the cooperative control of multi-agent sys-
often used in the synthesis approach to address the satis- tems have been growing significantly over the last years. MPC
faction of physical constraints. However, the existing LMI has the ability to redefine cost functions and constraints as
formulations are for symmetric constraints. M. S. M. Cavalca needed to reflect changes in the environment, which makes
2 Journal of Control Science and Engineering

it a nice choice for multiagent systems. S. B. Wei et al. give


a method for distributed MPC which punishes, in the cost
function, the deviation between what an agent optimizes and
what other related agents think of it. The deviation weight
matrix at the end of the control horizon is specially discussed
for improving the control performance.
Besides, C. H. F. Silva et al. report some experimental
studies for two classical algorithms: infinite horizon MPC
and MPC with reference system. The pilot plant is for level
and pH control, which has physical constraints and nonlin-
ear dynamics.
We hope the readers of Journal of Control Science and
Engineering will find the special issue interesting and stim-
ulating, and expect that the included papers contribute to
further advance the area of MPC.

Acknowledgments
We would like to thank all the authors who have submitted
papers to the special issue and the reviewers involved in the
refereeing of the submissions.
Baocang Ding
Marcin T. Cychowski
Yugeng Xi
Wenjian Cai
Biao Huang
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 313716, 8 pages
doi:10.1155/2012/313716

Research Article
Distributed Model Predictive Control of the Multi-Agent
Systems with Improving Control Performance

Wei Shanbi, Chai Yi, and Li Penghua


College of Automation, Chongqing University, Chongqing 400044, China

Correspondence should be addressed to Wei Shanbi, wsbmei@yahoo.com.cn

Received 8 October 2011; Revised 22 December 2011; Accepted 19 January 2012

Academic Editor: Yugeng Xi

Copyright © 2012 Wei Shanbi et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper addresses a distributed model predictive control (DMPC) scheme for multiagent systems with improving control
performance. In order to penalize the deviation of the computed state trajectory from the assumed state trajectory, the deviation
punishment is involved in the local cost function of each agent. The closed-loop stability is guaranteed with a large weight
for deviation punishment. However, this large weight leads to much loss of control performance. Hence, the time-varying
compatibility constraints of each agent are designed to balance the closed-loop stability and the control performance, so that
the closed-loop stability is achieved with a small weight for the deviation punishment. A numerical example is given to illustrate
the effectiveness of the proposed scheme.

1. Introduction of communication and cooperation, the local control actions


cannot keep consistent [7, 8]. require each local controller
Interests in the cooperative control of multiagent systems exchange information with all other local controllers to
have been growing significantly over the last years. The main improve optimality and consistency based on sufficient
motivation is the wide range of military and civilian appli- communication. For the decoupled systems [9], exploits
cations, including formation flight of UAV and automated the estimation of the prediction state trajectories of the
traffic systems. Compared with the traditional approach, neighbors’ [10]; treats the prediction state trajectories of the
model predictive control (MPC), or receding horizon control neighbor agents as bounded disturbance where a min-max
(RHC) has the ability to redefine cost functions and con- optimal problem is solved for each agent with respect to the
straints as needed to reflect changes in the system and/or worst-case disturbance. In [11, 12], the optimal variables of
the environment. Therefore, MPC is extensively applied to the local optimization problem contain the control action
the cooperative control of multiagent systems, which makes of its own and its neighbors’ which are coupled in collision
the agents operate close to the constraint boundaries and avoidance constraints and cost function. Obviously, the
obtain better performance than traditional approaches [1– deviation between the actions of what the agent is actually
3]. Moreover, due to the computational advantages and the doing and of what its neighbor estimates for it affects
convenience of communication, distributed MPC (DMPC) the control performance. Sometimes the consistency and
is recognized as a nature technique to address trajectory collision avoidance cannot be achieved, and the feasibility
optimization problems for multiagent systems. and stability of this scheme cannot be guaranteed [13].
One of the challenges for distributed control is to en- proposes a distributed MPC with a fixed compatibility
sure that local control actions keep consistent with the constraint to restrict the deviation. When the bound of this
actions of others agents [4, 5]. For the coupled systems, constraint is sufficiently small, the closed-loop system state
the local optimization problem is solved based on the enter a neighborhood of the objective state [14, 15] give an
states of its neighbors’ at sample time instant using Nash- improvement over [13] by adding deviation punishment term
optimization technique in [6]. As the local controllers lack to penalize the deviation of the computed state trajectory
2 Journal of Control Science and Engineering

from the assumed state trajectory. Closed-loop exponential coupling cost function and coupling constraints [19]. The
stability follows if the weight on the deviation function term coupling constraints can be transformed to coupling cost
is large enough. But the large weight leads to the loss of the function term directly or handled as decoupling constraints
control performance. using the technique of [15]. In the present paper we will not
A contribution in this paper is to propose an idea to consider this issue.
reduce the adverse effect of the deviation punishment on The control objective for all system is to cooperatively
the control performance. At each sample time, the value asymptotically stabilize all agents to an equilibrium point
of compatibility constraint is set as the maximum value (xe , ue ) of (4). In this paper we assumed that the (xe , ue ) =
of the deviation of the previous sample time. We give the (0, 0), f (xe , ue ) = 0. The corresponding equilibrium point
stability condition to guarantee the exponential stability of for each agent is (xei , uie ) = (0, 0), f i (xei , uie ) = 0. Assumption
the global closed-loop system with a small weight on the f i (0, 0) = 0 is not restrictive, since if (xei , uie ) =/ (0, 0), one can
deviation punishment term, which is obtained by dividing always shift the origin of the system to it.
the centralize stability constraint as the manner of [16, 17]. The resultant control law for minimization of (3) can
The effectiveness of the scheme is also demonstrated by a be implemented in a centralized way. However, the existing
numerical example. methods for centralized MPC are only computationally
tractable for small-scale system. Furthermore, the communi-
Notations. xki is the value of vector xi at time k · xk,t i
is the cation cost of implementing a centralized receding horizon
i
value of vector x at a future time k + t, predicted at time control law may be costly. Hence, by means of decomposi-
k · |x| = [|x1 |, |x2 |, . . . , |xN |] is the absolute value for each tion, Jk is divided as Jki ’s such that the minimization of (3) is
component of x. For a vector x and positive-definite matrix implemented in distributed manner, with
Q, x2Q = xT Qx. ∞ Na
  
Jki = i
zk,t 2Q + uik,t 2Ri ,
i
Jk = Jki , (5)
2. Problem Statement t =0 i=1

T T
Let us consider a system which is composed of Na agents. The where zk,ti i
= [(xk,t
T−i
) (xk,t −i
) ] ; xk,t includes the states of the
dynamics of agent i [11] is neighbors. The set of neighbors’ of agent i is denoted as Ni .
j 
  xk−i = {xk | j ∈ Ni }, xk−i ∈ Rn , n−i = j ∈Ni n j . For
−i

i
xk+1 = f i xki , uik , (1) each agent i, the control objective is to stabilize it to the
T T
equilibrium point (xei , uie ) · Qi = Qi > 0, Ri = Ri > 0.
where uik ∈ Rmi , xki ∈ Rni , and f i : Rni × Rmi  → Rni ,
Qi is obtained by dividing Q using the technique of [19].
are the input, state, and state transition function of agent For the agents that have decoupled dynamics, the couplings
i,mi T i,ni T
i, respectively. uik = [ui,1 i i,1
k , . . . , uk ] , xk = [xk , . . . , xk ] . of control moves for all system are not considered. R is a
The sets of feasible input and state of agent i are denoted as diagonal matrix and Ri is directly obtained.
Ui ⊂ Rmi and Xi ⊂ Rni , respectively, that is, Under the networked environment, the bandwidth limi-
tation can restrict the amount of information exchange [17].
uik ∈ Ui , xki ∈ Xi , k ≥ 0. (2) It is thus appropriate to allow agents to exchange information
only once in each sampling interval. We assume that the
At each time k, the control objective is [18] to minimize connectivity of the interagent communication network is
∞  
sufficient for agents to obtain information regarding all the

Jk = xk,t 2Q + uk,t 2R (3) variables that appear in their local problems.
t=0 In the receding horizon control manner, a finite-horizon
cost function is exploited to approximate Jki . According to
T T T the (5), the evolution of the control moves with predictive
with respect to uk,t , t ≥ 0, where x = [(x1 ) , . . . , (xNa ) ] ,
T T i i
horizon for agent i is based on the estimation of the
, uik,t ), xk,0
i
= xki ; Q =
T
u = [(u1 ) , . . . , (uNa ) ] ; xk,t+1 = f i (xk,t state trajectories xk,t−i
, t ≤ N of the neighbors’, which are

QT > 0, R = RT > 0. u ∈ Rm , m = i mi , and x ∈ Rn , substituted by the assumed state trajectories xk,t −i
, t ≤ N as
n = i ni . Then, [11]. In each control interval, the transmitted information
between agents is the assumed state trajectories. As the
xk+1 = f (xk , uk ), (4)
cooperative consistency and efficiency of distributed control
T moves is affected for the existence of the deviation of the
where f = [ f 1 , f 2 , . . . , f Na ] , f : Rn × Rm  → Rn . computed state trajectory from the assumed state trajectory,
i i
(xe , ue ) is the equilibrium point of agent i, and (xe , ue ) is the it is appreciate to penalize it by adding the deviation
corresponding equilibrium point of all agents. X = X1 × punishment term into the local cost function.
X2 × · · · × XNa , U = U1 × U2 × · · · × UNa . The models for Define
all agents are completely decoupled. The coupling between
agents arises due to the fact that they operate in the same uik,t = Fi (k)xk,t
i
, ∀t ≥ N. (6)
environment, and that the “cooperative” objective is imposed
on each agent by the cost function. Hence, there are the Fi (k) is the gain of distributed state feedback controller.
Journal of Control Science and Engineering 3

Consider 3.1. Design of the Local Control Law. The following equation
−1
follows for achieving closed-loop stability:
N 
J˘ki = i
zk,t 2Q + uik,t 2Ri + xk,t
i i
− xk,t 2T i 
 i
2


 i
2


 i
2

t =0
i
xk,N+t+1  − xk,N+t  ≤ −xk,N+t 
(7) Pi Pi Qi
∞ 
   2 (13)
i  i 
+ xk,t 2Qi + uik,t 2Ri , − uk,N+t  , t ≥ 0.
Ri
t =N

where Lemma 1. Suppose that there exist Qi > 0, Ri > 0, Pi > 0,


which satisfy the Lyapunov-equation:
 T  T
T
i i −i i
zk,t = xk,t xk,t , xk,0 = xki , (8)
(Ai + Bi Ki )T Pi (Ai + Bi Ki ) − Pi = −κi Pi − Qi − KiT Ri Ki ,
(14)
−i
xk,t includes the assumed states of the neighbors. Qi = QiT >
0 and Ri = RTi = Ri satisfy for some κi > 0. Then, there exists a constant αi > 0 such that
Ωi defined in (11) satisfies (13).
diag Q1 , Q2 , . . . , QNa ≥ Q, diag R1 , R2 , . . . , RNa = R.
(9) Remark 2. Lemma 1 is directly obtained by referring to
“Lemma 1” in [21]. For MPC, the stability margin can be
Obviously, Qi is designed to stabilize the agent i to the local adjusted by turning the value of κi according to Lemma 1.
equilibrium point, independently. Qi is designed to stabilize With regard to DMPC, [11] adjusts the stability margin by
the agent i to the local equilibrium point with neighbor tuning the weight in the local cost function. The control
agents, cooperatively. T i is the weight on the deviation objective is to asymptotically stabilize the closed-loop system,
punishment term, to penalize the deviation of the computed so that xk,i ∞ = 0 and uik,∞ = 0. For t = 0, . . . , ∞, summing
state trajectory from the assumed state trajectory. (13) obtains
At each time k, the optimization problem for distributed
∞ 
   2
 
MPC is transformed as:  i 2    i 2
xk,t  + uik,t  ≤ xk,N  . (15)
Qi Ri Pi
min
i
J˘ki , s.t.(1), (2), (6), (7). (10) t =N
U k ,Fi (k)
Considering both (7) and (15), yields
∗i ∗i T ∗i T ∗i T T ∗i ∗i
U k = [(uk,0 ) , (uk,1 ) , . . . , (uk,N −1 ) ] , only when uk = uk,0 −1 
N   2  2

i  i 2    i i 
is implemented, and the problem (9) is solved again at time J˘ki ≤ J k = zk,t  + uik,t  + xk,t

Qi
− xk,t
Ri

Ti
k + 1. t =0 (16)
 
 i 2
+ xk,N  ,
Pi
Remark 1. The local deviate punishment by each agent
effects the control performance, that is, incurs the loss of i
optimality. where J k is a finite-horizon cost function, which consists of
a finite horizon standard cost, to specify the desired control
performance and a terminal cost, to penalize the states at the
3. Stability of Distributed MPC end of the finite horizon.
The stability of distributed MPC by simply applying the The terminal region Ωi for agent i is designed, so that it
procedure as in the centralized MPC will be discussed. The is invariant for nonlinear system controlled by a local linear
i
compact and convex terminal set Ωi is defined state feedback. The quadratic terminal cost xk,N 2Pi bounds
the infinite horizon cost of the nonlinear system starting
 T 
from Ωi and controlled by the local linear state feedback.
Ωi = xi ∈ Rni | xi Pi xi ≤ αi , (11)
3.2. Compatibility Constraint for Stability. As in [18], we
where Pi > 0, αi > 0 are specified such that Ωi is a
define two terms, ξ −i = x−∗i − x−i , ξ i = x∗i − xi ,
control invariant set. So using the idea of [20, 21], one
simultaneously determines a linear feedback such that Ωi is a ⎡ 1 12 ⎤
Qi Qi
positively invariant under this feedback. Q i = ⎣ 
12 T 3
⎦,
Define the local linearization at the equilibrium point Qi Qi
 −1 
Na N T
∂fi ∂fi Cx∗ (k) = ∗i
2 xk,t
12
−i
Qi ξk,t
Ai = (0, 0), Bi = (0, 0). (12)
∂xi ∂ui i=1 t =1 (17)
 T  T 
i −i 3 −i −i 3 −i
and assume that (Ai , Bi ) is stabilizable. When xk,N+t ,t ≥ 0 +2 xk,t Qi ξk,t + ξk,t Qi ξk,t ,
i
enters into the terminal set Ω , the local linear feedback Na N
 −1 T
control law is assumed as uik,N+t = Fi (k)xk,N+t
i i
= Ki xk,N+t . Cξ∗ (k) = i
ξk,t i
T i ξk,t ,
Ki is a constant which is calculated off line as follows. i=1 t =1
4 Journal of Control Science and Engineering

Lemma 2. Suppose that (9) holds and there exits ρ(k) such By applying (17)–(19),
that, for all k > 0,

J(k + 1) − J (k)
0 ≤ ρ(k) ≤ 1, Na    
   i T T 
2  
Na  T   2

≤ − 1 − ρ(k)  −i   ∗i 2
  i
 x (k)
T 
−i
2
  ∗i   xk , xk  + uk,0 Ri
− ρ(k)  , xk  + u (0 | k)R (18) i=1 Qi
i=1 Qi i  
≤ − 1 − ρ(k) xk 2Q .
+ Cx∗ (k) − Cξ∗ (k) ≤ 0. (24)
Then, by solving the receding-horizon optimization ∗
problem At time k + 1, by reoptimization, J (k + 1) ≤ J(k + 1).
Hence, it leads to
i
min J k , s.t.(1), (2), (14), (16), uik,N = Ki xk,N
i i
, xk,N ∈ Ωi , ∗ ∗  
i
U (k) J (k + 1) − J (k) ≤ − 1 − ρ(k) xk 2Q
  2
(25)
(19) ≤ − 1 − ρ(k) λmin (Q)x(k)Q ,
∗i
and implementing uk,0 , the stability of the global closed-loop where λmin (Q) is the minimum eigenvalue of Q. This
system is guaranteed, once a feasible solution at time k = 0 is indicates that the closed-loop system is exponentially stable.
i
found. Satisfaction of (18) indicates that all xk,t should not
 i i
deviate too far from their assumed values xk,t [13]. Hence,
Proof. Define J(k) = Ni=a1 J k . Suppose, at time k, there are
∗i (18) can be taken as a new version of the compatibility
optimal solution U k , i ∈ {1, . . . , Na }, which yields condition. This compatibility condition is derived from a
Na   
 T   2
 single compatibility condition that collects all the states
 i T 2

J (k) =  x , x
 −i  +
 u∗i 
 (whether predicted or assumed) with in the switching
 k k  k,0 Ri
i=1 Qi horizon and is disassembled to each agent in distributed

−1   manner, which results in local compatibility constraint for
Na N
  ∗i T  −i T 2  2
+  x , x
  + ∗i 
uk,t  each agent.
 k,t k,t  Ri
i=1 t =1 Qi
    2   Na  
 ∗i T T  ∗i 2 3.3. Synthesis Approach of Distributed MPC. In the synthesis
+ i
 xk,t − xk,t 
 + xk,N  .
Pi approach, the local optimization problem incorporates the
Ti i=1 ∗
(20) above compatibility condition. Since xk,t for all agent i is
coupled with other agents through (18), it is necessary to
∗i i assign the constraint to each agent so as to satisfy (18) along
At time t + 1, according to Lemma 2, U k+1 = {uk,1 ,...,
∗i ∗i
the optimization. The continued discussion on stability
uk,N −1 , Ki xk,N } is feasible, which yields depends on handling of (18).
T j
N 
Na 
 
 ∗i T  −∗i T 2  2
 Denote ξki = [ξki,1 , . . . , ξki,ni ] , ξk−i = {ξk | j ∈ Ni }. At
J(k + 1) =  x , x  +
u∗i 
 time k > 0, by solving the optimization problem, there exits
 k,t k,t  k,t Ri
Qi
i=1t =1 a parameter Eki,l , l = 1, . . . , ni , for each element of ξki,l , l =
Na 
  1, . . . , ni .
 ∗i 2
+ xk,N+1  Define
Pi
i=1 (21)
   
−1    
Na N
  ∗i T  −∗i T 2  2
 ∗i  Eki,l = maxξki,l−1,t , (26)
 
 xk,t , xk,t  + uk,t Ri
= t
i=1 t =1 Qi
 2  2  2 T j

∗      and denote Eki = [Eki,1 , . . . , Eki,ni ] , Ek−i = {Ek | j ∈ Ni }. At
+ xk,N  + u∗ ∗
k,N R + xk,N+1 P ,
Q time k + 1 > 0, set following constraint for each agent i:
 T  T 
where P = diag{P1 , P2 , . . . , PNa }. By applying (9) and  i
 x i

 < Ei.
Lemma 2, (11) guarantees that  k+1,t − x
 k+1,t  k (27)
 2      
 ∗   ∗ 2  ∗ 2  ∗ 2 i
From (26) and (27), it is shown that ξk+1,t < Eki and ξk+1,t
−i
<
xk,N+1  − xk,N  ≤ −xk,N  − uk,N  . (22)
P P Q R −i
Ek .
Substituting (22) into J(k + 1) yields Denote
  −1
N  T
−1 
Na N   2
  ∗i T  −∗i T 2  ∗i  Cx∗i (k) = ∗i
2 xk,t
12
Qi Ek−i
J(k + 1) ≤  
 xk,t , xk,t  + uk,t Ri
Qi t =1
i=1 t =1
(23)  T 3
 T 3
 T 
Na   −i
  ∗i 2 +2 xk,t Qi Ek−i + Ek−i Qi Ek−i ,
+ xk,N  .
Pi
i=1 (28)
Journal of Control Science and Engineering 5

3 3
2 2

Y -label (m)
Y -label (m)

1 1
0 0
−1 −1
−2 −2
−3 −3
−2 0 2 4 6 8 10 12 14 −2 0 2 4 6 8 10 12 14
X-label (m) X-label (m)
(a) (b)
3
2
Y -label (m)

1
0
−1
−2
−3
−2 0 2 4 6 8 10 12 14
X-label (m)
(c)

Figure 1: Evolutions of the formation with different control schemes.

−1
N T Remark 4. According to (31), the maximum value and
Cξ∗i (k) = i
ξk,t i
T i ξk,t . (29) minimum value of T i can be calculated by considering the
t =1
range of each variable. We choose the middle value for T i .
  Obviously, the T i is time varying and denoted as T i (k).
Then Cx∗ (k) ≤ Ni=a1 Cx∗i (k), Cξ∗ (k) = Ni=a1 Cξ∗i (k).
By applying (26)–(29), it is shown that (18) is guaranteed
by assigning 4. Control Strategy
0 ≤ ρi (k) ≤ 1, For practical implementation, distributed MPC is formu-
   lated in the following algorithm.
Na
  i T  −i T 2  2
− ρi (k)  x , x
  +
u∗i 

 k k  k,0 Ri
i=1 Qi (30)
Algorithm. Off-line stage:
Na
 Na

+ Cx∗i (k) − Cξ∗i (k) ≤ 0. (i) Set the value of the prediction horizon N.
i=1 i=1
(ii) According to (3), (5) and (9), find Qi , Ri , Qi , Ri , t =
is dispensed to agent i: 0, . . . , N − 1, for all agents.
(iii) Set the value of the compatibility constraint for all
0 ≤ ρi (k) ≤ 1,
 agents Ei (0) = +∞, j ∈ Ni .
−1  
N
 i 2  i T  −i T 2
ξk,t  ≥ −ρi (k)  x  (iv) Calculate the terminal weight Pi , local linear feedback
Ti  k , xk  (31)
t =1 Qi control gain Ki and the terminal set Ωi .
  
 ∗i 2
+uk,0  + Cx∗i (k). Ri
On-line stage: For agent i, perform the following steps
at k = 0:
By using (26)–(28), conservativeness is introduced. Hence,
(31) is more stringent than (18). (i) Take the measurement of x0i . Set T i = 0.
(ii) Send x0i to its neighbor j, j ∈ Ni of agent i. Receive
Remark 3. By adding the deviation punishment term in the j
x0 .
local cost function, the closed-loop stability follows with a j j i
large weight. The larger weight means the more loss of the (iii) Set xt,0 = x0,0 , j ∈ Ni , t = 0, . . . , N − 1 and x0,t = x0i .
performance [14, 19]. For a small value of T i , we can adjust (iv) Solve problem (19).
the value of ρi (k) to obtain exponential stability. As the ρi (k)
(v) Implement ui0 = u0,0
∗i
.
is set by optimization, this scheme has more freedom to
i
tuning parameters, to balance the closed-loop stability and (vi) Get xt,0 and the value of compatibility constraint
control performance. Ei (1).
6 Journal of Control Science and Engineering

i
(vii) Send x0,t and Ei (1) to its neighbor j, j ∈ Ni . Receive as c12 = (−2, 1), c13 = (−2, −1), c24 = (−2, 1). Choose N1 =
j {2}, N2 = {1}, N3 = {1}, N4 = {2}. Then,
x0,t and E j (1). Calculate T i (k).
⎡ 1 8 8 ⎤
For the agent i, perform the following steps at k > 0: ⎢ 2 9 I2 0 − I2 0 − I2 0 0 0⎥
⎢ 9 9 ⎥
(i) Take the measurement of xki . ⎢ 0

I2 0 0 0 0 0 0⎥

⎢ 8 1 1 ⎥
(ii) Solve problem (19). ⎢− I 2 0 2 I2 0 I2 0 −I2 0 ⎥
⎢ 9 9 9 ⎥
⎢ ⎥
(iii) Implement uik = u0,k
∗i
. Q=⎢

0 0 0 I2 0 0 0 0⎥ ⎥, R = I8 .
⎢ 8 1 1 ⎥
i ⎢− I 2 0 I2 0 1 I2 0 0 0⎥
(iv) Get xk,t and the new value of compatibility constraint ⎢ 9 9 9 ⎥
⎢ ⎥
Ei (k + 1). ⎢ 0

0 0 0 0 I2 0 0 ⎥⎥
i ⎣ 0 0 −I2 0 0 0 I2 0 ⎦
(v) Send xk,t and Ei (k + 1) to its neighbor j, j ∈ Ni .
j 0 0 0 0 0 0 0 I2
Receive xk,t and E j (k + 1).
⎡ ⎤
7 4
(vi) Calculate T i (k). ⎢ 9 I2 0 − I2 0 ⎥
⎢ 9 ⎥
⎢ 1 ⎥
⎢ 0 I2 0 0 ⎥
5. Numerical Example ⎢ 3 ⎥
Q1 = ⎢
⎢ 4
⎥,

⎢− I 2 0 I2 0 ⎥
We consider the model of agent i [22] as ⎢ 9 ⎥
⎣ 1 ⎦
    0 0 0 I2
I2 I2 i 0.5I2 i 3
i
xk+1 = x + uk , (32) ⎡ ⎤
0 I2 k I2 1 4
⎢ 1 9 I2 0 − 9 I2 0 ⎥
⎢ ⎥
which is obtained by discretizing the continuous-time model ⎢ 1 ⎥
⎢ 0 I 0 0 ⎥
    ⎢ 3
2 ⎥
Q2 = ⎢
⎢ 4 4 ⎥,
ẋi =
0 I2 i
x +
0 i
u. (33) ⎢− I 2 0 I2 0 ⎥ ⎥
0 0 I2 ⎢ 9 9 ⎥
⎣ 1 ⎦
0 0 0 I2
i,y i,y T i,y 3
(xki = [qki,x , qk , vki,x , vk ] , qki,x and qk are positions in ⎡ ⎤
i,y 1 8
the horizontal and vertical directions, resp. vki,x and vk are ⎢ 1 9 I2 0 − 9 I2 0 ⎥
⎢ ⎥
velocities in the horizontal and vertical directions, resp.) with ⎢ 1 ⎥
⎢ 0 I2 0 0 ⎥
sampling time interval of 0.5 second. There are four agents. ⎢ 2 ⎥
Q3 = ⎢
⎢ 8 8 ⎥,

A set of positions of the four agents constitute a formation. ⎢− I 2 0 I2 0 ⎥
The initial positions of the four agents are ⎢ 9 9 ⎥
⎣ 1 ⎦
    0 0 0 I2
1,y 2,y 3
qo1,x , qo = [0, 2], qo2,x , qo = [−2, 0], (34)
⎡ ⎤
    I2 0 −I2 0
3,y
qo3,x , qo = [0, −3],
4,y
qo4,x , qo = [2, 0]. (35) ⎢ 0 I2 0 0 ⎥
⎢ ⎥
Q4 = ⎢
⎢−I2 0 I2 0 ⎥ ⎥,
Linear constraints on states and input are ⎣ 1 ⎦
0 0 0 I2
   T    T 3
 i  i
x  ≤ 100 100 15 15 , u  ≤ 2 2 . (36)
(38)
The agent i, i = 1, 2, 3 are selected as the core agents of and Ri = I2 , i ∈ {1, 2, 3, 4}. Choose Qi = 6.85 ∗ I4 and
the formation. A0 is designed as A0 = {(1, 2); (1, 3); (2, 4)}. Ri = I2 , i ∈ {1, 2, 3, 4}, N = 10. The terminal set is αi = 0.22.
If all systems achieve the desire formation and the core agents The above choice of model, cost, and constraints allow us
i,y
cooperatively cover the virtue leader, then ui,xk (k) = 0, uk = to rewrite problem (19) as a quadratic programming with
0. The global cost function is obtained as quadratic constraint. To solve the optimal control problems
∞ 
numerically, the package NPSOL 5.02 is used. From top to
 2  2
 1   1  bottom, the first subgraph of Figure 1 is the evolution of
J(k) = qk,t − qk,t
2
+ c12  + qk,t − qk,t
3
+ c13 
t =0 the formation with central MPC; the second sub-graph of
 2 1   2 Figure 1 is the evolution of the formation with distributed
 2   1 
+ qk,t − qk,t
4
+c24  +  qk,t 2
+qk,t 3
+qk,t − qc  MPC with time-varying compatible constraint; the third
9

 2  2  2  2  2 sub-graph of Figure 1 is the evolution of the formation with


       
+vk,t
1 
+ vk,t
2 
+ vk,t
3
 + vk,t
4 
+ uk,t  . distributed MPC with a fixed compatibility constraint.
(37) With the three control schemes, the formation of all
agents can be achieved. The obtained Jtrue s are 2.5779 × 106 ,
They cooperatively track the virtual leader whose reference 4.8725 × 106 , and 5.654 × 106 , respectively. Compared
is qc = (0.5 ∗ k, 0). The distance between agents is defined with the second sub-graph, the third sub-graph have a
Journal of Control Science and Engineering 7

1 [3] J. A. Rossiter, Model-Based Predictive Control: A Practical,


CRC, Boca Raton, Fla, USA, 2003.
[4] Y. Kuwata, A. Richards, T. Schouwenaars, and J. P. How,
0.8
“Distributed robust receding horizon control for multivehicle
guidance,” IEEE Transactions on Control Systems Technology,
0.6 vol. 15, no. 4, pp. 627–641, 2007.
[5] E. Camponogara, D. Jia, B. H. Krogh, and S. Talukdar,
Value

“Distributed model predictive control,” IEEE Control Systems


0.4 Magazine, vol. 22, no. 1, pp. 44–52, 2002.
[6] S. Li, Y. Zhang, and Q. Zhu, “Nash-optimization enhanced
distributed model predictive control applied to the Shell
0.2 benchmark problem,” Information Sciences, vol. 170, no. 2–4,
pp. 329–349, 2005.
[7] A. N. Venkat, J. B. Rawlings, and S. J. Wright, “Stability
0
0 5 10 15 20 25 and optimality of distributed model predictive control,” in
Time/k Proceedings of the 44th IEEE Conference on Decision and
Control, and the European Control Conference (CDC-ECC ’05),
Figure 2: The value of ρi (k). vol. 2005, pp. 6680–6685, Seville, Spain, December 2005.
[8] A. N. Venkat, J. B. Rawlings, and S. J. Wright, “Distributed
model predictive control of large-scale systems,” in Assessment
large overshoot at the time-instant k = 9 (nearby the and Future Directions of Nonlinear Model Predictive Control,
position (3, 0)). The distributed MPC with the time-varying vol. 358 of Lecture Notes in Control and Information Sciences,
pp. 591–605, 2007.
compatible constraint has a better control process comparing
[9] M. Mercangöz and F. J. Doyle III, “Distributed model predic-
to the one with fixed compatible constraint. The value of tive control of an experimental four-tank system,” Journal of
ρi (k) is shown in Figure 2. “∗ ” for agent 1; “O” for agent 2; Process Control, vol. 17, no. 3, pp. 297–308, 2007.
“>” for agent 3; “<” for agent 4. [10] D. Jia and B. Krogh, “Min-max feedback model predictive
control for distributed control with communication,” in
Remark 5. For the second simulation, the value of the fixed Proceedings of the American Control Conference, pp. 4507–
compatible constraint is 0.2. For the third simulation, the 4512, May 2002.
values of the time-varying compatible constraint is calculat- [11] T. Keviczky, F. Borrelli, and G. J. Balas, “Decentralized receding
ed according to the states deviation of the previous horizon. horizon control for large scale dynamically decoupled sys-
tems,” Automatica, vol. 42, no. 12, pp. 2105–2115, 2006.
6. Conclusions [12] F. Borrelli, T. Keviczky, G. J. Balas, G. Stewart, K. Fregene,
and D. Godbole, “Hybrid decentralized control of large scale
In this paper, we have proposed an improved distributed systems,” in Hybrid Systems: Computation and Control, vol.
MPC scheme for multiagent systems based on deviation 3414 of Lecture Notes in Computer Science, pp. 168–183,
Springer, 2005.
punishment. One of the features of the proposed scheme
[13] W. B. Dunbar and R. M. Murray, “Distributed receding
is that the cost function of each agent penalizes the
horizon control for multi-vehicle formation stabilization,”
deviation between the predicted state trajectory and the Automatica, vol. 42, no. 4, pp. 549–558, 2006.
assumed state trajectory, which improves the consistency and [14] W. B. Dunbar, “Distributed receding horizon control of cost
optimal control trajectory. At each sample time, the value of coupled systems,” in Proceedings of the 46th IEEE Conference
compatibility constraint is set by the deviation of previous on Decision and Control (CDC ’07), pp. 2510–2515, December
sample time-instant. The closed-loop stability is guaranteed 2007.
with a small value for the weight of the deviation function [15] S. Wei, Y. Chai, and B. Ding, “Distributed model predictive
term. Furthermore, the effectiveness of the scheme has been control for multiagent systems with improved consistency,”
investigated by a numerical example. One of the future works Journal of Control Theory and Applications, vol. 8, no. 1, pp.
will focus on feasibility of optimization. 117–122, 2010.
[16] B. Ding, “Distributed robust MPC for constrained systems
with polytopic description,” Asian Journal of Control, vol. 13,
Acknowledgment no. 1, pp. 198–212, 2011.
[17] B. Ding, L. Xie, and W. Cai, “Distributed model predictive
This work is supported by a Grant from the Fundamental control for constrained linear systems,” International Journal
Research Funds for the Central Universities of China, no. of Robust and Nonlinear Control, vol. 20, no. 11, pp. 1285–
CDJZR10170006. 1298, 2010.
[18] B. Ding and S. Y. Li, “Design and analysis of constrained
References nonlinear quadratic regulator,” ISA Transactions, vol. 42, no.
2, pp. 251–258, 2003.
[1] J. A. Primbs, “The analysis of optimization based controllers,” [19] W. Shanbi, B. Ding, C. Gang, and C. Yi, “Distributed model
Automatica, vol. 37, no. 6, pp. 933–938, 2001. predictive control for multi-agent systems with coupling
[2] J. M. Maciejowski, Predictive Control with Constraints, Prentice constraints,” International Journal of Modelling, Identification
Hall, Englewood Cliffs, NJ, USA, 2002. and Control, vol. 10, no. 3-4, pp. 238–245, 2010.
8 Journal of Control Science and Engineering

[20] H. Chen and F. Allgöwer, “A quasi-infinite horizon nonlinear


model predictive control scheme with guaranteed stability,”
Automatica, vol. 34, no. 10, pp. 1205–1217, 1998.
[21] T. A. Johansen, “Approximate explicit receding horizon con-
trol of constrained nonlinear systems,” Automatica, vol. 40, no.
2, pp. 293–300, 2004.
[22] W. B. Dunbar, Distributed receding horizon control for multia-
gent systems, Ph.D. thesis, California Institute of Technology,
Pasadena, Calif, USA, 2004.
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 402948, 12 pages
doi:10.1155/2012/402948

Research Article
Model Predictive Control of Uncertain Constrained Linear
System Based on Mixed H2/H∞ Control Approach

Patience E. Orukpe
Department of Electrical and Electronic Engineering, University of Benin, P.M.B 1154, Benin City, Edo State, Nigeria

Correspondence should be addressed to Patience E. Orukpe, patience.orukpe01@imperial.ac.uk

Received 30 June 2011; Revised 13 October 2011; Accepted 5 November 2011

Academic Editor: Marcin T. Cychowski

Copyright © 2012 Patience E. Orukpe. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Uncertain constrained discrete-time linear system is addressed using linear matrix inequality based optimization techniques. The
constraints on the inputs and states are specified as quadratic constraints but are formulated to capture hyperplane constraints as
well. The control action is of state feedback and satisfies the constraints. Uncertainty in the system is represented by unknown
bounded disturbances and system perturbations in a linear fractional transform (LFT) representation. Mixed H2 /H∞ method is
applied in a model predictive control strategy. The control law takes account of disturbances and uncertainty naturally. The validity
of this approach is illustrated with two examples.

1. Introduction uncertainty and disturbances simultaneously was not con-


sidered. In this paper, we extend the result of [9] to con-
Model predictive control (MPC) is a class of model-based strained uncertain linear discrete-time invariant systems
control theories that use linear or nonlinear process models using a mixed H2 /H∞ design approach and the uncertainty
to forecast system behaviour. MPC is one of the control tech- considered is norm-bounded additive. This is more suitable
niques that is able to cope with model uncertainties in an as both performance and robustness issues are handled
explicit way [1]. MPC has been used widely in practical appli- within a unified framework.
cations to industrial process systems [2] and active vibration The method presented in this paper develops an LMI
control of railway vehicles [3]. One of the methods used design procedure for the state feedback gain matrix F, allow-
in MPC when uncertainties are present is to minimise the ing input and state constraints to be included in a less con-
objective function for the worst possible case. This strategy servative manner. A main contribution is the accomplish-
is known as minimax and was originally proposed [4] in the ment of a prescribed disturbance attenuation in a systematic
context of robust receding control, [5] in the context of feed- way by incorporating the well-known robustness guarantees
back and feedforward control and [6] in the context of H∞ through H∞ constraints into the MPC scheme. In addition,
MPC. MPC has been applied to H∞ problems in order to norm-bounded additive uncertainty is also incorporated. A
combine the practical advantage of MPC with the robustness preliminary version of some of the work presented in this
of the H∞ control, since robustness of MPC is still being paper was presented in [10].
investigated for it to be applied practically. The structure of the work is as follows. After defining
This work is motivated by the work in [7, 8] where uncer- the notation, we describe the system and give a statement
tainty in the system was modeled by perturbations in a linear of the mixed H2 /H∞ problem in Section 2. In Section 3, we
fractional representation. In [9], model predictive control derive sufficient conditions, in the form of LMIs, for the
based on a mixed H2 /H∞ control approach was considered. existence of a state feedback control law that achieves the
The designed controller has the form of state feedback and design specifications. In Section 4, we consider two exam-
was constructed from the solution of a set of feasibility linear ples that illustrate our algorithm. Finally, we conclude in
matrix inequalities. However, the issue of handling both Section 5.
2 Journal of Control Science and Engineering

The notation we use is fairly standard. R denotes the quadratic Lyapunov function (V (ζ) = ζ T Pζ, P > 0) for all
set of real numbers, Rn denotes the space of n-dimensional possible choices of the uncertainty parameters.
(column) vectors whose entries are in R and Rn×m denoting In the case of norm-bounded uncertainties:
the space of all n × m matrices whose entries are in R. For  
A B w Bu
A ∈ Rn×m , we use the notation AT to denote transpose. For
x, y ∈ Rn , x < y (and similarly ≤, >, and ≥) is interpreted      (5)
∈ Ao Bwo Buo + FA ΔH EA Ew Eu : Δ ∈ Δ ,
element wise. The identity matrix is denoted as I and the null
matrix by 0 with the dimension inferred from the context. where [Ao Bwo Bou ] represents the nominal model, ΔH =
Δ(I − HΔ)−1 , with
2. Problem Formulation   
Δ ∈ Δ := Δ = diag δ1 Iq1 , . . . , δl Iql , Δl+1 , . . . , Δl+ f : Δ
We consider the following discrete-time linear time invariant 
system: ≤ 1, δi ∈ R, Δi ∈ Rqi ×qi
(6)
xk+1 = Axk + Bw wk + Bu uk + B p pk,
and where FA , EA , Ew , Eu , and H are known and constant
qk = Cq xk + Dqu uk + Dqw wk , matrices with appropriate dimensions. This linear fractional
p k = Δk q k , representation of uncertainty, which is assumed to be well
(1) posed over Δ (i.e., det(I − HΔ) = / 0 for all Δ ∈ Δ), has great
⎡ ⎤
Cz x k generality and is used widely in robust control theory [12].
zk = ⎣ ⎦, We use the following lemma, which is a slight modifica-
Dzu uk tion of a result in [13] and which uses the fact that Δ ∈ Δ to
remove explicit dependence on Δ for the solution with norm
x0 given,
bounded uncertainties.
where x0 is the initial state, xk ∈ Rn is the state, wk ∈ Rnw is
the disturbance, uk ∈ Rnu is the control, zk ∈ Rnz is the con- Lemma 1. Let Δ be as described in (6) and define the subspaces
trolled output, A ∈ Rn×n , Bw ∈ Rn×nw , Bu ∈ Rn×nu , Cz ∈   
Σ = diag S1 , . . . , Sl , λ1 Iql+1 , . . . , λs Iql+ f
Rnz1 ×n , and Dzu ∈ Rnz2 ×nu , and where nz = nz1 + nz2 . The
signals qk and pk model uncertainties or perturbations ap- 
pearing in the feedback loop. : Si = STi ∈ Rqi ×qi , λj ∈ R ,
The operator, Δk , is block diagonal:    
⎧ ⎡ ⎤ ⎫ Γ = diag G1 , . . . , Gl , 0ql+1 , . . . , 0ql+ f : Gi = −GTi ∈ Rqi ×qi .

⎪ Δ1k 0 ⎪


⎪ ⎢ ⎥ ⎪
⎪ (7)
⎨ ⎢ ⎥ ⎬
Δ k ∈ Δk = ⎪ Δ k = ⎢

..
. ⎥ : Δik  ≤ 1 ∀i ,
⎥ ⎪
(2) Let T1 = T1T ,T2 ,T3 ,T4 be matrices with appropriate dimen-

⎪ ⎣ ⎦ ⎪


⎩ ⎪
⎭ sions. We have det(I − T4 Δ) = −1
/ 0 and T1 + T2 Δ(I − T4 Δ) T3 +
0 Δtk T T − T
T3 (I − ΔT T4 ) ΔT T2 < 0 for every Δ ∈ Δ if there exist S ∈ Σ
1

and is norm bounded by one. Scalings can be included in Cq and G ∈ Γ such that S > 0 and
and B p , thus generalizing the bound. Δk can represent either ⎡ ⎤
T1 + T2 ST2T T3T + T2 ST4T + T2 G
a memoryless time-varying matrix with σ(Δik ) ≤ 1, for i = ⎣ ⎦ < 0.
1, . . . , t, k ≥ 0, or the constraints: T3 + T4 ST2T + GT T2T T4 ST4T + T4 G + GT T4T − S
T T (8)
pik pik ≤ qik qik , i = 1, . . . , t, (3)
If Δ is unstructured, then (8) becomes
where pk = [p1k , . . . , ptk ]T , qk = [q1k , . . . , qtk ]T , and the ⎡ ⎤
partitioning is induced by Δk . Each Δk is assumed to be either T1 + λT2 T2T T3T + λT2 T4T
⎣  ⎦ < 0, (9)
a full block or a repeated scalar block, and models a number
T3 + λT4 T2T λ T4 T4T − I
of factors, such as dynamics or parameters, nonlinearities,
that are unknown, unmodeled or neglected. In this work, we for some scalar λ > 0. In this case, condition (9) is both neces-
only consider full blocks for simplicity. sary and sufficient.
In terms of the state space matrices, this formulation can
be viewed as replacing a fixed (A, Bu , Bw ) by (A, Bu , Bw ) ∈ We also use the following Schur complement result [14].
(A, Bu , Bw ), where
 T T
Lemma 2. Let X11 = X11 and X22 = X22 . Then
(A, Bu , Bw ) = A + B p Δk Cq , Bu + B p Δk Dqu , Bw ⎡ ⎤
  (4) X11 X12
⎣ ⎦ ≥ 0 ⇐⇒ X22 ≥ 0,
+B p Δk Dqw | Δk ∈ Δk . T
X12 X22 (10)
In robust model predictive control, we consider norm- T +
 
X22 − X12 X11 X12 ≥ 0, X 12 I − X11 X11 = 0, +
bounded uncertainty and define stability in terms of quad-
ratic stability [11] which requires the existence of a fixed where X11
+
denotes the Moore-Penrose pseudo-inverse of X11 .
Journal of Control Science and Engineering 3

We assume that the pair (A, Bu ) is stabilizable and that the 3. LMI Formulation of Sufficiency Conditions
disturbance is bounded as
 The next theorem, which is the main result of this paper,

∞ derives sufficient conditions, in the form of LMIs, for the
w 2 :=  wkT wk ≤ w, (11) existence of an admissible F.
k=0
Theorem 3. Let all variables, definitions, and assumptions be
where w ≥ 0 is know. as above. Then there exists an admissible state feedback gain
The aim is to find a state feedback control law {uk = Fxk } matrix F if there exists solutions Q = QT ∈ Rn×n , Y ∈ Rnu ×n ,
in L2 , where F ∈ Rnu ×n , such that the following constraints δ j ≥ 0, μ j ≥ 0, ν j ≥ 0, Λ = diag(λ1 I, . . . , λt I) > 0, and
are satisfied for all Δk ∈ Δk .
Ψ j = diag(ψ 1 I, . . . , ψ t I) > 0 to the LMIs shown in (16)–
(1) Closed-loop stability: the matrix A + Bu F is stable.
(19).
(2) Disturbance rejection: for given γ > 0, the transfer
matrix from w to z, denoted as Tzw , is quadratically stable
and satisfies the H∞ constraint ⎡ ⎤
−Q      
⎢ ⎥
z2 < γw 2 , (12) ⎢ ⎥
⎢ 0 −α2 γ2 I      ⎥
⎢ ⎥
⎢ ⎥
for x0 = 0. ⎢ ⎥
⎢ 0 0 −Λ     ⎥
(3) Regulation: for given α > 0, the controlled output ⎢ ⎥
⎢ ⎥

satisfies the H2 constraint: ⎢ AQ + Bu Y α2 Bw B p Λ −Q    ⎥
⎥ < 0,
⎢ ⎥
 ⎢ ⎥
 ⎢C Q + D Y α2 D 0 −Λ   ⎥
∞ ⎢ q qu qw 0 ⎥
z2 :=  zkT zk < α. ⎢ ⎥
(13) ⎢ ⎥
⎢ Cz Q 0 0 0 0 −α2 I  ⎥
k=0 ⎢ ⎥
⎣ ⎦
Dzu Y 0 0 0 0 0 −α2 I
(4) Input constraints: for given H1 , . . . , Hmu ∈ Rnu ×nu ,
H j = H Tj ≥ 0, h1 , . . . , hmu ∈ Rnu ×1 , and u1 , . . . , umu ∈ R, (16)
the inputs satisfy the quadratic constraints: ⎡ ⎤
1  
⎢ ⎥
⎢ 2 2 2 2 2 ⎥
uTk H j uk + 2hTj uk ≤ uj, ∀k; for j = 1, . . . , mu . (14) ⎢γ w α γ w ⎥ ≥ 0,
⎢ ⎥ (17)
⎣ ⎦
x0 0 Q
(5) State/output constraints: for given G1 , . . . , Gmx ∈ ⎡ ⎤
Rn×n , G j = GTj ≥ 0, g1 , . . . , gmx ∈ Rn×1 , and x1 , . . . , xmx ∈ Q   
⎢ ⎥
R the states/outputs satisfy the quadratic constraints: ⎢ 1/2
⎢ H j Y μ j I  ⎥

⎢ ⎥
⎢ ⎥ ≥ 0, j = 1, . . . , mu , (18)
T
xk+1 G j xk+1 + 2g Tj xk+1 ≤ x j , ∀k; for j = 1, . . . , mx . ⎢ T ⎥
⎢−h j Y 0 μ j u j ⎥
(15) ⎢ ⎥
⎣ ⎦
0 0 μj 1
An F ∈ Rnu ×n satisfying these requirements will be called an
admissible state feedback gain.
⎡ ⎤
Q      
⎢ ⎥
⎢ ⎥


0 δjI     ⎥

⎢ ⎥

⎢ 0 0 Ψj    ⎥

⎢ ⎥
⎢ ⎥
⎢ G1/2 (AQ + B Y ) ν G1/2 B G1/2 B Ψ ν I   ⎥
⎢ j u j j w j p j j ⎥ ≥ 0, j = 1, . . . , mx . (19)
⎢ ⎥
⎢ ⎥
⎢ C Q+D Y ν j Dqw Ψj  ⎥
⎢ q qu 0 0 ⎥
⎢ ⎥
⎢ T ⎥
⎢−g (AQ + Bu Y ) −ν j g T Bw −g T B p Ψ j 0 0 ν j x j − δ j w2 ⎥
⎢ j j j ⎥
⎢ ⎥
⎣ ⎦
0 0 0 0 0 νj 1

Here,  represents terms readily inferred from symmetry Remark 4. The variables in the LMI minimization of
and the partitioning of Λ and Ψ j is induced by the Theorem 3 are computed online at time k, the subscript k
partitioning of Δk . If such solutions exist, then F = Y Q−1 . is omitted for convenience.
4 Journal of Control Science and Engineering

Proof. Using uk = Fxk , the dynamics in (1) become Using qk = (Cq + Dqu F)xk + Dqw wk ,
Ccl
⎡  ⎤
Acl
   Cz  T  
xk+1 =(A + Bu F) xk + Bw wk + B p pk , zk =⎣ ⎦ xk . qkT Λqk = xkT Cq + Dqu F Λ Cq + Dqu F xk
Dzu F
(20)  T
+ xkT Cq + Dqu F ΛDqw wk
Consider a quadratic function V (x) = xT Px, P > 0 of the (23)
 
state xk . It follows from (20) that + wkT Dqw
T
Λ Cq + Dqu F xk

V (xk+1 ) − V (xk ) + wkT Dqw


T
ΛDqw wk ,
 
= xkT ATcl PAcl − P xk + xkT ATcl PBw wk + xkT ATcl PB p pk

+ wkT BwT PAcl xk + wkT BwT PBw wk + wkT BwT PB p pk where Λ = diag(λ1 I, . . . λt I).
Substituting (23) into (21), it can be verified that we can
(21) write
+ pkT BTp PAcl xk + pkT BTp PBw wk + pkT BTp PB p pk
⎡ ⎤
xk ⎡ ⎤
  ⎢ ⎥ xk
⎢ ⎥
= xkT wkT pkT K ⎢wk ⎥ − xkT CclT Ccl xk + γ2 wkT wk ,   ⎢ ⎥
⎣ ⎦ ⎢ ⎥
V (xk+1 ) − V (xk ) = xkT wkT pkT K ⎢wk ⎥ + pkT Λpk
pk ⎣ ⎦
pk
where
⎡ ⎤ − qkT Λqk − xkT CclT Ccl xk + γ2 wkT wk ,
ATcl PAcl − P + CclT Ccl ATcl PBw ATcl PB p
⎢ ⎥ (24)
⎢ ⎥
⎢ BwT PAcl BwT PBw − γ2 I BwT PB p ⎥
K =⎢

⎥.

(22)
⎣ ⎦
BTp PAcl BTp PBw BTp PB p
where K is defined in (25) and C pw := Cq + Dqu F.

⎡ ⎤
ATcl PAcl − P + CclT Ccl + C Tpw ΛC pw ATcl PBw + C Tpw ΛDqw ATcl PB p
⎢ ⎥
⎢ ⎥
⎢ ⎥
⎢ ⎥
⎢ ⎥
K =⎢ BwT PAcl + Dqw
T ΛC
pw BwT PBw − γ2 I + Dqw
T ΛD
qw BwT PB p ⎥. (25)
⎢ ⎥
⎢ ⎥
⎢ ⎥
⎣ ⎦
BTp PAcl BTp PBw BTp PB p − Λ

Assuming that limk → ∞ xk = 0 we have z22 = x0T Px0 + γ2 w 22


⎡ ⎤
xk
∞ 
  ∞ 
  ⎢ ⎥ ∞ 
 
T
xk+1 Pxk+1 − xkT Pxk = −x0T Px0 . (26) + xkT wkT pkT K ⎢ ⎥
⎢w k ⎥ + pkT Λpk − qkT Λqk ,
⎣ ⎦
k=0 k=0
k=0 pk
(28)
We write the H2 cost function as
where K is defined in (25).
Setting x0 = 0, it follows from (3), (12), and (28) that
∞ 
  ∞
 z2 < γw 2 if K < 0 and Λ ≥ 0. In this work, we will take
z22 = xkT CclT Ccl xk − γ2 wkT wk + γ2 wkT wk . (27) Λ > 0 to simplify our solution [8]. Using (2) and Lemma 1 it
k=0 k=0 can be shown that
K < 0, (29)
Adding (26) and (27) and carrying out a simple manipu-
lation gives is also sufficient for quadratic stability of Tzw .
Journal of Control Science and Engineering 5

Next, we linearize the matrix inequality (29) by applying The equation above is a bilinear matrix inequality, thus by
a Schur complement, to give defining α2 Λ−1 as a variable Λ, we get the LMI in (16).
⎡ ⎤
Now, it follows from (3), (11), (28), and (29) that,
−P      
⎢ ⎥
⎢ 0 −γ I 
2    ⎥ z22 ≤ x0T Px0 + γ2 w 22 ≤ x0T Px0 + γ2 w 2 . (34)
⎢ ⎥
⎢ ⎥
⎢ 0 0 −Λ    ⎥
⎢ ⎥
⎢ ⎥ Thus the H2 constraint in (13) is satisfied if

⎢ Acl Bw B p −P −1   ⎥
⎥ < 0. (30)
⎢ ⎥
⎢ ⎥
⎢ C pw Dqw 0 0 −Λ−1  ⎥




x0T Px0 + γ2 w2 < α2 . (35)
⎢ Cz 0 0 0 0 −I  ⎥
⎣ ⎦
Dzu F 0 0 0 0 0 −I Dividing by α2 , rearranging and using a Schur complement
give (17) as an LMI sufficient condition for (13).
Pre- and post-multiplying the equation above by diag(P −1 , I, To turn (14) and (15) into LMIs, we first show that
I, I, I, I, I) gives xkT Pxk ≤ α2 ∀k > 0. Since K < 0, it follows from (3) and
(24) that
⎡ ⎤
−P −1      
⎢ ⎥ T

⎢ 0 −γ2 I     ⎥
⎥ xk+1 Pxk+1 − xkT Pxk ≤ γ2 wkT wk . (36)
⎢ ⎥
⎢ 0 −Λ    ⎥
⎢ 0 ⎥
⎢ ⎥ Applying this inequality recursively, we get

⎢ Acl P −1
Bw B p −P −1
  ⎥ ⎥ < 0, (31)
⎢ ⎥
⎢ ⎥
⎢ C pw P −1 Dqw 0 0 −Λ−1  ⎥ k
⎢ ⎥ −1
⎢ ⎥ xkT Pxk ≤ x0T Px0 + γ2 wTj w j
⎢ Cz P −1 0 0 0 0 −I ⎥
⎣ ⎦ j =0 (37)
Dzu FP −1 0 0 0 0 0 −I
≤ x0T Px0 + γ2 w 2 < α2 .
setting Q = α2 P −1 ,
F = Y Pα = Y Q , C pw = Cq + Dqu F
−2 −1

and multiplying through by α2 , the equation above becomes It follows that


⎡ ⎤
−Q        
⎢ ⎥  1/2 2
⎢ ⎥ P xk  < α2 , (38)
⎢ 0 −α2 γ2 I      ⎥
⎢ ⎥
⎢ −α2 Λ     ⎥
⎢ 0 0 ⎥
⎢ ⎥ or equivalently,
⎢ ⎥
⎢ AQ + Bu Y α2 Bw α2 B p −Q    ⎥
⎢ ⎥
⎢ ⎥
⎢Cq Q + Dqu Y α2 Dqw
⎢ 0 0 −α Λ
2 −1
  ⎥

xkT Q−1 xk < 1, ∀k > 0. (39)
⎢ ⎥
⎢ Cz Q 0 0 0 0 −α2 I  ⎥
⎣ ⎦
Next, we transform the constraints in (14) to a set of
Dzu Y 0 0 0 0 0 −α2 I
LMIs. Setting F = Y Q−1 = Y Pα−2 and uk = Fxk ,
< 0.
(32) e j (uk ) := uTk H j uk + 2hTj uk − u j
(40)
Pre- and post-multiplying the equation above by diag(I, I, = xkT Q−1 Y T H j Y Q−1 xk + 2hTj Y Q−1 xk − u j .
Λ−1 , I, I, I, I) gives
⎡ ⎤ Now for any μ j ∈ R, we can write
−Q      
⎢ ⎥
⎢ 0 −α2 γ2 I      ⎥ ⎡ ⎤T
⎢ ⎥
⎢ ⎥   xk
⎢ −α Λ
2 −1
    ⎥ e j (uk ) = −μ j 1 − xkT Q−1 xk −⎣ ⎦
⎢ 0 0 ⎥
⎢ ⎥ 1
⎢ ⎥
⎢ AQ + Bu Y α Bw α B p Λ −Q
2 2 −1
   ⎥
⎢ ⎥ ⎛⎡ ⎤⎞
⎢ ⎥ μ j Q−1 −Q−1 Y T H j Y Q−1 −Q−1 Y T h j
⎢Cq Q + Dqu Y α Dqw
2 0 0 −α Λ
2 −1   ⎥



⎥ × ⎝⎣ ⎦⎠
⎢ Cz Q 0 0 0 0 −α2 I  ⎥ −hTj Y Q−1 −μ j + u j
⎣ ⎦
⎡ ⎤
Dzu Y 0 0 0 0 0 −α2 I xk
× ⎣ ⎦.
< 0. 1
(33) (41)
6 Journal of Control Science and Engineering

Therefore a sufficient condition for e j (uk ) ≤ 0 is μ j ≥ 0 Finally, to obtain an LMI formulation of the state con-
and straints (15), the following analogous steps are carried out:
⎡ ⎤
μ j Q−1 −Q−1 Y T H j Y Q−1 −Q−1 Y T h j
⎣ ⎦≥ 0. (42)
−hTj Y Q−1 uj − μj ⎡ ⎤T ⎡ ⎤ ⎡ ⎤
xk AT xk
⎢ ⎥ ⎢ cl ⎥  ⎢ ⎥
Pre- and post-multiplying by diag(Q, I) gives a bilinear ⎢ ⎥ ⎢ T⎥ ⎢ ⎥
f j (xk+1 ) := ⎢wk ⎥ ⎢ Bw ⎥G j Acl Bw B p ⎢wk ⎥
matrix inequality and applying a Schur complement, we get ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
pk BTp pk
⎡ ⎤ (44)
μjQ Y T H 1/2
j −Y T h j ⎡ ⎤
⎢ 1/2 ⎥ xk
⎢H Y I ⎥   ⎢ ⎥
⎢ j 0 ⎥ ≥ 0. (43) ⎢ ⎥
⎣ ⎦ + 2g Tj Acl Bw B p ⎢wk ⎥ − x j .
⎣ ⎦
−hTj Y 0 uj − μj
pk
Pre- and post-multiplying the above bilinear matrix inequal-
ity by diag(μ−j 1/2 , μ1/2
j , μ j ) and applying a Schur comple-
1/2

ment, this is equivalent to the LMI in (18). Now for any ν j , ρ j ∈ R, we can write

   
f j (xk+1 ) = −ν j 1 − xkT Q−1 xk − ρ j w2 − wkT wk
⎛⎡  T   ⎤⎞
⎡ ⎤T ⎡ ⎤
⎜⎢ν j Q − Acl + B p Δk C pw G j Acl + B p Δk C pw  
−1
xk ⎥⎟ x k
⎢ ⎥ ⎜⎢  T   ⎥⎟⎢ ⎥
⎢ ⎥ ⎜⎢ ⎥⎟⎢ ⎥
− ⎢w k ⎥ ⎜⎢ − B + B Δ D G A + B Δ C ρ I  ⎥⎟⎢wk ⎥.
⎣ ⎦ ⎜⎢ w p k qw j cl p k pw j ⎥⎟⎣ ⎦
⎝⎣     ⎦⎠
1 −g Tj Acl + B p Δk C pw −g Tj Bw + B p Δk Dqw −ν j − ρ j w 2 + x j
1
(45)

Therefore a sufficient condition for f j (xk+1 ) ≥ 0 is ν j ≥ 0,


ρ j ≥ 0 and

⎡ ⎤
ν j Q−1  
⎢ ⎥
⎢ 0 ρjI  ⎥
⎢ ⎥
⎣     ⎦
−g Tj Acl + B p Δk C pw −g Tj Bw + B p Δk Dqw x j − ν j − ρ j w 2
⎡ T ⎤ (46)
⎢ Acl + B p Δk C pw
⎥     
⎢ T ⎥
−⎢ ⎥
⎢ Bw + B p Δk Dqw ⎥G j Acl + B p Δk C pw Bw + B p Δk Dqw 0 ≥ 0.
⎣ ⎦
0

Applying Schur complement to the above equation, we get

⎡ ⎤
ν j Q−1   
⎢ ⎥
⎢ 0 ρjI  ⎥
⎢ ⎥
⎢ ⎥
⎢ T 
T
  ⎥ ≥ 0. (47)
⎢−g j Acl + B p Δk C pw −g j Bw + B p Δk Dqw x j − ν j − ρ j w ⎥
2
⎢ ⎥
⎣     ⎦
G1/2
j Acl + B p Δk C pw G1/2
j Bw + B p Δk Dqw 0 I
Journal of Control Science and Engineering 7

When Δ is structured we proceed as follows. For norm- Substituting the variables from (48) into (50) and swap-
bounded uncertainty, we first separate the terms involving ping the third and sixth diagonal elements, we get
modeling uncertainties from the other terms as
⎡ ⎤
ν j Q−1   
⎢ ⎥ ⎡ ⎤
⎢ 0 ρjI  ⎥ ν j Q−1     
⎢ ⎥
⎢ ⎥ ⎢ ⎥
⎢ T T ⎥ ⎢ ⎥
⎢ −g j Acl −g j Bw x j − ν j − ρ j w ⎥
2
⎢ 0 ρjI     ⎥
⎢ ⎥ ⎢ ⎥
⎣ ⎦ ⎢ ⎥
−G j Acl −G j Bw
1/2 1/2
0 I ⎢ ⎥
⎢ 0 0 S    ⎥
   ⎢ ⎥
⎢ ⎥
−T1 ⎢ ⎥ ≥ 0.
⎢ 1/2 ⎥
⎡ ⎤ ⎢ G j Acl G1/2
j Bw G j B p
1/2
I   ⎥
⎢ ⎥
0 ⎢ ⎥
⎢ ⎥ ⎢ ⎥
⎢ 0 ⎥ ⎢ C pw Dqw 0 0 S−1  ⎥
⎢ ⎥   ⎢ ⎥
⎢ ⎥ ⎢ ⎥
+⎢ T ⎥Δk C pw Dqw 0 0 ⎣ ⎦
⎢−g j B p ⎥    (48) −g Tj Acl −g Tj Bw −g Tj B p 0 0 x j − ν j − ρ j w2
⎢ ⎥
⎣ 1/2 ⎦ T3
G j Bp (51)
   Pre- and post-multiplying by diag(Q, I, I, I, I, I) gives
−T2

⎡ ⎤
C Tpw
⎢ ⎥
⎢D T ⎥  
⎢ qw ⎥ T ⎡ ⎤
+⎢ ⎥ T T 1/2
⎢ 0 ⎥Δk 0 0 −B p g j B p G ≥ 0. νjQ     
⎢ ⎥    ⎢ ⎥
⎣ ⎦ ⎢ ⎥
T
−T2 ⎢ 0 ρjI     ⎥
0 ⎢ ⎥
⎢ ⎥
   ⎢ ⎥
⎢ 0 0 S    ⎥
T3T ⎢ ⎥
⎢ ⎥
⎢ ⎥ ≥ 0.
⎢ 1/2 ⎥
⎢ G j Acl G j Bw G j B p I  
1/2 1/2
Equation (48) is equivalent to −T1 − T2 ΔT3 − T3T ΔT T2T > 0, ⎥
⎢ ⎥
where T4 = 0. By using (8) from Lemma 1, we have ⎢ ⎥
⎢ ⎥
⎢ C pw Dqw 0 0 S−1  ⎥
⎡ ⎤ ⎢ ⎥
⎢ ⎥
−T1 − T3T ST3 −T2 ⎣ ⎦
⎣ ⎦ > 0. (49) −g Tj Acl −g Tj Bw −g Tj B p 0 0 x j − ν j − ρ j w2
−T2T S
(52)
Applying Schur complement to (49), we get
⎡ ⎤
−T1 T3T −T2
⎢ ⎥

⎢ T3 S−1 0 ⎥
⎥ > 0. (50) The above equation is bilinear and thus we pre- and
⎣ ⎦ post-multiply it by diag(ν−j 1/2 , ν1/2
j , ν j , ν j , ν j , ν j ) to
1/2 1/2 1/2 1/2
−T2T 0 S obtain

⎡ ⎤
Q     
⎢ ⎥
⎢ ⎥
⎢ 0 νjρjI     ⎥
⎢ ⎥
⎢ ⎥
⎢ 0 νjS    ⎥
⎢ 0 ⎥
⎢ ⎥
⎢ ⎥ ≥ 0. (53)
⎢ G1/2 A ν G1/2 B ν G1/2 B ν I   ⎥
⎢ j cl j j w j j p j ⎥
⎢ ⎥
⎢ ⎥
⎢ ⎥
⎢ C pw ν j Dqw 0 0 ν j S−1  ⎥
⎢ ⎥
⎣ ⎦
T T T
−g j Acl −ν j g j Bw −ν j g j B p 0 0 νjxj − νj − νjρjw
2 2

From the above equation, we can see that the variable S and uniform, we pre- and post-multiply by diag(I, I, S−1 , I, I, I)
its inverse appear in the matrix inequality, thus to make it to get
8 Journal of Control Science and Engineering

$$\begin{bmatrix} Q & \star & \star & \star & \star & \star \\ 0 & \nu_j \rho_j I & \star & \star & \star & \star \\ 0 & 0 & \nu_j S^{-1} & \star & \star & \star \\ G_j^{1/2} A_{cl} Q & \nu_j G_j^{1/2} B_w & \nu_j G_j^{1/2} B_p S^{-1} & \nu_j I & \star & \star \\ C_{pw} Q & \nu_j D_{qw} & 0 & 0 & \nu_j S^{-1} & \star \\ -g_j^{T} A_{cl} Q & -\nu_j g_j^{T} B_w & -\nu_j g_j^{T} B_p S^{-1} & 0 & 0 & \nu_j \bar{x}_j - \nu_j^2 - \nu_j \rho_j \bar{w}^2 \end{bmatrix} \geq 0. \qquad (54)$$

Thus the above equation is a nonlinear matrix inequality in $\nu_j^2$ and bilinear in $\nu_j \rho_j$ and $\nu_j S^{-1}$; hence we define the new variables $\Psi_j = \nu_j S^{-1}$ and $\delta_j = \nu_j \rho_j$ and, finally, applying a Schur complement, we obtain the LMI of (19).

Remark 5. The input and state constraints used in this paper are more general than those used in [9] in that we allow linear terms, which makes it possible to include asymmetric or hyperplane constraints.

Remark 6. When there is no uncertainty, the problem reduces to the disturbance rejection technique considered in [9].

Remark 7. When there is no disturbance, the results reduce to those of [7].

Remark 8. The method used in this paper guarantees recursive feasibility (see [15, Chapter 4]). Also see [16] for a different approach.

4. Numerical Examples

In this section, we present two examples that illustrate the implementation of the proposed scheme. In the first example we consider a solenoid system, and in the second example we consider a coupled spring-mass system. The solution to the linear objective minimization was computed using the LMI Control Toolbox in the MATLAB environment, and $\alpha_2$ was set as a variable.

4.1. Example 1. We consider a modified version of the solenoid system adapted from [17]. The system (see Figure 1) consists of a central object wrapped with coil and is attached to a rigid surface via a spring and damper, which forms a passive vibration isolator. The solenoid is one of the common actuator components. The basic principle of operation involves a moving ferrous core (a piston) that moves inside a wire coil. Normally, the piston is held outside the core by a spring and damper. When a voltage is applied to the coil and current flows, the coil builds up a magnetic field that attracts the piston and pulls it into the center of the coil. The piston can be used to supply a linear force. Applications of this include pneumatic valves and car door openers.

The system is modeled by

$$
\begin{aligned}
\begin{bmatrix} x^1_{k+1} \\ x^2_{k+1} \end{bmatrix} &= \begin{bmatrix} 0.6148 & 0.0315 \\ -0.3155 & -0.0162 \end{bmatrix} \begin{bmatrix} x^1_k \\ x^2_k \end{bmatrix} + \begin{bmatrix} 0.0385 \\ 0.0315 \end{bmatrix} u_k + \begin{bmatrix} 0.00385 \\ 0.00315 \end{bmatrix} w_k + \begin{bmatrix} 0 \\ 10 \end{bmatrix} p_k, \\
q_k &= C_q x_k + D_{qu} u_k + D_{qw} w_k, \qquad p_k = \Delta q_k, \qquad z_k = \begin{bmatrix} C_z x_k \\ D_{zu} u_k \end{bmatrix},
\end{aligned} \qquad (55)
$$

where

$$C_q = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad D_{qu} = 1, \qquad D_{qw} = 0, \qquad (56)$$

and $x^1$ and $x^2$ are the position and the velocity of the plate. The cost function is specified using $C_z = \operatorname{diag}(1, 1)$ and $D_{zu} = 10$. The magnetic force $u$ is the control variable, and $w$ is the external disturbance to the system, which is bounded in the range $[-1, 1]$. The initial state is given as $x_0 = [1\;\;0]^T$. We choose $\gamma^2 = 0.01$ and $\gamma^2 = 1$. Figures 2 and 3 compare the closed-loop response for the high and low disturbance rejection levels, respectively, for randomly generated $\Delta$'s. The optimization is feasible, the response is stable, and the performance is good. A control constraint of $|u_k| \leq 0.5$ is imposed, which is satisfied. The computation time for 100 samples was about 10 s, that is, 0.1 s per sample.

4.2. Example 2. We revisit a modified version of Example 2 reported in [7]. The system consists of a two-mass-spring model whose discrete-time equivalent is obtained using Euler first-order approximation with a sampling time of 0.1 s.
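Before turning to the model of Example 2, the following minimal Python sketch illustrates how the uncertain solenoid model (55)-(56) of Example 1 can be simulated in closed loop with a norm-bounded uncertainty and a bounded disturbance. The state-feedback gain K below is a placeholder chosen for illustration only; it is not the gain produced by the LMI synthesis of this paper.

```python
import numpy as np

# Uncertain solenoid model (55):
#   x_{k+1} = A x_k + B u_k + Bw w_k + Bp p_k,  q_k = Cq x_k + Dqu u_k,  p_k = Delta * q_k.
A  = np.array([[0.6148, 0.0315], [-0.3155, -0.0162]])
B  = np.array([[0.0385], [0.0315]])
Bw = np.array([[0.00385], [0.00315]])
Bp = np.array([[0.0], [10.0]])
Cq = np.array([[1.0, 0.0]])
Dqu = 1.0

K = np.array([[-0.3, -0.05]])        # placeholder state-feedback gain (assumption)
x = np.array([[1.0], [0.0]])         # initial state x0 = [1 0]^T
rng = np.random.default_rng(0)

for k in range(100):
    u = np.clip(K @ x, -0.5, 0.5)            # input constraint |u_k| <= 0.5
    w = rng.uniform(-1.0, 1.0, size=(1, 1))  # disturbance bounded in [-1, 1]
    delta = rng.uniform(-1.0, 1.0)           # norm-bounded uncertainty, |Delta| <= 1
    q = Cq @ x + Dqu * u
    p = delta * q
    x = A @ x + B @ u + Bw @ w + Bp @ p      # state update of (55)
```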

Figure 1: Solenoid system (coil, spring, and damper arrangement with plate position $x^1$ and velocity $x^2$, magnetic force $u$, and external disturbance $w$).

Figure 2: Closed-loop response of the solenoid system with $\gamma^2 = 0.01$: (a) magnetic force $u$; (b) states (position $x^1$ and velocity $x^2$).

Figure 3: Closed-loop response of the solenoid system with $\gamma^2 = 1$: (a) magnetic force $u$; (b) states (position $x^1$ and velocity $x^2$).

The model in terms of disturbance and perturbation variables is

$$
x_{k+1} = \begin{bmatrix} 1 & 0 & 0.1 & 0 \\ 0 & 1 & 0 & 0.1 \\ -0.1\dfrac{K}{m_1} & 0.1\dfrac{K}{m_1} & 1 & 0 \\ 0.1\dfrac{K}{m_2} & -0.1\dfrac{K}{m_2} & 0 & 1 \end{bmatrix} x_k
+ \begin{bmatrix} 0 \\ 0 \\ \dfrac{0.1}{m_1} \\ 0 \end{bmatrix} u_k
+ B_w w_k + B_p p_k,
\qquad
\begin{aligned}
q_k &= C_q x_k + D_{qu} u_k + D_{qw} w_k, \\
p_k &= \Delta q_k, \\
y_k &= \begin{bmatrix} 0 & 1 & 0 & 0 \end{bmatrix} x_k,
\end{aligned} \qquad (57)
$$

where

$$B_w = \begin{bmatrix} 0 \\ 0.01 \\ -0.1 \\ 0 \end{bmatrix}, \qquad B_p = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0.1 \end{bmatrix}, \qquad C_q = \begin{bmatrix} 0.475 & -0.475 & 0 & 0 \end{bmatrix}, \qquad D_{qw} = 0, \quad D_{qu} = 0, \qquad (58)$$

and $x^1$ and $x^2$ are the positions of body 1 and 2, and $x^3$ and $x^4$ are their velocities, respectively. $m_1$ and $m_2$

Figure 4: Closed-loop response of the coupled spring-mass system with $\gamma^2 = 0.45$: (a) input $u$; (b) states $x^1$–$x^4$.

Figure 5: Closed-loop response of the coupled spring-mass system with $\gamma^2 = 4$: (a) input $u$; (b) states $x^1$–$x^4$.

Figure 6: Upper bound on the $H_2$ cost function ($\alpha_2$) for the solenoid system: (a) system with $\gamma^2 = 0.01$; (b) system with $\gamma^2 = 1$.

are the masses of the two bodies and $K$ is the spring constant. The initial state is given as $x_0 = [1\;\;1\;\;0\;\;0]^T$. The cost function is specified using $C_z = \operatorname{diag}(1, 1, 1, 1)$ and $D_{zu} = 1$. We consider the system with $m_1 = m_2 = 1$ and $K \in [0.5, 10]$.

A persistent disturbance of the form $w_i = 0.1$ at every sampling instant was considered. Here we set $\gamma^2 = 0.45$ and $\gamma^2 = 4$. Figures 4 and 5 compare the closed-loop response for the high and low disturbance rejection levels, respectively, for randomly generated $\Delta$'s. The value of $\gamma^2$ for high disturbance rejection was the lowest value for which a feasible solution exists. An input constraint of $|u_k| \leq 1$ is imposed, which is satisfied. The computation time for 300 samples was about 47 s, that is, 0.16 s per sample.

4.3. Discussion. Note that the performance and response of the systems based on the high disturbance rejection level were better than those obtained using the low disturbance rejection level, since the states and control are regulated to smaller steady-state values. Constraints on the input were satisfied in both cases; however, the constraints were more conservative with respect to the control signal for the low disturbance rejection level. For example, in the solenoid system the control signal for the high disturbance rejection level was 0.4826 and that for the low disturbance rejection level was 0.4144. The issue of conservativeness in the mixed H2/H∞ setting has been considered in [18]. For the systems considered, the upper bound on the H2 cost function $\alpha_2$ is depicted in Figures 6 and 7 for the high and low disturbance rejection

Figure 7: Upper bound on the $H_2$ cost function ($\alpha_2$) for the coupled spring-mass system: (a) system with $\gamma^2 = 0.45$; (b) system with $\gamma^2 = 4$.

levels. It can be seen that the performance coefficient obtained for the high and low disturbance rejection levels is small. However, the offset level on the low disturbance level is higher than that for the high disturbance level. This is due to the higher value of $\gamma^2$ in $x_0^T P x_0 + \gamma^2 \bar{w}^2$. In Example 2, we have considered uncertainty in the model by using the variable spring constant $K$.

5. Conclusion

In this paper, we proposed a robust model predictive control design technique using mixed H2/H∞ for time-invariant discrete-time linear systems subject to constraints on the inputs and states. This method takes account of disturbances naturally by imposing the H∞-norm constraint in (14) and thus extends the work in [9] by the introduction of structured, norm-bounded uncertainty. The uncertain system was represented by LFTs. The development is based on a full state feedback assumption, and the on-line optimization involves the solution of an LMI-based linear objective minimization (convex optimization). Hence, the resulting state-feedback control law minimizes an upper bound on the robust objective function. The new approach reduces to that of [9] when there are no perturbations present in the system and to [7] when there are no disturbances. Thus, we have been able to show that it is possible to handle uncertainty and disturbance in the mixed H2/H∞ model predictive control design approach. The two examples illustrate the application of the proposed method.

Acknowledgments

This work was supported in part by the Commonwealth Scholarship Commission in the United Kingdom (CSCUK) under Grant NGCS-2004-258. With regard to this paper, particular thanks go to Dr. Imad M. Jaimoukha and Dr. Eric Kerrigan.

References

[1] E. F. Camacho and C. Bordons, Model Predictive Control, Springer, 2nd edition, 2004.
[2] J. Richalet, A. Rault, J. L. Testud, and J. Papon, "Model predictive heuristic control. Applications to industrial processes," Automatica, vol. 14, no. 5, pp. 413–428, 1978.
[3] P. E. Orukpe, X. Zheng, I. M. Jaimoukha, A. C. Zolotas, and R. M. Goodall, "Model predictive control based on mixed H2/H∞ control approach for active vibration control of railway vehicles," Vehicle System Dynamics, vol. 46, no. 1, pp. 151–160, 2008.
[4] H. S. Witsenhausen, "A minimax control problem for sampled linear systems," IEEE Transactions on Automatic Control, vol. 13, no. 1, pp. 5–21, 1968.
[5] R. S. Smith, "Robust model predictive control of constrained linear systems," in Proceedings of the 2004 American Control Conference (ACC '04), pp. 245–250, July 2004.
[6] P. E. Orukpe and I. M. Jaimoukha, "A semidefinite relaxation for the quadratic minimax problem in H∞ model predictive control," in Proceedings of the IEEE Conference on Decision and Control, New Orleans, La, USA, 2007.
[7] M. V. Kothare, V. Balakrishnan, and M. Morari, "Robust constrained model predictive control using linear matrix inequalities," Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
[8] R. S. Smith, "Model predictive control of uncertain constrained linear systems: LMI-based," Tech. Rep. CUED/F-INFENG/TR.462, University of Cambridge, Cambridge, UK, 2006.
[9] P. E. Orukpe, I. M. Jaimoukha, and H. M. H. El-Zobaidi, "Model predictive control based on mixed H2/H∞ control approach," in Proceedings of the American Control Conference (ACC '07), pp. 6147–6150, July 2007.
[10] P. E. Orukpe and I. M. Jaimoukha, "Robust model predictive control based on mixed H2/H∞ control approach," in Proceedings of the European Control Conference, pp. 2223–2228, Budapest, Hungary, 2009.
[11] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in Systems and Control Theory, SIAM Studies in Applied Mathematics, 1994.
[12] A. Packard and J. Doyle, "The complex structured singular value," Automatica, vol. 29, no. 1, pp. 71–109, 1993.
[13] K. Sun and A. Packard, "Robust H2 and H∞ filters for uncertain LFT systems," in Proceedings of the 41st IEEE Conference on Decision and Control, pp. 2612–2618, Las Vegas, Nev, USA, December 2002.
[14] C. K. Li and R. Mathias, "Extremal characterizations of the Schur complement and resulting inequalities," SIAM Review, vol. 42, no. 2, pp. 233–246, 2000.

[15] P. E. Orukpe, Model predictive control for linear time invariant


systems using linear matrix inequality techniques, Ph.D. thesis,
Imperial College, London, UK, 2009.
[16] H. Huang, L. Dewei, and X. Yugeng, “Mixed H2 /H∞ robust
model predictive control based on mixed H2 /H∞ with norm-
bounded uncertainty,” in Proceedings of the 30th Chinese
Control Conference, pp. 3327–3331, Yantai, China, 2011.
[17] D. Jia, B. H. Krogh, and O. Stursberg, “LMI approach to robust
model predictive control,” Journal of Optimization Theory and
Applications, vol. 127, no. 2, pp. 347–365, 2005.
[18] P. E. Orukpe, “Towards a less conservative model predictive
control based on mixed H2 /H∞ control approach,” Interna-
tional Journal of Control, vol. 84, no. 5, pp. 998–1007, 2011.
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 159072, 18 pages
doi:10.1155/2012/159072

Research Article
Experimental Application of Predictive Controllers

C. H. F. Silva,1 H. M. Henrique,2 and L. C. Oliveira-Lopes2


1 Cemig Geração e Transmissão SA, Avenida Barbacena 1200, 16◦ Andar Ala B1 (TE/AE), 30190-131 Belo Horizonte, MG, Brazil
2 Faculdade de Engenharia Química, Universidade Federal de Uberlândia, Avenida João Naves de Ávila, 2121,
Bloco 1K do Campus Santa Mônica 38408-100 Uberlândia, MG, Brazil

Correspondence should be addressed to L. C. Oliveira-Lopes, lcol@ufu.br

Received 31 May 2011; Revised 22 October 2011; Accepted 25 October 2011

Academic Editor: Baocang Ding

Copyright © 2012 C. H. F. Silva et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Model predictive control (MPC) has been used successfully in industry. The basic characteristic of these algorithms is the
formulation of an optimization problem in order to compute the sequence of control moves that minimize a performance function
on the time horizon with the best information available at each instant, taking into account operation and plant model constraints.
The classical algorithms Infinite Horizon Model Predictive Control (IHMPC) and Model Predictive Control with Reference System
(RSMPC) were used for the experimental application in the multivariable control of the pilot plant (level and pH). The simulations
and experimental results indicate the applicability and limitation of the control technique.

1. Introduction

The process control is related to the application of the automatic control principles to industrial processes. The globalization effect upon industries brought a perception of the importance associated to the product quality over organization profits. Because of that, the process control has been more and more demanded and explored, in order not only to assure that the process is according to acceptable performance levels but also to address legal requirements in terms of safety and quality of the products [1]. In this context, predictive controllers are able to deal with system requirements in a proper way and are simple to be implemented. One of the biggest concerns on control theory is related to the stability of the closed loop of systems. It is natural that only a stable closed-loop response is considered for real implementation. A representative algorithm of this controller class is Infinite Horizon Model Predictive Control (IHMPC) [2].

During the closed-loop project, there are some theoretical tools that can be incorporated into the controllers and aggregate desirable characteristics. This is the case of predictive controllers that incorporate a reference path in their formulation (RSMPC) [3]. In this case, the value of the control action is computed from a QDMC (Quadratic Dynamic Matrix Control) problem that has a first-order reference system incorporated directly into the formulation. Consequently, there is an effort for eliminating undesirable situations of too high speed and for balancing the system response.

Most industrial chemical processes have a nonlinear characteristic, although linear controllers are used in many of these systems. The great advantage of this approximation is having an analytical solution to the control problem and also the low computational complexity. This kind of approach is very common in industry practice.

The pH system is used as a benchmark for applications in process control, mainly because of its strong nonlinear behavior. For the experimental application case addressed in this work, it is a control of multiple inputs and multiple outputs (MIMO), in which inputs are acid and base flows and outputs are reactor level and output stream pH. This system presents not only a nonlinear feature, but also an interaction between inputs and outputs linked to the system directionality. In this kind of system, classical controllers do not work to keep the stability and required performance. In addition, MPC solves the problem of open-loop optimal control in real time, considering the current state of the system to be controlled as much as a policy feedback offline. As can be seen in the experimental data, the multivariable system could not be controlled with optimal classical controllers.

Figure 1: MPC and the receding horizon strategy (past and future, setpoint, output prediction, control moves, control horizon, prediction horizon).

That is why it was chosen to implement IHMPC and RSMPC predictive controllers in real time [4].

Section 2 presents the control algorithms used in the implementation described in this work. In Section 3 there is a description of the experimental system. Section 4 presents the simulations with the controllers and experimental results of the closed loop. Section 5 shows the main conclusions of this paper.

2. Model Predictive Control

The model predictive control (MPC) theory originated at the end of the 1970s and has developed considerably ever since [5–10]. MPC became an important control strategy of industrial applications, mainly due to the great success of its implementations in the petrochemical industry. The reason for MPC being so popular might be due to its ability to deal with quite difficult control challenges, although other characteristics, such as its ability to control constrained multivariable plants, have been claimed as one of the most important features [6]. The ability to handle input, output, and internal state constraints can be assigned as the most significant contribution to the many successful industrial applications [11–13]. Other benefits to consider are the possibility of embedding safety limits, the possibility of obtaining advantages in using it in highly nonlinear plants as well as in time-varying plants, the consideration of process noise in the formulation, the high level of control performance, reduced maintenance requirements, and improved flexibility and agility [14].

The various predictive control algorithms differ from each other depending on the way the predictive model is used to represent the process, the noise description, and on the cost function to be minimized. There are many well-succeeded predictive control applications not only in the chemical industry but also in other areas [15]. The use of state-space models, in spite of other formulations, was responsible for a substantial maturing of the predictive control theory during the 1990s [16]. The state-space formulation not only allows the application of linear system theorems already known, but also makes it easier to generalize for more complex cases. In this situation, the MPC controller can be understood as a compensator based on a state observer, and its stability, performance, and robustness are determined by the observer poles, established directly by parameter adjustment, and by regulator poles, determined by performance horizons and weights [17]. Figure 1 shows the receding horizon strategy, which is the center of the MPC theory.

Figure 2, based on Richalet [18] and Richards and How [19], shows interfacial areas of the MPC control developments.

Considering its application to MIMO processes, MPC deals directly with coordination and balancing interactions between inputs and outputs and also with associated constraints [20]. In spite of being really effective in suitable situations, MPC still has limitations as, for instance, operation difficulties, high maintenance cost (as it demands specialized work), and flexibility loss, which can result in controller weakness. These limitations are not connected only to the algorithm itself, but also to the necessity of a plant model, for which maintenance is necessary. In practical terms, the limits of MPC applicability and performance should not be connected to the algorithm deficiencies, but to the issues linked to the modeling difficulties, sensor adequacy, and insufficient robustness in the presence of faults [15, 21].

2.1. IHMPC. The IHMPC formulation was introduced by Muske and Rawlings [2]. For simplicity the algorithm will be presented only for stable systems. For unstable systems see the reference section [2]. The discrete dynamical system used is shown in (1), in which $y$ is the output vector, $u$ is the input vector, and $x$ is the state vector:

$$x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k. \qquad (1)$$

The receding horizon regulator is based on the minimization of the infinite horizon open-loop quadratic objective function at time $k$ (2). $Q$ is a symmetric positive semidefinite penalty matrix on the outputs. $R$ is a symmetric positive semidefinite penalty matrix on the inputs. $S$ is a symmetric

Figure 2: MPC and related areas (mathematics and optimization, artificial intelligence, integrated project, hierarchical control, biological and physical systems, modeling, functional modes, diagnosis and alarms, man-machine interface, training, computational methods, robust MPC, hybrid MPC, and fault-tolerant MPC).

Figure 3: Experimental system (photograph).

Figure 4: Diagram representation of the experimental system (acid stream HNO3 (1) and base stream NaOH (2) with flow controllers/transmitters, buffer stream NaHCO3 (3), outlet stream (4), level transmitter LT, and pH transmitter pHT).

positive semidefinite penalty matrix on the rate of the input change, with $\Delta u = u_{k+i} - u_k$. The vector $u^N$ contains the $N$ future open-loop control moves (3). The infinite horizon open-loop objective function can be expressed as a finite horizon open-loop objective. For a stable system, $\overline{Q}$ is defined as the infinite sum (4). This infinite sum can be determined from the solution of the discrete Lyapunov equation (5). Using simple algebraic manipulation, the quadratic objective

Figure 5: Titration curves of the neutralization system (pH versus $q_2/q_1$) for $q_3 = 0$, 0.05, 0.08, 0.1, and 0.5 mL/s, compared with experimental data.

Figure 7: Random inputs ($q_1$, $q_2$) used for parameter estimation.

Figure 6: Error in predicting the level of the reactor (frequency histogram of the prediction error in $h$).

Figure 8: Open-loop run behavior — reactor level $h$ versus time, experimental data and model (Mod indicates that it is a result of model simulation).

is shown in (6). The matrices $H$, $G$, and $F$ are shown in (7)–(9):

$$J_k = \min_{u^N} \sum_{i=0}^{\infty} \left( y_{k+i}^{T} Q y_{k+i} + u_{k+i}^{T} R u_{k+i} + \Delta u_{k+i}^{T} S \Delta u_{k+i} \right), \qquad (2)$$

$$u^N = \begin{bmatrix} u_k \\ u_{k+1} \\ \vdots \\ u_{k+N-1} \end{bmatrix}, \qquad (3)$$

$$\overline{Q} = \sum_{i=0}^{\infty} \left(A^{T}\right)^{i} C^{T} Q C A^{i}, \qquad (4)$$

$$\overline{Q} = C^{T} Q C + A^{T} \overline{Q} A, \qquad (5)$$

$$\min_{u^N} \Phi_k = \min_{u^N} \left[ \left(u^N\right)^{T} H u^N + 2 \left(u^N\right)^{T} \left(G x_k - F u_{k-1}\right) \right], \qquad (6)$$

$$H = \begin{bmatrix}
B^{T} \overline{Q} B + R + 2S & B^{T} A^{T} \overline{Q} B - S & \cdots & B^{T} \left(A^{T}\right)^{N-1} \overline{Q} B \\
B^{T} \overline{Q} A B - S & B^{T} \overline{Q} B + R + 2S & \cdots & B^{T} \left(A^{T}\right)^{N-2} \overline{Q} B \\
\vdots & \vdots & \ddots & \vdots \\
B^{T} \overline{Q} A^{N-1} B & B^{T} \overline{Q} A^{N-2} B & \cdots & B^{T} \overline{Q} B + R + 2S
\end{bmatrix}, \qquad (7)$$

Figure 9: Real data versus simulated data (Mod) — reactor level $h$ and pH over time.
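The infinite-horizon weight $\overline{Q}$ of (4)-(5) can be computed directly from the discrete Lyapunov equation. The following minimal sketch shows one way to do this with SciPy; the matrices A, C, and Q used here are small illustrative values (assumptions), not the plant model of this paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Qbar satisfies the discrete Lyapunov equation (5): Qbar = C^T Q C + A^T Qbar A.
# solve_discrete_lyapunov(a, q) returns X such that  a X a^H - X + q = 0,
# i.e. X = a X a^H + q, so passing a = A^T and q = C^T Q C yields X = Qbar.
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # illustrative stable matrix
C = np.array([[1.0, 0.0]])
Q = np.eye(1)

Qbar = solve_discrete_lyapunov(A.T, C.T @ Q @ C)

# Sanity check against the truncated infinite sum (4):
approx = sum(np.linalg.matrix_power(A.T, i) @ C.T @ Q @ C @ np.linalg.matrix_power(A, i)
             for i in range(200))
assert np.allclose(Qbar, approx, atol=1e-6)
```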

Figure 10: Structure of operation of the experimental plant — LabVIEW and MATLAB software on the computer (setpoints $h_{SP}$ and $pH_{SP}$, MPC or RSMPC layer, PID flow loops PID-$q_1$ and PID-$q_2$, calibration curves, filters, and data acquisition input/output channels with 4–20 mA signals) connected to the pumps, flow transmitters, and level and pH transmitters of the process.

$$G = \begin{bmatrix} B^{T} \overline{Q} A \\ B^{T} \overline{Q} A^{2} \\ \vdots \\ B^{T} \overline{Q} A^{N} \end{bmatrix}, \qquad (8)$$

$$F = \begin{bmatrix} S \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \qquad (9)$$

2.2. RSMPC. The RSMPC strategy is basically a transformation of an optimization problem with constraints into a quadratic programming problem [3]. The RSMPC development is presented next. Consider a general system described by (10). By linearizing (10) with the Taylor series expansion around the point immediately before the current operational point, one obtains (11). The matrices are defined in (12)–(16). Equation (11) is integrated from $t$ to $t + \Delta t$ and, by assuming $u_k$ constant between samples, one can write (17):

$$\frac{dx}{dt} = f(x, u), \qquad y = h(x), \qquad (10)$$

$$\frac{dx}{dt} = A_{k-1} x + B_{k-1} u + f_{k-1}, \qquad y = C_{k-1} x + h_{k-1}, \qquad (11)$$

$$A_{k-1} = \left.\frac{\partial f}{\partial x}\right|_{x = x_{k-1},\, u = u_{k-1}}, \qquad (12)$$

$$B_{k-1} = \left.\frac{\partial f}{\partial u}\right|_{x = x_{k-1},\, u = u_{k-1}}, \qquad (13)$$

$$C_{k-1} = \left.\frac{\partial h}{\partial x}\right|_{x = x_{k-1}}, \qquad (14)$$

$$f_{k-1} = f(x, u)\big|_{x = x_{k-1},\, u = u_{k-1}}, \qquad (15)$$

$$h_{k-1} = h\big|_{x = x_{k-1}}, \qquad (16)$$

$$\bar{x}_{k+1} = \Phi x_k + \Psi u_k + \Omega f_{k-1}, \qquad \bar{y}_k = C x_k + h_{k-1}, \qquad (17)$$

where

$$\Phi = e^{A \Delta t}, \qquad \Psi = A^{-1}\left(e^{A \Delta t} - I\right)B, \qquad \Omega = A^{-1}\left(e^{A \Delta t} - I\right). \qquad (18)$$

Since $A$ is not singular, (17) can be written for each prediction instant from $k = 1$ to $k = P$, where $P$ is the prediction horizon and $N$ is the control horizon, with $P \geq N$ and $\Delta u_{k+j} = 0$ for $N \leq j \leq P$. The resulting set of equations is shown below, where

$$\bar{y} = \Gamma \Delta u + \gamma, \qquad (19)$$

$$\bar{y} = \begin{bmatrix} \bar{y}_{k+1} & \bar{y}_{k+2} & \cdots & \bar{y}_{k+N} & \cdots & \bar{y}_{k+P} \end{bmatrix}, \qquad (20)$$

$$\Delta u = \begin{bmatrix} \Delta u_{k+1} & \Delta u_{k+2} & \cdots & \Delta u_{k+N} \end{bmatrix}. \qquad (21)$$

The controller is designed in order to transform the closed-loop behavior into a first-order system behavior according to (24). To overcome the problem of infeasible solutions of the MPC optimization problem, a slack variable ($\lambda$) is introduced in order to allow the system to deviate from the reference system and to satisfy hard constraints [3], resulting in (25). This equation can be written for each prediction instant (26):

Figure 11: Frontal panel—IHMPC.



Figure 12: Block diagram—IHMPC.

Figure 13: Closed-loop simulation, RSMPC — pH (setpoint: SP and reference system: Ref).

Figure 14: Closed-loop simulation, RSMPC — level $h$ (setpoint: SP and reference system: Ref).

Figure 15: Closed-loop simulation, RSMPC — control moves ($q_1$, $q_2$).

$$
\Gamma = \begin{bmatrix}
C\Psi & 0 & \cdots & 0 \\
C(\Phi + I)\Psi & C\Psi & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
C\Big(\sum_{i=1}^{N} \Phi^{i-1}\Big)\Psi & C\Big(\sum_{i=1}^{N-1} \Phi^{i-1}\Big)\Psi & \cdots & C\Psi \\
\vdots & \vdots & \ddots & \vdots \\
C\Big(\sum_{i=1}^{P} \Phi^{i-1}\Big)\Psi & C\Big(\sum_{i=1}^{P-1} \Phi^{i-1}\Big)\Psi & \cdots & C\Big(\sum_{i=1}^{P-N+1} \Phi^{i-1}\Big)\Psi
\end{bmatrix}, \qquad (22)
$$

$$
\gamma = \begin{bmatrix}
y_k + C\Phi \Delta x_k \\
y_k + C\Big(\sum_{i=1}^{2} \Phi^{i}\Big)\Delta x_k \\
\vdots \\
y_k + C\Big(\sum_{i=1}^{N} \Phi^{i}\Big)\Delta x_k \\
\vdots \\
y_k + C\Big(\sum_{i=1}^{P} \Phi^{i}\Big)\Delta x_k
\end{bmatrix}, \qquad (23)
$$

$$\frac{dy}{dt} = K\left(y^{SP} - y\right), \qquad (24)$$

$$C f_{k-1} + C A_{k-1} x_k + C B_{k-1} u_k + \lambda_k = K\left(y^{SP} - y\right), \qquad (25)$$

$$D \upsilon = b, \qquad (26)$$

where

$$\upsilon = \begin{bmatrix} \Delta u_k & \cdots & \Delta u_{k+N} & \lambda_k & \cdots & \lambda_{k+P} \end{bmatrix}, \qquad (27)$$

$$
b = \begin{bmatrix}
K y_k^{SP} - (CA + KC) x_k - C f_{k-1} \\
K y_{k+1}^{SP} - (CA + KC)\Phi x_k \\
K y_{k+2}^{SP} - (CA + KC)\Phi^{2} x_k \\
\vdots \\
K y_{k+P}^{SP} - (CA + KC)\Phi^{P} x_k
\end{bmatrix}, \qquad (28)
$$

$$
D = \begin{bmatrix}
CB & 0 & \cdots & & I & 0 & 0 & 0 & \cdots & 0 & 0 \\
(CA + KC)\Psi & CB & \cdots & & -I & I & 0 & 0 & \cdots & 0 & 0 \\
(CA + KC)\Phi\Psi & (CA + KC)\Psi & \cdots & & 0 & -I & I & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
(CA + KC)\Phi^{P-1}\Psi & (CA + KC)\Phi^{P-2}\Psi & \cdots & (CA + KC)\Phi^{P-M-1}\Psi & 0 & 0 & 0 & 0 & \cdots & & I
\end{bmatrix}. \qquad (29)
$$

Figure 16: Closed-loop simulation, RSMPC — slack variables ($\lambda_1$, $\lambda_2$).
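To make the structure of the RSMPC prediction model concrete, the following minimal Python sketch assembles the matrices $\Gamma$ and $\gamma$ as reconstructed in (22)-(23) from $\Phi$, $\Psi$, and $C$. The numerical values, the helper name, and the chosen horizons are illustrative assumptions only; they do not correspond to the pilot-plant model of this paper.

```python
import numpy as np

def rsmpc_prediction_matrices(Phi, Psi, C, dx, y, N, P):
    """Assemble Gamma and gamma of (22)-(23): y_pred = Gamma @ dU + gamma.
    Block (j, l) of Gamma is C * (sum_{i=1}^{j-l+1} Phi^(i-1)) * Psi, and
    block j of gamma is y + C * (sum_{i=1}^{j} Phi^i) * dx (illustrative reconstruction)."""
    ny = C.shape[0]
    nu = Psi.shape[1]
    Gamma = np.zeros((P * ny, N * nu))
    gamma = np.zeros((P * ny, 1))
    powers = [np.linalg.matrix_power(Phi, i) for i in range(P + 1)]
    for j in range(1, P + 1):
        for l in range(1, min(j, N) + 1):
            S = sum(powers[i - 1] for i in range(1, j - l + 2))   # sum_{i=1}^{j-l+1} Phi^(i-1)
            Gamma[(j - 1) * ny:j * ny, (l - 1) * nu:l * nu] = C @ S @ Psi
        T = sum(powers[i] for i in range(1, j + 1))                # sum_{i=1}^{j} Phi^i
        gamma[(j - 1) * ny:j * ny, :] = y + C @ T @ dx
    return Gamma, gamma

# Small illustrative example (placeholder values, not the neutralization plant):
Phi = np.array([[0.95, 0.02], [0.0, 0.90]])
Psi = np.array([[0.05, 0.0], [0.01, 0.04]])
C = np.eye(2)
Gamma, gamma = rsmpc_prediction_matrices(Phi, Psi, C, dx=np.zeros((2, 1)),
                                          y=np.zeros((2, 1)), N=2, P=5)
```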

The cost function can be written as (30). This equation can also be reorganized as the quadratic programming problem (31):

$$\min_{\Delta u(k), \ldots, \Delta u(k + N_m),\, \lambda(k), \ldots, \lambda(k + N_p)} J = \frac{1}{2}\left(\Delta u^{T} R \Delta u + \lambda^{T} S \lambda\right), \qquad (30)$$

$$
\begin{aligned}
\min_{\upsilon(k), \ldots, \upsilon(k + N_m + N_p + 2)} \; & J = \frac{1}{2} \upsilon^{T} \varepsilon \upsilon \\
\text{subject to:} \quad & D\upsilon = b, \\
& u_{\min_{k+j}} \leq u_{k+j} \leq u_{\max_{k+j}}, \\
& -\left|\Delta u_{\min}\right|_{k+j} \leq \Delta u_{k+j} \leq \left|\Delta u_{\max}\right|_{k+j}, \qquad j = 0, \ldots, P,
\end{aligned} \qquad (31)
$$

in which

$$\varepsilon = \begin{bmatrix} R & 0 \\ 0 & S \end{bmatrix}. \qquad (32)$$

The controller cannot eliminate offset unless (33) is handled [3]:

$$C f_{k-1} \approx \frac{y_k - y_{k-1}}{\Delta t}. \qquad (33)$$

3. The Experimental System

The experimental system built was a neutralization process that occurs in a stirred reactor tank. Figure 3 presents a photo of the experimental system.

Figure 4 shows a diagram indicating how the system works. The numbers indicate acid flow (1), base flow (2), buffer flow (3), and output flow (4). These numbers are used in the indexes of the modeling of the experimental system. The flows of acid ($q_1$) and base ($q_2$) are the manipulated variables ($u$); the flow of buffer solution ($q_3$) is the unmeasured or measured disturbance ($d$), depending on how it is treated in the control formulation. For the case of this study, the buffer solution flow was used only to make simulation tests. The level height of the reactor ($h$) and the output stream pH are the controlled variables or the outputs ($y$).

The system was specially chosen for this study because of its strong nonlinear dynamic behavior and low operational cost, considering the consumption of inputs as electrical energy and raw material [3, 22]. The system becomes interesting under the control perspective, as it is a multivariable and nonlinear process in which the static gain suffers strong variations inside the flow operation band. Figure 5 presents the system titration curve. It can be observed that the process pH changes from 4 to 9 in the [0.87; 1.22] interval of the ratio $q_2/q_1$. This change in the process gain is just what makes the neutralization problem difficult to solve under the control perspective, as small modifications on the relation $q_2/q_1$ can cause big or small pH variations, depending on where the operating point is in the operational region. It has to be emphasized that, in an analytical process of titration, only one drop is enough to change the pH from 4 to 9. Thus, it can be inferred that the sensitivity of the dynamic process associated to this offers a challenging problem for control in real time. It can be noticed that, once a buffer solution is added, the nonlinear behavior of the system smooths out, so that the controller action gets

Figure 17: Experimental response, RSMPC — pH (setpoint: SP and reference system: Ref).

Figure 18: Experimental response, RSMPC — level $h$ (setpoint: SP and reference system: Ref).

Figure 19: Experimental response — inputs ($q_1$, $q_2$ and their setpoints).

easier, according to what the simulation curves with this solution indicate. If the system was linear, the relation given by the pH × $q_2/q_1$ plots would be a line, whose approximation can be realized by the simulated curve with the larger amount of buffer solution. Montandon [3] presented results for the closed-loop system using pH control of the acid flow with a classical PID controller. The conclusion was that this implementation was not able to reject disturbances in a satisfactory manner. During the preparation of this work, some PID tests were studied for the MIMO case,

Figure 20: Experimental response, RSMPC — slack variables ($\lambda_1$, $\lambda_2$).
Figure 20: Experimental response: RSMPC—slack variables.

Figure 21: Experimental response, RSMPC — pH (setpoint: SP and reference system: Ref).

Figure 22: Experimental response, RSMPC — level $h$ (setpoint: SP and reference system: Ref).

Figure 23: Experimental response — inputs ($q_1$, $q_2$ and their setpoints).

but the answers proved to be mostly unstable and hard to tune.

3.1. Phenomenological Model. Hall [23] developed the physical model of this process, which is based on the hypotheses of perfect mixing, constant density, and total solubility of the present ions. The chemical reactions involved in the acid-base neutralization (HNO3–NaOH) are shown in (34)–(38). In order to have a complete model, the presence of a buffer (NaHCO3) is considered. However, for the experimental results, this solution was not used:

$$\mathrm{HNO_3} \longrightarrow \mathrm{H^+} + \mathrm{NO_3^-}, \qquad (34)$$

$$\mathrm{NaOH} \longrightarrow \mathrm{Na^+} + \mathrm{OH^-}, \qquad (35)$$

$$\mathrm{H_2O} \overset{k_w}{\longleftrightarrow} \mathrm{H^+} + \mathrm{OH^-}, \qquad (36)$$

$$\mathrm{NaHCO_3} + \mathrm{H_2O} \longrightarrow \mathrm{NaOH} + \mathrm{H_2CO_3}, \qquad \mathrm{H_2CO_3} \overset{k_{a1}}{\longleftrightarrow} \mathrm{H^+} + \mathrm{HCO_3^-}, \qquad (37)$$

$$\mathrm{HCO_3^-} \overset{k_{a2}}{\longleftrightarrow} \mathrm{H^+} + \mathrm{CO_3^{2-}}, \qquad (38)$$

where

$$k_{a1} = \frac{[\mathrm{HCO_3^-}][\mathrm{H^+}]}{[\mathrm{H_2CO_3}]}, \qquad (39)$$

$$k_{a2} = \frac{[\mathrm{CO_3^{2-}}][\mathrm{H^+}]}{[\mathrm{HCO_3^-}]}. \qquad (40)$$

The amounts $W_a$ and $W_b$ are called invariants, because their concentrations are not affected along the reaction. The reactions are fast enough to allow the system to be considered as in equilibrium. Then, the equilibrium reactions can be used to determine the concentration of the hydrogen (H) ions through the concentration of the reaction invariants. Equation (41) gives the equilibrium concentration. According to Gustafsson and Waller [24], there are two reaction invariants defined for the $i$th stream and presented in (42) and (43), which, when combined, result in an implicit relation of $H$, $W_a$, and $W_b$, as (44) shows:

$$k_w = [\mathrm{H^+}][\mathrm{OH^-}], \qquad (41)$$

$$W_{a_i} = [\mathrm{H^+}]_i - [\mathrm{OH^-}]_i - [\mathrm{HCO_3^-}]_i - 2[\mathrm{CO_3^{2-}}]_i, \qquad (42)$$

$$W_{b_i} = [\mathrm{H_2CO_3}]_i + [\mathrm{HCO_3^-}]_i + [\mathrm{CO_3^{2-}}]_i, \qquad (43)$$

$$W_a = H - \frac{k_w}{H} - W_b\, \frac{(k_{a1}/H) + 2 k_{a1} k_{a2}/H^2}{1 + (k_{a1}/H) + k_{a1} k_{a2}/H^2}. \qquad (44)$$

Making the mass balance in the reactor, together with the invariant equations and considering that the density is constant by hypothesis, results in the differential equation system (45)–(47):

$$\frac{dh}{dt} = \frac{1}{A_r}\left(q_1 + q_2 + q_3 - q_4\right), \qquad (45)$$

$$\frac{dW_a}{dt} = \frac{1}{V_r}\left[q_1(W_{a1} - W_a) + q_2(W_{a2} - W_a) + q_3(W_{a3} - W_a)\right], \qquad (46)$$

$$\frac{dW_b}{dt} = \frac{1}{V_r}\left[q_1(W_{b1} - W_b) + q_2(W_{b2} - W_b) + q_3(W_{b3} - W_b)\right], \qquad (47)$$

where $A_r$ is the reactor area, $V_r$ is the reaction volume, $h$ is the solution height in the reactor, $W_a$ is the acid reaction invariant, and $W_b$ is the base reaction invariant.

The flow output of the system is driven by gravity. The output flow ($q_4$) passes through a globe-type valve (48). This equation gives the relation between the output flow, the reactor height, and the parameters to be estimated ($c_v$ and $p_7$). The valve parameters were estimated by using the model and experimental data from an open-loop run. After this process of defining the parameters of the valve, it was locked so as to avoid modifying these parameters. Nominal operation conditions are presented in Table 1:

$$q_4 = c_v h^{p_7}. \qquad (48)$$

Table 1: System nominal parameters.
  Level (H): 25 cm
  Area (Ar): 168.38 cm^2
  Volume (Vr): 4209.67 cm^3
  Acid flow (q1): 11.9130 mL/s
  Base flow (q2): 11.8235 mL/s
  Buffer solution flow (q3): 0.01 mL/s
  pH: 7.0
  Acid concentration in q1 ([HNO3]): 3.510 x 10^-3 M
  Base concentration in q2 ([NaOH]): 3.510 x 10^-3 M

Figure 24: Experimental response, RSMPC — slack variables ($\lambda_1$, $\lambda_2$).

Figure 25: Closed-loop simulation, IHMPC — pH (pH and its setpoint versus time).

For estimation and validation of the model, the plant operation must be done through inputs that excite all the dynamic modes and in a large frequency band, so that the

Figure 26: Closed-loop simulation, IHMPC — level $h$ (with setpoint).

Figure 27: Closed-loop simulation, IHMPC — inputs ($q_1$, $q_2$).

outputs have enough information for that procedure. The most usual excitation types are ramp, step, and impulse. However, in order to gather information in a broader range of frequency, random inputs made of sequences of steps with variable amplitude and duration are applied, as indicated by Montandon [3]. The sampling time for the process was 10 s. The acid and base flows vary as $q_1 \in [9.856, 13.310]$, $q_2 \in [9.977, 13.215]$, with a step probability equal to 0.8, in which the duration of each step was 120 s. The flows in which

Figure 28: Closed-loop simulation, IHMPC — optimization time (s) per sample.

Table 2: Controller tuning.
  IHMPC: N = 10; R = 10I -> 100I; Q = I; S = 10I; Delta_u_max = 0.25 mL/s
  RSMPC: K = 0.005I; R = I -> 100I; S = 10000I; N = 2; P = 10; Delta_u_max = 0.5 mL/s
  General parameters: Delta_t = 10 s; u_max = 38 mL/s; u_min = 4 mL/s

Figure 29: Experimental response, IHMPC — pH (with setpoint).

Figure 30: Experimental response, IHMPC — level $h$ (with setpoint).

acquisition points was used. The estimation procedure used half of the data, and the validation employed the remaining data, in order to assure that the model would have the ability to predict unknown data. The estimation is a nonlinear solution of the curve-fitting problem that uses the least-squares method, with a first trial given by a random value and a step equal to 15. The estimated values were $c_v = 20.4477$ and $p_7 = 0.0523$. Figure 6 shows the error in predicting $h$. Figure 7 presents the set of random input data. Figure 8 presents the results of the simulation and the experimental ones for the open-loop run, in which one can notice that the model is very representative of the process, with difficulties to predict around pH 7; this phenomenon was already expected, taking into account the titration curve of the system. Figure 9 shows the validation of the model by the prediction of the output, which comes very close to the experimental data.

3.2. State-Space Model. The state-space approach resulting from the algebraic manipulation is given by (49)–(55). The terms $dWadz$ and $dWdz$ are defined in (56) and (57). Equation (58) presents the $dydz$ relation:

$$x = \begin{bmatrix} h & W_a & W_b \end{bmatrix}, \qquad (49)$$

$$u = \begin{bmatrix} q_1 & q_2 \end{bmatrix}, \qquad (50)$$

$$y = \begin{bmatrix} h & \mathrm{pH} \end{bmatrix}, \qquad (51)$$

$$d = \begin{bmatrix} q_3 \end{bmatrix}, \qquad (52)$$

Figure 31: Experimental response, IHMPC — inputs ($q_1$, $q_2$ and their setpoints).

Figure 32: Experimental response, IHMPC — pH (with setpoint).

$$
A = \begin{bmatrix}
-\dfrac{c_v p_7 h^{p_7 - 1}}{A_r} & 0 & 0 \\
-\dfrac{q_1(W_{a1} - W_a) + q_2(W_{a2} - W_a) + q_3(W_{a3} - W_a)}{A_r h^2} & \dfrac{-q_1 - q_2 - q_3}{A_r h} & 0 \\
-\dfrac{q_1(W_{b1} - W_b) + q_2(W_{b2} - W_b) + q_3(W_{b3} - W_b)}{A_r h^2} & 0 & \dfrac{-q_1 - q_2 - q_3}{A_r h}
\end{bmatrix}, \qquad (53)
$$

$$
B = \begin{bmatrix}
\dfrac{1}{A_r} & \dfrac{1}{A_r} \\
\dfrac{W_{a1} - W_a}{A_r h} & \dfrac{W_{a2} - W_a}{A_r h} \\
\dfrac{W_{b1} - W_b}{A_r h} & \dfrac{W_{b2} - W_b}{A_r h}
\end{bmatrix}, \qquad
G = \begin{bmatrix}
\dfrac{1}{A_r} \\
\dfrac{W_{a3} - W_a}{A_r h} \\
\dfrac{W_{b3} - W_b}{A_r h}
\end{bmatrix}, \qquad (54)
$$

$$
C = \begin{bmatrix}
1 & 0 & 0 \\
0 & \dfrac{dydz}{dWadz} & \dfrac{dydz}{dWdz}
\end{bmatrix}, \qquad (55)
$$

$$
dWadz = 1 + \frac{k_w}{H^2}
- \frac{W_b\left(-k_{a1}/H^2 - 4 k_{a1} k_{a2}/H^3\right)}{1 + (k_{a1}/H) + k_{a1} k_{a2}/H^2}
+ \frac{W_b\left((k_{a1}/H) + 2 k_{a1} k_{a2}/H^2\right)\left(-\left(k_{a1}/H^2 + 2 k_{a1} k_{a2}/H^3\right)\right)}{\left(1 + (k_{a1}/H) + k_{a1} k_{a2}/H^2\right)^2}, \qquad (56)
$$

Figure 33: Experimental response, IHMPC — level $h$ (with setpoint).

Figure 34: Experimental response, IHMPC — inputs ($q_1$, $q_2$ and their setpoints).

 
$$
dWdz = \frac{\left(1 + k_w/H^2\right)\left(1 + (k_{a1}/H) + k_{a1} k_{a2}/H^2\right)}{(k_{a1}/H) + 2 k_{a1} k_{a2}/H^2}
- \frac{\left(H - (k_w/H) - W_a\right)\left(1 + (k_{a1}/H) + k_{a1} k_{a2}/H^2\right)\left(-k_{a1}/H^2 - 4 k_{a1} k_{a2}/H^3\right)}{\left((k_{a1}/H) + 2 k_{a1} k_{a2}/H^2\right)^2}
+ \frac{\left(H - (k_w/H) - W_a\right)\left(-k_{a1}/H^2 - 2 k_{a1} k_{a2}/H^3\right)}{(k_{a1}/H) + 2 k_{a1} k_{a2}/H^2}, \qquad (57)
$$

$$dydz = -\frac{\log_{10} e}{H}. \qquad (58)$$
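As an illustration of how the phenomenological model (44)–(48) can be exercised numerically, the following Python sketch integrates the reactor balances and recovers the pH from the implicit relation (44) with a scalar root solver. The feed invariants assume pure strong acid and base streams and no buffer, and the acid-base equilibrium constants are typical literature values; these are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Nominal data (Table 1) and estimated valve parameters; ka1, ka2, kw are typical values (assumption).
Ar, cv, p7 = 168.38, 20.4477, 0.0523
kw, ka1, ka2 = 1e-14, 4.47e-7, 4.68e-11
Wa1, Wb1 = 3.51e-3, 0.0          # acid feed invariants (pure HNO3, assumption)
Wa2, Wb2 = -3.51e-3, 0.0         # base feed invariants (pure NaOH, assumption)
Wa3, Wb3 = 0.0, 0.0              # buffer stream not used in the experiments

def hydrogen_ion(Wa, Wb):
    # Solve the implicit relation (44) for H; pH = -log10(H).
    f = lambda H: H - kw / H - Wb * ((ka1 / H + 2 * ka1 * ka2 / H**2)
                                     / (1 + ka1 / H + ka1 * ka2 / H**2)) - Wa
    return brentq(f, 1e-14, 1.0)

def reactor(t, x, q1, q2, q3):
    h, Wa, Wb = x
    q4 = cv * h**p7                        # valve equation (48)
    Vr = Ar * h
    dh = (q1 + q2 + q3 - q4) / Ar          # level balance (45)
    dWa = (q1*(Wa1 - Wa) + q2*(Wa2 - Wa) + q3*(Wa3 - Wa)) / Vr   # (46)
    dWb = (q1*(Wb1 - Wb) + q2*(Wb2 - Wb) + q3*(Wb3 - Wb)) / Vr   # (47)
    return [dh, dWa, dWb]

sol = solve_ivp(reactor, (0.0, 600.0), [18.0, 0.0, 0.0], args=(11.9, 11.8, 0.0), max_step=1.0)
pH = [-np.log10(hydrogen_ion(Wa, Wb)) for Wa, Wb in zip(sol.y[1], sol.y[2])]
```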

4. Results and Discussions

The real-time implementation was done using successive linearization around the operating point. In this case, the linear form of (11) results.

A controller designed around a single stationary state would not be able to control the system, because, for some regions, the stationary state would be very far from the desired point, and so would the plant-model mismatch. Even with the successive linearization, the system control would not be successful, mainly in regions with high gain. There are several ways to deal with this problem. One of them is to expand the states, creating a new state matrix, so that the original structure of the optimization problem can be kept (59). The term related to the output linearization does not originate problems for the controller, because it only maps state to output and does not interfere in the system dynamics as the term $f_{k-1}$ does. Once this modification is done, it is possible to work with the controller previously defined. This option was chosen for real-time operation. Another form can be found in Reis [25]:
  
$$\frac{d}{dt}\begin{bmatrix} x \\ 1 \end{bmatrix} = \begin{bmatrix} A_{k-1} & f_{k-1} \\ 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ 1 \end{bmatrix} + \begin{bmatrix} B_{k-1} \\ 0 \end{bmatrix} u. \qquad (59)$$

For the real application in the experimental system, which has nonlinear characteristics, it is necessary to perform successive linearization of the process model in order to apply linear algorithms. The dynamic evolution of the process results in a linear time-varying (LTV) system. The equations of IHMPC and RSMPC in an LTV form can be easily extrapolated from the formulation presented herein, and for simplicity their description is omitted.

In the experimental system, there are the states $h$, $W_a$, and $W_b$. For this case, the last two states are not measured, as was shown in the modeling section of the system, so they need to be estimated ($\hat{x}$). To this end, we used a Kalman filter [26], (60). For the IHMPC controller, we also used an open-loop observer (61), informing the model of the error ($e$) that enters the plant and model, in order to minimize the interference of state estimation in the system behavior [27]:

$$\hat{x}_{k+1|k} = A \hat{x}_{k|k-1} + B u_k + L\left(y_k - C \hat{x}_{k|k-1}\right), \qquad (60)$$

$$e = y_{\mathrm{Plant}} - y_{\mathrm{Model}}. \qquad (61)$$

RSMPC and IHMPC were implemented in real-time operation of the experimental plant, using the interface through LabVIEW and the controller computation using the routines implemented in MATLAB. In the literature there are several controller tuning techniques. In this work, the indications of Henson and Seborg [11] and Montandon [3] were adopted. Besides, a field refinement was done during the initial run in closed loop with each controller, making small adjustments in parameters, so that the system would deliver a better experimental performance in closed loop. Such results were omitted in this paper. Table 2 shows the parameters of each controller resulting from the simulation and used in the experimental run. The controllers were at first tuned with identity matrices and a control horizon equal to 10; then, tuning was done until achieving a satisfactory response. Figure 10 presents the structure of the experimental plant, and Figures 11 and 12 show the LabVIEW implementation. Equation (62) gives the calibration curves of the instruments. $V_0$, $V_1$, $V_2$, $V_3$, and $V_4$ are channel voltages. The regression coefficients are equal to 1 [4, 28]:

$$
\begin{aligned}
q_1 &= 5.2788 V_0 - 9.4776, \\
q_2 &= 5.0997 V_1 - 9.5541, \\
q_3 &= 0.2090 V_2 - 0.3845, \\
\mathrm{pH} &= 1.8982 V_3 - 3.5583, \\
h &= 5.6347 V_4 - 9.0838.
\end{aligned} \qquad (62)
$$

Setpoint variations were done to evaluate the ability of the controllers to deal with transitions. Starting the system with a pH equal to 7 and a reactor height equal to 18 cm, variations were done according to the vector in (63), considering that each step took 2000 seconds. For all controller simulations a random noise of zero mean, with amplitude of about 10% of the outputs ($h$ and pH), was added:

$$\begin{bmatrix} h \\ \mathrm{pH} \\ d \end{bmatrix} = \begin{bmatrix} 18 & 18 & 18 & 18 & 18 & 21 & 18 & 15 & 18 \\ 7 & 10 & 7 & 4 & 7 & 7 & 7 & 7 & 7 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}. \qquad (63)$$

Figures 13, 14, 15, and 16 present the closed-loop simulation that uses RSMPC. The performance shown is quite reasonable.

The experimental run was done in two steps to preserve and manipulate the data separately. In the first part, a variation was made in the pH setpoint while the level was kept constant. In the second part, it was done the other way around. Figures 17, 18, 19, and 20 show the experimental run of the pH setpoint changes, and Figures 21, 22, 23, and 24 show the experimental run of the level setpoint changes. The system was controlled in all runs. During the experiment, there was a need to increase the weight matrix of the control action (as indicated in Table 2); as the output response started to oscillate, it was not able to reach the desired setpoint. However, the overall response can be considered satisfactory.

The results for the IHMPC simulation are presented in Figures 25, 26, 27, and 28. The results are suitable to apply the control on line to the real process.

Like RSMPC, experimental runs with IHMPC were done in two steps. Figures 29, 30, and 31 show the experimental run of the pH setpoint changes, and Figures 32, 33, and 34 show the experimental run of the level setpoint changes. This controller showed a performance superior to the RSMPC controller performance, with satisfactory and adequate response and performance.

5. Conclusions

Through this research it was possible to better understand the use of predictive controllers in a real-time application, paving the way for research in the area. The experimental application led to an approximation of reality and industrial practice, experiencing some of the common problems and issues in the implementation of controllers. It was possible to verify the need for a mastery of techniques and concepts of process control, system modeling, parameter identification, scheduling, optimization, and, in addition, common-sense engineering to solve the experimental problems.

The control of such a class of nonlinear processes is a very challenging area with many possibilities for development, and it undoubtedly has importance and influence on the performance of processes and, in consequence, on the results of their organizations.

This work carried out the experimental application of two predictive controllers, IHMPC and RSMPC, both in a simulation environment and in the experimental plant, dealing with the level and pH control.

The experimental plant was modeled, and the required parameters were identified through the open-loop run. The simulation results were satisfactory and indicated that the model is representative of the real process and suitable for the control purposes that were aimed at. The simulation

answers of the system outputs subjected to the controllers were adequate and satisfactory. The theoretical and experimental responses of the IHMPC runs were satisfactory.

The controller adjustment for real-time operations was suitable and feasible. The interference of matters related to leaking, process noises, noises from the electrical source, and other problems associated with the real-time application brought additional difficulties that demanded knowledge of the process.

The experimental application of robust controllers, associated or not to fault-tolerant control systems, and the use of online identification employing neural networks will be presented elsewhere.

References

[1] G. Stephanopoulos, Chemical Process Control: An Introduction to Theory and Practice, Prentice Hall, Upper Saddle River, NJ, USA, 1984.
[2] K. R. Muske and J. B. Rawlings, "Model predictive control with linear models," AIChE Journal, vol. 39, no. 2, pp. 262–287, 1993.
[3] A. G. Montandon, Controle preditivo em tempo real com trajetória de referência baseado em modelo neural para reator de neutralização, M.S. thesis, Federal University of Uberlândia, 2005.
[4] C. H. F. Silva, Uma contribuição ao estudo de controladores robustos, Ph.D. thesis, Universidade Federal de Uberlândia, 2009.
[5] E. F. Camacho and C. Bordons, Model Predictive Control, Springer, Barcelona, Spain, 1999.
[6] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000.
[7] M. Nikolaou, "Model predictive controllers: a critical synthesis of theory and industrial needs," Advances in Chemical Engineering, vol. 26, pp. 131–204, 2001.
[8] M. Morari, "Model predictive control: multivariable control technique of choice in the 1990s?" Tech. Rep. CIT/CDS 93-024, California Institute of Technology, 1993.
[9] D. R. Saffer II and F. J. Doyle III, "Analysis of linear programming in model predictive control," Computers and Chemical Engineering, vol. 28, no. 12, pp. 2749–2763, 2004.
[10] J. B. Rawlings, "Tutorial: model predictive control technology," in Proceedings of the American Control Conference (ACC '99), pp. 662–676, June 1999.
[11] M. A. Henson and D. E. Seborg, Nonlinear Process Control, Prentice Hall, Upper Saddle River, NJ, USA, 1997.
[12] S. de Oliveira and M. Morari, "Robust model predictive control for nonlinear systems with constraints," in Advanced Control of Chemical Processes, pp. 295–300, Kyoto, Japan, 1994.
[13] J. H. Lee, "Recent advances in model predictive control and other related areas," AIChE Symposium Series, pp. 201–216, 1997.
[14] J. W. Eaton and J. B. Rawlings, "Model-predictive control of chemical processes," Chemical Engineering Science, vol. 47, no. 4, pp. 705–720, 1992.
[15] S. J. Qin and T. A. Badgwell, "A survey of industrial model predictive control technology," Control Engineering Practice, vol. 11, no. 7, pp. 733–764, 2003.
[16] J. Löfberg, Minimax approaches to robust model predictive control, Ph.D. thesis, Linköping, Sweden, 2003.
[17] N. L. Ricker, "Model predictive control with state estimation," Industrial and Engineering Chemistry Research, vol. 29, no. 3, pp. 374–382, 1990.
[18] J. Richalet, "Industrial applications of model based predictive control," Automatica, vol. 29, no. 5, pp. 1251–1274, 1993.
[19] A. Richards and J. How, "Robust model predictive control with imperfect information," in Proceedings of the American Control Conference (ACC '05), pp. 268–273, June 2005.
[20] Q. L. Zhang and Y. G. Xi, "An efficient model predictive controller with pole placement," Information Sciences, vol. 178, no. 3, pp. 920–930, 2008.
[21] M. Morari and J. H. Lee, "Model predictive control: past, present and future," Computers and Chemical Engineering, vol. 23, no. 4-5, pp. 667–682, 1999.
[22] H. M. Henrique, Uma contribuição ao estudo de redes neuronais aplicadas ao controle de processos, Ph.D. thesis, COPPE/UFRJ, 1999.
[23] R. C. Hall, Development of a multivariable pH experiment, M.S. thesis, University of California, Santa Barbara, Calif, USA, 1987.
[24] T. K. Gustafsson and K. V. Waller, "Dynamic modeling and reaction invariant control of pH," Chemical Engineering Science, vol. 38, no. 3, pp. 389–398, 1983.
[25] L. L. G. Reis, Controle tolerante com reconfiguração estrutural acoplado a sistema de diagnóstico de falhas, M.S. thesis, Federal University of Uberlândia, 2008.
[26] T. Kailath, Linear Systems, Prentice Hall, Englewood Cliffs, NJ, USA, 1980.
[27] J. M. Maciejowski, Predictive Control with Constraints, Pearson Education Limited, 2002.
[28] D. Gonçalves, Controle preditivo em tempo real de uma planta piloto, M.Sc. qualification report, Federal University of Uberlândia, 2007.
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 485784, 7 pages
doi:10.1155/2012/485784

Research Article
Robust Model Predictive Control Using Linear Matrix
Inequalities for the Treatment of Asymmetric Output Constraints

Mariana Santos Matos Cavalca,1 Roberto Kawakami Harrop Galvão,2


and Takashi Yoneyama2
1 Departamento de Engenharia Elétrica, Centro de Ciências Tecnológicas, Universidade do Estado de Santa Catarina,
Praça Rua Paulo Malschitzki, Zona Industrial Norte, 89.219-710 Joinville, SC, Brazil
2 Divisão de Engenharia Eletrônica, Instituto Tecnológico de Aeronáutica, Praça Marechal Eduardo Gomes, 50 Vila das Acácias,

12.228-900 São José dos Campos, SP, Brazil

Correspondence should be addressed to Mariana Santos Matos Cavalca, mariana.smcavalca@gmail.com

Received 30 June 2011; Revised 22 September 2011; Accepted 6 October 2011

Academic Editor: Marcin T. Cychowski

Copyright © 2012 Mariana Santos Matos Cavalca et al. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

One of the main advantages of predictive control approaches is the capability of dealing explicitly with constraints on the
manipulated and output variables. However, if the predictive control formulation does not consider model uncertainties, then the
constraint satisfaction may be compromised. A solution for this inconvenience is to use robust model predictive control (RMPC)
strategies based on linear matrix inequalities (LMIs). However, LMI-based RMPC formulations typically consider only symmetric
constraints. This paper proposes a method based on pseudoreferences to treat asymmetric output constraints in integrating SISO
systems. Such technique guarantees robust constraint satisfaction and convergence of the state to the desired equilibrium point. A
case study using numerical simulation indicates that satisfactory results can be achieved.

1. Introduction

Model-based predictive control (MPC) is a strategy in which a sequence of control actions is obtained by minimizing a cost function considering the predictions of a process model within a certain prediction horizon. At each sample time, only the first value of this sequence is applied to the plant, and the optimization is repeated in order to use feedback information [1, 2]. One of the main advantages of MPC is the possibility to consider explicitly the physical and operational constraints of the system during the design of the control loop [1–3]. However, if there are mismatches between the nominal model and the actual behavior of the process, then the performance of the control loop can be degraded, and the optimization problem may even become unfeasible. Thus, the study of new strategies for the design of robust MPC (RMPC) with guaranteed stability and constraint satisfaction properties even in the presence of uncertainties is an area with great potential for research [4–7].

In this context, Kothare et al. [7] proposed an RMPC strategy with infinite horizon employing linear matrix inequalities (LMIs) for dealing with model uncertainty and symmetric constraints on the manipulated and output variables. This approach was later extended to encompass multimodel representations [8, 9], setpoint management [10], integrator resetting [11], and offline solutions [12, 13].

Within this scope, Cavalca et al. [4] proposed a heuristic procedure that allows the inclusion of asymmetric constraints on the plant output, but without stability or constraint satisfaction guarantees. The present paper presents a formal strategy to handle asymmetric output constraints in the control of integrating single input, single output (SISO) systems, for the case where the output is linear in the states. It is shown that robust constraint satisfaction is achieved, as well as convergence of the state trajectory to the desired equilibrium point. The effectiveness of the proposed method is illustrated by means of numerical simulations.

The remainder of this paper is organized as follows. Section 2 reviews the design of RMPC based on LMIs. Section 3 formalizes the proposed technique for the inclusion of asymmetric constraints. Section 4 presents a case study consisting of a discrete-time model of a double integrator. The results are evaluated through numerical simulations as presented in Section 5. Concluding remarks are presented in Section 6.

Throughout the text, I represents an identity matrix, the notation (· | k) is used in predictions with respect to time k, and the ∗ superscript indicates an optimal solution. For brevity of notation, only the upper triangular part of symmetric matrices is explicitly presented.

2. LMI-Based RMPC

Consider a linear time-invariant system described by an uncertain model of the following form:

x(k + 1) = A x(k) + B u(k),
y(k) = C x(k),    (1)

where x(k) ∈ R^n, u(k) ∈ R^p, and y(k) ∈ R^q are the states, the manipulated and the output variables, respectively, for each time k, and A, B, and C are constant matrices of appropriate dimensions. The uncertainty is represented in polytopic form; that is, matrices A and B are unknown to the designer, but they are assumed to belong to a convex polytope Ω with L known vertices (A_i, B_i), i = 1, 2, ..., L, so that [2, 7]

(A, B) = Σ_{i=1}^{L} λ_i (A_i, B_i)    (2)

for some unknown set of coefficients λ_1, λ_2, ..., λ_L that satisfy

Σ_{i=1}^{L} λ_i = 1,   λ_i ≥ 0,  i = 1, ..., L.    (3)

At each time k, the sequence of future controls U_∞ = {u(k | k), u(k + 1 | k), u(k + 2 | k), ...} is obtained as the solution of the following min-max optimization problem:

min_{U_∞} max_{(A,B)∈Ω} J_∞(A, B, U_∞),    (4)

where

J_∞(A, B, U_∞) = Σ_{j=0}^{∞} ( ||x(k + j | k)||²_{W_x} + ||u(k + j | k)||²_{W_u} )    (5)

in which W_x > 0 and W_u > 0 are symmetric weighting matrices. By assumption, all states are available for feedback, so that x(k | k) = x(k).

This min-max problem can be replaced with the following convex optimization problem with variables γ ∈ R, Q ∈ R^{n×n}, Σ ∈ R^{p×n}, and LMI constraints [2, 7]:

min_{γ,Q,Σ} γ    (6)

subject to

\[
\begin{bmatrix} Q & x(k) \\ \cdot & 1 \end{bmatrix} > 0, \tag{7}
\]

\[
\begin{bmatrix}
Q & 0 & 0 & A_iQ + B_i\Sigma \\
\cdot & \gamma I & 0 & W_x^{1/2} Q \\
\cdot & \cdot & \gamma I & W_u^{1/2}\Sigma \\
\cdot & \cdot & \cdot & Q
\end{bmatrix} > 0, \quad i = 1, 2, \ldots, L. \tag{8}
\]

If problem (6)–(8) has a solution γ_k^∗, Q_k^∗, Σ_k^∗, then the optimal control sequence is given by

u(k + j | k) = K_k^∗ x(k + j | k),    (9)

where

K_k^∗ = Σ_k^∗ (Q_k^∗)^{−1}.    (10)

Symmetric constraints on the manipulated variables of the form |u_l(k + j | k)| < ū_l, l = 1, 2, ..., p, j ≥ 0, and on the output variables of the form |y_m(k + j + 1 | k)| < ȳ_m, m = 1, 2, ..., q, j ≥ 0, can be imposed by including additional LMIs [7]:

\[
\begin{bmatrix} X & \Sigma \\ \cdot & Q \end{bmatrix} > 0 \tag{11}
\]

with X_{ll} < ū_l², l = 1, 2, ..., p, and

\[
\begin{bmatrix} Q & [A_iQ + B_i\Sigma]^T C_m^T \\ \cdot & \bar{y}_m^2 \end{bmatrix} > 0, \quad i = 1, 2, \ldots, L, \tag{12}
\]

for m = 1, 2, ..., q, where C_m denotes the mth row of C.

Let P(x(k)) denote the optimization problem (6) with constraints (7), (8), (11), and (12). Suppose that the control law uses the concept of receding horizon; that is, u(k) = K_k^∗ x(k) with K_k^∗ recalculated at each sampling time k. The following lemma is concerned with convergence of the state trajectory to the origin under the constraints imposed on the operation of the plant.

Lemma 1. If P(x(k_0)) is feasible at some time k = k_0, then x(k) → 0 as k → ∞ with satisfaction of the input and output constraints.

The proof of Lemma 1 follows directly from the recursive feasibility and asymptotic stability properties demonstrated in [7].
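The LMIs above are structured matrix-positivity conditions that would, in practice, be handed to a semidefinite programming solver. As a purely illustrative aid (not taken from the paper, and with the strict inequalities relaxed to non-strict ones, as usual numerically), the sketch below assembles the block matrices of (7), (8), (11), and (12) for a given candidate triple (γ, Q, Σ) and checks positive semidefiniteness; the names `vertices`, `y_bar`, and `u_bar` are assumptions introduced only for this example.

```python
import numpy as np

def lmi_blocks(x, gamma, Q, Sigma, vertices, C, Wx, Wu, y_bar):
    """Assemble the block matrices of LMIs (7), (8), and (12) for a candidate
    (gamma, Q, Sigma); all inputs are plain NumPy arrays (Wu given as p-by-p)."""
    n, p = Q.shape[0], Sigma.shape[0]
    Wx_h = np.linalg.cholesky(Wx).T      # a square root of Wx
    Wu_h = np.linalg.cholesky(Wu).T      # a square root of Wu
    blocks = [np.block([[Q, x.reshape(-1, 1)],
                        [x.reshape(1, -1), np.eye(1)]])]                      # (7)
    for Ai, Bi in vertices:
        M = Ai @ Q + Bi @ Sigma                                               # A_i Q + B_i Sigma
        blocks.append(np.block([
            [Q,                np.zeros((n, n)),  np.zeros((n, p)),  M],
            [np.zeros((n, n)), gamma * np.eye(n), np.zeros((n, p)),  Wx_h @ Q],
            [np.zeros((p, n)), np.zeros((p, n)),  gamma * np.eye(p), Wu_h @ Sigma],
            [M.T,              (Wx_h @ Q).T,      (Wu_h @ Sigma).T,  Q]]))    # (8)
        for m in range(C.shape[0]):
            cm = C[m:m + 1, :]
            blocks.append(np.block([[Q,      M.T @ cm.T],
                                    [cm @ M, np.array([[y_bar[m] ** 2]])]]))  # (12)
    return blocks

def input_bound_ok(Q, Sigma, u_bar):
    """LMI (11): a suitable slack X with X_ll < u_bar_l^2 exists iff, by a Schur
    complement argument, diag(Sigma Q^{-1} Sigma^T) < u_bar^2 elementwise."""
    return np.all(np.diag(Sigma @ np.linalg.solve(Q, Sigma.T)) < np.asarray(u_bar) ** 2)

def feasible(blocks, tol=1e-9):
    """The (non-strict) LMIs hold if every assembled block is positive semidefinite."""
    return all(np.linalg.eigvalsh(0.5 * (B + B.T)).min() >= -tol for B in blocks)
```

Minimizing γ subject to these blocks being positive definite is then a standard semidefinite program, which is how P(x(k)) would be solved in practice.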

Remark 2. As shown in Appendix A of Kothare et al. [7], P(x(k)) is equivalent to minimizing a function V(x(k | k)) = x(k | k)^T P_k x(k | k) with P_k = γ_k^∗ (Q_k^∗)^{−1} > 0. Such a V(x(k | k)) function is an upper bound for the cost function J_∞(A, B, U_∞) in (5) and can be used as a candidate Lyapunov function in the proof of asymptotic stability.

Remark 3. For a regulation problem around a point different from the origin, a change of variables can be used, so that the new origin corresponds to the desired equilibrium point [7]. In the present work, the new value for the reference signal will be termed a pseudoreference. It is assumed that the process is integrating and, therefore, the control value in steady state (u_ss) is zero for any value of the pseudoreference. Otherwise, the determination of u_ss would not be trivial, since the system matrices A and B are subject to model uncertainties.

Remark 4. The RMPC problem formulation presented in this section considers that the matrices A and B are unknown but do not vary with time. This is a particular case of the general framework introduced in [7], which was concerned with time-varying matrices A(k) and B(k).

3. Treatment of Asymmetric Constraints

The present work is concerned with regulation problems around the origin involving a SISO system with output variable y(k) = Cx(k). This is a particular case of the problem described in Section 2, with p = q = 1. Therefore, the indexes l and m in (12) can be omitted. Moreover, the system is assumed to be integrating, so that u_ss = 0 regardless of the pseudoreference, as discussed in Remark 3. It is also considered that the manipulated variable u(k) is subject to a symmetric constraint ū as in Section 2. Suppose that the constraints on y are of the following form:

y_min < y(k) < y_max    (13)

with y_min < 0 and y_max > 0. If |y_min| = y_max, then the symmetric constraint formulation presented in Section 2 can be applied directly by making ȳ = y_max. If |y_min| ≠ y_max, a different approach is required. One alternative is to adopt a more conservative constraint, that is,

\[
\bar{y} = \begin{cases} y_{\max}, & \text{if } y_{\max} < |y_{\min}|, \\ -y_{\min}, & \text{if } y_{\max} > |y_{\min}|. \end{cases} \tag{14}
\]

However, this procedure may not be convenient in the following cases:

(i) y_max < |y_min| with y_min ≤ y(k) < −y_max,
(ii) y_max > |y_min| with −y_min < y(k) ≤ y_max.

In these cases, the initial value of the output is admissible under the original asymmetric constraints, but not under the more conservative constraint (14). Figure 1(a) provides an illustration for case (i). As can be seen, the imposition of the more conservative constraint −y_max < y(k) < y_max makes the output variable y(k) lie outside the range of admissible values.

Cavalca et al. [4] proposed a heuristic solution, based on a time-varying pseudoreference r(k) = min{(y_max + y(k))/2, 0}, as shown in Figure 1(b). The problem of asymmetric output constraints is then redefined in terms of new symmetric constraints (a/a) around r(k). However, it should be noted that this technique does not lead to guaranteed stability and constraint satisfaction.

Unlike the approach described above, which involves a pseudoreference r(k) that may change at each sampling time k, the solution proposed in the present paper employs a sequence of pseudoreferences r_i (i = 1, 2, ..., i_max) which are defined at k = 0 on the basis of the initial output value y(0). As illustrated in Figure 2, symmetric constraints (a/a, b/b, c/c, ...) are established around each pseudoreference. It will be shown that the use of such pseudoreferences, together with a convenient commutation rule, provides robust constraint satisfaction and ensures that the state trajectory converges to the origin. For the sake of brevity of presentation, only case (i) will be treated. Case (ii) can be recast into case (i) by defining y̆(k) = −Cx(k) and replacing the constraints y_min ≤ y(k) ≤ y_max with −y_max ≤ y̆(k) ≤ −y_min.

Given an initial state x(0) such that y(0) = Cx(0) falls within the scope of case (i), the following algorithm defines the pseudoreferences r_i, as well as a sequence of associated matrices Q̃_i^∗. These matrices will be subsequently employed in the control law to establish a rule of commutation from one pseudoreference to the next.

It is assumed that the set X_s of possible equilibrium values x_s for the state of the plant is known from the physics of the process to be controlled.

Algorithm for Determination of the Pseudoreferences and Associated Ellipsoids (PR Algorithm).

Step 1. Define the pseudoreferences r_i as follows:

Step 1.1. Let r_0 = [y_max + y(0)]/2

Step 1.2. Let i = 1

Step 1.3. While |r_{i−1}| > y_max do
    r_i = (y_max + r_{i−1})/2
    i = i + 1
End While

Step 1.4. Let i_max = i

Step 1.5. Let r_{i_max} = 0.

Step 2. For each r_i determine x_{s,i} such that:

C x_{s,i} = r_i,    (15)
x_{s,i} ∈ X_s.    (16)

Figure 1: (a) Illustration of the more conservative constraint (14) in case (i). (b) Heuristic solution proposed by Cavalca et al. [4].

Figure 2: Illustration of the proposed pseudoreference scheme.

Figure 3: Result of the PR algorithm.

Step 3. Let

ξ_0 = x(0) − x_{s,0},
ξ_i = x_{s,i−1} − x_{s,i},   1 ≤ i ≤ i_max,
ȳ_i = y_max − r_i,   0 ≤ i ≤ i_max.

Step 4. Solve the problem P(ξ_i) with the constraints ȳ_i, ū for each i (i = 0, 1, ..., i_max) and denote the resulting solution by (γ_i^∗, Q̃_i^∗, Σ̃_i^∗).

The matrices Q̃_i^∗ obtained in Step 4 define ellipsoids of the form (x − x_{s,i})^T (Q̃_i^∗)^{−1} (x − x_{s,i}) < 1, as illustrated in Figure 3 for the case of a second-order system. It is noteworthy that, by construction (in view of LMI (7) with x(k) = ξ_i), the ellipsoid i contains the center of the ellipsoid (i − 1).

The PR algorithm is said to be feasible if (15) and (16) in Step 2 and the optimization problem P(ξ_i) in Step 4 are feasible for every i = 1, 2, ..., i_max. In this case, the resulting x_{s,i}, ȳ_i, Q̃_i^∗, i = 0, 1, ..., i_max, are used in the control algorithm proposed below.

Algorithm for Control Using the Pseudoreferences (CPR Algorithm).

Initialization. Let k = 0 and i = 0.

Step 1. Read the state x(k).

Step 2. If i < i_max then

    Let x_{i+1}(k) = x(k) − x_{s,i+1}

    If x_{i+1}(k)^T (Q̃_{i+1}^∗)^{−1} x_{i+1}(k) < 1 (condition of transition) then
        Let i = i + 1
    End If

End If.

Step 3. Let x_i(k) = x(k) − x_{s,i} and solve the problem P(x_i(k)) with the constraints ȳ_i and ū in order to obtain (γ_k^∗, Q_k^∗, Σ_k^∗).

Step 4. Calculate K_k^∗ = Σ_k^∗ (Q_k^∗)^{−1} and u(k) = K_k^∗ x_i(k).

Step 5. Apply u(k) to the plant.

Step 6. Let k = k + 1, wait a sample time, and return to Step 1.

The main result of this work is stated in the following theorem, which is concerned with the satisfaction of constraints and convergence of the state trajectory to the origin under the control law given by the CPR algorithm.

Theorem 5. If the PR algorithm is feasible and the CPR algorithm is applied to control the plant, then x(k) → 0 as k → ∞, with satisfaction of the input and output constraints.

Proof. For k = 0, the state x(k) lies in the ellipsoid associated with Q̃_0^∗, which was obtained by solving problem P(x(0) − x_{s,0}) in the PR algorithm. Therefore, problem P(x_0(0)) is feasible by hypothesis. Lemma 1 then guarantees that x_0(k) → 0 (i.e., ||x(k) − x_{s,0}|| → 0) as k → ∞, with satisfaction of the input and output constraints, under the control law stated in Steps 3 and 4 of the CPR algorithm with i = 0. Since the ellipsoid associated with Q̃_1^∗ contains x_{s,0}, by construction, it can be concluded that the condition of transition stated in Step 2 of the CPR algorithm with i = 1 will be satisfied in finite time. Let k_1 be the first time when this condition is satisfied, that is,

x_1^T(k_1) (Q̃_1^∗)^{−1} x_1(k_1) < 1.    (17)

This condition ensures that the optimization problem P(x_1(k_1)) is feasible, since, by construction, the solution (γ_1^∗, Q̃_1^∗, Σ̃_1^∗) of P(ξ_1) satisfies the constraints of P(x_1(k_1)). In fact, condition (17) ensures that LMI (7) is satisfied with x(k) and Q replaced with x_1(k_1) and Q̃_1^∗, respectively. Moreover, the remaining LMIs (8), (11)-(12) are satisfied by Q̃_1^∗ and Σ̃_1^∗ by hypothesis. Therefore, after switching from i = 0 to i = 1, Lemma 1 ensures that x_1(k) → 0 (i.e., ||x(k) − x_{s,1}|| → 0) as k → ∞ with satisfaction of the input and output constraints. As a result, the condition of transition with i = 1 will be satisfied in finite time. A similar reasoning can be applied to show that the condition of transition will be satisfied for all i = 0, 1, ..., i_max − 1 in finite time. Finally, when i = i_max, the state x(k) will be inside the last ellipsoid, centered at x_{s,i_max} = [0 0 · · · 0]^T, and then Lemma 1 will ensure that x(k) → 0 as k → ∞, again with satisfaction of the constraints.

Figure 4: Simulation results using relaxed output constraints.

4. Numerical Example

A discrete state-space model of a double integrator will be employed to illustrate the proposed strategy. The matrices of the model are given by

\[
A = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad
B = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}\kappa, \quad
C = \begin{bmatrix} 1 & 0.2 \end{bmatrix}, \tag{18}
\]

where T is the sampling time and κ is an uncertain gain parameter.

The initial condition is set to x(0) = [−10  0]^T, the sampling time is T = 0.5 s, and the constraints are defined as ū = u_max = −u_min = 5, y_min = −10, and y_max = 0.1. The uncertain parameter κ is assumed to lie in the range 0.9–1.1. The weight matrices of the controller are defined as

\[
W_x = \begin{bmatrix} 100 & 0 \\ 0 & 0.01 \end{bmatrix}, \quad W_u = 100. \tag{19}
\]

In this case, the set X_s of possible equilibrium points is given by

X_s = {(x_1, x_2) | x_2 = 0}.    (20)
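Step 1 of the PR algorithm is easy to reproduce for this example. The short sketch below (plain Python, not taken from the paper) generates the pseudoreference sequence for y(0) = −10 and y_max = 0.1, together with the associated symmetric bounds ȳ_i = y_max − r_i from Step 3; the resulting values coincide with those listed later in Table 1.

```python
def pseudoreferences(y0, y_max):
    """Step 1 of the PR algorithm: pseudoreference sequence r_0, ..., r_imax."""
    r = [(y_max + y0) / 2.0]            # Step 1.1
    while abs(r[-1]) > y_max:           # Step 1.3
        r.append((y_max + r[-1]) / 2.0)
    r.append(0.0)                       # Step 1.5: r_imax = 0
    return r

r = pseudoreferences(-10.0, 0.1)
ybar = [0.1 - ri for ri in r]           # Step 3: ybar_i = y_max - r_i
for i, (ri, yi) in enumerate(zip(r, ybar)):
    print(i, round(ri, 6), round(yi, 6))
# Expected (cf. Table 1): r = -4.95, -2.425, -1.1625, -0.53125,
#                             -0.215625, -0.0578125, 0  with i_max = 6.
```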

Figure 5: Simulation results using the proposed strategy: (a) output and (b) control signals.

In fact, the state variables x_1 and x_2 can be regarded as position and velocity, respectively, and thus the equilibrium can only be achieved if the velocity x_2 is zero.

The pseudoreferences r_i and the associated symmetric output constraints ȳ_i, which define the controllers i = 0, ..., 6, are presented in Table 1. The simulations were performed in the Matlab environment.

Table 1: Pseudoreferences and associated output constraints.

    i    r_i          ȳ_i
    0    −4.95        5.05
    1    −2.425       2.525
    2    −1.1625      1.2625
    3    −0.53125     0.63125
    4    −0.21563     0.31562
    5    −0.057813    0.15781
    6    0            0.1

5. Results and Discussions

As discussed in Section 3, a possible approach to address asymmetric constraints consists of adopting the conservative bounds defined in (14). In the present example, such a procedure amounts to setting ȳ = y_max = 0.1. However, the resulting optimization problem P(x(0)) becomes infeasible. In fact, given the control constraint −5 < u(k) < 5, it is not possible to steer the output from y(0) = −10 to the range [−0.1, 0.1] in a single sampling period.

On the other hand, if the constraints are relaxed by setting ȳ = −y_min = 10, there is no guarantee that the resulting output trajectory will remain within the original (y_min, y_max) bounds. In fact, the inset in Figure 4 shows that such a relaxation of the output constraints does lead to a violation of the original upper bound for κ = 0.9.

These findings motivate the adoption of the proposed strategy for handling the asymmetric output constraints. Figures 5(a) and 5(b) present the simulation results obtained by using the CPR algorithm. As can be seen, both the output and control constraints were properly enforced, even when using the extreme values of κ in the simulation.

The commutation between the successive pseudoreferences is illustrated in Figure 6. This graph indicates that the commutation from one pseudoreference to the next occurs in finite time, as expected. The final pseudoreference (i = 6) corresponds to the origin, which is the desired equilibrium point for the regulation problem.

Figure 6: Commutation between pseudoreferences.
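For completeness, the following is a minimal sketch of how the receding horizon loop of the CPR algorithm (Steps 1–6) can be organized in code; it is not the authors' implementation. The functions `solve_P` (an SDP call returning (γ*, Q*, Σ*) for problem P(·)) and `plant_step` (the simulated plant) are hypothetical placeholders, and the arrays `xs`, `ybar`, and `Qtil` are assumed to come from a prior run of the PR algorithm.

```python
import numpy as np

def cpr_loop(x0, xs, ybar, Qtil, u_bar, solve_P, plant_step, n_steps=25):
    """Receding horizon implementation of the CPR algorithm (Steps 1-6).

    xs[i]   : equilibrium state x_{s,i} associated with pseudoreference r_i
    ybar[i] : symmetric output bound used with pseudoreference r_i
    Qtil[i] : matrix defining the ellipsoid of pseudoreference r_i
    solve_P : hypothetical function (xi, ybar_i, u_bar) -> (gamma, Q, Sigma)
    """
    i_max = len(xs) - 1
    x, i = np.array(x0, dtype=float), 0
    history = []
    for k in range(n_steps):
        # Step 2: commutation test against the ellipsoid of pseudoreference i+1.
        if i < i_max:
            xi_next = x - xs[i + 1]
            if xi_next @ np.linalg.solve(Qtil[i + 1], xi_next) < 1.0:
                i += 1
        # Steps 3-4: solve P(x_i(k)) and compute the state feedback control.
        xi = x - xs[i]
        gamma, Q, Sigma = solve_P(xi, ybar[i], u_bar)
        K = Sigma @ np.linalg.inv(Q)
        u = float(K @ xi)                 # SISO case: a scalar control input
        # Steps 5-6: apply the control and advance one sampling period.
        x = plant_step(x, u)
        history.append((k, i, u, x.copy()))
    return history
```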

6. Conclusion

This paper presented a strategy for handling asymmetric output constraints within the scope of an LMI-based RMPC scheme. For this purpose, a procedure for defining a sequence of pseudoreferences was devised, along with a rule for commutation from one pseudoreference to the next. The proposed approach guarantees constraint satisfaction and convergence of the state trajectory to the origin, provided that the algorithm for determination of the pseudoreferences is feasible. The results of a numerical simulation study indicated that the proposed procedure may be a suitable alternative to the use of either more conservative constraints (which may lead to infeasibility issues) or more relaxed constraints (which do not guarantee satisfaction of the original restrictions). Future research could be concerned with the extension of the proposed approach to multiple-input multiple-output (MIMO) systems. In this case, it may be necessary to define different pseudoreferences for each constrained output under consideration.

Acknowledgments

The authors gratefully acknowledge the support of FAPESP (scholarship 2008/54708-6 and grant 2006/58850-6), CAPES (Pró-Engenharias), and CNPq (research fellowships).

References

[1] E. F. Camacho and C. Bordons, Model Predictive Control, Springer, London, UK, 1999.
[2] J. M. Maciejowski, Predictive Control with Constraints, Prentice-Hall, Harlow, UK, 2002.
[3] J. A. Rossiter, Model-Based Predictive Control—A Practical Approach, CRC Press, New York, NY, USA, 2003.
[4] M. S. M. Cavalca, R. K. H. Galvão, and T. Yoneyama, “Robust model predictive control for a magnetic levitation system employing linear matrix inequalities,” in ABCM Symposium Series in Mechatronics, vol. 4, Rio de Janeiro, Brazil, 2010.
[5] A. H. González, J. L. Marchetti, and D. Odloak, “Robust model predictive control with zone control,” IET Control Theory and Applications, vol. 3, no. 1, pp. 121–135, 2009.
[6] M. A. Rodrigues and D. Odloak, “Output feedback MPC with guaranteed robust stability,” Journal of Process Control, vol. 10, no. 6, pp. 557–572, 2000.
[7] M. V. Kothare, V. Balakrishnan, and M. Morari, “Robust constrained model predictive control using linear matrix inequalities,” Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
[8] L. Özkan and M. V. Kothare, “Stability analysis of a multi-model predictive control algorithm with application to control of chemical reactors,” Journal of Process Control, vol. 16, no. 2, pp. 81–90, 2006.
[9] Z. Wan and M. V. Kothare, “A framework for design of scheduled output feedback model predictive control,” Journal of Process Control, vol. 18, no. 3-4, pp. 391–398, 2008.
[10] V. T. Minh and F. B. M. Hashim, “Robust model predictive control schemes for tracking setpoints,” Journal of Control Science and Engineering, vol. 2010, Article ID 6494361, 2010.
[11] M. S. M. Cavalca, R. K. H. Galvão, and T. Yoneyama, “Integrator resetting with guaranteed feasibility for an LMI-based robust model predictive control approach,” in Proceedings of the 18th Mediterranean Conference on Control and Automation, pp. 634–639, Marrakech, Morocco, 2010.
[12] Z. Wan and M. V. Kothare, “An efficient off-line formulation of robust model predictive control using linear matrix inequalities,” Automatica, vol. 39, no. 5, pp. 837–846, 2003.
[13] Z. Wan and M. V. Kothare, “Robust output feedback model predictive control using off-line linear matrix inequalities,” Journal of Process Control, vol. 12, no. 7, pp. 763–774, 2002.
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 729748, 13 pages
doi:10.1155/2012/729748

Research Article
Efficient Model Predictive Algorithms for
Tracking of Periodic Signals

Yun-Chung Chu and Michael Z. Q. Chen


Department of Mechanical Engineering, The University of Hong Kong, Hong Kong

Correspondence should be addressed to Michael Z. Q. Chen, mzqchen@hku.hk

Received 30 June 2011; Accepted 31 August 2011

Academic Editor: Baocang Ding

Copyright © 2012 Y.-C. Chu and M. Z. Q. Chen. This is an open access article distributed under the Creative Commons
Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.

This paper studies the design of efficient model predictive controllers for fast-sampling linear time-invariant systems subject to
input constraints to track a set of periodic references. The problem is decomposed into a steady-state subproblem that determines
the optimal asymptotic operating point and a transient subproblem that drives the given plant to this operating point. While the
transient subproblem is a small-sized quadratic program, the steady-state subproblem can easily involve hundreds of variables
and constraints. The decomposition allows these two subproblems of very different computational complexities to be solved in
parallel with different sampling rates. Moreover, a receding horizon approach is adopted for the steady-state subproblem to spread
the optimization over time in an efficient manner, making its solution possible for fast-sampling systems. Besides the conventional
formulation based on the control inputs as variables, a parameterization using a dynamic policy on the inputs is introduced, which
further reduces the online computational requirements. Both proposed algorithms possess nice convergence properties, which are
also verified with computer simulations.

1. Introduction

One of the most attractive features of model predictive control (MPC) is its ability to handle constraints [1]. Many other control techniques are conservative in handling constraints, or even try to avoid activating them, thus sacrificing the best performance that is achievable. MPC, on the contrary, tends to make the closed-loop system operate near its limits and hence produces far better performance. This property gives MPC its strength in practice, leading to a wide acceptance by industry.

A very good example of a system operating near its limits is a plant being driven by periodic signals to track periodic references. Under this situation, some of the system constraints will be activated repeatedly, and the optimal operating control signal is far from trivial. Just clipping the control signal to fit into the system constraints typically produces inferior performance. And the loss being considered here is not just a transient loss due to sudden disturbances, but indeed a steady-state loss due to a suboptimal operating point. Therefore, the loss is long term and severe.

On the other hand, the successful real-life applications of MPC are mostly on systems with slower dynamics such as industrial and chemical processes [2]. The reason is simply that MPC requires a constrained optimization to be carried out online in a receding horizon fashion [3, 4]. Therefore, to apply MPC to fast-sampling systems, the computational power needed will be substantial. In any case, because of its great success in slow-sampling systems, the trend to extend MPC to fast-sampling systems is inevitable, and much recent research has been carried out to develop efficient methods to implement MPC in such cases. While some of these works focus on unloading the computational burdens [5–9], others emphasize code optimization [10–12] and new algorithmic paradigms [13–17].

If MPC is applied to the tracking of periodic signals in a receding horizon fashion, the horizon length will be related to the period length, and a long period will imply an online optimization problem with many variables and constraints. For a fast-sampling system, it is essentially an attempt to solve a very big computational problem within a very small time frame. In this paper, we shall analyze the structure of this
problem and then propose two efficient algorithms for the task. They aim to make the application of MPC to a fast-sampling system possible by a slight sacrifice on the transient performance, but the optimal or near-optimal steady-state performance of periodic tracking will be maintained.

In Section 2, the mathematical formulation of the problem will be presented. The two algorithms, one based on the concept of receding horizon quadratic programming and the other based on the idea of a dynamic MPC policy, will be presented in Sections 3 and 4, respectively. A comment on the actual implementation will be given in Section 5, followed by some computer simulations in Section 6 to illustrate several aspects of the proposed algorithms. Finally, Section 7 concludes the paper.

To avoid cumbersome notations like u(k | k), u(k + 1 | k), ..., u(k + N_u − 1 | k), the MPC algorithms in this paper will only be presented as if the current time is k = 0, and we shall write u(0), u(1), ..., u(N_u − 1) instead. The reader is asked to bear in mind that the algorithm is actually to be implemented in a receding horizon fashion.

2. Problem Formulation

Consider a linear time-invariant plant subject to a periodic disturbance:

x^+ = A x + B_1 w + B_2 u,    (1)
y = C x + D_1 w + D_2 u,    (2)
x(0) = x_0,    (3)

where the superscript + denotes the time-shift operator, that is,

x^+(k) = x(k + 1),    (4)

and the disturbance w is measurable and periodic with period N_p. The control objective is to construct a control signal u such that the plant output y will track a specific periodic reference r of the same period N_p asymptotically with satisfactory transient performance. The control input u is also required to satisfy some linear inequality constraints (e.g., to be within certain bounds). The reference r is not necessarily fixed but may be different for different disturbances w. (For that reason, it may be more appropriate to call w an exogenous signal rather than a disturbance.)

The algorithms developed in this paper are motivated by the following situations:

(1) the period N_p is very long compared with the typical transient behaviours of the closed-loop system;

(2) the linear inequality constraints on u are persistently active, that is, for any given k̄, there exists a k > k̄ such that u(k) meets at least one of the associated linear inequality constraints with equality;

(3) there is not sufficient computational power to solve the associated quadratic program completely within one sampling interval unless both the control horizon and the prediction horizon are much shorter than N_p.

As a matter of fact, without the above considerations and restrictions, the problem is not very challenging and can be tackled with standard MPC approaches for linear systems.

The underlying idea of the approach proposed in this paper is that, since the transient behaviour of the closed-loop system is expected to be much shorter than the period N_p, we should decompose the original control problem into two: one we call the steady-state subproblem and the other we call the transient subproblem. Hence, the transient subproblem can be solved with a control horizon and a prediction horizon much shorter than N_p. While the steady-state subproblem is still very computationally intensive and cannot be solved within one sampling interval, it is not really urgent compared with the transient subproblem, and its computation can be spread over several sampling intervals. Indeed, the two subproblems need not be synchronized even though the transient subproblem depends on the solution to the steady-state subproblem due to the coupled input constraints. The former will utilize the latter whenever the latter is updated and made available to the former. It is only that the transient control will try to make the plant output y track a suboptimal reference if the optimal steady-state control is not available in time.

Now let us present the detailed mathematical formulation of our proposed method. Naturally, since both w and r are periodic with period N_p, the solution u should also be periodic with the same period asymptotically; that is, there should exist a periodic signal u_s(k) such that

lim_{k→∞} (u(k) − u_s(k)) = 0.    (5)

Let x_s and y_s be the corresponding asymptotic solutions of x and y; they obviously satisfy the dynamics inherited from (1) and (2):

x_s^+ = A x_s + B_1 w + B_2 u_s,
y_s = C x_s + D_1 w + D_2 u_s,    (6)
x_s(0) = x_s(N_p).

Ideally, we want y_s = r, but this might not be achievable when u_s is required to satisfy the specific linear inequality constraints. Therefore, following the spirit of MPC, we shall find u_s such that

J_s = Σ_k e_s(k)^T Q e_s(k)    (7)

is minimized for some positive definite matrix Q, where e_s(k) is the asymptotic tracking error defined by

e_s = y_s − r,    (8)

and the summation in (7) is over one period of the signals. This is what we call the steady-state subproblem. In Sections 3 and 4 below, we shall present two different approaches to this steady-state subproblem, with an emphasis on their computational efficiencies.

Once the steady-state signals are known, the transient signals defined by

u_t = u − u_s,  x_t = x − x_s,  y_t = y − y_s,    (9)
satisfy the dynamics

x_t^+ = A x_t + B_2 u_t,
y_t = C x_t + D_2 u_t,    (10)
x_t(0) = x_0 − x_s(0),

derived from (1)–(3), subject to the original linear inequality constraints being applied to u_t(k) + u_s(k). Since the control horizon and the prediction horizon for this transient subproblem are allowed to be much shorter than N_p, it can be tackled with existing MPC algorithms.

It is important to note that in this steady-state/transient decomposition, the steady-state control u_s is actually a feedforward control signal determined from w and r, whereas the transient control u_t is a feedback control signal depending on x. An unstable plant can only be stabilized by feedback; however, since the main interest of the current paper is the computational complexity of the steady-state subproblem, we shall not discuss the stabilization issue in depth, as it has been studied quite extensively in the MPC literature. Typically, stabilizability of a constrained system using MPC would be cast as the feasibility of an associated optimization problem. For the numerical example in Section 6, we shall conveniently pick a plant where A is already stable, and hence the following quadratic cost may be adopted for the transient subproblem:

J_t = Σ_{k=0}^{N_u−1} ( y_t(k)^T Q y_t(k) + u_t(k)^T R u_t(k) ) + x_t(N_u)^T P_T x_t(N_u),    (11)

where N_u is the control horizon with N_u ≪ N_p, Q and R are chosen positive definite matrices, and P_T is the (weighted) observability gramian obtained from the Lyapunov equation

A^T P_T A − P_T + C^T Q C = 0.    (12)

The minimization of J_t is simply a standard quadratic program over u_t(0), u_t(1), ..., u_t(N_u − 1) for a given x_t(0). The situation will be more complicated when A is not stable, but one well-known approach is to force the unstable modes to zero at the end of the finite horizon [18].

Remark 1. Essentially, the choice of the cost function (11) with P_T from (12) for a stable A means that the projected control action after the finite horizon is set to zero, that is, u_t(k) = 0 for k ≥ N_u, since the “tail” of the quadratic cost is then given by

Σ_{k=N_u}^{∞} y_t(k)^T Q y_t(k) = Σ_{k=N_u}^{∞} x_t(k)^T C^T Q C x_t(k) = x_t(N_u)^T P_T x_t(N_u).    (13)

This terminal policy is valid because the steady-state subproblem has already required that u_s satisfies the linear inequality constraints imposed on u. Hence, J_t is obviously a Lyapunov function which will be further decreased by the receding horizon implementation when the future control u_t(N_u) turns into an optimization variable from zero.

Remark 2. We have deliberately omitted the R-matrix in the steady-state cost J_s in (7). The reason is simply that we want to recover the standard linear solution (for perfect asymptotic tracking) as long as u_s does not hit any constraint.

3. Steady-State Subproblem: A Receding Horizon Quadratic Programming Approach

When the periodic disturbance w is perfectly known, the steady-state subproblem is also a conceptually simple (but computationally high-dimensional) quadratic program. One way to know w is simply to monitor and record it over one full period. This, however, does not work well if w is subject to sudden changes. For example, the plant to be considered in our simulations in Section 6 is a power quality device called a Unified Power Quality Conditioner [19], where w consists of the supply voltage and the load current of the power system, and both may change abruptly if there are supply voltage sags/swells and variations in the load demands. Indeed, the main motivation of the receding horizon approach in MPC is that things never turn out as expected and the control signal should adapt in an autonomous manner. If the suddenly changed disturbance w can be known precisely only after one full period of observation, the transient response of the steady-state subproblem (not to be confused with the transient subproblem described in Section 2) will be unsatisfactory.

One way to overcome this is to introduce an exogenous model for the signals w and r, as adopted in [19]. Specifically, we construct a state-space model:

v^+ = A_v v,    (14)
w = C_w v,    (15)
r = C_r v,    (16)

and assume that both w and r are generated from this model. Since w and r are periodic with period N_p, we have

A_v^{N_p} = I.    (17)

One simple (but not the only) way to construct A_v, as demonstrated similarly in [19] in the continuous-time case, is to make A_v a block-diagonal matrix with each block taking the form

\[
\begin{bmatrix} \cos(n\omega T_s) & -\sin(n\omega T_s) \\ \sin(n\omega T_s) & \cos(n\omega T_s) \end{bmatrix}, \tag{18}
\]

where n is an integer, T_s is the sampling time, and ω T_s × N_p = 2π. The matrices C_w and C_r then simply sum up their respective odd components of v. This essentially performs a Fourier decomposition of the signals w and r, and hence their approximations by C_w v and C_r v will be arbitrarily good when more and more harmonics are included in the model.

Based on the exogenous model (14)–(17), an observer can easily be constructed to generate (an estimate of) v from the measurements of w and r. From v(0), the model (14)–(17) can then generate predictions of w(0), w(1), ..., w(N_p − 1) and r(0), r(1), ..., r(N_p − 1), and these can be used to find u_s(0), u_s(1), ..., u_s(N_p − 1) by the quadratic program. The use of the exogenous model (14)–(17) typically allows the changed w and r to be identified much sooner than the end of one full period.

The quadratic program for the steady-state subproblem can be written as follows:

\[
\min_{\mathbf{u}_s(0)} J_s, \qquad
J_s := \begin{bmatrix} \mathbf{u}_s(0) \\ v(0) \end{bmatrix}^T
\begin{bmatrix} H & F \\ F^T & G \end{bmatrix}
\begin{bmatrix} \mathbf{u}_s(0) \\ v(0) \end{bmatrix}, \qquad
\mathbf{u}_s(0) := \begin{bmatrix} u_s(0) \\ u_s(1) \\ \vdots \\ u_s(N_p-1) \end{bmatrix}, \tag{19}
\]
\[
\text{subject to } a_j^T \mathbf{u}_s(0) \le b_j, \quad j = 1, 2, \ldots, N_m, \tag{20}
\]

where N_m is the total number of linear inequality constraints. Note that since we assume only input but not state constraints for our original problem, (20) does not depend on v(0) and, hence, the feasibility of any u_s(0) remains the same even if there is an abrupt change in v(0) (i.e., if v(0) is different from the value predicted from (14) and v(−1)). Furthermore, the active set of constraints remains the same.

Definition 3. The active set A(u_s(0)) of any feasible u_s(0) satisfying (20) is the subset of {1, 2, ..., N_m} such that j ∈ A(u_s(0)) if and only if a_j^T u_s(0) = b_j.

Next, we present a one-step active set algorithm to solve the quadratic program (19)-(20) partially.

Algorithm 4. Given an initial feasible u_s(0) and a working set W_0 ⊂ A(u_s(0)), let the set of working constraints

a_j^T u_s(0) ≤ b_j,  j ∈ W_0,    (21)

be represented by

A_0 u_s(0) ≤ b_0,    (22)

where the inequality sign applies componentwise, that is, each row of A_0, b_0 represents a working constraint in (21).

(1) Compute the gradient

g_0 = H u_s(0) + F v(0),    (23)

and the null space of A_0, denoted Z_0, by

A_0 Z_0 = 0.    (24)

If Z_0^T g_0 ≈ 0, go to step (5).

(2) Compute a search direction w_0 = Z_0 w̃_0, where

Z_0^T H Z_0 w̃_0 + Z_0^T g_0 = 0.    (25)

(3) Let

α_0 = min_{j ∉ W_0, a_j^T w_0 > 0} (b_j − a_j^T u_s(0)) / (a_j^T w_0).    (26)

(4) If α_0 ≥ 1, go to step (5). Otherwise, update u_s(0) to u_s^∗(0) by

u_s^∗(0) = u_s(0) + α_0 w_0,    (27)

and add a (blocking) constraint to W_0 to form a new working set W_0^∗ according to the method described in Remark 7 below. Quit.

(5) Update u_s(0) to u_s^∗(0) by

u_s^∗(0) = u_s(0) + w_0.    (28)

Compute the Lagrange multiplier λ_0 from

A_0^T λ_0 + g_0 = 0    (29)

to see whether any component of λ_0 is negative. If yes, remove one of the constraints corresponding to a negative component of λ_0 from W_0 to form a new working set W_0^∗ according to the method described in Remark 7 below. Quit.

Algorithm 4 can be interpreted as follows. It solves the equality-constrained quadratic program

\[
\min_{\mathbf{u}_s'(0)} J_s', \qquad
J_s' := \begin{bmatrix} \mathbf{u}_s'(0) \\ v(0) \end{bmatrix}^T
\begin{bmatrix} H & F \\ F^T & G \end{bmatrix}
\begin{bmatrix} \mathbf{u}_s'(0) \\ v(0) \end{bmatrix}, \qquad
\text{subject to } a_j^T \mathbf{u}_s'(0) = b_j, \; j \in W_0, \tag{30}
\]

and then searches along the direction

w_0 = u_s'(0) − u_s(0),    (31)

until it is either blocked by a constraint not in W_0 (step (4)) or the optimal u_s'(0) is reached (step (5)). This is indeed a single step of the standard active set method (the null-space approach) for quadratic programming [20, 21], except for the modifications that will be detailed in Remark 7 below. In other words, if we apply Algorithm 4 repeatedly to the new u_s^∗(0), W_0^∗, it will converge to the solution of the original inequality-constrained quadratic program (19)-(20) within a finite number of steps, and the cost function J_s is strictly decreasing. However, here we only apply a single step of the active set method due to the limited computational power available within one sampling interval. Furthermore, we do not even assume that the single step of computation
can be completed within T_s. Let N_a T_s be the time required or allowed to carry out Algorithm 4. To complete the original quadratic program (19)-(20) in a receding horizon fashion, we need to forward u_s^∗(0), W_0^∗ to u_s(N_a), W(N_a) by rotating the components of u_s^∗(0) by an amount of N_a, since it is supposed to be periodic:

\[
\mathbf{u}_s(N_a) :=
\begin{bmatrix} u_s(N_a) \\ u_s(N_a+1) \\ \vdots \\ u_s(N_p-1) \\ u_s(N_p) \\ u_s(N_p+1) \\ \vdots \\ u_s(N_a+N_p-1) \end{bmatrix}
=
\begin{bmatrix} u_s^*(N_a) \\ u_s^*(N_a+1) \\ \vdots \\ u_s^*(N_p-1) \\ u_s^*(0) \\ u_s^*(1) \\ \vdots \\ u_s^*(N_a-1) \end{bmatrix}. \tag{32}
\]

Obviously, Algorithm 4 will then continue to solve an equivalent quadratic program as long as v strictly follows the exogenous dynamics (14). Hence we have the following convergence result.

Proposition 5. Algorithm 4, together with the rotation of the components of u_s^∗(0) in (32), will solve the quadratic program (19)-(20) in finite time as long as v(k) satisfies the exogenous dynamics (14).

Proof. From the argument above it is easy to see that, as long as v(k) follows the dynamics (14), the algorithm is consistently solving essentially the same quadratic program. So it remains to check that the convergence proof of the standard active set algorithm remains valid despite the modifications we shall detail in Remark 7, which is indeed the case.

Of course, the most interesting feature of the receding horizon approach is that the solution will adapt autonomously to the new quadratic program if there is an abrupt change in v. Since constraint (20) is independent of v, an abrupt change in v will not destroy the feasibility of the previously determined u_s, and the working set W determined previously also remains a valid subset of the active set. Hence, the receding horizon active set method will continue to work even though the objective function (19) has changed. However, if it is necessary to include not only control but also state constraints into the original problem formulation, we shall then require a quadratic programming algorithm (other than the active set method in its simplest form) that does not assume the initial guess to be feasible.

Remark 6. There could be two possible ways to arrange the steps in Algorithm 4. One is to update the working set W followed by u_s, and the other is to update u_s followed by the working set W. In the receding horizon framework, it might seem at first glance that the first choice is the right one, since we shall then avoid basing the optimization of u_s on an “outdated” working set if v happens to have changed. However, it turns out that the first choice is actually undesirable. One well-known “weakness” of the active set method is that it is not so easy to remove a constraint once it enters the working set W. This becomes even more of a concern in the receding horizon framework. If v has changed, and so has the objective function (19), the original stationary u_s^∗ obtained in step (5) may no longer be stationary, and it will require at least an additional iteration to identify the new stationary point before we can decide whether any constraint can be dropped from the working set or not. This will seriously slow down the transient response of the steady-state subproblem. Indeed, once v has changed, many of the constraints in the previous working set are no longer sensible, and it will be wiser to drop them hastily rather than being too cautious, only to find much later that the constraints should still be dropped eventually.

Remark 7. One key element in the active set method of the quadratic program is to add or drop a constraint to or from the working set W. The constraint being added belongs to the blocking set B, defined as those constraints corresponding to the minimum α_0 in (26). Physically, they are the constraints that will be first encountered when we try to move u_s(0) to u_s'(0) in (31). The constraint being dropped belongs to the set L, defined as those constraints corresponding to a negative component of the Lagrange multiplier in (29). The active set method will converge in finite time no matter which constraint in B is added or which constraint in L is dropped. One standard and popular choice in the conventional active set method is that the constraint in L corresponding to the most negative component of λ is dropped, whereas the choice from B is arbitrary. This is a very natural choice when there is no other consideration. However, in the receding horizon framework, one other (and in fact important) consideration emerges, which is the execution time of the control input(s) associated with a constraint. Specifically, if Algorithm 4 takes time N_a T_s to carry out, then W_0^∗ updated in the current iteration will be used to optimize u_s(N_a) in the next iteration,

\[
\mathbf{u}_s(N_a) := \begin{bmatrix} u_s(N_a) \\ u_s(N_a+1) \\ \vdots \\ u_s(N_a+N_p-1) \end{bmatrix}, \tag{33}
\]

but the outcome of that optimization is ready only at k = 2N_a, based on which the transient control u_t is computed. Suppose that the transient subproblem takes one sampling interval to solve; then the transient subproblem at k = 2N_a will update u(2N_a + 1) = u_s(2N_a + 1) + u_t(2N_a + 1) (see Section 5 below for a more detailed discussion). Hence, the “time priority” of u_s is 2N_a + 1, 2N_a + 2, ..., N_p − 1, 0, 1, ..., 2N_a, and from this argument we choose to drop the constraint in L that is associated with the first u_s in this sequence, or to add the constraint in B that is associated with the last u_s in this sequence (of course the most negative Lagrange multiplier can still be used as the second criterion
if two constraints in L happen to have the same urgency). The proposal here aims to assign the most freedom to the most urgent control input in the optimization, which makes sense in the receding horizon framework since the less urgent inputs may be reoptimized later.

Remark 8. Basically, the approach proposed in this section is to spread the original quadratic program over many intervals, so that each interval only carries out one iteration of the algorithm, and also to ensure that the quadratic program being solved is consistent when the prediction of the exogenous model is valid, but will migrate to a new quadratic program when there is a change in v(k). It is worth mentioning that the original standard MPC is a static controller by nature, since the true solution of a complete quadratic program is independent of the MPC's decisions in the past (past decisions can help to speed up the computations but will not affect the solution), but by spreading the algorithm over several intervals, it is turned into a dynamic controller with internal state u_s(k), W(k).

4. Steady-State Subproblem: A Dynamic Policy Approach

The approach proposed in Section 3 optimizes u_s. Consequently, the number of (scalar) variables being optimized is proportional to N_p. To further cut down the computations required, this section proposes another approach based on the idea of a dynamic policy, inspired by the works of [13, 22, 23]. This approach typically optimizes a smaller number of variables, and the number is independent of N_p, although the number of linear inequality constraints remains the same. In return, the optimization result is expected to be slightly inferior to that of Section 3 due to the reduction of variables (degrees of freedom). However, it should be noted that the number of optimized variables in this second approach is adjustable according to the designer's wishes.

The central idea of the dynamic policy approach [13, 22, 23] is that, instead of optimizing the control input directly, we generate the control input by a dynamic system whose initial state is optimized. This is similar to what we have done to w and r in Section 3. Specifically, we assume that u_s is also generated from a state-space model:

v′^+ = A′_v v′,    (34)
u_s = C′_v v′,    (35)

where

(A′_v)^{N_p} = I.    (36)

This state-space model is designed a priori, but the initial state v′(0) will be optimized online. Obviously, the quadratic program (19)-(20) becomes

\[
\min_{v'(0)} \tilde{J}_s, \qquad
\tilde{J}_s := \begin{bmatrix} v'(0) \\ v(0) \end{bmatrix}^T
\begin{bmatrix} \tilde{H} & \tilde{F} \\ \tilde{F}^T & G \end{bmatrix}
\begin{bmatrix} v'(0) \\ v(0) \end{bmatrix}, \tag{37}
\]
\[
\text{subject to } \tilde{a}_j^T v'(0) \le b_j, \quad j = 1, 2, \ldots, N_m, \tag{38}
\]

where

H̃ = Õ^T H Õ,   F̃ = Õ^T F,   ã_j^T = a_j^T Õ,  j = 1, 2, ..., N_m,    (39)

and Õ is the observability matrix

\[
\tilde{O} := \begin{bmatrix} C'_v \\ C'_v A'_v \\ \vdots \\ C'_v (A'_v)^{N_p - 1} \end{bmatrix}. \tag{40}
\]

The number of variables in this new quadratic program is the dimension of v′(0), denoted by n_{v′}. If A′_v is constructed from the method of Fourier decomposition described in Section 3, Shannon's sampling theorem implies that a sufficiently large but finite n_{v′} will guarantee a full reconstruction of the original optimization variable u_s(0). On the other hand, a smaller n_{v′} restricts the search to a lower-dimensional subspace of u_s(0), and hence the optimization is easier but suboptimal.

One natural choice of the dynamics A′_v is to make A′_v = A_v as in the exogenous model (14)–(17). Of course, it should be noted that constrained control is generally a nonlinear problem, and therefore the number of harmonics to be included in u_s may exceed that of w and r in order to achieve the true optimal performance. However, we could have over-designed the exogenous model (14)–(17) to include more harmonics in A_v than necessary for w and r, thus making the choice A′_v = A_v here not so conservative. The simulation results in Section 6 will demonstrate both cases.

It remains to choose the matrix C′_v in (35). The one we suggest here is based on the linear servomechanism theory [24–26], which solves the linear version of our problem when there is no input constraint. Essentially, when there is no constraint, perfect asymptotic tracking (i.e., y_s = r or e_s = 0) can be achieved by solving the regulator equation

X A_v = A X + B_1 C_w + B_2 U,
C_r = C X + D_1 C_w + D_2 U,    (41)

for the matrices X, U, and then letting

u_s = U v,    (42)

which also implies

x_s = X v.    (43)

Therefore, to recover the optimal (or perfect) solution in the linear case when u_s does not hit any constraint, the state-space model of u_s may be chosen as

v^+ = A_v v,
u_s = U v.    (44)
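The regulator equation (41) is linear in the unknowns X and U, so it can be solved by stacking it into one linear system. The following is a small sketch of that computation (an assumption about how one might implement it, not code taken from the paper), using Kronecker products and a least-squares solve in NumPy.

```python
import numpy as np

def solve_regulator_equation(A, B1, B2, C, D1, D2, Av, Cw, Cr):
    """Solve the regulator equation (41) for X and U by vectorization.

    Stacks  X Av - A X - B2 U = B1 Cw  and  C X + D2 U = Cr - D1 Cw  into one
    linear system in (vec(X), vec(U)) and solves it in the least-squares sense.
    """
    n, nv = A.shape[0], Av.shape[0]
    p = B2.shape[1]
    In, Inv = np.eye(n), np.eye(nv)
    # vec(X Av) = (Av^T kron I_n) vec(X); vec(A X) = (I_nv kron A) vec(X), etc.
    M = np.block([
        [np.kron(Av.T, In) - np.kron(Inv, A), -np.kron(Inv, B2)],
        [np.kron(Inv, C),                      np.kron(Inv, D2)],
    ])
    z = np.concatenate([(B1 @ Cw).flatten(order="F"),
                        (Cr - D1 @ Cw).flatten(order="F")])
    sol, *_ = np.linalg.lstsq(M, z, rcond=None)
    X = sol[: n * nv].reshape((n, nv), order="F")
    U = sol[n * nv:].reshape((p, nv), order="F")
    return X, U
```

With X and U in hand, the choice C′_v = U together with A′_v = A_v reproduces (44), and the Kalman decomposition step described next can then strip any unobservable part of that model.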


Figure 1: Derivations of unknown variables from known variables.

However, this state-space model is not guaranteed to be observable. When it is not, the resulting H̃ in (37) will become semidefinite instead of strictly positive definite. To overcome this, we suggest performing an orthogonal state transformation

\[
\begin{bmatrix} * \\ \bar{v} \end{bmatrix} = T v \tag{45}
\]

to bring (44) to the Kalman decomposition form

\[
\begin{bmatrix} * \\ \bar{v} \end{bmatrix}^+ =
\begin{bmatrix} * & * \\ 0 & \bar{A}_v \end{bmatrix}
\begin{bmatrix} * \\ \bar{v} \end{bmatrix}, \qquad
u_s = \begin{bmatrix} 0 & \bar{C}_v \end{bmatrix}
\begin{bmatrix} * \\ \bar{v} \end{bmatrix}, \tag{46}
\]

and hence obtain a reduced-order model of the form (34)-(35). It is easy to verify that (Ā_v)^{N_p} = I, since

\[
\begin{bmatrix} * & * \\ 0 & \bar{A}_v \end{bmatrix} \tag{47}
\]

is upper block-triangular and

\[
\begin{bmatrix} * & * \\ 0 & \bar{A}_v \end{bmatrix}^{N_p}
= \left(T A_v T^{-1}\right)^{N_p}
= T A_v^{N_p} T^{-1}
= I. \tag{48}
\]

Certainly, the discussion above only provides a suggestion of how to choose the state-space model for u_s, which we shall also adopt for our simulations in Section 6, but, in general, the designer should feel free to employ any valid state-space model to suit his problem.

Remark 9. Having reparameterized the quadratic program in terms of v′(0) rather than u_s(0), we can apply a similar version of Algorithm 4 to (37)-(38). In other words, it is not necessary to solve the quadratic program completely within one sampling interval. Instead of rotating the components of u_s^∗(0) to obtain u_s(N_a), we obtain v′(N_a) as (A′_v)^{N_a} v′^∗(0).

5. Implementation Issues and Impacts on Transient Performance

Before we present the simulation results, let us comment on the impact of computational delay on the transient subproblem of Section 2. First of all, we assume that the transient quadratic program can be solved completely within one sampling interval. Therefore, despite the way we presented the cost function J_t in (11), in actual implementation we shall optimize u_t(1), u_t(2), ..., u_t(N_u) based on x_t(1) at time k = 0, instead of u_t(0), u_t(1), ..., u_t(N_u − 1) based on x_t(0). The unknown x_t(1) can be projected from the known variables and the system dynamics. After the optimization is carried out to obtain u_t(1), u_t(2), ..., u_t(N_u), the control input to be executed at k = 1 will be u(1) = u_s(1) + u_t(1). Bear in mind that all these calculations can only be based on the best knowledge of the signals at k = 0.

Figure 1 summarizes how the unknown variables can be computed from the known variables. The variables in each layer are derived from those variables directly below them, but in actual implementation it is sometimes possible to derive explicit formulas to compute the upper layer from the lower layer, bypassing the intermediate layer and thus not requiring those intermediate calculations online. The variable on top is the control action u(1), computed from the variables of the steady-state subproblem on the left and those of the transient subproblem on the right, separated by a solid line. Of the variables in the bottom layer, v(0) is provided by the observer described in Section 3, x(0) is a measurement of the current plant state, and u(0) is the control input calculated by the algorithm at k = −1. Note that to compute u_t(1), the values of u_s(1), u_s(2), ..., u_s(N_u) are needed to form the constraints for the transient quadratic program, since the original linear inequality constraints apply to u(k) = u_s(k) + u_t(k). On the other hand, x_s(1) can be written as a linear function of v(1) and u_s(1) (or v′(1)) explicitly. Finally, the steady-state subproblem requires a computational time of N_a T_s, implying that the solution u_s(0)
provided by the steady-state subproblem at k = 0 is based on a measurement of v(k) at some k between −2N_a + 1 and −N_a. So in the worst case, u(1) is based on information as old as v(−2N_a + 1), which corresponds to a worst-case delay of 2N_a T_s. For instance, if N_a = 1, the control u(k) is computed from a measurement of x(k − 1), v(k − 1), and v(k − 2).

Remark 10. Although we said in Section 2 that the transient subproblem was not the main interest of this paper, it is an ideal vehicle to demonstrate the power of MPC, since the “useful freedom” of u_t(k) may have been totally consumed by u_s(k) when the latter hits a constraint. For example, the simulations to be discussed in Section 6 have the constraint

|u(k)| = |u_s(k) + u_t(k)| ≤ 1.    (49)

If u_s(k) already saturates at ±1, one side of u_t(k) is lost, but that could be the only side able to make the reduction of J_t possible, thus forcing u_t(k) to zero. So the input constraint does not only restrict the magnitude, but also the time instants at which the transient control u_t can be applied. Such a problem is extremely difficult for conventional control techniques.

6. Simulation Results

In this section we use an example to demonstrate the performance of our algorithms by computer simulations. The plant is borrowed from [19] and represents a power quality device called a Unified Power Quality Conditioner (UPQC), which has the following continuous-time state-space model:

\[
\dot{x} = A x + B_1 w + B_2 u, \qquad y = C x + D_1 w + D_2 u,
\]
\[
A = \begin{bmatrix}
-\dfrac{R_l}{L_l} & 0 & 0 & -\dfrac{1}{L_l} & -\dfrac{1}{L_l} \\
0 & -\dfrac{R_{se}}{L_{se}} & 0 & -\dfrac{1}{L_{se}} & 0 \\
0 & 0 & -\dfrac{R_{sh}}{L_{sh}} & 0 & -\dfrac{1}{L_{sh}} \\
\dfrac{1}{C_{se}} & \dfrac{1}{C_{se}} & 0 & 0 & 0 \\
\dfrac{1}{C_{sh}} & 0 & \dfrac{1}{C_{sh}} & 0 & 0
\end{bmatrix},
\]
\[
B_1 = \begin{bmatrix}
\dfrac{1}{L_l} & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & -\dfrac{1}{C_{sh}}
\end{bmatrix}, \qquad
B_2 = \begin{bmatrix}
0 & 0 \\ \dfrac{V_{dc}}{2L_{se}} & 0 \\ 0 & \dfrac{V_{dc}}{2L_{sh}} \\ 0 & 0 \\ 0 & 0
\end{bmatrix}, \tag{50}
\]
\[
C = \begin{bmatrix} 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 \end{bmatrix}, \qquad
D_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad
D_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.
\]

The exogenous input w is composed of the supply voltage and the load current, which are periodic at 50 Hz but may contain higher-order harmonics. The plant output y is composed of the load voltage and the supply current, which will be made to track designated pure sine waves of 50 Hz. The control input u is composed of two switching signals across the voltage source inverters (VSIs), both of which are required to satisfy the bounds −1 ≤ u ≤ 1. The general control objective is to maintain y at the desired waveforms despite possible fluctuations in w like supply voltage sags/swells or load demand changes. To apply the proposed MPC algorithms, we obtain a discrete-time version of the above state-space model by applying a sampling interval of T_s = 0.2 ms (i.e., 100 samples per period). Small-sized quadratic programs (such as our transient subproblem) can possibly be solved within such a short time thanks to state-of-the-art code optimization [12], which reports sampling rates in the range of kHz, but to solve a big quadratic program like our steady-state subproblem we shall resort to the technique of Algorithm 4. Note that in our formulation, the transient subproblem and the steady-state subproblem can be solved in parallel. Although the optimization of u_t depends on u_s, the transient control u_t(k + 1) is computed from the u_s(k) which is made available by the steady-state subproblem in the previous step, so it is independent of the current steady-state subproblem being solved.

As is typical in a power system, we assume only odd harmonics in the supply voltage and the load current. Hence we can reduce the computations in the steady-state subproblem by the following easy modifications of the standard algorithms presented in Sections 3 and 4: N_p may be chosen to represent half of the period instead of the whole period, satisfying

x_s(0) = −x_s(N_p),    (51)

instead of (6), and

A_v^{N_p} = −I,    (A′_v)^{N_p} = −I,    (52)

instead of (17) and (36), with ω T_s × N_p = π instead of 2π. The rotation operation in (32) should also become

\[
\mathbf{u}_s(N_a) :=
\begin{bmatrix} u_s(N_a) \\ u_s(N_a+1) \\ \vdots \\ u_s(N_p-1) \\ u_s(N_p) \\ u_s(N_p+1) \\ \vdots \\ u_s(N_a+N_p-1) \end{bmatrix}
=
\begin{bmatrix} u_s^*(N_a) \\ u_s^*(N_a+1) \\ \vdots \\ u_s^*(N_p-1) \\ -u_s^*(0) \\ -u_s^*(1) \\ \vdots \\ -u_s^*(N_a-1) \end{bmatrix}. \tag{53}
\]

Figure 2: Simulation scenario (waveforms, from top to bottom: w_1, supply voltage (volts); w_2, load current (amperes); r_1, reference of load voltage (volts); r_2, reference of supply current (amperes)). Voltage sag at t = 0.5 s; load demand changed at t = 1.0 s; sag cleared at t = 1.5 s.

Table 1: Values of the components of the UPQC.

Component:   Vdc      Lse      Lsh      Cse      Csh
Value:       320 V    5.0 mH   1.2 mH   10 μF    20 μF

Table 2: Line impedance and VSI impedances of the UPQC.

Component:   Rl        Ll       Rse      Rsh
Value:       0.01 Ω    1.0 mH   0.01 Ω   0.01 Ω

This cuts down half of the scalar variables, as well as half of the constraints, in the quadratic program.

The model parameters of the UPQC used in our simulations are summarized in Table 1 for the circuit components and in Table 2 for the line and VSI impedances. They are the same as the values in [19], except for Vdc, which we have changed from 400 V to 320 V so as to produce a saturated control u more easily. Note that Vdc is the DC-link voltage, which determines how big a fluctuation in the supply voltage or load current the UPQC can handle. In other words, saturation occurs when the UPQC is trying to deal with an unexpected voltage sag/swell or load demand that is beyond its designed capability.

The simulation scenario is summarized in Figure 2. Both the supply voltage and the load current consist of odd harmonics up to the 9th order. Despite the harmonics, it is desirable to regulate the load voltage to a fixed pure sine wave, whereas the supply current should also be purely sinusoidal, but its magnitude and phase are selected to maintain a power factor of unity and to match the supply active power to the active power demanded by the load, which means the reference of this supply current is w-dependent. The waveforms of both w and r are shown in Figure 2. The simulation scenario is designed such that the steady-state control us is not saturated at the beginning. At t = 0.5 s, a voltage sag occurs which reduces the supply voltage to 10% of its original value. The UPQC is expected to keep the load voltage unchanged but (gradually) increase the supply current so as to retain the original active power. This will drive us into a slightly saturated situation. At t = 1.0 s, the load demand increases, causing the reference of the supply current to increase again, and us will become deeply saturated.
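For orientation, the continuous-time plant (50) can be assembled directly from Tables 1 and 2 and discretized for the controller. The sketch below does exactly that; the zero/nonzero pattern follows the reconstruction of (50) above and should be checked against [19], and the sampling interval Ts is an illustrative assumption, since the excerpt does not state it.

```python
import numpy as np
from scipy.signal import cont2discrete

# Component values from Tables 1 and 2 (SI units).
Vdc, Lse, Lsh, Cse, Csh = 320.0, 5.0e-3, 1.2e-3, 10e-6, 20e-6
Rl, Ll, Rse, Rsh = 0.01, 1.0e-3, 0.01, 0.01

# Continuous-time UPQC model (50), with w = [supply voltage, load current]
# and u the two VSI switching inputs.
A = np.array([
    [-Rl/Ll,  0.0,       0.0,      -1/Ll,  -1/Ll ],
    [ 0.0,   -Rse/Lse,   0.0,      -1/Lse,  0.0  ],
    [ 0.0,    0.0,      -Rsh/Lsh,   0.0,   -1/Lsh],
    [ 1/Cse,  1/Cse,     0.0,       0.0,    0.0  ],
    [ 1/Csh,  0.0,       1/Csh,     0.0,    0.0  ],
])
B1 = np.array([[1/Ll, 0], [0, 0], [0, 0], [0, 0], [0, -1/Csh]])
B2 = np.array([[0, 0], [Vdc/(2*Lse), 0], [0, Vdc/(2*Lsh)], [0, 0], [0, 0]])
C  = np.array([[0, 0, 0, 0, 1], [1, 0, 0, 0, 0]])   # load voltage, supply current
D  = np.zeros((2, 4))

# Zero-order-hold discretization at an assumed fast sampling rate.
Ts = 1.0 / 10000.0
Ad, Bd, Cd, Dd, _ = cont2discrete((A, np.hstack([B1, B2]), C, D), Ts)
```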

[Figure 3: Quadratic cost Js of the steady-state subproblem, the receding horizon quadratic programming approach. Three panels plot Js on a logarithmic scale (10^-8 to 10^4) against time around t = 0.5 s, t = 1.0 s, and t = 1.5 s for the MVR, the MPC with the complete QP, and the MPC with Algorithm 1.]

At t = 1.5 s, the voltage sag is cleared and the supply voltage returns to its starting value, but the (new) load demand remains. Although the load demand is still higher than its initial value, the us required will just be within the bounds of ±1, thus leaving the saturation region to return to the linear region. So, in short, us is expected to experience "linear → slightly saturated → deeply saturated → linear but nearly saturated" in this simulation scenario.

To evaluate the performance of our algorithms, we compare them to two other cases. In the first case, instead of Algorithm 4, the complete quadratic program is solved in each iteration of the steady-state subproblem every Na sampling intervals. We call this case the complete QP, and it serves to indicate how much transient performance has been sacrificed (in theory) by spreading the quadratic program over a number of iterations. In the second case, the constraints are totally ignored, such that the optimal us(0) in the steady-state subproblem is supposed to be

u_s(0) = \begin{bmatrix} U \\ U A_v \\ \vdots \\ U A_v^{N_p - 1} \end{bmatrix} v(0),                (54)

where U is the solution to the regulator equation (41). The transient subproblem can also be solved by

u_t = F x_t,                (55)

where F is the optimal state-feedback gain minimizing the transient quadratic cost

\sum_{k=0}^{\infty} \left( y_t(k)^T Q\, y_t(k) + u_t(k)^T R\, u_t(k) \right).                (56)

However, the combined input u is still clipped at ±1. We label this control law the multivariable regulator (MVR), following the linear servomechanism theory. This case serves to indicate how bad the quadratic cost Js can be if a linear control law is used without taking the constraints into consideration.

Note that in both cases, the computational delays discussed in Section 5 will be in force, where u(k + 1) instead of u(k) will be optimized and the steady-state control us is only updated every Na sampling intervals. In reality, of course, the MVR should involve negligible computational delay, whereas the complete QP should need a longer time to solve than Algorithm 4, but we are merely using their associated quadratic costs here to analyze the behaviours of our algorithms.

Figure 3 plots the steady-state cost Js of our first approach in Section 3, based on receding horizon quadratic programming, together with the costs in the other two cases. Na is assumed to be 3 in this simulation. The transient subproblem has a control horizon of Nu = 5, corresponding to 10 scalar variables and 20 scalar inequality constraints. On the other hand, the steady-state subproblem has Np = 50, corresponding to 100 variables and 200 constraints. As shown in Figure 3, all Js are zero prior to t = 0.5 s and should also settle down to zero after t = 1.5 s.
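As a reading aid, the MVR comparison case amounts to the clipped linear law sketched below. The additive combination of us and ut and the variable names are assumptions of this sketch; the paper only specifies that F is the optimizer of (56) and that the combined input is clipped at ±1.

```python
import numpy as np

def mvr_control(us_k: np.ndarray, xt_k: np.ndarray, F: np.ndarray) -> np.ndarray:
    """Multivariable-regulator baseline: unconstrained steady-state feedforward
    plus linear transient feedback (55), followed by hard clipping at +/-1,
    i.e. the 'flat' clipped linear control law used for comparison."""
    u = us_k + F @ xt_k            # combined input before saturation
    return np.clip(u, -1.0, 1.0)   # constraints handled only by clipping
```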

Table 3: Summary of Js values for various cases studied.

Case                                           Js at t < 1.0 s   Js at t < 1.5 s   Na   Variables              Constraints
MVR                                            0.1156            53.276
MPC, receding horizon quadratic programming    0.0178            8.7527            3    100 (Np × 2)           200 (Np × 2 × 2)
MPC, dynamic policy (up to 9th harmonics)      0.0225            9.0273            1    nv = 20 (5 × 2 × 2)    200 (Np × 2 × 2)
MPC, dynamic policy (up to 29th harmonics)     0.0182            8.7640            2    nv = 60 (15 × 2 × 2)   200 (Np × 2 × 2)
Transient subproblem                                                                    10 (Nu × 2)            20 (Nu × 2 × 2)

[Figure 4: The first component of controls u and ut before and after the transitions at t = 1.0 s and t = 1.5 s. The top two panels compare the control u of the MVR with that of MPC Algorithm 1 close to the saturation limit; the bottom two panels show the us used to compute ut and the transient control ut of the MPC.]

It is observed that the transient response of our Js is pretty close to that of the complete QP during the transitions from "linear" to "slightly saturated" and from "slightly saturated" to "deeply saturated," but is poorer when it tries to return from "deeply saturated" to "linear." This can probably be attributed to the weakness of the active set method in removing a constraint from the working set, as discussed in Remark 6. Figure 3 also indicates that the MVR settles down to a much higher Js value when a saturation occurs, due to its ignorance of the control constraints. The exact values of Js just prior to t = 1.0 s and t = 1.5 s are summarized in Table 3; the MVR values are about 6-7 times the Js values of our algorithm.

[Figure 5: Quadratic cost Js of the steady-state subproblem, the dynamic policy approach. Three panels plot Js on a logarithmic scale (10^-8 to 10^4) against time around t = 0.5 s, t = 1.0 s, and t = 1.5 s for the MVR, the MPC with the complete QP, and the MPC with Algorithm 1.]

Figure 4 zooms into the first component of the control input u. The top two plots draw attention to the steady-state control us (u is essentially us just prior to t = 1.0 s and t = 1.5 s). It is observed that the differences in us between the MVR and the MPC are very subtle. Compared with the MVR, the MPC us merely reaches and leaves its limit (+1 here) at very slightly different time instants and also produces some "optimized ripples" of less than 0.25% around that limit instead of the "flat" value adopted in the clipped linear control law, but by doing these little things the MPC manages to bring Js down by almost one order of magnitude. This demonstrates how nontrivial the optimal us can be. We can also see from the plots that only one constraint is active in the "slightly saturated" situation, whereas multiple constraints are active in the "deeply saturated" situation. On the other hand, the bottom two plots in Figure 4 illustrate our discussion in Remark 10. The plots clearly show that during certain moments of the transient stages (t > 1.0 s and t > 1.5 s), the transient control ut is "disabled" due to the saturation of the steady-state control us. Note that we label the dashed blue curve as "us used to compute ut" since it is slightly different from the actual us; for instance, ut(k + 1) is computed from the knowledge of us at time k, which is not exactly the same as us(k + 1). Obviously, ut is not just disabled whenever us saturates; it happens only when the desired direction of ut violates the active constraint.

Next, let us look at the performance of our second MPC approach in Section 4, based on the dynamic policy. Odd harmonics up to the 9th order are included in Av, resulting in a total of nv = 20 variables; see Table 3. Since the number of variables is much lower than that of the first approach, we assume Na = 1 here; that is, one iteration of Algorithm 4 (equivalent version) is carried out in each sampling interval, whereas the transient subproblem is solved completely within each sampling interval. The transient performance of Js is plotted in Figure 5. Note that the MVR curve exhibits a slightly different transient from Figure 3 since their Na values are different. The dynamic policy approach clearly shows a faster transient response than the receding horizon quadratic programming approach, not only because of a smaller Na but also because of a smaller-sized quadratic program overall. However, the drawback is a slightly suboptimal Js, as indicated in Table 3.

As mentioned in Section 4, it is possible to over-design Av, and hence Āv, so that the optimal Js in this second MPC method will approach that of the first MPC method. For example, although we only have odd harmonics up to the 9th order in w, we may include odd harmonics up to the 29th order in Av and Āv. The results are also recorded in Table 3, and we see that this Js value is very close to the optimal one in the first method.
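To make the harmonic counts above concrete, the signal generator can be assembled as a block-diagonal bank of discrete-time oscillators, one 2x2 rotation block per odd harmonic per tracked channel. This is a sketch under that assumption; the paper's exact parameterization of Av is defined earlier and may differ in detail.

```python
import numpy as np

def harmonic_model(omega: float, Ts: float, max_odd: int, channels: int) -> np.ndarray:
    """Block-diagonal internal model containing the odd harmonics
    1, 3, ..., max_odd of the fundamental omega, repeated for each channel.
    Each harmonic contributes a 2x2 rotation block, so the dimension is
    2 * (number of odd harmonics) * channels."""
    blocks = []
    for _ in range(channels):
        for h in range(1, max_odd + 1, 2):
            th = h * omega * Ts
            blocks.append(np.array([[np.cos(th), -np.sin(th)],
                                    [np.sin(th),  np.cos(th)]]))
    n = 2 * len(blocks)
    Av = np.zeros((n, n))
    for i, blk in enumerate(blocks):
        Av[2*i:2*i+2, 2*i:2*i+2] = blk
    return Av
```

With this construction, harmonics up to the 9th order on two channels give 5 x 2 x 2 = 20 states, and up to the 29th order give 15 x 2 x 2 = 60 states, matching the nv values in Table 3; each odd-harmonic block also satisfies the half-period property Av^{Np} = -I of (52) when omega*Ts*Np = pi.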
7. Conclusions

To apply MPC to fast-sampling systems with input constraints for the tracking of periodic references, efficient algorithms to reduce online computational burdens are necessary. We have decomposed the tracking problem into a computationally complex steady-state subproblem and a computationally simple transient subproblem, and then proposed two approaches to solve the former.

The first approach, based on the concept of receding horizon quadratic programming, spreads the optimization over several sampling intervals, thus reducing the computational burdens at the price of a slower transient response. The second approach, based on the idea of a dynamic policy on the control input, further reduces the online computations at the price of a slightly suboptimal asymptotic performance. Despite the limitations, these approaches make the application of MPC to fast-sampling systems possible. Their transient behaviours and steady-state optimality have been analyzed via computer simulations, which have also demonstrated that the steady-state subproblem and the transient subproblem can be solved in parallel with different sampling rates. When the methods proposed in this paper are combined with modern code optimizations, the applicability of MPC to the servomechanism of fast-sampling constrained systems will be greatly enhanced.

Acknowledgment

This work is supported by HKU CRCG 201008159001.

References

[1] J. M. Maciejowski, Predictive Control with Constraints, Prentice-Hall, New Jersey, NY, USA, 2002.
[2] S. J. Qin and T. A. Badgwell, "A survey of industrial model predictive control technology," Control Engineering Practice, vol. 11, no. 7, pp. 733–764, 2003.
[3] P. Whittle, Optimization Over Time, Wiley, New York, NY, USA, 1982.
[4] W. H. Kwon and S. Han, Receding Horizon Control, Springer, New York, NY, USA, 2005.
[5] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, no. 1, pp. 3–20, 2002.
[6] A. Bemporad, F. Borrelli, and M. Morari, "Min-max control of constrained uncertain discrete-time linear systems," IEEE Transactions on Automatic Control, vol. 48, no. 9, pp. 1600–1606, 2003.
[7] A. Bemporad and C. Filippi, "Suboptimal explicit receding horizon control via approximate multiparametric quadratic programming," Journal of Optimization Theory and Applications, vol. 117, no. 1, pp. 9–38, 2003.
[8] Z. Wan and M. V. Kothare, "Robust output feedback model predictive control using off-line linear matrix inequalities," Journal of Process Control, vol. 12, no. 7, pp. 763–774, 2002.
[9] Z. Wan and M. V. Kothare, "An efficient off-line formulation of robust model predictive control using linear matrix inequalities," Automatica, vol. 39, no. 5, pp. 837–846, 2003.
[10] M. Åkerblad and A. Hansson, "Efficient solution of second order cone program for model predictive control," International Journal of Control, vol. 77, no. 1, pp. 55–77, 2004.
[11] C. V. Rao, S. J. Wright, and J. B. Rawlings, "Application of interior-point methods to model predictive control," Journal of Optimization Theory and Applications, vol. 99, no. 3, pp. 723–757, 1998.
[12] J. Mattingley, Y. Wang, and S. Boyd, "Receding horizon control: automatic generation of high-speed solvers," IEEE Control Systems Magazine, vol. 31, no. 3, pp. 52–65, 2011.
[13] M. Cannon and B. Kouvaritakis, "Optimizing prediction dynamics for robust MPC," IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1892–1897, 2005.
[14] A. Richards, K.-V. Ling, and J. Maciejowski, "Robust multiplexed model predictive control," in Proceedings of the European Control Conference, Kos, Greece, July 2007.
[15] K. V. Ling, W. K. Ho, Y. Feng, and B. Wu, "Integral-square-error performance of multiplexed model predictive control," IEEE Transactions on Industrial Informatics, vol. 7, no. 2, pp. 196–203, 2011.
[16] D. Li and Y. Xi, "Aggregation based robust model predictive controller for systems with bounded disturbance," in Proceedings of the 7th World Congress on Intelligent Control and Automation (WCICA '08), pp. 3374–3379, Chongqing, China, June 2008.
[17] D. Li and Y. Xi, "Aggregation based closed-loop MPC with guaranteed performance," in Proceedings of the 48th IEEE Conference on Decision and Control (CDC/CCC '09), pp. 7400–7405, Shanghai, China, December 2009.
[18] J. B. Rawlings and K. R. Muske, "The stability of constrained receding horizon control," IEEE Transactions on Automatic Control, vol. 38, no. 10, pp. 1512–1516, 1993.
[19] K. H. Kwan, Y. C. Chu, and P. L. So, "Model-based H∞ control of a unified power quality conditioner," IEEE Transactions on Industrial Electronics, vol. 56, no. 7, pp. 2493–2504, 2009.
[20] S. G. Nash and A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, New York, NY, USA, 1996.
[21] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, NY, USA, 2nd edition, 2006.
[22] A. Gautam, Y.-C. Chu, and Y. C. Soh, "Robust MPC with optimized controller dynamics for linear polytopic systems with bounded additive disturbances," in Proceedings of the 7th Asian Control Conference (ASCC '09), pp. 1322–1327, Hong Kong, China, August 2009.
[23] A. Gautam, Y.-C. Chu, and Y. C. Soh, "Optimized dynamic policy for receding horizon control of linear time-varying systems with bounded disturbances," IEEE Transactions on Automatic Control. In press.
[24] E. J. Davison, "The robust control of a servomechanism problem for linear time-invariant multivariable systems," IEEE Transactions on Automatic Control, vol. 21, no. 1, pp. 25–34, 1976.
[25] B. A. Francis and W. M. Wonham, "The internal model principle of control theory," Automatica, vol. 12, no. 5, pp. 457–465, 1976.
[26] B. A. Francis, "The linear multivariable regulator problem," SIAM Journal on Control Optimization, vol. 15, no. 3, pp. 486–505, 1977.
Hindawi Publishing Corporation
Journal of Control Science and Engineering
Volume 2012, Article ID 630645, 6 pages
doi:10.1155/2012/630645

Research Article
Pole Placement-Based NMPC of Hammerstein Systems and
Its Application to Grade Transition Control of Polypropylene

He De-Feng and Yu Li
College of Information Engineering, Zhejiang University of Technology, Zhejiang, Hangzhou 310023, China

Correspondence should be addressed to He De-Feng, hdfzj@zjut.edu.cn

Received 1 July 2011; Accepted 11 September 2011

Academic Editor: Baocang Ding

Copyright © 2012 H. De-Feng and Y. Li. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.

This paper presents a new nonlinear model predictive control (MPC) algorithm for Hammerstein systems subject to constraints on the state, input, and intermediate variable. Taking no account of the constraints, a desired linear controller of the intermediate variable is obtained by applying pole placement to the linear subsystem. The actual control actions are then determined, in consideration of the constraints, by solving a finite horizon optimal control problem online, where only the first control is calculated and the others are approximated to reduce the computational demand. Moreover, asymptotic stability can be guaranteed under a certain condition. Finally, a simulation example of the grade transition control of industrial polypropylene plants is used to demonstrate the effectiveness of the results proposed here.

1. Introduction

Hammerstein systems consist of the cascade connection of a static (memoryless) nonlinear function followed by a linear dynamic system. Under certain assumptions (such as fading memory), the Hammerstein model can approximately represent the nonlinear dynamics of real systems and has been successfully applied to many kinds of industrial processes such as pH neutralization [1], distillation [2], and polymerization transition [3]. In recent years, constrained control of Hammerstein systems has become one of the most needed and yet very difficult tasks for the process industry.

Model predictive control (MPC) is an effective control algorithm for handling constrained control problems, and various MPC algorithms have been proposed for the control of Hammerstein systems with constraints [4–11]. Making use of the geometric structure of the Hammerstein model, [4–9] developed two-step MPC schemes for Hammerstein systems with input constraints, where the intermediate variable was first obtained by linear MPC and the actual control was then calculated by solving nonlinear algebraic equations, desaturation, and so forth. Although two-step MPC algorithms impose a lighter computational burden than schemes where the nonlinearities are incorporated into the optimal control problem directly [10, 11], solving nonlinear algebraic equations will inevitably introduce errors, and the restricted control is generally different from the desired one for constrained Hammerstein systems [6]. Therefore, the performance and stability properties may deteriorate in the presence of constraints. Moreover, to the best of the authors' knowledge, the conventional two-step MPC schemes usually have a limited ability to deal with input constraints. Hence, [12] presented a new two-step MPC scheme for Hammerstein systems with overall constraints, where the actual controller was determined by solving a finite horizon optimization problem described as tracking quadratic, optimal trajectories of the intermediate variable. However, the online computational demand of the optimization problem of that scheme is still an open question.

In this paper, we consider the problem of the grade transition control of industrial polypropylene plants and propose an efficient NMPC scheme for Hammerstein systems with overall constraints based on the pole placement method [13]. We first obtain desired trajectories of the Hammerstein system by applying pole placement to the linear subsystem and then calculate the actual control actions by the solution of finite horizon optimal control problems.

In order to reduce the online computational burden of the optimization problem, only the first control action is calculated and the remaining ones are approximated by solving nonlinear algebraic equations over the predictive horizon. Thus, the performance and stability properties can be guaranteed in the face of overall constraints, with a moderate increase in the computational demand. Finally, an example of the grade transition control of polypropylene plants is used to demonstrate the effectiveness of the proposed algorithm.

2. Pole Placement-Based NMPC Algorithm

Consider the discrete-time Hammerstein system described by

x(t + 1) = A x(t) + B v(t),  v(t) = g(u(t), t),  t = 0, 1, . . . ,                (1)

where x ∈ R^n, v ∈ R^m, and u ∈ R^r are the state, intermediate variable, and control input, respectively. The vector field g represents the nonlinear relationship between the input and the intermediate variable, satisfying g(0, t) = 0. It is assumed that (A, B) is stabilizable and the state is available for feedback control. Without loss of generality, we also assume that the origin is the equilibrium of system (1).

The constraints of system (1) are given as follows:

x(t) ∈ M_x = {x ∈ R^n : x^{LB} ≤ x ≤ x^{UB}},  t = 0, 1, . . . ,
v(t) ∈ M_v = {v ∈ R^m : v^{LB} ≤ v ≤ v^{UB}},  t = 0, 1, . . . ,                (2)
u(t) ∈ M_u = {u ∈ R^r : u^{LB} ≤ u ≤ u^{UB}},  t = 0, 1, . . . ,

where the sets M_x ⊆ R^n, M_v ⊆ R^m, and M_u ⊆ R^r are the constraints on the state, intermediate variable, and input, respectively. The goal of this paper is to design an efficient NMPC algorithm for the Hammerstein system (1) with the constraints (2).

2.1. Design of the NMPC Algorithm. Consider n desired poles, denoted by {λ_1, . . . , λ_n}, of the linear subsystem of (1), which represent some desired performance of system (1). Associated with this set of desired poles is a linear controller of the intermediate variable

v(t) = -K x(t),                (3)

which is generated via Lyapunov's method without any constraints [13]. Since the controller (3) cannot be implemented by a real control action, it is labeled here as v(t)^d = -K x(t)^d with associated closed-loop states x(t)^d.

Define a finite horizon objective function J(t) as follows:

J(t) = \sum_{i=0}^{T_p - 1} \Big\{ \big[x(t+i+1 \mid t) - x(t+i+1)^d\big]^T Q \big[x(t+i+1 \mid t) - x(t+i+1)^d\big] + u(t+i \mid t)^T R\, u(t+i \mid t) \Big\},                (4)

where •(t + i | t) denotes the i-step-ahead prediction from time instant t, the integer 0 < T_p < ∞ is the predictive horizon, and the matrices Q ≥ 0 and R > 0 are the weighting matrices of the state and input, respectively. Then a new two-step MPC algorithm for Hammerstein systems with overall constraints is presented below.

Algorithm 1 (modified two-step MPC, MTMPC). This algorithm comprises the following steps:

Step 1. Given the n desired poles {λ_1, . . . , λ_n}, compute off-line the gain matrix K in (3) such that the eigenvalues of A - BK are those specified in {λ_1, . . . , λ_n}.

Step 2. Set T_p, Q, R in the objective function (4).

Step 3. With v(·)^d = -K x(·)^d and the current state x(t), solve on-line the nonlinear algebraic equations (5), without constraints, to obtain the T_p desired control actions u(t + i)^d:

v(t + i)^d - g(u(t + i)^d, t + i) = 0,
x(t + i + 1)^d = (A - BK) x(t + i)^d,                (5)
x(t)^d = x(t),  i = 0, 1, . . . , T_p - 1.

Step 4. With the current state x(t), determine on-line the actual control action u(t) as follows:

u(t) = arg min_{u(t|t)} J(t),
s.t.  x(t + i + 1 | t) = A x(t + i | t) + B g(u(t + i | t), t + i),
      x(t + i + 1)^d = (A - BK) x(t + i)^d,
      u(t + i | t) = u(t + i)^d,  i = 1, . . . , T_p - 1,                (6)
      x(t + 1 | t) ∈ M_x,  g(u(t | t), t) ∈ M_v,  u(t | t) ∈ M_u,
      x(t | t) = x(t)^d = x(t),

where J(t) is defined by (4).

Step 5. Implement u(t), set t = t + 1, and go back to Step 3.
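A compact way to read Algorithm 1 is as the per-sample routine sketched below. The use of scipy's pole placement for Step 1, a least-squares root solve for (5), and a bound-constrained minimize call for (6) are implementation choices of this sketch, not prescriptions of the paper; g is a placeholder for the identified (time-invariant, for simplicity) nonlinearity, and the state and intermediate-variable constraints of (6) are omitted here for brevity.

```python
import numpy as np
from scipy.signal import place_poles
from scipy.optimize import least_squares, minimize

def design_K(A, B, poles):
    """Step 1: off-line pole placement for the linear subsystem."""
    return place_poles(A, B, poles).gain_matrix

def mtmpc_step(x, A, B, K, g, Q, R, Tp, bounds_u, u_guess):
    """One on-line iteration of Algorithm 1 (Steps 3 and 4)."""
    # Step 3: desired trajectory and desired inputs from v^d = -K x^d;
    # (5) is solved here in a least-squares sense when g is not square.
    xd, ud = x.copy(), []
    for _ in range(Tp):
        vd = -K @ xd
        ud.append(least_squares(lambda u: vd - g(u), u_guess).x)
        xd = (A - B @ K) @ xd

    # Step 4: optimize only u(t|t); later moves are frozen at u(t+i)^d.
    def J(u0):
        xp, cost = x.copy(), 0.0
        for i in range(Tp):
            ui = u0 if i == 0 else ud[i]
            xp = A @ xp + B @ g(ui)
            xdi = np.linalg.matrix_power(A - B @ K, i + 1) @ x
            cost += (xp - xdi) @ Q @ (xp - xdi) + ui @ R @ ui
        return cost

    res = minimize(J, ud[0], bounds=bounds_u)   # enforces u(t|t) in M_u
    return res.x
```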

Remark 2. For the algorithm here, only the first control action (i.e., u(t | t)) is optimized subject to constraints at each time step. So, the online optimization problem has only r decision variables versus T_p r decision variables for general nonlinear MPC, which results in a significant reduction of the online computational demand, since the computational demand, in general, grows exponentially with the number of decision variables [14]. Thus, the predictive horizon T_p should be chosen reasonably large to ensure adequate performance. On the other hand, since only the first control is implemented in an MPC scheme, it is expected that enforcing constraints only on the first prediction should give excellent results. In addition, the closed-loop performance of MPC systems clearly depends on how the n desired poles {λ_1, . . . , λ_n} are arranged. Future work will focus on how to determine {λ_1, . . . , λ_n} optimally.

It should be pointed out that the feasibility of the optimization problem (6) may not be guaranteed theoretically with respect to arbitrary constraints in (2). However, it is usually ensured in practice by adjusting the intermediate-variable constraint. Next, we show that the closed-loop MPC system (1) is asymptotically stable, provided that the feasibility of (6) holds.

2.2. Analysis of Stability. In terms of Algorithm MTMPC, the MPC law of the constrained Hammerstein system (1), (2) is defined as

u(t)^{MPC} = u(t | t),  t = 0, 1, . . . ,                (7)

with the closed-loop system

x(t + 1) = A x(t) + B g(u(t)^{MPC}, t) = (A - BK) x(t) + B [K x(t) + g(u(t)^{MPC}, t)].                (8)

Definition 3. A set M ⊆ R^n is called an attractive region of system (8) if, for all x(t) ∈ M, the system trajectories satisfy x(s; x(t)) → 0 as s → +∞ and

x(t) ∈ M ⟹ x(t + 1) ∈ M,  ∀u(t) ∈ M_{uv},  t = 0, 1, . . . ,                (9)

where the subset M_{uv} = {u ∈ R^r : g(u, ·) ∈ M_v} ⊆ M_u.

Let the set of points {λ_1, . . . , λ_n} be the desired poles of the linear subsystem x(t + 1) = (A - BK) x(t). Then, from the inverse Lyapunov theorem [13], there exist symmetric matrices P > 0 and Q > 0 such that

(A - BK)^T P (A - BK) - P < -Q.                (10)

In the following, we present the main result on our proposed algorithm.

Theorem 4. Consider the system (1) with constraints (2). There exists a nonempty region M such that the closed-loop system (8) is asymptotically stable if the inequality (11) is satisfied:

V(x(t + 1)) ≤ V(x(t + 1)^d),  ∀x(t) ∈ M,  t = 0, 1, . . . ,                (11)

where V(x) = x^T P x and x(t + 1)^d = (A - BK) x(t)^d with x(t)^d = x(t). Moreover, the set M is an attractive region of system (8).

Proof. Consider the system (1), (2) and define a level set M of the function V(x) as

M = {x ∈ R^n : V(x) ≤ c, c > 0} ⊆ M_x                (12)

such that u(t)^{MPC} ∈ M_{uv} for all x(t) ∈ M, t = 0, 1, . . .. Note that the set M is always nonempty since the constraint set in (2) includes the origin as an interior point.

Let the function V(x) be a Lyapunov function candidate of system (8). Then, for any x(t) ∈ M, we have

V(x(t + 1)) - V(x(t))
= x(t + 1)^T P x(t + 1) - x(t)^T P x(t)
= x(t)^T (A - BK)^T P (A - BK) x(t) - x(t)^T P x(t) + 2 x(t)^T (A - BK)^T P B Δ + Δ^T B^T P B Δ
= x(t)^T [(A - BK)^T P (A - BK) - P] x(t) + 2 x(t)^T (A - BK)^T P B Δ + Δ^T B^T P B Δ,                (13)

where Δ = g(u(t)^{MPC}, t) + K x(t) = g(u(t)^{MPC}, t) - v(t)^d. Substituting (10) into (13) yields

V(x(t + 1)) - V(x(t))
≤ -x(t)^T Q x(t) + 2 [(A - BK) x(t)]^T P B Δ + Δ^T B^T P B Δ
= -x(t)^T Q x(t) + 2 x(t)^T A^T P B Δ + 2 (g - Δ)^T B^T P B Δ + Δ^T B^T P B Δ
= -x(t)^T Q x(t) + 2 x(t + 1)^T P B Δ - Δ^T B^T P B Δ
= -x(t)^T Q x(t) - (x(t + 1)^d)^T P x(t + 1)^d + x(t + 1)^T P x(t + 1)
= -x(t)^T Q x(t) - V(x(t + 1)^d) + V(x(t + 1)).                (14)

From the inequality (11), it is straightforward to obtain

V(x(t + 1)) - V(x(t)) ≤ -x(t)^T Q x(t),  ∀x(t) ∈ M.                (15)

Hence, the function V(x) is a Lyapunov function of system (8), and the system is asymptotically stable in the region M. Furthermore, from the definition of attractive regions and the set M in (12), we derive that M is an attractive region of system (8). This establishes the theorem.
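A quick numerical way to obtain a pair (P, Q) satisfying (10) for a given gain K is to fix the right-hand side and solve the corresponding discrete Lyapunov equation. The sketch below uses scipy for this purpose; it is only an illustration of the inequality, not part of the paper's procedure.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def lyapunov_certificate(A, B, K, Qbar=None):
    """Return P > 0 with (A-BK)^T P (A-BK) - P = -Qbar, which certifies (10)
    whenever A - BK is Schur (all desired poles strictly inside the unit circle)."""
    Acl = A - B @ K
    n = Acl.shape[0]
    Qbar = np.eye(n) if Qbar is None else Qbar
    # solve_discrete_lyapunov solves  a X a^T - X + q = 0,
    # so pass Acl^T to obtain the controller-form equation above.
    return solve_discrete_lyapunov(Acl.T, Qbar)
```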

Remark 5. From the proof of the above theorem, we know that the stability property of the closed-loop system (8) is guaranteed by the linear controller of the intermediate variable, provided that the feasibility of the optimization problem (6) and condition (11) hold. This suggests that stability and optimality can be separated to some extent. Therefore, the design parameters of the objective function (4) can be tuned freely to achieve more satisfactory performance, regardless of the stability of the closed-loop system. In addition, it is pointed out that the attractive region M obtained here is merely a theoretical concept, and the computation of M is not an easy task.

Remark 6. In conventional two-step MPC algorithms, the actual control is determined by solving nonlinear algebraic equations based on desired controllers of the intermediate variable resulting from linear MPC. In contrast, the actual control actions here are derived by solving a finite horizon optimal control problem (i.e., nonlinear MPC), which is defined as tracking the desired state trajectories driven by the pole placement state feedback controller. Thus, the overall constraints of the Hammerstein system are taken into account when designing the predictive controller, and then the performance and stability properties are guaranteed. Meanwhile, the ability to handle constraints is enhanced in the algorithm proposed here. Note that the linear controller of the intermediate variable in (3) can be any controller that stabilizes the linear subsystem of the Hammerstein model with certain desired performance.

3. Simulation Example

Taking an example of polypropylene (PP) grade transition control, we illustrate the effectiveness of the proposed algorithm.

In the propylene polymerization industry, the melt index (MI, g/10 min) is usually used to identify grade indices of PP homopolymer, and both the MI and the concentration of ethylene (Et, %) in PP denote grade indices of total-phase PP [12, 15–18]. This implies that the nature of the grade transition process is to shape the response of the variables MI and Et in the polymer. Then, according to the propylene polymerization mechanism, we have the following mathematical model, which formulates the dynamics of the grade transition in the polypropylene process [12]:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix}
= \begin{bmatrix} -1/\tau & 0 \\ 0 & -1/\tau \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
+ \begin{bmatrix} 1/\tau & 0 \\ 0 & 1/\tau \end{bmatrix}
\begin{bmatrix} g_1(u, t) \\ g_2(u, t) \end{bmatrix},

\begin{bmatrix} g_1(u, t) \\ g_2(u, t) \end{bmatrix}
= \begin{bmatrix} k_1 + \dfrac{k_2}{u_1} + k_3 \log(k_4 + k_5 u_2 + k_6 u_3) \\[4pt] \dfrac{k_7 u_2 + 1}{k_8 / u_3 + k_9 u_3 + k_{10}} \end{bmatrix},                (16)

y(t) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x(t),  t ≥ 0,

where the states x1 and x2 are the cumulative MI (MIc, g/10 min) and Et (Etc, %) of the PP, respectively; the intermediate variables g1 and g2 denote the instantaneous MI (MIi, g/10 min) and Et (Eti, %) of the PP, respectively; the inputs u1, u2, u3 denote the reaction temperature T (K), the concentration ratio of hydrogen to propylene CH2/Cm (%), and the concentration ratio of ethylene to propylene Cm2/Cm (%), respectively; and the parameter τ is the average polymer residence time (τ = 2 h). The model parameters ki (i = 1, . . . , 10) can be identified on-line from industrial data [15].

[Figure 1: MI and Et output trajectories of grade transition processes. (Solid line is the set-point, dotted line the instantaneous, and dashed line the cumulative.) Panels: (a) Melt index; (b) Concentration of ethylene in polypropylene.]

The grade transition sequence considered here is A → B → C, where grades A and B are PP homopolymer and C is total-phase PP. It should be pointed out that the feed flow rate of ethylene is always zero (i.e., [Cm2/Cm] = 0) in the production of the PP homopolymer grades A and B. The grade transition process is defined by the set-points of the states and the input/output constraints as shown in Table 1, where the superscripts c, i, and Δ denote accumulation, instantaneous value, and increment constraints, respectively. Each grade transition is optimized over a 10 h horizon with a sampling interval of 0.5 h.

In the simulation, the closed-loop desired poles of system (16) are placed at {λ1, λ2} = {0.72, 0.60}, and the corresponding state-feedback matrix K in (3) is computed as K = [0.12, 0; 0, 0.6]. Moreover, the parameters in the objective function (4) are chosen as the penalty on the states Q = qI2, with q = 1, the penalty on the inputs R = I3, and a horizon length of Tp = 20. Assume that the grade transition A → B takes place at the tenth hour and B → C at the thirtieth hour. Some curves of the grade transition process controlled by the NMPC law are shown in Figures 1 and 2, where the solid line in Figure 1 is the set-point, the dotted line the instantaneous, and the dashed line the cumulative value.
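To make the simulation setup reproducible in outline, the following sketch codes the static nonlinearity and linear dynamics of (16), discretizes the linear part at the 0.5 h sampling interval, and places the closed-loop poles at {0.72, 0.60}. The zero-order-hold discretization and the placeholder values of k1, ..., k10 are assumptions of this sketch (the paper identifies the ki online and does not state its discretization), so the resulting gain need not coincide with the K reported above.

```python
import numpy as np
from scipy.signal import cont2discrete, place_poles

tau, dt = 2.0, 0.5                          # residence time and sampling interval (h)
k = np.ones(10)                             # k1..k10: placeholder values only

def g(u, k=k):
    """Static nonlinearity of (16): instantaneous MI and Et (requires u3 > 0)."""
    T, h2, c2 = u                           # u1 = T, u2 = CH2/Cm, u3 = Cm2/Cm
    g1 = k[0] + k[1] / T + k[2] * np.log(k[3] + k[4] * h2 + k[5] * c2)
    g2 = (k[6] * h2 + 1.0) / (k[7] / c2 + k[8] * c2 + k[9])
    return np.array([g1, g2])

# Linear part of (16): dx/dt = (-x + g(u)) / tau, y = x.
Ac, Bc = np.diag([-1/tau, -1/tau]), np.diag([1/tau, 1/tau])
Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(2), np.zeros((2, 2))), dt)

# Step 1 of Algorithm 1 for this example: place the poles of Ad - Bd*K.
K = place_poles(Ad, Bd, [0.72, 0.60]).gain_matrix
print(np.linalg.eigvals(Ad - Bd @ K))       # -> approximately [0.72, 0.60]
```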

Table 1: Grade indices and transition constraints.

Grade    MI (g/10 min)                 Et (%)                      T (K)               CH2 (%)                        Cm2/Cm (%)
A        2.7                           —                           343.15              0.050                          —
B        39.0                          —                           343.15              0.330                          —
C        10.0                          2.4                         343.15              0.236                          1.99
A → B    [2.4, 42.0]c / [2.0, 45.0]i   —                           [341.15, 345.15]    [0.02, 0.35], [−0.1, 0.10]Δ    —
B → C    [8.0, 42.0]c / [6.0, 45.0]i   [0.0, 2.5]c / [0.0, 2.6]i   [341.15, 345.15]    [0.20, 0.35], [−0.1, 0.10]Δ    [0.00, 2.50], [−0.1, 0.10]Δ

[Figure 2: Input profiles of grade transition processes. Panels: (a) Reaction temperature; (b) Concentration ratio of hydrogen to propylene; (c) Concentration ratio of ethylene to propylene, each plotted against time (0.5 h samples) with the grade A, B, and C intervals marked.]

In order to calculate the quantity of off-specification polymer in the grade transition process, we employ the criterion that on-specification polymer is defined as polymer with MIc and Etc within a ±5% deviation of the desired specification in Table 1. All other polymer beyond this criterion is categorized as off-specification production. Similarly, the transition time is defined as the time interval starting when the cumulative properties of the polymer leave the ±5% specification range of the previous grade and lasting until the cumulative properties enter and stay within the ±5% specification range of the new target grade. Accordingly, the time of the grade A → B transition is 8.5 hours and the time of the grade B → C transition is 6.5 hours. Compared to the time of 10 hours in the actual grade transition operation, this decreases the quantity of off-specification polymer in the grade transition process and thus increases the productive time of on-specification polymer. Also, Figure 2(a) suggests that there is no change in the reaction temperature during the grade transition processes, which is desired in actual operation. Finally, it can be seen that the control laws, the intermediate variables, and the states in the grade transition operation do not violate the constraints in Table 1.
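As a rough illustration of how the transition time defined above could be computed from a logged cumulative-property trajectory, consider the sketch below; the array-based bookkeeping, the default 0.5 h sampling interval, and the handling of edge cases are assumptions for illustration only.

```python
import numpy as np

def transition_time(x_cum, old_target, new_target, dt=0.5, tol=0.05):
    """Hours from the instant a cumulative property leaves the +/-5% band of
    the previous grade until it enters, and stays within, the +/-5% band of
    the new grade."""
    x = np.asarray(x_cum, dtype=float)
    leave = np.argmax(np.abs(x - old_target) > tol * abs(old_target))
    inside = np.abs(x - new_target) <= tol * abs(new_target)
    outside_idx = np.where(~inside)[0]
    # settled one sample after the last sample still outside the new band
    settle = (outside_idx[-1] + 1) if outside_idx.size else 0
    return (settle - leave) * dt
```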
4. Conclusion

Together with the pole placement method, this paper presented an efficient MPC algorithm for the control of Hammerstein systems subject to constraints on the states, inputs, and intermediate variables. Applying pole placement to the linear subsystem of the Hammerstein model yields a linear controller of the intermediate variable, and the actual control action is then determined by solving an optimization problem online. The size of the online optimization problem depends only on the number of inputs, not on the predictive horizon, which greatly reduces the computational demand of the algorithm. Under the condition that the optimization problem is feasible, the asymptotic stability of the constrained closed-loop system was guaranteed, and the simulation example of grade transition control of polypropylene plants showed the effectiveness of the results obtained here.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant no. 60904040, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant no. 20093317120002, and the Natural Science Foundation of Zhejiang Province under Grant no. Y1100911. Finally, the authors are grateful to the reviewers for their constructive comments. This paper includes revised and extended text of the first author's presentation at the 29th CCC, July 27, 2010, Beijing, China.

References

[1] Z. Hasiewicz, "Hammerstein system identification by the Haar multiresolution approximation," International Journal of Adaptive Control and Signal Processing, vol. 49, pp. 691–717, 1999.
[2] N. Bhandari and D. Rollins, "Continuous-time Hammerstein nonlinear modeling applied to distillation," AIChE Journal, vol. 50, no. 2, pp. 530–533, 2004.
[3] D. F. He, L. Yu, and M. S. Xue, "On operation guidance for grade transition in propylene polymerization plants," in Proceedings of the 19th Chinese Process Control Conference, pp. 339–342, Beijiang, China, 2008 (in Chinese).

[4] Q. M. Zhu, K. Warwick, and J. L. Douce, “Adaptive general


predictive controller for nonlinear systems,” IEE Proceedings
D, vol. 138, no. 1, pp. 33–40, 1991.
[5] K. P. Fruzzetti, A. Palazoǧlu, and K. A. McDonald, “Nonlinear
model predictive control using Hammerstein models,” Journal
of Process Control, vol. 7, no. 1, pp. 31–41, 1997.
[6] B. C. Ding, Y. G. Xi, and S. Y. Li, “Stability analysis on
predictive control of discrete-time systems with input non-
linearity,” Acta Automatica Sinica, vol. 29, no. 6, pp. 827–834,
2003.
[7] B. C. Ding, Y. G. Xi, and S. Y. Li, “On the stability of
output feedback predictive control for systems with input
nonlinearity,” Asian Journal of Control, vol. 6, no. 3, pp. 388–
397, 2004.
[8] B. Ding and Y. Xi, “A two-step predictive control design for
input saturated Hammerstein systems,” International Journal
of Robust and Nonlinear Control, vol. 16, no. 7, pp. 353–367,
2006.
[9] H. T. Zhang, H. X. Li, and G. Chen, “Dual-mode predictive
control algorithm for constrained Hammerstein systems,”
International Journal of Control, vol. 81, no. 10, pp. 1609–1625,
2008.
[10] H. H. J. Bloemen and T. J. J. van den Boom, “Model-based
predictive control for Hammerstein systems,” in Proceedings of
the 39th IEEE Conference on Decision and Control, vol. 5, pp.
4963–4968, Sydney, Australia, 2000.
[11] H. H. J. Bloemen, T. J. J. van den Boom, and H. B. Verbruggen,
“Model-based predictive control for Hammerstein-Wiener
systems,” International Journal of Control, vol. 74, pp. 482–495,
2005.
[12] D. F. He and L. Yu, “Nonlinear predictive control of con-
strained Hammerstein systems and its research on simulation
of polypropylene grade transition,” Acta Automatica Sinica,
vol. 35, no. 12, pp. 1558–1563, 2009 (Chinese).
[13] C. T. Chen, Linear system Theory and Design, Oxford Univer-
sity Press, New York, NY, USA, 3rd edition, 1999.
[14] A. Zheng, “A computationally efficient nonlinear MPC algo-
rithm,” in Proceedings of the American Control Conference, pp.
1623–1627, Albuquerque, NM, USA, June 1997.
[15] J. R. Richards and J. P. Congalidis, “Measurement and
control of polymerization reactors,” Computers and Chemical
Engineering, vol. 30, no. 10–12, pp. 1447–1463, 2006.
[16] H. S. Yi, J. H. Kim, C. Han, J. Lee, and S. S. Na, “Plantwide
optimal grade transition for an industrial high-density
polyethylene plant,” Industrial and Engineering Chemistry
Research, vol. 42, no. 1, pp. 91–98, 2003.
[17] Y. Wang, H. Seki, S. Ohyama, K. Akamatsu, M. Ogawa, and
M. Ohshima, “Optimal grade transition control for polymer-
ization reactors,” Computers and Chemical Engineering, vol. 24,
no. 2–7, pp. 1555–1561, 2000.
[18] T. Kreft and W. F. Reed, “Predictive control of average com-
position and molecular weight distributions in semibatch free
radical copolymerization reactions,” Macromolecules, vol. 42,
no. 15, pp. 5558–5565, 2009.
