
Journal of Process Control 88 (2020) 43–53


Nash-based robust distributed model predictive control for large-scale systems

Reza Aliakbarpour Shalmani, Mehdi Rahmani∗, Nooshin Bigdeli
Department of Electrical Engineering, Faculty of Engineering, Imam-Khomeini International University, Qazvin, Iran

Article history:
Received 14 November 2018
Revised 27 December 2019
Accepted 14 February 2020
Available online 28 February 2020

Keywords:
Robust distributed MPC
Kalman filter
Nash optimization
Linear matrix inequality
Load-frequency control

Abstract

In this paper, a new robust distributed model predictive control (RDMPC) is proposed for large-scale systems with polytopic uncertainties. The time-varying system is first decomposed into several interconnected subsystems. Interactions between subsystems are obtained by a distributed Kalman filter, in which unknown parameters of the system are estimated using local measurements and measurements of neighboring subsystems that are available via a network. Quadratic boundedness is used to guarantee the stability of the closed-loop system. In the MPC algorithm, an output feedback-interaction feedforward control input is computed by an LMI-based optimization problem that minimizes an upper bound on the worst-case value of an infinite-horizon objective function. Then, an iterative Nash-based algorithm is presented to achieve the overall optimal solution of the whole system in a partially distributed fashion. Finally, the proposed distributed MPC approach is applied to a load frequency control (LFC) problem of a multi-area power network to study the efficiency and applicability of the algorithm in comparison with the centralized, distributed, and decentralized MPC schemes.

© 2020 Elsevier Ltd. All rights reserved.

∗ Corresponding author. E-mail address: mrahmani@eng.ikiu.ac.ir (M. Rahmani).
https://doi.org/10.1016/j.jprocont.2020.02.005

1. Introduction

Model predictive control (MPC) has attracted great attention in both theoretical and practical domains over the past few decades. It has been applied successfully in several industrial processes; see [1] and the references therein.

In recent years, because of the growing complexity and size of systems, centralized control approaches may be very complicated, time-consuming, and impractical. In centralized control, all measurements of the system are collected in one control center, where all computations are performed to obtain the optimal performance of the whole system. In this scheme, when a large-scale system is physically distributed, the algorithm may commonly fail. Moreover, if one part of the process has a problem or failure, the whole control system would stop.

To reduce the computational complexity and avoid the difficulties of centralized control, several alternative algorithms, such as decentralized, hierarchical, and distributed approaches, have been presented over the past years. In these approaches, the large-scale system is considered as a combination of several subsystems, each with its own local controller. Several methods have been proposed to decompose the original system into interconnected subsystems with minimum coupling; see e.g. [2,3]. In the decentralized strategy, each subsystem is controlled independently, and the effects of the other subsystems are neglected or treated as model errors or disturbances. In this regard, different decentralized MPC approaches for coupled systems have been presented in the literature. In [4], Alessio et al. took input and output constraints into account and analyzed the degree of decoupling between submodels and its effect on the computational burden and overall system performance. In [5], Vaccarini et al. proposed a decentralized method for fast networked systems that guarantees system stability despite strong interaction between subsystems; this method imposes no constraints on the inputs, outputs, or states. Stabilizing decentralized MPC for nonlinear systems is also presented in [6,7].

In a hierarchical (two-level) control strategy, a local controller at each subsystem solves its optimization subproblem using local information at the first level, and the overall optimal solution is obtained by an appropriate coordination strategy at the second level [8]. Hierarchical robust control for large-scale linear and nonlinear systems is presented in [9,10], respectively. In a hierarchical MPC, at the low level of control, an MPC approach is applied at each subsystem; it solves an optimization problem based on a local cost function subject to input, output, and state constraints. In this procedure, a coherence constraint is imposed on the values of the interconnection variables. If this constraint is satisfied in the low-level control, the computed control input can be implemented in the corresponding subsystem; otherwise, an iterative cost coordinator is used. In the cost coordinator, the Lagrange multipliers of the coherence constraints in the global optimization problem are updated at each iteration, and an optimal cost is determined for each controller, which then recomputes its local optimization problem. The iterative algorithm continues until the coherence constraint is satisfied [11–13].

In distributed approaches, a control agent is considered for each subsystem, and each agent can communicate with the others. If the information exchange happens more than once during a sampling time, the algorithm is iterative. By choosing different types of cost functions and exploiting the iterative exchange of information, Nash-based and cooperative model predictive control schemes have been proposed in the literature; see e.g. [14–17]. In a cooperative algorithm, all agents share the same cost function, which includes all variables of the original system, and solve the overall optimization problem using local and transmitted information. In this strategy, the agents must be fully connected with each other and share information completely [18]. In a Nash-based algorithm, each agent has a cost function based on local variables and estimates the effect of the other subsystems through an iterative exchange of information to reach the Nash optimal solution [19]. In the distributed MPC reported in [20], using a cooperative approach and iterative problem solving, the optimal point of the centralized MPC is achieved. Maestre et al. presented an MPC scheme based on game theory with two solution steps [21]: in the first step, each controller solves its local optimization problem and shares the information among the control agents; in the second step, the agents choose the suboptimal solution that ensures the best performance for the whole system. In [22], Camponogara et al. presented a decomposition method for large-scale systems and then proposed a communication-based distributed MPC approach. Also, Liu and Christofides proposed a distributed Lyapunov-based MPC that guarantees the closed-loop stability of the nominal system in the presence of disturbances [23].

Moreover, it is well known that model errors and uncertainties can lead to closed-loop instability and performance degradation. In the literature, a wide range of robust MPC strategies have been studied for systems subject to uncertainties. Campo and Morari developed an MPC formulation that optimizes the performance of a linear plant with explicitly modeled uncertainty [24]. Kothare et al. [25] also proposed a robust MPC scheme for uncertain systems that minimizes the upper bound on the worst-case objective function using the linear matrix inequality (LMI) technique. In addition, robust MPC has been applied to nonlinear systems with different approaches. In [26], Magni et al. improved the region of attraction by applying a receding-horizon paradigm to the H∞ control problem for discrete-time nonlinear systems. LMI-based robust MPC schemes for nonlinear systems have also been studied recently; see e.g. [27,28]. To reduce the computation time, self-triggered mechanisms have been proposed for robust MPC algorithms; a self-triggered mechanism chooses the inter-sampling times so as to guarantee a fast decrease of the optimal cost [29–31]. Brunner et al. [30] presented a robust self-triggered MPC for linear systems based on the tube MPC approach. In [31], the inter-sampling time is maximized by optimizing the worst case of a novel objective function, and the stability of the nonlinear system is ensured at each triggering time instant.

Numerous robust distributed model predictive control (RDMPC) approaches have also been presented for uncertain large-scale systems [32–34]. Further, a robust DMPC algorithm for nonlinear systems is described in [35], where the feasibility of the problem and the stability of all agents are provided by considering a robustness constraint in the presence of external disturbances. In [36], Li and Shi focused on the development of RDMPC for constrained nonlinear systems with communication delays. Al-Gherwi et al. [37] discussed a robust distributed MPC based on the approach presented in [25], where each agent minimizes the overall cost function and applies the feedback control to the corresponding subsystem. In this case, the information of the whole system must be accessible to each agent. It is usually not possible to obtain a model that globally characterizes the system, and even if it were possible, the calculations of the algorithm could not be carried out at reasonable time and cost. These deficiencies motivate us to develop a new distributed MPC scheme with lower computation cost that maintains a satisfactory level of performance and optimality close to the centralized method.

In this paper, a new robust distributed MPC (RDMPC) is proposed for linear time-varying systems with polytopic uncertainties. The system is decomposed into M subsystems that are state coupled. Regarding the polytopic uncertainty, the purpose of the proposed robust MPC is to minimize the upper bound on the worst-case possible value of the cost function inside the uncertainty polytope. In this approach, the control law of each agent is considered in a new state-feedback and interaction-feedforward form. This control input guarantees the quadratically bounded stability of the closed-loop system and also compensates the undesirable effects of neighboring subsystems. Moreover, to obtain the proposed MPC, a distributed Kalman filter is designed to estimate the states and interactions at each subsystem using local measurements and the measurements of neighboring subsystems provided by the network. Since the subsystems are dynamically coupled, a Nash-based iterative algorithm is presented to reach the optimal equilibrium point for the whole system.

In summary, according to the above discussion, the main goals of this paper are as follows:

1. To present a general model of a distributed system with interconnected subsystems under polytopic uncertainties.
2. To propose an LMI-based robust distributed MPC for uncertain large-scale systems.
3. To consider the interaction signals directly in the control law of the distributed MPC.
4. To design a distributed Kalman filter to achieve estimates of the states and interaction signals.
5. To present a Nash-based algorithm to obtain an overall optimal solution for the whole system.
6. To evaluate the performance and applicability of the proposed approach by applying it to a load-frequency control problem in a multi-area power system.

2. Robust distributed MPC

2.1. Problem statement

Consider the following linear time-varying (LTV) system:

x(k+1) = A(k)x(k) + B(k)u(k) + E(k)w(k),
y(k) = C(k)x(k) + v(k),
[A(k), B(k)] ∈ Ω,   (1)

where x ∈ R^{n_x}, u ∈ R^{n_u}, and y ∈ R^{n_y} are the state vector, control input, and output vector, respectively; w ∈ R^{n_w} and v ∈ R^{n_v} are the process and measurement noises, respectively; k is the discrete-time index; and Ω is a polytope defined as

Ω = Co{[A_1, B_1], [A_2, B_2], …, [A_L, B_L]},   (2)

in which Co{·} denotes the convex hull. Any [A(k) B(k)] is an affine combination of the polytope vertices:

[A(k) B(k)] = Σ_{l=1}^{L} λ_l [A_l B_l],  Σ_{l=1}^{L} λ_l = 1.   (3)

In this paper, the large-scale system is decomposed into M subsystems in the same way as described in [22]. In this model of a

distributed system, the states of the subsystems are coupled, and the model of the ith subsystem can be represented in the following form, for i, j = 1, …, M; j ≠ i:

x_i(k+1) = A_ii(k)x_i(k) + B_i(k)u_i(k) + E_i(k)w_i(k) + Σ_{j=1, j≠i}^{M} A_ij(k)x_j(k)
         = A_ii(k)x_i(k) + B_i(k)u_i(k) + E_i(k)w_i(k) + D_i(k)z_i(k)   (4)

where x_i ∈ R^{n_xi}, u_i ∈ R^{n_ui}, w_i ∈ R^{n_wi}, and z_i(k) is the interaction signal, a linear combination of the states of the neighboring subsystems that affect the ith subsystem:

z_i(k) = [x_1^T(k) … x_{i−1}^T(k) x_{i+1}^T(k) … x_M^T(k)]^T,
D_i = [A_i1 … A_{i,i−1} A_{i,i+1} … A_iM]   (5)

Note that the jth subsystem is a neighbor of the ith subsystem when A_ij ≠ 0. It is assumed that the ith subsystem has the following polytopic uncertainty:

[A_ii(k), A_ij(k), B_i(k)] ∈ Ω_i = Σ_{l=1}^{L} λ_l^i [A_ii^l, A_ij^l, B_i^l],
i = 1, …, M;  j = 1, …, M;  j ≠ i.   (6)

A local control agent is assumed to provide an appropriate control input for the corresponding subsystem, where a distributed Kalman filter (DKF) is applied to estimate the states and also the interactions coming from the other subsystems. The DKF estimates the missing variables using the local information and the information shared by the other agents via the network.

Moreover, the control signal of the ith agent is assumed to satisfy the following constraint:

|u_i^h(k+n|k)| ≤ u_{i,max}^h,  h = 1, 2, …, n_ui   (7)

2.2. Nash-based distributed MPC

In the proposed distributed MPC, the prediction model of the ith subsystem is given by

x̂_i(k+n+1|k) = A_ii(k+n)x̂_i(k+n|k) + B_i(k+n)u_i(k+n|k) + D_i(k+n)ẑ_i(k+n|k),  n ≥ 0   (8)

where x̂_i(k+n|k) is the n-step-ahead prediction of x_i at time step k, and ẑ_i(k+n|k) is the n-step-ahead prediction of the interaction signal, comprising the states of the neighboring subsystems that are predicted by the corresponding control agents at time step k and transferred to the ith agent. The control input is considered as follows:

u_i(k+n|k) = F_i x̂_i(k+n|k) + K_i ẑ_i(k+n|k),   (9)

where F_i(k) and K_i(k) are the state-feedback and interaction-feedforward gains, respectively.

The whole system's cost function is assumed to be decomposed into M local cost functions as

J(k) = Σ_{i=1}^{M} J_i(k),   (10)

in which the infinite-horizon objective function of the ith subsystem is considered as follows:

J_i(k) = Σ_{n=0}^{∞} { x̂_i(k+n|k)^T M_i x̂_i(k+n|k) + u_i(k+n|k)^T R_i u_i(k+n|k) }   (11)

where M_i and R_i are positive-definite weighting matrices for the states and control inputs, respectively.

Considering the polytopic uncertainty of the dynamic system, the optimization problem of the distributed MPC can be interpreted as the minimization of the upper bound on the worst-case value of the cost function inside the corresponding polytope. It can be described as a min-max optimization problem as follows:

min_{u_i(k+n|k)}  max_{[A_ii(k+n), A_ij(k+n), B_i(k+n)] ∈ Ω_i}  J_i(k)
subject to (7), (8),  for i = 1, …, M   (12)

In the distributed MPC, each control agent engages simultaneously with the other agents in solving the optimization problem. Based on Nash theory [38], each agent solves its local optimization problem assuming that the other agents' optimal solutions are known. Then, all agents share the local optimal solutions with each other and again solve the local problems using the newly received information. This iterative algorithm continues until there are no changes in the local solutions. At this point, it can be said that the whole system arrives at the Nash optimal solution through a coupled decision process.

Remark 1. The global performance index can be decomposed into a number of local performance indexes (10), (11). However, the local optimization problem of each agent is still related to all input variables due to the coupling between agents. Using the Nash optimality concept [38] in the proposed distributed control approach, each agent optimizes the local performance index using only its own control decision, assuming that the other agents' Nash optimal solutions are known. Then, these new solutions are shared, and the procedure is repeated until the whole system arrives at the Nash optimal equilibrium point. It is remarkable that the mutual communication and the information exchange are adequately taken into account in the large-scale system. The iterative algorithm is stopped when the termination condition is met, i.e. each agent compares the newly computed optimal solution with that obtained in the last iteration and checks whether the difference is small enough. At each iteration p of this algorithm, one has

γ_i^*(F_1^{p−1}, …, F_i^{p}, …, F_M^{p−1}, K_1^{p−1}, …, K_i^{p}, …, K_M^{p−1})
  ≤ γ_i(F_1^{p−1}, …, F_i, …, F_M^{p−1}, K_1^{p−1}, …, K_i, …, K_M^{p−1}),  for i = 1, …, M   (13)

where γ_i is the upper bound of the performance index. This means that the optimal upper bound is reduced at each iteration, i.e. the control performance is improved.

Remark 2. The Nash-based distributed model predictive control differs from the cooperative and decentralized MPC approaches. In cooperative MPC, the subsystems are related to each other, but each control agent optimizes the global cost function of the system using local variables, which causes high computational complexity. This strategy requires all agents to be connected to each other. In the Nash-based algorithm, by contrast, agents only need information from neighboring agents and form a partially distributed connection, which decreases the data-transfer load on the network. Moreover, in decentralized MPC, communications between agents are neglected, and each agent solves its local optimization problem independently. This results in performance degradation, in particular if the interactions among subsystems are strong. Note that the models of the subsystems and the optimization problems differ across these three MPC strategies, as illustrated in Table 1.

To solve the optimization problem in the proposed DMPC, let us first define the quadratic boundedness condition pointed out in [39].

Definition 1. The system (4) is quadratically bounded (QB) for all allowable ẑ_i and λ_l if

x̂_i(k+n|k)^T Q_i^{−1} x̂_i(k+n|k) ≥ 1,
Table 1
Formulation of distributed, cooperative, and decentralized MPC strategies.

Nash-based distributed MPC:
  min_{u_i} J_i(k),
  J_i(k) = Σ_{k=0}^{∞} { x_i(k)^T M_i x_i(k) + u_i(k)^T R_i u_i(k) },
  x_i(k+1) = A_ii(k)x_i(k) + B_i(k)u_i(k) + Σ_{j=1, j≠i}^{M} A_ij(k)x_j(k).

Cooperative distributed MPC:
  min_{u_i} J_i(k),
  J_i(k) = Σ_{k=0}^{∞} { x_i(k)^T M_i x_i(k) + u_i(k)^T R_i u_i(k) + Σ_{j=1, j≠i}^{M} u_j(k)^T R_j u_j(k) },
  x_i(k+1) = A_i(k)x_i(k) + B_i(k)u_i(k) + Σ_{j=1, j≠i}^{M} B_j(k)u_j(k).

Decentralized MPC:
  min_{u_i} J_i(k),
  J_i(k) = Σ_{k=0}^{∞} { x_i(k)^T M_i x_i(k) + u_i(k)^T R_i u_i(k) },
  x_i(k+1) = A_ii(k)x_i(k) + B_i(k)u_i(k).

implies

x̂_i(k+n+1|k)^T Q_i^{−1} x̂_i(k+n+1|k) ≤ x̂_i(k+n|k)^T Q_i^{−1} x̂_i(k+n|k),   (14)

where Q_i^{−1} is a symmetric positive-definite Lyapunov matrix.

Now, consider the quadratic Lyapunov function V(x̂_i) = x̂_i^T P_i x̂_i, where P_i is a symmetric positive-definite matrix. Assume that the following condition is satisfied:

V_i(x̂_i(k+n|k)) ≥ γ_i  ⇒  V_i(x̂_i(k+n+1|k)) − V_i(x̂_i(k+n|k)) ≤ −{ x̂_i(k+n|k)^T M_i x̂_i(k+n|k) + u_i(k+n|k)^T R_i u_i(k+n|k) }   (15)

It is obvious that (15) results in the QB condition (14). According to (15) and the decreasing Lyapunov function, the upper bound of the objective function can be obtained by summing both sides of the above inequality from n = 0 to n = ∞ as

J_i(k) ≤ V_i(x̂_i(k|k)) = x̂_i(k|k)^T P_i x̂_i(k|k) ≤ γ_i   (16)

in which the scalar γ_i is the upper bound of the performance index J_i(k).

The aim of the proposed robust DMPC at the ith agent is to synthesize a control law u_i(k+n|k) = F_i(k)x̂_i(k+n|k) + K_i(k)ẑ_i(k+n|k) at time step k that minimizes the upper bound of the performance index. The following theorem constructs the control input at the ith agent while satisfying the closed-loop stability condition and the input constraints.

Theorem 1. For each subsystem in (4) (for i = 1, …, M), the control input u_i(k+n|k) that minimizes the cost function J_i(k) is obtained by solving the following optimization problem subject to the LMI constraints:

min_{Y_i, K_i, Q_i} γ_i   (17)

subject to

⎡ 1          x̂_i(k|k)^T ⎤
⎣ x̂_i(k|k)  Q_i         ⎦ ≥ 0   (18)

⎡ (1−α_i β_{i,k})Q_i      ⋆                  ⋆    ⋆      ⋆     ⎤
⎢ 0                       α_i I              ⋆    ⋆      ⋆     ⎥
⎢ A_ii^l Q_i + B_i^l Y_i  D_i^l + B_i^l K_i  Q_i  ⋆      ⋆     ⎥ ≥ 0,  l ∈ {1, …, L}   (19)
⎢ R_i^{1/2} Y_i           R_i^{1/2} K_i      0    γ_i I  ⋆     ⎥
⎣ M_i^{1/2} Q_i           0                  0    0      γ_i I ⎦

⎡ ξ_i        √2 Y_i  √2 K_i ⎤
⎢ √2 Y_i^T   Q_i     0      ⎥ ≥ 0,   (20)
⎣ √2 K_i^T   0       σ_i I  ⎦

ξ_i = diag[(u_{i,max}^1)², …, (u_{i,max}^{n_ui})²]

where ⋆ denotes the blocks induced by symmetry.

Proof. Considering (16), the minimization of the performance index J_i(k) can be interpreted as follows:

min_{Y_i, K_i, Q_i} γ_i
subject to V_i(x̂_i(k|k)) = x̂_i(k|k)^T P_i x̂_i(k|k) ≤ γ_i   (21)

By replacing P_i = γ_i Q_i^{−1} and using the Schur complement, the above constraint can be described as:

⎡ 1          x̂_i(k|k)^T ⎤
⎣ x̂_i(k|k)  Q_i         ⎦ ≥ 0.

Now, by substituting the state-space model (8) and the control input (9) in the quadratic boundedness condition (15), one gets

⎡ x̂_i(k+n|k) ⎤^T ⎡ ζ_11  ζ_12 ⎤ ⎡ x̂_i(k+n|k) ⎤
⎣ ẑ_i(k+n|k) ⎦   ⎣ ζ_21  ζ_22 ⎦ ⎣ ẑ_i(k+n|k) ⎦ ≥ 0   (22)

where

ζ_11 = P_i − M_i − F_i^T R_i F_i − (A_ii(k+n) + B_i(k+n)F_i)^T P_i (A_ii(k+n) + B_i(k+n)F_i)
ζ_12 = −F_i^T R_i K_i − (A_ii(k+n) + B_i(k+n)F_i)^T P_i (D_i(k+n) + B_i(k+n)K_i)
ζ_21 = −K_i^T R_i F_i − (D_i(k+n) + B_i(k+n)K_i)^T P_i (A_ii(k+n) + B_i(k+n)F_i)
ζ_22 = −K_i^T R_i K_i − (D_i(k+n) + B_i(k+n)K_i)^T P_i (D_i(k+n) + B_i(k+n)K_i)

The interaction vector consists of the states of the neighboring subsystems. It will be shown in Remark 4 that it has an upper bound, ||ẑ_i(k+n|k)||² ≤ β_{i,k}. Consequently, if V_i(x̂_i(k+n|k)) ≥ γ_i, one has γ_i ||ẑ_i(k+n|k)||² ≤ β_{i,k} V_i(x̂_i(k+n|k)). It yields that

⎡ x̂_i(k+n|k) ⎤^T ⎡ β_{i,k} P_i  0      ⎤ ⎡ x̂_i(k+n|k) ⎤
⎣ ẑ_i(k+n|k) ⎦   ⎣ 0            −γ_i I ⎦ ⎣ ẑ_i(k+n|k) ⎦ ≥ 0   (23)

Using the S-procedure argument presented in [40, Chapter 2] for (22) and (23) with the multiplier α_i ∈ (0, 1] results in

⎡ x̂_i(k+n|k) ⎤^T ⎡ ζ_11 − α_i β_{i,k} P_i  ζ_12              ⎤ ⎡ x̂_i(k+n|k) ⎤
⎣ ẑ_i(k+n|k) ⎦   ⎣ ζ_21                    ζ_22 + α_i γ_i I  ⎦ ⎣ ẑ_i(k+n|k) ⎦ ≥ 0   (24)

The above inequality is satisfied for all n ≥ 0 if and only if

⎡ ζ_11 − α_i β_{i,k} P_i  ζ_12             ⎤
⎣ ζ_21                    ζ_22 + α_i γ_i I ⎦ ≥ 0   (25)

Replacing Y_i = F_i Q_i, P_i = γ_i Q_i^{−1}, pre- and post-multiplying by diag[Q_i, I], and multiplying by γ_i^{−1} leads to

⎡ ζ°_11  ζ°_12 ⎤
⎣ ζ°_21  ζ°_22 ⎦ ≥ 0   (26)
where

ζ°_11 = (1−α_i β_{i,k})Q_i − (A_ii(k+n)Q_i + B_i(k+n)Y_i)^T Q_i^{−1} (A_ii(k+n)Q_i + B_i(k+n)Y_i) − Q_i M_i Q_i − Y_i^T R_i Y_i
ζ°_12 = −Y_i^T R_i K_i − (A_ii(k+n)Q_i + B_i(k+n)Y_i)^T Q_i^{−1} (D_i(k+n) + B_i(k+n)K_i)
ζ°_21 = −K_i^T R_i Y_i − (D_i(k+n) + B_i(k+n)K_i)^T Q_i^{−1} (A_ii(k+n)Q_i + B_i(k+n)Y_i)
ζ°_22 = α_i I − K_i^T R_i K_i − (D_i(k+n) + B_i(k+n)K_i)^T Q_i^{−1} (D_i(k+n) + B_i(k+n)K_i)

Consequently, the following LMI is obtained by applying the Schur complement:

⎡ (1−α_i β_{i,k})Q_i  ⋆              ⋆    ⋆      ⋆     ⎤
⎢ 0                   α_i I          ⋆    ⋆      ⋆     ⎥
⎢ ζ_31                ζ_32           Q_i  ⋆      ⋆     ⎥ ≥ 0   (27)
⎢ R_i^{1/2} Y_i       R_i^{1/2} K_i  0    γ_i I  ⋆     ⎥
⎣ M_i^{1/2} Q_i       0              0    0      γ_i I ⎦

where

ζ_31 = A_ii(k+n)Q_i + B_i(k+n)Y_i
ζ_32 = D_i(k+n) + B_i(k+n)K_i

The LMI (27) is affine in [A_ii(k+n) … A_ij(k+n) … B_i(k+n)]. Therefore, it is satisfied for all

[A_ii(k+n) … A_ij(k+n) … B_i(k+n)] ∈ Ω_i,

if and only if the following LMI holds for all l ∈ {1, …, L}:

⎡ (1−α_i β_{i,k})Q_i      ⋆                  ⋆    ⋆      ⋆     ⎤
⎢ 0                       α_i I              ⋆    ⋆      ⋆     ⎥
⎢ A_ii^l Q_i + B_i^l Y_i  D_i^l + B_i^l K_i  Q_i  ⋆      ⋆     ⎥ ≥ 0
⎢ R_i^{1/2} Y_i           R_i^{1/2} K_i      0    γ_i I  ⋆     ⎥
⎣ M_i^{1/2} Q_i           0                  0    0      γ_i I ⎦

Note that at the time instant k, β_{i,k} is known, and there exists a positive scalar α_i that satisfies (1 − α_i β_{i,k}) > 0.

At the final step of the proof, the input constraint |u_i^h(k+n|k)| ≤ u_{i,max}^h, h = 1, 2, …, n_ui, can be expressed as:

max_{n≥0} |u_i^h(k+n|k)|²
 = max_{n≥0} |Y_i Q_i^{−1} x̂_i(k+n|k) + K_i ẑ_i(k+n|k)|²
 ≤ [ max_{x̂_i} |Y_i Q_i^{−1} x̂_i(k+n|k)| + max_{ẑ_i} |K_i ẑ_i(k+n|k)| ]²
 ≤ 2‖Y_i Q_i^{−1/2}‖² + 2‖K_i β_{i,k}^{1/2}‖²
 = 2(Y_i Q_i^{−1} Y_i^T) + 2(K_i σ_i^{−1} K_i^T),

where σ_i = β_{i,k}^{−1/2}. Applying the Schur complement yields

⎡ ξ_i        √2 Y_i  √2 K_i ⎤
⎢ √2 Y_i^T   Q_i     0      ⎥ ≥ 0,   (28)
⎣ √2 K_i^T   0       σ_i I  ⎦

where ξ_i = diag[(u_{i,max}^1)², …, (u_{i,max}^{n_ui})²]. □

Remark 3. In this approach, x̂_i(k|k) and ẑ_i(k|k) represent the estimates of x_i(k) and z_i(k) at time instant k, respectively. The estimates can be obtained by the distributed Kalman filter presented in the next section.

2.3. Stability of the closed-loop system

The following lemmas are devoted to the invariant ellipsoid of the system, the feasibility of the optimization problem, and the convergence of the solution for the ith closed-loop subsystem, respectively.

Lemma 2. For the LTV system (4) with the uncertainty set Ω_i defined by (6), assume the optimization problem (17) has a solution P_i, γ_i, K_i, Y_i at each instant k. Using this solution, the control input obtained from (9) yields

max_{[A_ij(k+n), B_i(k+n)] ∈ Ω_i}  x̂_i(k+n|k)^T P_i x̂_i(k+n|k) ≤ γ_i   (29)

This means that ε_i = { x̂_i | x̂_i^T Q_i^{−1} x̂_i ≤ 1 } is an invariant ellipsoid for the future states of the uncertain system (8).

Proof. If the problem (17) has a feasible solution, then (15) is satisfied. The inequality (15) guarantees the QB condition (14). That is, states satisfying the quadratic boundedness condition with the Lyapunov matrix Q_i^{−1} lie in the set ε_i = { x̂_i | x̂_i^T Q_i^{−1} x̂_i ≤ 1 }, i.e. it is a robust invariant set for the system. □

Lemma 3. If the optimization problem (17) has a feasible solution at time step k, it is feasible for all steps t > k.

Proof. In the problem (17), the only LMI that depends directly on the states of the subsystem is (18). Consequently, to prove the feasibility of the optimization problem, it is sufficient to show that a feasible solution of (17) at step k is also feasible for (18) at step k+1. To this end, consider a feasible solution of (17) at time step k as P_{i,k}, γ_{i,k}. This solution is also feasible at step k+1 if

x̂_i(k+1|k+1)^T P_{i,k} x̂_i(k+1|k+1) < γ_{i,k}   (30)

where x̂_i(k+1|k+1) = x̂_i(k+1) is the measurement of the state at k+1 that can be obtained by the prediction Eq. (8).

To prove (30), note that the feasible solution satisfies (19). The LMI (19) leads to the inequality (14). This inequality for n = 0 and the measured state at k+1 results in

x̂_i(k|k)^T P_{i,k} x̂_i(k|k) − x̂_i(k+1|k+1)^T P_{i,k} x̂_i(k+1|k+1) > 0,   (31)

which proves (30). The same can be shown for k+2, k+3, … in a similar way. □

Lemma 4. The ith closed-loop subsystem is quadratically bounded for the feasible solution achieved from Theorem 1, and x_i(k) converges to a neighborhood of zero as k → ∞.

Proof. Assume that P_{i,k+1} is the optimal value of P_i obtained from the optimization problem (17) at step k+1. From Lemma 3, P_{i,k} is feasible at k+1. Consequently, the optimality of P_{i,k+1} yields

x̂_i(k+1|k+1)^T P_{i,k+1} x̂_i(k+1|k+1) < x̂_i(k+1|k+1)^T P_{i,k} x̂_i(k+1|k+1)   (32)

Now, from (31), one has

x̂_i(k+1|k+1)^T P_{i,k+1} x̂_i(k+1|k+1) < x̂_i(k|k)^T P_{i,k} x̂_i(k|k)   (33)

Thus, the Lyapunov function x̂_i(k|k)^T P_{i,k} x̂_i(k|k) for the closed-loop subsystem is decreasing, and x̂_i(k) will converge to a neighborhood of zero. In addition, in the next section, it is shown that the state estimate x̂_i(k) converges to the state of the subsystem x_i(k). Therefore, x_i(k) enters a neighborhood of zero and stays in it as k → ∞. □

Remark 4. Feasibility of the optimization problem results in

x̂_j(k+1|k+1)^T P_{j,k} x̂_j(k+1|k+1) ≤ x̂_j(k|k)^T P_{j,k} x̂_j(k|k),   (34)
48 R.A. Shalmani, M. Rahmani and N. Bigdeli / Journal of Process Control 88 (2020) 43–53

and consequently for j = 1, · · · , M, one has Now, using the prediction Eq. (39) and having the measurements
  12 of ith subsystem and other subsystems, the updated prediction
σ (Pj,k )
||xˆj (k + 1|k + 1 )|| ≤ σ (Pj,k ) ||xˆj (k|k )||. (35) model is given by

Now, at ith agent, considering the communication network be- 


M

tween agents, for the interaction signal, one get xˆi (k + 1|k ) = Aii xˆi (k|k − 1 ) + Bi ui (k ) + Ai j xˆj (k|k − 1 )
j=1
||zˆi (k + 1|k + 1 )|| ≤ λi,max ||zˆi (k|k )||, (36) j=i

where 
M

  12 + Lii (k )[yi (k ) − Ci xˆi (k|k − 1 )] + L i j ( k )[ y j ( k )


σ (Pj,k ) j=1
λi,max = max , j = 1, . . . , i − 1, i + 1, . . . , M j=i
j σ (Pj,k )
− C j xˆj (k|k − 1 )] (42)
The inequality (36) results in the boundedness of the interac-
tion signal at step k as follows where Lii and Lij are the gains of the distributed Kalman filter at
||zˆi (k + 1|k + 1 )||2 ≤ βi,k (37) ith subsystem. Lii is obtained by minimizing the following error co-
variance matrix
This inequality has been utilized in proof of Theorem 1.
Pi (k + 1|k ) = cov[xi (k + 1|k ) − xˆi (k + 1|k )]
3. Communication-based distributed state estimation
= Aii Pi (k|k − 1 )ATii + Lii (k )Cii Pi (k|k − 1 )(Lii (k )Ci )T
In a distributed control, an estimation problem is usually con- − 2Lii (k )Ci Pi (k|k − 1 )ATii
sidered to obtain states of the system that are not available or can- + Ei Qwi (k )EiT + Lii (k )Rvi (k )LTii
not be measured at each control agent. In this regard, different
estimation strategies have been reported in a distributed fashion [41–43].

In the proposed distributed MPC, estimations of the state and interaction vectors are required to compute the control input at each agent. For this purpose, to design a distributed Kalman filter, consider the following model of the ith subsystem with the nominal values of the parameters:

x_i(k+1) = A_ii x_i(k) + B_i u_i(k) + Σ_{j=1, j≠i}^M A_ij x_j(k) + E_i w_i(k)
y_i(k) = C_i x_i(k) + v_i(k)   (38)

where y_i is the measurement of the ith subsystem. Also, w_i and v_i are zero-mean uncorrelated Gaussian process and measurement noises with the covariance matrices Q_wi and R_vi, respectively. Note that the center of the uncertainty polytope in (6) is considered as the nominal value of the parameters.

Let (A, C) and (A_ii, C_i) be detectable for each i = 1, …, M. The prediction equation of the estimator is given by

x̂_i(k+1|k) = A_ii x̂_i(k|k−1) + B_i u_i(k) + Σ_{j=1, j≠i}^M A_ij x̂_j(k|k−1)   (39)

This equation is a primary estimate of the one-step-ahead prediction of the states at sampling time k. In this step, the control input, u_i(k), is available, and x̂_i(k|k−1) is the state estimate at sampling time k given the observations up to time k−1, obtained as

x̂_i(k|k−1) = A_ii x̂_i(k−1|k−1) + B_i u_i(k−1) + Σ_{j=1, j≠i}^M A_ij x̂_j(k−1|k−1)   (40)

Similarly, the ith Kalman predictor can access x̂_j(k|k−1) through the communication between agents. The error covariance matrix is computed as

P_i(k|k−1) = cov[x_i(k|k−1) − x̂_i(k|k−1)]
           = A_ii P_i(k−1|k−1) A_ii^T + E_i Q_wi E_i^T + Σ_{j=1, j≠i}^M A_ij P_i(k−1|k−1) A_ij^T   (41)

… + Σ_{j=1, j≠i}^M A_ij P_j(k|k−1) A_ij^T   (43)

The optimal value of L_ii that minimizes (43) satisfies the following equation:

−2 A_ii P_i(k|k−1) C_i^T + 2 L_ii(k) C_i P_i(k|k−1) C_i^T + 2 L_ii(k) R_vi = 0   (44)

Consequently, the Kalman filter gain, L_ii(k), is given by

L_ii(k) = A_ii P_i(k|k−1) C_i^T (R_vi + C_i P_i(k|k−1) C_i^T)^{−1}   (45)

The gain L_ij(k) corresponds to the interaction between subsystems. To obtain it, let us define the following interaction error covariance matrix:

Σ_{j≠i} P_j(k|k) = Σ_{j≠i} E[e_j(k|k) e_j(k|k)^T]
                = Σ_{j≠i} A_ij P_j(k|k−1) A_ij^T + Σ_{j≠i} L_ij C_j P_j(k|k−1) (L_ij C_j)^T − 2 Σ_{j≠i} A_ij P_j(k|k−1) (L_ij C_j)^T   (46)

Now, the optimal value of L_ij can be computed by minimization of (46) as follows:

L_ij = A_ij C_j^T (C_j C_j^T)^{−1}   (47)

In summary, in the proposed RDMPC, using the distributed Kalman filter, estimates of the states are available at the ith agent to solve the LMI optimization problem (17)–(20). This problem results in the local control input, u_i(k). Since the subsystems are coupled, it is necessary for the ith agent to exchange information with the other agents via the communication network, which is assumed to be ideal, without any delay. To reach the optimal equilibrium point in the proposed distributed MPC approach, the Nash-based iterative approach at the ith agent is presented in Algorithm 1.
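To make the prediction and gain computations concrete, the following Python fragment sketches (39), (41) and (45) for a single subsystem with one neighbor. This is an illustrative sketch only: the matrices A11, A12, B1, C1 and the covariances are hypothetical placeholder values rather than the case-study parameters, and the interaction covariance term mirrors the form of (41) above.

```python
import numpy as np

# Hypothetical placeholder model of subsystem 1 with one neighbor
# (illustration only; these are not the case-study parameters).
A11 = np.array([[0.9, 0.1], [0.0, 0.8]])    # local dynamics A_ii
A12 = np.array([[0.0, 0.05], [0.02, 0.0]])  # interaction matrix A_ij
B1  = np.array([[0.1], [0.2]])              # input matrix B_i
C1  = np.array([[1.0, 0.0]])                # output matrix C_i
E1  = np.eye(2)                             # noise input matrix E_i
Qw1 = 1e-5 * np.eye(2)                      # process noise covariance Q_wi
Rv1 = 1e-5 * np.eye(1)                      # measurement noise covariance R_vi

def predict(xhat1, xhat2, u1, P1):
    """One-step-ahead prediction, Eqs. (39) and (41)."""
    xpred = A11 @ xhat1 + B1 @ u1 + A12 @ xhat2   # Eq. (39)
    Ppred = (A11 @ P1 @ A11.T + E1 @ Qw1 @ E1.T
             + A12 @ P1 @ A12.T)                  # Eq. (41)
    return xpred, Ppred

def local_gain(Ppred):
    """Local Kalman filter gain L_ii, Eq. (45)."""
    return A11 @ Ppred @ C1.T @ np.linalg.inv(Rv1 + C1 @ Ppred @ C1.T)
```

With x̂_1 = [1, 0]^T, a zero neighbor estimate and a zero input, `predict` propagates the estimate through the local dynamics and returns a symmetric covariance, from which `local_gain` yields a 2×1 gain.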
Algorithm 1 Proposed Nash-based robust distributed MPC at the ith agent.
1: At time step k, acquire the predicted interactions via the network and get x̂_i(k) from the corresponding Kalman filter.
2: Initial condition: choose the initial values F_i^(0) and K_i^(0).
3: Share the initial values, F_i^(0) and K_i^(0), and the local state estimate, x̂_i(k), with the other subsystems via the network.
4: Set the iteration index p = 1.
5: while p < p_max* do
6: Solve the optimization problem (17)–(20) to obtain the optimal values F_i* and K_i*.
7: Set F_i^(p) = F_i* and K_i^(p) = K_i*.
8: If the following convergence criteria are satisfied, go to step 9:
   ||F_i^(p) − F_i^(p−1)|| ≤ ε_Fi and ||K_i^(p) − K_i^(p−1)|| ≤ ε_Ki,
   where ε_Fi and ε_Ki are prespecified error accuracies. Otherwise, exchange the solutions F_i^(p), K_i^(p), P_i^(p) and set p = p + 1.
9: end while
10: Apply the control input u_i(k+n|k) = F_i* x̂_i(k+n|k) + K_i* ẑ_i(k+n|k) to the ith subsystem.
11: Get the outputs of the neighboring subsystems, y_j(k), obtain x̂_i(k+1|k) from the corresponding DKF (42), and then broadcast the results.
12: Set F_i^(0) = F_i*, K_i^(0) = K_i*, k = k + 1, and go to step 3.
* p_max is the maximum number of iterations.

Remark 5. Algorithm 1 can be executed at the ith (i = 1, …, M) agent separately, in parallel with the other agents. Each agent checks the local termination condition. Meeting the convergence criteria declares that the agent has reached the optimal solution. Consequently, the whole system moves toward the Nash optimal equilibrium point once the iterative algorithms in all agents have terminated. Note that the system does not need a centralized agent to stop the algorithm.

4. Case study

To evaluate the performance of the proposed method, the load frequency control (LFC) problem of a three-area power system is considered [9,44]. The aim of an LFC problem is to control the real power output of generation units in response to changes in the system frequency and tie-line power interchange. LFC in interconnected multi-area power systems is an illustrative example that shows the adverse effect of the frequency deviation of one area on the others. Thus, it is a practical example to demonstrate the effectiveness of the proposed distributed MPC approach. The three areas are connected to their neighbors via transmission lines called tie-lines, as depicted in Fig. 1.

Fig. 1. Interconnection between the three-area power system.

Each area may have several generation units and loads, which are generally modeled as a single generator and load. The state-space model of each area is expressed as follows:

ẋ_i(t) = A_ii x_i(t) + B_i u_i(t) + D_i X_i(t) + F_i ΔP_di
y_i(t) = C_i x_i(t)   (48)

where

x_i(t) = [Δf_i(t)  ΔP_mi(t)  ΔP_vi(t)  ΔE_i(t)  ΔP_tie,i(t)]^T

A_ii = [ −D_i/M_i           1/M_i      0         0   −1/M_i ]
       [ 0                 −1/T_chi    1/T_chi   0    0     ]
       [ −1/(R_i T_gi)      0         −1/T_gi    0    0     ]
       [ β_i                0          0         0    1     ]
       [ 2π Σ_{j≠i} T_ij    0          0         0    0     ]

B_i = [0  0  1/T_gi  0  0]^T,   F_i = [−1/M_i  0  0  0  0]^T,   C_i = [1  0  0  0  0]

Parameters and variables of the power system are tabulated in Table 2. The block diagram of the ith area in an interconnected power system is shown in Fig. 2. The tie-line power between areas is given by

dΔP_tie,i/dt = 2π Σ_{j=1, j≠i}^M T_ij (Δf_i − Δf_j)   (49)

Therefore, the interaction vector and the corresponding matrix can be considered as:

A_ij = [ 0         0  0  0  0 ]
       [ 0         0  0  0  0 ]
       [ 0         0  0  0  0 ]
       [ 0         0  0  0  0 ]
       [ −2π T_ij  0  0  0  0 ]

D_i = [A_i1 … A_ij … A_im],   X_i = [x_1^T … x_j^T … x_m^T]^T,   j = 1, …, m, j ≠ i

To apply the proposed MPC approach, the forward-difference Euler approximation is used for discretization of the system with the sample time T_s = 0.1 s. Nominal values of the parameters for each area are provided in Table 3. To investigate the robustness of the approach, a 20% variation in the nominal values of the parameters is considered here. The weighting matrices of the optimization problem are Q_i = I_ni and R_i = 1, the initial value of the state is x_0i = [0.1 0 0 0 0]^T, and α_i = 0.01. The input constraint is also considered as |u_i| ≤ 10.

As is evident in Fig. 2, each Kalman filter estimates the local states using the frequency deviation, the control input, and also the interaction signal that is available via the network. The process and
measurement noises of the ith area are considered as zero-mean signals with the covariances R_vi = 10^−5 and Q_wi = 10^−5, respectively.

Table 2
Parameters and variables description.

Parameter | Definition
Δf        | Frequency deviation (Hz)
ΔP_m      | Generator mechanical output deviation
ΔP_v      | Valve position deviation
ΔP_d      | Load disturbance
ΔP_tie    | Tie-line active power deviation
M         | Moment of inertia of the generator
R         | Speed droop
T_g       | Time constant of the governor (s)
T_ch      | Time constant of the turbine (s)
D         | Generator damping coefficient
β         | Frequency bias factor
T_ij      | Tie-line synchronizing coefficient between the ith and jth power areas

Fig. 2. Block diagram of the ith control area in the multi-area power system structure.

Table 3
Nominal values of parameters.

Parameter | Area 1        | Area 2        | Area 3
T_ch      | 0.3 s         | 0.17 s        | 0.17 s
T_g       | 0.1 s         | 0.4 s         | 0.4 s
R         | 0.05          | 0.05          | 0.05
D         | 1.5           | 1.5           | 1.5
M         | 10            | 12            | 12
β         | 2/R_1 + D_1   | 4/R_2 + D_2   | 3/R_3 + D_3

For comparison purposes, centralized and decentralized RMPC are considered here. In the fully decentralized scheme, at each agent, a Kalman filter and a robust MPC are designed based on the local information, and communication between the areas is neglected. In the centralized control, the whole system is considered with the following dynamic and input matrices:

A = [ A_11 A_12 A_13 ]      B = [ B_1  0    0   ]
    [ A_21 A_22 A_23 ],         [ 0    B_2  0   ]
    [ A_31 A_32 A_33 ]          [ 0    0    B_3 ]

The results are also compared with the distributed RMPC presented in [37]. The parameters of the controllers used in the simulation are the same in all methods.

Fig. 3. Convergence errors of the proposed algorithm for the three areas at the first time step.
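The iterative structure of Algorithm 1 can be sketched in the following Python fragment. Here `local_solution` is a hypothetical stand-in for solving the local LMI problem (17)–(20), implemented as a simple averaging contraction so that the loop has a well-defined fixed point; only the iteration skeleton and the convergence test of step 8 reflect the algorithm itself.

```python
import numpy as np

def local_solution(F_prev, K_prev, neighbors_F):
    # Hypothetical stand-in for the local LMI problem (17)-(20):
    # an averaging contraction so the iteration has a fixed point.
    F = 0.5 * F_prev + 0.5 * np.mean(neighbors_F, axis=0)
    K = 0.5 * K_prev
    return F, K

def nash_iteration(F0, K0, eps=0.01, p_max=5):
    """Skeleton of Algorithm 1 for M agents, run synchronously."""
    M = len(F0)
    F, K = [f.copy() for f in F0], [k.copy() for k in K0]
    for p in range(1, p_max + 1):
        F_new, K_new = [], []
        for i in range(M):                      # each agent solves locally
            others = [F[j] for j in range(M) if j != i]
            Fi, Ki = local_solution(F[i], K[i], others)
            F_new.append(Fi)
            K_new.append(Ki)
        # convergence test of step 8: ||F^(p)-F^(p-1)|| and ||K^(p)-K^(p-1)||
        converged = all(np.linalg.norm(F_new[i] - F[i]) <= eps
                        and np.linalg.norm(K_new[i] - K[i]) <= eps
                        for i in range(M))
        F, K = F_new, K_new                     # exchange solutions (network)
        if converged:
            break
    return F, K, p
```

In practice each agent would run its own copy of the loop and exchange F_i^(p), K_i^(p) over the network; the synchronous single-process form above is only for illustration.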
Fig. 4. Δf and ΔP_tie responses of all areas for the four RMPC approaches (dashed: centralized; dash-dotted: distributed in [37]; dotted: decentralized; solid: proposed distributed).

Fig. 5. Control inputs of all areas for the four RMPC approaches (dashed: centralized; dash-dotted: distributed in [37]; dotted: decentralized; solid: proposed distributed).
The error accuracies are considered as ε_Fi = ε_Ki = 0.01, and p_max = 5. Convergence behaviors of the proposed algorithm for F_i and K_i (i = 1, 2, 3) at the first time step are shown in Fig. 3. They show that the algorithm converges in fewer than 5 iterations.

Frequency deviation and tie-line active power for all areas of the LFC system equipped with the aforementioned control approaches are depicted in Fig. 4. The control inputs are also plotted in Fig. 5.

Moreover, for comparison, the mean cost of each approach is computed as follows:

J_cost = (1/(2N_f)) Σ_{k=0}^{N_f} Σ_{i=1}^{M} [ ẑ_i(k+n|k)^T M_i ẑ_i(k+n|k) + u_i(k+n|k)^T R_i u_i(k+n|k) ]   (50)

where N_f is the final time of running the algorithm. The results are tabulated in Table 4. It shows that the decentralized RMPC has the
largest value, whereas the proposed distributed RMPC has a performance close to that of the centralized approach.

Table 4
Comparison between the values of mean costs.

Approach | Cost
Centralized RMPC | 6.16
Distributed RMPC in [37] | 6.67
Decentralized RMPC | 29.93
The proposed distributed RMPC | 6.23

Considering the capability of parallel processing in the decentralized and distributed approaches, the CPU times of the four methods are presented in Table 5. It is obvious that the centralized MPC has the highest computation time because of its large-dimension matrices and equations, and the decentralized MPC has the lowest CPU time, with the worst performance. It is remarkable that the distributed RMPC approach in [37] has a performance close to that of the proposed DMPC, but a much larger CPU time, because the dimensions of its LMIs and Kalman filter are those of the centralized MPC. Moreover, the CPU time of the proposed distributed RMPC is only a little more than that of the decentralized approach. Note that the same computer, with an Intel Core i5 processor and 4 GB of RAM, is used to solve the problems of the four studied RMPC approaches.

Table 5
Comparison between CPU times.

Approach | CPU time (s)
Centralized RMPC | 48.29
Distributed RMPC in [37] | 33.12
Decentralized RMPC | 0.633
The proposed distributed RMPC | 2.81

5. Conclusion

In this paper, a new robust distributed MPC algorithm has been proposed for polytopic uncertain large-scale systems. Considering the effects of neighboring subsystems, the proposed method places special emphasis on guaranteeing robust stability based on the quadratic boundedness of each subsystem. In the distributed MPC scheme, the local LMI-based optimization problems at the subsystems are solved using state and interaction estimates obtained by a distributed Kalman filter. The exchange of information between control agents is available via a communication network. An iterative algorithm is presented to achieve the Nash optimal solution of the whole system. In this algorithm, each subsystem reaches its best solution by satisfying the convergence criteria. The effectiveness of this approach is illustrated by solving an LFC problem in a three-area power system with parameter uncertainties. The results show that the performance of the proposed distributed MPC is close to that of the centralized MPC, while the computation time is much lower. Also, the algorithm outperforms the decentralized MPC significantly.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

CRediT authorship contribution statement

Reza Aliakbarpour Shalmani: Methodology, Investigation, Software, Writing - original draft. Mehdi Rahmani: Supervision, Conceptualization, Validation, Writing - review & editing. Nooshin Bigdeli: Supervision, Validation, Writing - review & editing.

References

[1] E.F. Camacho, C. Bordons, Model Predictive Control in the Process Industry, Springer Science & Business Media, 2012.
[2] D.D. Šiljak, A.I. Zečević, Large-scale and decentralized systems, Wiley Encyclopedia of Electrical and Electronics Engineering (1999).
[3] X.-B. Chen, S.S. Stanković, Decomposition and decentralized control of systems with multi-overlapping structure, Automatica 41 (10) (2005) 1765–1772.
[4] A. Alessio, D. Barcelli, A. Bemporad, Decentralized model predictive control of dynamically coupled linear systems, J. Process Control 21 (5) (2011) 705–714.
[5] M. Vaccarini, S. Longhi, M.R. Katebi, Unconstrained networked decentralized model predictive control, J. Process Control 19 (2) (2009) 328–339.
[6] L. Magni, R. Scattolini, Stabilizing decentralized model predictive control of nonlinear systems, Automatica 42 (7) (2006) 1231–1236.
[7] D. Raimondo, L. Magni, R. Scattolini, Decentralized MPC of nonlinear systems: an input-to-state stability approach, Int. J. Robust Nonlinear Control 17 (17) (2007) 1651–1667.
[8] M.D. Mesarovic, D. Macko, Y. Takahara, Theory of Hierarchical, Multilevel, Systems, vol. 68, Elsevier, 2000.
[9] M. Rahmani, N. Sadati, Hierarchical optimal robust load-frequency control for power systems, IET Gener. Transm. Distrib. 6 (4) (2012) 303–312.
[10] N. Sadati, M. Rahmani, M. Saif, Two-level robust optimal control of large-scale nonlinear systems, IEEE Syst. J. 9 (1) (2015) 242–251.
[11] R. Scattolini, Architectures for distributed and hierarchical model predictive control: a review, J. Process Control 19 (5) (2009) 723–731.
[12] M. Katebi, M. Johnson, Predictive control design for large-scale systems, Automatica 33 (3) (1997) 421–425.
[13] C. Ocampo-Martinez, D. Barcelli, V. Puig, A. Bemporad, Hierarchical and decentralised model predictive control of drinking water networks: application to Barcelona case study, IET Control Theory Appl. 6 (1) (2012) 62–71.
[14] M. Farina, R. Scattolini, Distributed predictive control: a non-cooperative algorithm with neighbor-to-neighbor communication for linear systems, Automatica 48 (6) (2012) 1088–1096.
[15] J. Maestre, D.M. De La Pena, E. Camacho, T. Alamo, Distributed model predictive control based on agent negotiation, J. Process Control 21 (5) (2011) 685–697.
[16] P.J. Goulart, E.C. Kerrigan, J.M. Maciejowski, Optimization over state feedback policies for robust control with constraints, Automatica 42 (4) (2006) 523–533.
[17] P. Trodden, A. Richards, Cooperative distributed MPC of linear systems with coupled constraints, Automatica 49 (2) (2013) 479–487.
[18] B.T. Stewart, A.N. Venkat, J.B. Rawlings, S.J. Wright, G. Pannocchia, Cooperative distributed model predictive control, Syst. Control Lett. 59 (8) (2010) 460–469.
[19] S. Li, Y. Zhang, Q. Zhu, Nash-optimization enhanced distributed model predictive control applied to the Shell benchmark problem, Inf. Sci. 170 (2) (2005) 329–349.
[20] A.N. Venkat, I.A. Hiskens, J.B. Rawlings, S.J. Wright, Distributed MPC strategies with application to power system automatic generation control, IEEE Trans. Control Syst. Technol. 16 (6) (2008) 1192–1206.
[21] J. Maestre, D. Munoz De La Pena, E. Camacho, Distributed model predictive control based on a cooperative game, Optim. Control Appl. Methods 32 (2) (2011) 153–176.
[22] E. Camponogara, D. Jia, B.H. Krogh, S. Talukdar, Distributed model predictive control, IEEE Control Syst. 22 (1) (2002) 44–52.
[23] J. Liu, D. Muñoz de la Peña, P.D. Christofides, Distributed model predictive control of nonlinear process systems, AIChE J. 55 (5) (2009) 1171–1184.
[24] P.J. Campo, M. Morari, Robust model predictive control, in: 1987 American Control Conference, IEEE, 1987, pp. 1021–1026.
[25] M.V. Kothare, V. Balakrishnan, M. Morari, Robust constrained model predictive control using linear matrix inequalities, Automatica 32 (10) (1996) 1361–1379.
[26] L. Magni, G. De Nicolao, R. Scattolini, F. Allgöwer, Robust model predictive control for nonlinear discrete-time systems, Int. J. Robust Nonlinear Control 13 (3–4) (2003) 229–246.
[27] N. Poursafar, H. Taghirad, M. Haeri, Model predictive control of non-linear discrete time systems: a linear matrix inequality approach, IET Control Theory Appl. 4 (10) (2010) 1922–1932.
[28] P. Ojaghi, N. Bigdeli, M. Rahmani, An LMI approach to robust model predictive control of nonlinear systems with state-dependent uncertainties, J. Process Control 47 (2016) 1–10.
[29] E. Aydiner, F.D. Brunner, W. Heemels, et al., Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes, in: 2015 European Control Conference (ECC), IEEE, 2015, pp. 1587–1593.
[30] F.D. Brunner, M. Heemels, F. Allgöwer, Robust self-triggered MPC for constrained linear systems: a tube-based approach, Automatica 72 (2016) 73–83.
[31] C. Liu, H. Li, J. Gao, D. Xu, Robust self-triggered min–max model predictive control for discrete-time nonlinear systems, Automatica 89 (2018) 333–339.
[32] A. Richards, J.P. How, Robust distributed model predictive control, Int. J. Control 80 (9) (2007) 1517–1531.
[33] Y. Kuwata, J.P. How, Cooperative distributed robust trajectory optimization using receding horizon MILP, IEEE Trans. Control Syst. Technol. 19 (2) (2011) 423–431.
[34] Y. Kuwata, A. Richards, T. Schouwenaars, J.P. How, Distributed robust receding horizon control for multivehicle guidance, IEEE Trans. Control Syst. Technol. 15 (4) (2007) 627–641.
[35] H. Li, Y. Shi, Robust distributed model predictive control of constrained continuous-time nonlinear systems: a robustness constraint approach, IEEE Trans. Autom. Control 59 (6) (2013) 1673–1678.
[36] H. Li, Y. Shi, Distributed model predictive control of constrained nonlinear systems with communication delays, Syst. Control Lett. 62 (10) (2013) 819–826.
[37] W. Al-Gherwi, H. Budman, A. Elkamel, A robust distributed model predictive control algorithm, J. Process Control 21 (8) (2011) 1127–1137.
[38] J. Nash, Non-cooperative games, Ann. Math. (1951) 286–295.
[39] A. Alessandri, M. Baglietto, G. Battistelli, On estimation error bounds for receding-horizon filters using quadratic boundedness, IEEE Trans. Autom. Control 49 (8) (2004) 1350–1355.
[40] S. Boyd, L. El Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, 1994.
[41] R. Vadigepalli, F.J. Doyle, A distributed state estimation and control algorithm for plantwide processes, IEEE Trans. Control Syst. Technol. 11 (1) (2003) 119–127.
[42] S. Roshany-Yamchi, M. Cychowski, R.R. Negenborn, B. De Schutter, K. Delaney, J. Connell, Kalman filter-based distributed predictive control of large-scale multi-rate systems: application to power networks, IEEE Trans. Control Syst. Technol. 21 (1) (2013) 27–39.
[43] M. Farina, G. Ferrari-Trecate, R. Scattolini, Moving-horizon partition-based state estimation of large-scale systems, Automatica 46 (5) (2010) 910–918.
[44] P. Ojaghi, M. Rahmani, LMI-based robust predictive load frequency control for power systems with communication delays, IEEE Trans. Power Syst. 32 (5) (2017) 4091–4100.