
A System Level Approach to Tube-based Model Predictive Control

Jerome Sieber1 , Samir Bennani2 , and Melanie N. Zeilinger1

arXiv:2103.02460v2 [eess.SY] 30 Apr 2021

Abstract— Robust tube-based model predictive control (MPC) methods address constraint satisfaction by leveraging an a priori determined tube controller in the prediction to tighten the constraints. This paper presents a system level tube-MPC (SLTMPC) method derived from the system level parameterization (SLP), which allows optimization over the tube controller online when solving the MPC problem, which can significantly reduce conservativeness. We derive the SLTMPC method by establishing an equivalence relation between a class of robust MPC methods and the SLP. Finally, we show that the SLTMPC formulation naturally arises from an extended SLP formulation and show its merits in a numerical example.

Index Terms— Robust control, optimal control, predictive control for linear systems.

I. INTRODUCTION

The availability of powerful computational hardware for embedded systems together with advances in optimization software has established model predictive control (MPC) as the principal control method for constrained dynamical systems. MPC relies on a sufficiently accurate system model to predict the system behavior over a finite horizon. In order to address constraint satisfaction in the presence of bounded uncertainties, robust MPC methods [1] typically optimize over a feedback policy and tighten constraints in the prediction problem. The two most common robust MPC methodologies are disturbance feedback MPC (df-MPC) [2], [3] and tube-based MPC [4], [5], [6], which mainly differ in their parameterization of the feedback policy. Disturbance feedback parametrizes the input in terms of the disturbance, while tube-based methods split the system dynamics into nominal and error dynamics and parametrize the input in terms of those two quantities. In general, df-MPC is computationally more demanding, yet less conservative than tube-based methods. In this paper, we present a tube-based MPC method, which lies at the intersection of these two extremes by offering a trade-off between computational complexity and conservativeness.

We derive the proposed method by leveraging the system level parameterization (SLP), which has recently received increased attention. It was introduced as part of the system level synthesis (SLS) framework [7], in particular in the context of distributed optimal control [8]. The SLP offers the advantage that convexity is preserved under any additional convex constraint imposed on the parameterization and enables optimization over closed-loop trajectories instead of controller gains [7]. We relate the SLP to df-MPC and exploit this relation in order to formulate a system level tube-MPC (SLTMPC) method.

Related Work: A tube-based MPC method reducing conservativeness by using multiple pre-computed tube controllers was introduced in [9]. In contrast, the proposed SLTMPC method directly optimizes over the controller gains online. At the intersection of SLP and MPC, the first SLP-based MPC formulation was introduced in [7] and then formalized as SLS-MPC in [10], where both additive disturbances and parametric model uncertainties were integrated in the robust MPC problem. A more conservative variant of this approach was already introduced in the context of the linear quadratic regulator (LQR) in [11]. In a distributed setting, an SLP-based MPC formulation was proposed in [12], which builds on the principles of distributed SLS presented in [8] and was later extended to distributed explicit SLS-MPC [13] and layered SLS-MPC [14].

Contributions: We consider linear time-invariant dynamical systems with additive uncertainties. In this setting, we first show the equivalence of df-MPC and the SLP, before using this relation to analyze the inherent tube structure present in the SLP. Based on this analysis, we propose an SLTMPC formulation, which allows optimization over the tube controller in the online optimization problem and thus reduces conservativeness compared to other tube-based methods. Additionally, we show that the SLTMPC can be derived from an extended version of the SLP and outline extensions to distributed and explicit SLTMPC formulations. Finally, we show the effectiveness of our formulation on a numerical example.

The remainder of the paper is organized as follows: Section II introduces the notation, the problem formulation, and relevant concepts for this paper. We present the equivalence relation between the SLP and df-MPC in Section III, before deriving the SLTMPC formulation in Section IV. Finally, Section V presents a numerical application of SLTMPC and Section VI concludes the paper.

II. PRELIMINARIES

A. Notation

In the context of matrices and vectors, the superscript r,c denotes the element indexed by the r-th row and c-th column. Additionally, :r and c: refer to all rows from 1 to r − 1 and all columns from c to the end, respectively. The notation S^i refers to the i-th Cartesian product of the set S, i.e. S^i = S × · · · × S = {(s_0, . . . , s_{i−1}) | s_j ∈ S ∀j = 0, . . . , i − 1}.

*This work was supported by the European Space Agency (ESA) under NPI 621-2018 and the Swiss Space Center (SSC).
¹ J. Sieber and M. N. Zeilinger are members of the Institute for Dynamic Systems and Control (IDSC), ETH Zurich, 8092 Zurich, Switzerland. {jsieber,mzeilinger}@ethz.ch
² S. Bennani is a member of ESA-ESTEC, Noordwijk 2201 AZ, The Netherlands. samir.bennani@esa.int
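The block-index notation above (superscripts r,c for block entries, :r and c: for block-row and block-column ranges) is used throughout for stacked trajectory matrices. As a minimal illustration, the following hypothetical helper (not part of the paper) maps these block indices to array slices:

```python
import numpy as np

def block(M, r, c, n, m):
    """Return the (r, c) block M^{r,c} of a matrix built from n x m blocks."""
    return M[r * n:(r + 1) * n, c * m:(c + 1) * m]

def rows_upto(M, r, n):
    """Return M^{:r,:}, i.e. block rows 0 .. r-1 (the paper's ':r' superscript)."""
    return M[:r * n, :]

# demonstration on a 3x3 grid of 2x2 blocks
M = np.arange(36).reshape(6, 6)
print(block(M, 1, 2, 2, 2))   # block row 1, block column 2 -> [[16 17], [22 23]]
print(rows_upto(M, 2, 2).shape)  # first two block rows -> (4, 6)
```

The helper names `block` and `rows_upto` are illustrative assumptions; only the indexing convention itself comes from the text.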
B. Problem Formulation

We consider linear time-invariant (LTI) dynamical systems with additive disturbances of the form

x_{k+1} = A x_k + B u_k + w_k,  (1)

with w_k ∈ W, where W is a compact set. Here, we assume time-invariant polytopic sets W = {w ∈ R^n | Sw ≤ s}, but the results can easily be generalized to other compact convex sets. The system (1) is subject to polytopic state and input constraints containing the origin in their interior

X = {x ∈ R^n | H_x x ≤ h_x},  U = {u ∈ R^m | H_u u ≤ h_u}.  (2)

This paper presents a system level tube-MPC formulation, i.e. a tube-based MPC formulation derived via a system level parameterization, which enables online optimization of the tube control law. To formulate this method, we first show the equivalence of the system level parameterization and disturbance feedback MPC, both of which are introduced in the following sections.

C. System Level Parameterization (SLP)

We consider the finite horizon version of the SLP, as proposed in [7]. The main idea of the SLP is to parameterize the controller synthesis by the closed-loop system behavior, which restates optimal control as an optimization over closed-loop trajectories and renders the synthesis problem convex under convex constraints on the control law, allowing, e.g., to impose a distributed computation structure [8]. We define x, u, and w as the concatenated states, inputs, and disturbances over the horizon N, respectively, and δ as the initial state x_0 concatenated with the disturbance sequence w: x = [x_0, x_1, . . . , x_N]^⊤, u = [u_0, u_1, . . . , u_N]^⊤, δ = [x_0, w]^⊤. The dynamics can then be compactly defined as trajectories over the horizon N,

x = ZAx + ZBu + δ,  (3)

where Z is the down-shift operator and ZA, ZB are the corresponding dynamic matrices

ZA = \begin{bmatrix} 0 & \dots & \dots & 0 \\ A & 0 & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \dots & A & 0 \end{bmatrix}, \quad ZB = \begin{bmatrix} 0 & \dots & \dots & 0 \\ B & 0 & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \dots & B & 0 \end{bmatrix}.

In order to formulate the closed-loop dynamics, we define an LTV state feedback controller u = Kx, with

K = \begin{bmatrix} K^{0,0} & 0 & \dots & 0 \\ K^{1,1} & K^{1,0} & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ K^{N,N} & K^{N,N-1} & \dots & K^{N,0} \end{bmatrix},

resulting in the closed-loop trajectory x = (ZA + ZBK)x + δ = (I − ZA − ZBK)^{−1} δ. The system response is then defined by the closed-loop map Φ : δ → (x, u) as

\begin{bmatrix} x \\ u \end{bmatrix} = \begin{bmatrix} (I − ZA − ZBK)^{−1} \\ K(I − ZA − ZBK)^{−1} \end{bmatrix} δ = \begin{bmatrix} Φ_x \\ Φ_u \end{bmatrix} δ = Φδ.  (4)

The maps Φ_x, Φ_u have a block-lower-triangular structure and completely define the behavior of the closed-loop system with feedback controller K. Using these maps as the parametrization of the system behavior allows us to recast the optimal control problem in terms of Φ_x, Φ_u instead of the state feedback gain K by exploiting the following theorem.

Theorem 1: (Theorem 2.1 in [7]) Over the horizon N, for the system dynamics (1) with block-lower-triangular state feedback law K defining the control action as u = Kx, the following are true:
1) the affine subspace defined by

\begin{bmatrix} I − ZA & −ZB \end{bmatrix} \begin{bmatrix} Φ_x \\ Φ_u \end{bmatrix} = I  (5)

parametrizes all possible system responses (4),
2) for any block-lower-triangular matrices Φ_x, Φ_u satisfying (5), the controller K = Φ_u Φ_x^{−1} achieves the desired system response.

D. Robust Model Predictive Control (MPC)

We define the optimal controller for system (1) subject to (2) via a robust MPC formulation [15], where we consider the quadratic cost \sum_{k=0}^{N−1} ‖x_k‖²_{Q_k} + ‖u_k‖²_{R_k}, with Q_k, R_k the state and input weights at time step k, respectively, and N the prediction horizon. For simplicity, we focus on quadratic costs, however the results can be extended to other cost functions fulfilling the standard MPC stability assumptions [15]. Using the compact system definition in (3), the robust MPC problem is then defined as

min_π  x^⊤ Q x + u^⊤ R u,  (6a)
s.t.  x = ZAx + ZBu + δ,  (6b)
      x^{:N} ∈ X^N, x^N ∈ X_f, u ∈ U^N, ∀w ∈ W^{N−1},  (6c)
      u = π(x, u), x^0 = x_k,  (6d)

where

Q = diag(Q_0, . . . , Q_{N−1}, P),  R = diag(R_0, . . . , R_{N−1}, 0),

with P a suitable terminal cost matrix, X_f a suitable terminal set, and π = [π^0, . . . , π^N]^⊤ a vector of parametrized feedback policies. Available robust MPC methods can be classified according to their policy parameterization and constraint handling approach. Two of the most common policies are the tube policy [6] and the disturbance feedback policy [2], defined as

π^tube(x, u) = K(x − z) + v,  (7)
π^df(x, u) = Mw + v,  (8)

where z = [z_0, . . . , z_N]^⊤, v = [v_0, . . . , v_N]^⊤, and

K = diag(K, . . . , K),  M = \begin{bmatrix} 0 & \dots & 0 \\ M^{1,0} & \dots & 0 \\ \vdots & \ddots & \vdots \\ M^{N,0} & \dots & M^{N,N−1} \end{bmatrix}.  (9)
The tube policy parameterization (7) is based on splitting the system dynamics (3) into nominal dynamics and error dynamics, i.e.

z = ZAz + ZBv,  (10)
x − z = e = (ZA + ZBK) e + δ.  (11)

This allows for recasting (6) in the nominal variables z, v instead of the system variables x, u, while imposing tightened constraints on the nominal variables. One way to perform the constraint tightening, which we will focus on in this paper, is via reachable sets for the error dynamics. In this paper, we will refer to this approach as tube-MPC, which was first introduced in [4]. For an overview of other tube-based methods or robust MPC in general, see e.g. [15].

III. A SYSTEM LEVEL APPROACH TO TUBE-BASED MODEL PREDICTIVE CONTROL

In this section, we present a new perspective on disturbance feedback MPC (df-MPC) by utilizing the SLP. In particular, we show the equivalence of SLP and df-MPC, which includes the tube policy (7) as a subclass [16], and discuss the implications that arise.

Consider the disturbance feedback policy (8). We define the convex set of admissible (M, v) as

Π^df_N(x_0) = { (M, v) | M structured as in (9), x^{:N} ∈ X^N, u^df ∈ U^N, x^N ∈ X_f, ∀w ∈ W^{N−1} },

and the set of initial states x_0 for which an admissible control policy of the form (8) exists is given by X^df_N = {x_0 | Π^df_N(x_0) ≠ ∅}. Similarly, we define the convex set of admissible (Φ_x, Φ_u) as

Π^SLP_N(x_0) = { (Φ_x, Φ_u) | Φ_x, Φ_u satisfy (5) and are block-lower-triangular, Φ^{:N,:}_x δ ∈ X^N, Φ_u δ ∈ U^N, Φ^{N,:}_x δ ∈ X_f, ∀w ∈ W^{N−1} },

and the set of initial states x_0 for which an admissible control policy u^SLP = Φ_u Φ_x^{−1} x exists, as X^SLP_N = {x_0 | Π^SLP_N(x_0) ≠ ∅}. To show the equivalence between MPC and SLP trajectories, we will rely on Lemma 1 and we formalize the equivalence in Theorem 2.

Lemma 1: Consider the dynamics (3) as a function of the initial state x_0

x = \mathcal{A} x_0 + \mathcal{B} u + \mathcal{E} w,  (12)

with \mathcal{A}, \mathcal{B}, and \mathcal{E} defined as in Appendix I. Then, the following relation holds for any u, any w, and any admissible (Φ_x, Φ_u):

Φ_x = [\mathcal{A} \; \mathcal{E}] + \mathcal{B} Φ_u.  (13)

Proof: See Appendix I.

Theorem 2: Given any initial state x_0 ∈ X^df_N, (M, v) ∈ Π^df_N(x_0), and some disturbance sequence w ∈ W^{N−1}, there exist (Φ_x, Φ_u) ∈ Π^SLP_N(x_0) such that x^df = x^SLP and u^df = u^SLP. The same statement holds in the opposite direction for any x_0 ∈ X^SLP_N, (Φ_x, Φ_u) ∈ Π^SLP_N(x_0), and disturbance w ∈ W^{N−1}, therefore X^df_N = X^SLP_N.

Proof: We prove X^df_N = X^SLP_N by proving the two set inclusions X^df_N ⊆ X^SLP_N and X^SLP_N ⊆ X^df_N separately. The detailed steps of the two parts are stated in Table I.

a) X^df_N ⊆ X^SLP_N: Given x_0 ∈ X^df_N, an admissible (M, v) ∈ Π^df_N(x_0) exists by definition. Using (12), we write the input and state trajectories for a given w ∈ W^{N−1} as u^df = Mw + v and x^df = \mathcal{A} x_0 + \mathcal{B} u^df + \mathcal{E} w, respectively. In order to show the existence of a corresponding (Φ_x, Φ_u), we choose (Φ_x, Φ_u) according to (14) in Table I and Φ_v such that Φ_v x_0 = v holds. Then, the derivations in Table I show that the input and state trajectories of df-MPC and SLP are equivalent. Therefore, x_0 ∈ X^df_N ⟹ x_0 ∈ X^SLP_N.

b) X^SLP_N ⊆ X^df_N: Given x_0 ∈ X^SLP_N, an admissible (Φ_x, Φ_u) exists by definition. Using (4) and a given w ∈ W^{N−1} we write the state and input trajectories as x^SLP = Φ_x δ and u^SLP = Φ_u δ, respectively. If (M, v) is chosen according to (15) in Table I, then these choices are admissible and the input and state trajectories for both df-MPC and SLP are equivalent. Therefore, x_0 ∈ X^SLP_N ⟹ x_0 ∈ X^df_N. ∎

TABLE I
DERIVATION OF THE EQUIVALENCE BETWEEN SLP AND DF-MPC.

Case X^df_N ⊆ X^SLP_N: choose

Φ_x = [\mathcal{A} \; \mathcal{E}] + \mathcal{B} Φ_u,  Φ_u = [Φ_v \; M],  (14)

with Φ_v x_0 = v. Then

u^SLP = Φ_u δ = [Φ_v \; M] δ = Φ_v x_0 + Mw = v + Mw = u^df,
x^SLP = Φ_x δ = [\mathcal{A} \; \mathcal{E}] δ + \mathcal{B} Φ_u δ = \mathcal{A} x_0 + \mathcal{E} w + \mathcal{B} u^df = x^df.

Case X^SLP_N ⊆ X^df_N: choose

v = Φ^{:,0}_u x_0,  M = Φ^{:,1:}_u.  (15)

Then

u^df = Mw + v = Φ^{:,1:}_u w + Φ^{:,0}_u x_0 = Φ_u δ = u^SLP,
x^df = \mathcal{A} x_0 + \mathcal{E} w + \mathcal{B}(Mw + v) = [\mathcal{A} \; \mathcal{E}] δ + \mathcal{B} Φ_u δ = Φ_x δ = x^SLP,

where the last step uses (13).

Remark 1: Theorem 2 is also valid for linear time-varying (LTV) systems without any modifications to the proof, by adapting the definitions of ZA and ZB accordingly.

Leveraging Theorem 2, the following insights can be derived: Compared with df-MPC, the SLP formulation offers a direct handle on the state trajectory, which allows for directly imposing constraints on the state and simplifies the numerical implementation. Additionally, computational efficiency is increased by recently developed solvers for SLP problems based on dynamic programming [17] and ready-to-use computational frameworks like SLSpy [18]. The SLP formulation can thereby address one of the biggest limitations of df-MPC, i.e. the computational effort required. SLP problems also facilitate the computation of a distributed controller due to the parameterization by system responses Φ_x, Φ_u, on which a distributed structure can be imposed through their support [8]. This offers a direct method to also distribute df-MPC problems. We will leverage Theorem 2 to derive an improved tube-MPC formulation as the main result of this paper in the following section.
. The same statement holds in the opposite of this paper in the following section.
IV. SYSTEM LEVEL TUBE-MPC (SLTMPC)

We first analyze the effect of imposing additional structure on the SLP, before deriving the SLTMPC formulation. Finally, we show that the SLTMPC formulation emerges naturally from an extended SLP formulation.

A. Diagonally Restricted System Responses

Following the analysis in [16], it can be shown that disturbance feedback MPC is equivalent to a time-varying version of tube-MPC. The same insight can be derived from the SLP by defining the nominal state and input as z := Φ^{:,0}_x x_0 and v := Φ^{:,0}_u x_0, respectively, and solving x = Φ_x δ for the disturbance trajectory w:

x − z = Φ^{:,1:}_x w,
\begin{bmatrix} 0 \\ x^{1:} − z^{1:} \end{bmatrix} = \begin{bmatrix} 0 \\ Φ^{1:,1:}_x w \end{bmatrix},
⟹ w = (Φ^{1:,1:}_x)^{−1} (x^{1:} − z^{1:}).

Plugging this definition of w into u = Φ_u δ yields

u = v + Φ^{:,1:}_u (Φ^{1:,1:}_x)^{−1} (x^{1:} − z^{1:}) = v + K^{:,1:} (x^{1:} − z^{1:}),

which reveals the inherent tube structure of the SLP and consequently df-MPC, with the time-varying tube controller K.

The main drawback of this formulation is its computational complexity, since the number of optimization variables grows quadratically in the horizon length N, which motivates the derivation of a computationally more efficient df-MPC formulation. Inspired by the infinite-horizon SLP formulation [7], we restrict Φ_x and Φ_u to have constant diagonal entries¹, which corresponds to restricting K to also have constant diagonal entries and therefore results in a more restrictive controller parameterization. This results in a tube formulation where the number of optimization variables is linear in the horizon length, alleviating some of the computational burden:

Φ̄_x = \begin{bmatrix} Φ^0_x & \dots & \dots & 0 \\ Φ^1_x & Φ^0_x & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ Φ^N_x & \dots & Φ^1_x & Φ^0_x \end{bmatrix},  Φ̄_u = \begin{bmatrix} Φ^0_u & \dots & \dots & 0 \\ Φ^1_u & Φ^0_u & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ Φ^N_u & \dots & Φ^1_u & Φ^0_u \end{bmatrix}.  (16)

Enforcing the structure (16) also alters the equivalence relation established by Theorem 2. If we interpret the imposed structure in a tube-MPC framework, this corresponds to both the nominal and error system being driven by the controller K̄, since Φ̄_x, Φ̄_u not only define the tube controller but also the behavior of the nominal system, i.e. z̄ = Φ̄^{:,0}_x x_0 and v̄ = Φ̄^{:,0}_u x_0. To illustrate this point, we exemplify it on the LTI system (1) subject to a constant tube controller K̄. This results in the input u_i = K̄(x_i − z_i) + v_i and thus in a constant block-diagonal matrix K̄ = diag(K̄, . . . , K̄) over the horizon N. The associated system responses are computed using (4), resulting in

Φ̄^K̄_x = \begin{bmatrix} I & \dots & \dots & 0 \\ A_{K̄} & I & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ A^N_{K̄} & \dots & A_{K̄} & I \end{bmatrix},  Φ̄^K̄_u = \begin{bmatrix} K̄ & \dots & \dots & 0 \\ K̄ A_{K̄} & K̄ & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ K̄ A^N_{K̄} & \dots & K̄ A_{K̄} & K̄ \end{bmatrix},

where A_{K̄} = A + B K̄ and A, B as in (1). Given these system responses, the states and inputs over the horizon N are computed as x = Φ̄^K̄_x δ and u = Φ̄^K̄_u δ, respectively, or in convolutional form for each component of the sequence

x_i = Φ^{K̄,i}_x x_0 + \sum_{j=0}^{i−1} Φ^{K̄,i−1−j}_x w_j = A^i_{K̄} x_0 + \sum_{j=0}^{i−1} A^{i−1−j}_{K̄} w_j,
u_i = Φ^{K̄,i}_u x_0 + \sum_{j=0}^{i−1} Φ^{K̄,i−1−j}_u w_j = K̄ A^i_{K̄} x_0 + K̄ \sum_{j=0}^{i−1} A^{i−1−j}_{K̄} w_j.

The nominal states and nominal inputs are then defined by the relations

z_i = Φ^{K̄,i}_x x_0 = A^i_{K̄} x_0,  v_i = Φ^{K̄,i}_u x_0 = K̄ A^i_{K̄} x_0.  (17)

Therefore,

x_i = z_i + \sum_{j=0}^{i−1} A^{i−1−j}_{K̄} w_j,  u_i = v_i + K̄ (x_i − z_i),  (18)

where we use the fact that \sum_{j=0}^{i−1} A^{i−1−j}_{K̄} w_j = x_i − z_i to rewrite the input relation. As in tube-MPC [4, Section 3.2], the uncertain state is described via the nominal state and the evolution of the error system \sum_{j=0}^{i−1} A^{i−1−j}_{K̄} w_j under the tube controller K̄. In contrast to the tube-MPC formulation, however, z_i, v_i are not free variables here, but are governed by the equations (17), i.e. z_{i+1} = (A + B K̄) z_i, which is an LTI system with feedback controller K̄. Therefore, both the error system and the nominal system are driven by the same controller, which is generally suboptimal.

The following section derives an extended SLP that allows for optimizing the nominal system dynamics in the corresponding tube-MPC interpretation independent of the error dynamics. This is achieved by adding the auxiliary decision variables φ^i_z, φ^i_v to the nominal variables z_i and v_i, i.e. z̄_i = Φ^i_x x_0 + φ^i_z, v̄_i = Φ^i_u x_0 + φ^i_v. In vector notation, this results in

z̄ = Φ̄^{:,0}_x x_0 + φ_z,  v̄ = Φ̄^{:,0}_u x_0 + φ_v.  (19)

B. Affine System Level Parameterization

We extend the SLP in order to capture formulations of the form (19). The following derivations are stated for general system responses Φ_x, Φ_u but naturally extend to the restricted system responses Φ̄_x, Φ̄_u without any modifications. We expand the state x̃ = [1 \; x]^⊤, the disturbance δ̃ = [1 \; δ]^⊤, and appropriately modify the dynamics (3) to obtain

x̃ = Z \begin{bmatrix} 0 & 0 \\ 0 & A \end{bmatrix} x̃ + Z \begin{bmatrix} 0 \\ B \end{bmatrix} u + δ̃ = Z Ã x̃ + Z B̃ u + δ̃.  (20)

¹ In the infinite-horizon formulation, the diagonals are restricted in order to comply with the frequency-domain interpretation of the closed-loop system. Commonly, a finite impulse response (FIR) constraint is additionally imposed in this setting, requiring the closed-loop dynamics to approach an equilibrium after the FIR horizon, which we do not need here. For more details see [7].
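The diagonally restricted responses of Section IV-A can be constructed and checked directly. The sketch below uses the A, B matrices of the numerical example in Section V, while the constant tube controller K̄ is an assumed illustrative gain (the paper computes its tube controller differently); it verifies that the block-Toeplitz pair Φ̄^K̄_x, Φ̄^K̄_u satisfies (5) and that the nominal sequence obeys (17):

```python
import numpy as np

# A, B from the example in Section V; Kbar is an assumed (stabilizing) gain.
A = np.array([[1.0, 0.15], [0.0, 1.0]])
B = np.array([[0.5], [0.5]])
Kbar = np.array([[-0.6, -0.8]])
AK = A + B @ Kbar                      # A_Kbar = A + B Kbar
n, m, N = 2, 1, 4
nx, nu = n * (N + 1), m * (N + 1)

# Block-Toeplitz responses: (Phi_x)^{i,j} = A_Kbar^{i-j}, (Phi_u)^{i,j} = Kbar A_Kbar^{i-j}.
Phi_x = np.zeros((nx, nx))
Phi_u = np.zeros((nu, nx))
for i in range(N + 1):
    for j in range(i + 1):
        P = np.linalg.matrix_power(AK, i - j)
        Phi_x[i*n:(i+1)*n, j*n:(j+1)*n] = P
        Phi_u[i*m:(i+1)*m, j*n:(j+1)*n] = Kbar @ P

# These restricted responses still satisfy the SLP subspace (5).
ZA = np.zeros((nx, nx))
ZB = np.zeros((nx, nu))
for k in range(1, N + 1):
    ZA[k*n:(k+1)*n, (k-1)*n:k*n] = A
    ZB[k*n:(k+1)*n, (k-1)*m:k*m] = B
assert np.allclose((np.eye(nx) - ZA) @ Phi_x - ZB @ Phi_u, np.eye(nx))

# Nominal sequence (17): z_i = A_Kbar^i x0, i.e. z_{i+1} = (A + B Kbar) z_i.
x0 = np.array([-0.9, 0.0])
z = [Phi_x[i*n:(i+1)*n, :n] @ x0 for i in range(N + 1)]
for i in range(N):
    assert np.allclose(z[i + 1], AK @ z[i])
print("restricted system responses verified")
```

This makes the suboptimality discussed above concrete: with the restriction (16) alone, the same K̄ appears in both the nominal recursion and the error response, which is what the affine extension of Section IV-B relaxes.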
Then, the corresponding affine system responses Φ̃_x, Φ̃_u, assuming an affine state feedback control law u = K̃ x̃ = Kx + k, are defined as

Φ̃_x = \begin{bmatrix} 1 & 0 \\ φ_z & Φ_e \end{bmatrix},  Φ̃_u = \begin{bmatrix} φ_v & Φ_k \end{bmatrix},  (21)

where φ_? and Φ_? denote vector-valued and matrix-valued quantities, respectively. Given the particular structure of the affine system responses, we can extend Theorem 1 to the more general affine case.

Theorem 3: Consider the system dynamics (20) over a horizon N with block-lower-triangular state feedback law K̃ defining the control action as u = K̃ x̃. The following statements are true:
1) the affine subspace defined by

\begin{bmatrix} I − Z Ã & −Z B̃ \end{bmatrix} \begin{bmatrix} Φ̃_x \\ Φ̃_u \end{bmatrix} = I  (22)

parametrizes all possible affine system responses Φ̃_x, Φ̃_u with structure (21),
2) for any block-lower-triangular matrices Φ̃_x, Φ̃_u satisfying (22), the controller K̃ = Φ̃_u Φ̃_x^{−1} achieves the desired affine system response.

Proof:
a) Proof of 1): Follows trivially from the proof of Theorem 2.1 in [7].
b) Proof of 2): As in the proof of Theorem 2.1 in [7], Φ̃_x^{−1} exists since Φ̃_x admits a finite Neumann series². In order to show x̃ = Φ̃_x δ̃, the following relation needs to hold

Φ̃_x = (I − Z Ã − Z B̃ Φ̃_u Φ̃_x^{−1})^{−1}.  (23)

Plugging in the affine system responses and after some algebraic manipulations this relation becomes

Φ̃_x = \begin{bmatrix} 1 & 0 \\ −ZB (φ_v − Φ_k Φ_e^{−1} φ_z) & I − ZA − ZB Φ_k Φ_e^{−1} \end{bmatrix}^{−1},

which holds if the following two equations hold

Φ_e = (I − ZA − ZB Φ_k Φ_e^{−1})^{−1},  (24)
φ_z = Φ_e ZB (φ_v − Φ_k Φ_e^{−1} φ_z).  (25)

Equation (24) is fulfilled if Φ_e and Φ_k satisfy

\begin{bmatrix} I − ZA & −ZB \end{bmatrix} \begin{bmatrix} Φ_e \\ Φ_k \end{bmatrix} = I  (26)

as in [7], then (25) can be rewritten as

Φ_e^{−1} φ_z = ZB (φ_v − Φ_k Φ_e^{−1} φ_z),
(I + ZB Φ_k) Φ_e^{−1} φ_z = ZB φ_v,
(I − ZA) Φ_e Φ_e^{−1} φ_z = ZB φ_v,
\begin{bmatrix} I − ZA & −ZB \end{bmatrix} \begin{bmatrix} φ_z \\ φ_v \end{bmatrix} = 0,  (27)

where we use (26) to eliminate Φ_e^{−1}. The affine half-space (22) combines both (26) and (27), therefore (23) holds. Finally, u = Φ̃_u Φ̃_x^{−1} x̃ = Φ̃_u Φ̃_x^{−1} Φ̃_x δ̃ = Φ̃_u δ̃ follows trivially. ∎

² The inverse of a matrix (I − A)^{−1} can be expressed as an infinite Neumann series, iff the matrix (I − A) is stable. However, if A is nilpotent the stability requirement is omitted and the inverse is defined by the truncated Neumann series [19].

Remark 2: Theorem 3 is also valid for LTV systems without any modifications to the proof, by adapting the definitions of ZA and ZB accordingly.

The affine subspace (22) can also be separated and interpreted as the governing equation of two separate dynamical systems using the following corollary.

Corollary 1: Given an admissible (Φ̃_x, Φ̃_u) with structure (21), the affine half-spaces (26) and (27) define two decoupled dynamical systems, which evolve according to

e = ZAe + ZBu_e + δ,  (28)
φ_z = ZAφ_z + ZBu_z,  (29)

where e = x − φ_z. The inputs of the two systems are u_e = Φ_k Φ_e^{−1} e = Φ_k δ and u_z = φ_v, respectively.

Proof: See Appendix II.

Corollary 1 provides the separation into error and nominal dynamics commonly encountered in tube-based robust MPC methods, even though the nominal and error state definitions are not exactly the same. Here, φ_z and Φ^{:,0}_x x_0 constitute the nominal state, of which the latter part is contributed by the tube controller, while the nominal state of the standard tube formulation in (10) is independent of the tube controller. The same applies for the nominal input.

Note that (27) defines a dynamical system similar to (3), but with δ = 0 and therefore zero initial state. However, the dynamics (27) can similarly be rewritten to allow non-trivial initial states by replacing φ_z in (21) with φ_z − (I − ZA)^{−1} δ_z, where δ_z = [φ^0_z \; 0]^⊤. All computations in the proof hold for any choice of φ_z, thus (27) becomes

\begin{bmatrix} I − ZA & −ZB \end{bmatrix} \begin{bmatrix} φ_z \\ φ_v \end{bmatrix} = δ_z.  (30)

At the same time, the error state definition in (28) is changed to e = x − φ_z + (I − ZA)^{−1} δ_z, which removes the contribution of the initial nominal state φ^0_z from the error dynamics.

We use Theorem 3 to formulate the proposed SLTMPC problem, using the diagonally restricted system responses Φ̄_e and Φ̄_k:

min_{Φ̃_x, Φ̃_u}  ‖ \begin{bmatrix} Q^{1/2} & 0 \\ 0 & R^{1/2} \end{bmatrix} \begin{bmatrix} φ_z + Φ̄^{:,0}_e x_0 \\ φ_v + Φ̄^{:,0}_k x_0 \end{bmatrix} ‖²_2  (31a)
s.t.  \begin{bmatrix} I − Z Ã & −Z B̃ \end{bmatrix} \begin{bmatrix} Φ̃_x \\ Φ̃_u \end{bmatrix} = I,  (31b)
      φ^{:N}_z + Φ̄^{:N,:}_e δ ∈ X^N, ∀w ∈ W^{N−1},  (31c)
      φ_v + Φ̄_k δ ∈ U^N, ∀w ∈ W^{N−1},  (31d)
      φ^N_z + Φ̄^{N,:}_e δ ∈ X_f, ∀w ∈ W^{N−1},  (31e)
      δ^0 = x_k,  (31f)

where Q, R, and X_f are defined as in (6). The input applied to the system is then computed as u_k = φ^0_v + Φ^{0,0}_k δ^0 = φ^0_v + Φ^{0,0}_k x_k. In order to render the constraints (31c)-(31e)
amenable for optimization, we rely on a standard technique from df-MPC [3], which removes the explicit dependency on the disturbance trajectory by employing Lagrangian dual variables. This corresponds to an implicit representation of the error dynamics' reachable sets commonly used for constraint tightening in tube-MPC.

TABLE II
PROBLEM SIZE IN TERMS OF THE NUMBER OF VARIABLES AND CONSTRAINTS FOR DF-MPC, SLTMPC, AND TUBE-MPC.

             | No. variables               | No. constraints
df-MPC       | (N + 1)((N/2)n + 1)(n + m)  | O(N(N + 1)n²)
SLTMPC       | 2(N + 1)(n + 1)(n + m)      | O(2(N + 1)n²)
tube-MPC     | (N + 1)(n + m)              | O((N + 1)n²)

Remark 3: Under the assumption that the sets X, U, and W are polytopic, the constraint tightening approach using dual variables [3] is equivalent to the standard constraint tightening based on reachable sets, usually employed in tube-MPC. This can be shown using results from [20] on support functions, Pontryagin differences, and their relation in the context of polytopic sets. The main difference is that the constraint tightening based on reachable sets is generally performed offline, while the formulation using dual variables is embedded in the online optimization.

Remark 4: The cost (31a) is equivalent to a standard quadratic cost on the nominal variables (19), which renders (31a) identical to the quadratic form z̄^⊤ Q z̄ + v̄^⊤ R v̄. Other cost functions can be similarly considered, e.g. not only including the nominal states and inputs but also the effect of the tube controller K.

Remark 5: Additional uncertainty in the system matrices A, B can be incorporated in (31) using an adapted version of the robust³ SLP formulation [7, Section 2.3]. The robust SLP formulation was already used in [10] to derive a tube-based MPC method, which can handle both additive and system uncertainty. The affine extension presented in this paper can be similarly adapted to formulate an SLTMPC method, which can also handle system uncertainties.

³ Please note that the definition of robustness in the context of the SLP does not directly correspond to the robustness definition commonly used in MPC. For more details on the robust SLP formulation see [7, Section 2.3].

Using Theorem 2 we can relate df-MPC, tube-MPC, and SLTMPC to each other. While all three methods employ nominal states and nominal inputs, they differ in the definition of the tube controller. Df-MPC does not restrict the tube controller and treats it as a decision variable. In contrast, tube-MPC fixes the tube controller a priori and does not optimize over it. SLTMPC positions itself between these two extremes: it optimizes over the tube controller online, while keeping it constant over the prediction horizon. As a result, SLTMPC improves performance over standard tube-MPC by offering more flexibility, which renders it computationally more expensive than tube-MPC, yet cheaper than df-MPC. The problem sizes of all three methods are stated in Table II.

C. Extensions of SLTMPC

Finally, we outline two extensions of the SLTMPC method to improve scalability and reduce computation time via a distributed and an explicit formulation.

a) Distributed SLTMPC: Using ideas from distributed control allows us to formulate a distributed version of (31) similar to the distributed SLP formulation proposed in [7]. Since any structure on Φ̃_x and Φ̃_u can be enforced directly in the optimization problem (31) [7], this provides a straightforward way to formulate a distributed SLTMPC scheme. Using the concept of sparsity invariance [21], any distributed structure on K̃ can be reformulated as structural conditions on Φ̃_x and Φ̃_u. This in turn enables us to rewrite (31) with the additional constraints

Φ̃_x ∈ K_x,  Φ̃_u ∈ K_u,

where K_x and K_u are sparsity patterns.

b) Explicit SLTMPC: The computational complexity associated with optimization problem (31) can be significant, especially for long horizons and large state spaces. This motivates an explicit formulation of the SLTMPC method, which allows pre-computation of the control law and the corresponding set of initial states. One way to formulate such an explicit scheme is to solve the Karush-Kuhn-Tucker conditions as a function of the initial state and thus obtain a control policy for each partition of the set of initial states [22]. In the context of SLS-MPC, an adapted version of this method was proposed in [13]. Another option is to approximate the explicit solution by defining the sets of initial states manually and solving the SLTMPC problem for a whole set of initial states, similar to the method presented in [23].

V. NUMERICAL RESULTS

In the following, we show the benefits of the proposed SLTMPC method for an example by comparing it against df-MPC and tube-MPC. The example is implemented in Python using CVXPY [24] and is solved using MOSEK [25]. The example was run on a machine equipped with an Intel i9 (4.3 GHz) CPU and 32 GB of RAM.

We consider the uncertain LTI system (1) with discrete-time dynamic matrices

A = \begin{bmatrix} 1 & 0.15 \\ 0 & 1 \end{bmatrix},  B = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix},

subject to the polytopic constraints

\begin{bmatrix} −1.5 \\ −1 \end{bmatrix} ≤ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} ≤ \begin{bmatrix} 0.5 \\ 1.5 \end{bmatrix},  −1 ≤ u ≤ 1,  \begin{bmatrix} −θ \\ −0.1 \end{bmatrix} ≤ \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} ≤ \begin{bmatrix} θ \\ 0.1 \end{bmatrix},

with state cost Q = I, input cost R = 10, horizon N = 10, and parameter θ. As the nominal terminal constraint we choose the origin, i.e. z_N = [0, 0]^⊤. We compare the region of attraction (RoA) and performance of SLTMPC (31) to those of df-MPC and tube-MPC, where the constant tube controller K is computed such that it minimizes the constraint tightening [26, Section 7.2]. Figure 1 (A, B, C) shows the RoAs for three different noise levels, i.e. θ = {0.05, 0.1, 0.12}. SLTMPC achieves a RoA which is considerably larger than the RoA of tube-MPC and comparable
Fig. 1. RoA of SLTMPC and tube-MPC for θ = {0.05, 0.1, 0.12} (A, B, C) and approximate RoA coverage with respect to the state constraints in percent, as a function of the parameter θ (D).

State Constraints
to the one of df-MPC. Figure 1 (D) further highlights this 1
Terminal Nominal State
by approximating the coverage of the RoA, i.e. the area Nominal States
x2

of the state constraints covered by the RoA in percent. 0


RoA coverage of SLTMPC is consistently larger than for
tube-MPC and decreases more slowly as the disturbance df-MPC
−1
parameter θ is increased. This behavior is comparable to
that of df-MPC, although SLTMPC achieves lower RoA 1
coverage. All three methods become infeasible and thus
achieve a coverage of 0% for θtube = 0.13, θSLT M P C =
x2

0
0.15, and θdf = 0.16, respectively. Figure 2 shows the open-
loop nominal trajectory and trajectories for 10’000 randomly SLTMPC
sampled noise realizations for all three methods. Due to the −1
optimized tube controller and less conservative constraint
tightening, df-MPC and SLTMPC compute trajectories that 1

approach the constraints more closely than tube-MPC. Ta-


x2

ble III states the computation times and costs of the methods 0
for the noisy trajectories shown in Figure 2, highlighting the
trade-off in computation time and performance offered by tube-MPC
−1
SLTMPC.
−1.50 −1.25 −1.00 −0.75 −0.50 −0.25 0.00 0.25 0.50

VI. CONCLUSIONS x1

This paper has proposed a tube-MPC method for lin- Fig. 2. Nominal state trajectory and 10’000 noise realizations from initial
ear time-varying systems with additive noise based on the state x0 = [−0.9, 0.0]> and parameter θ = 0.05 for df-MPC, SLTMPC,
and tube-MPC.
system level parameterization (SLP). The formulation was
derived by first establishing the equivalence between dis- was extended in order to formulate the proposed system
turbance feedback MPC (df-MPC) and the SLP, offering level tube-MPC (SLTMPC) method. Finally, we showed the
a new perspective on a subclass of robust MPC methods effectiveness of the proposed method by comparing it against
from the angle of SLP. Subsequently, the standard SLP df-MPC and tube-MPC on a numerical example.
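The RoA coverage metric of Figure 1 (D) admits a simple grid-based approximation: sample the state constraint box on a grid, count the initial states from which the MPC optimization is feasible, and report the feasible fraction in percent. The sketch below illustrates this idea under our own assumptions; `is_feasible` is a hypothetical user-supplied oracle (e.g., wrapping a CVXPY problem that is solved from each initial state), not a function from the paper.

```python
import numpy as np

def roa_coverage(is_feasible, x1_bounds, x2_bounds, n_grid=50):
    """Approximate RoA coverage: the percentage of grid points over the
    state constraint box from which the MPC problem is feasible."""
    x1 = np.linspace(*x1_bounds, n_grid)
    x2 = np.linspace(*x2_bounds, n_grid)
    # Query the feasibility oracle at every grid point.
    n_feasible = sum(bool(is_feasible(np.array([a, b]))) for a in x1 for b in x2)
    return 100.0 * n_feasible / (n_grid * n_grid)

# Toy oracle for illustration: treat the left half of the box as "feasible".
cov = roa_coverage(lambda x: x[0] <= 0.0, (-1.5, 0.5), (-1.5, 1.5), n_grid=40)
```

The accuracy of the estimate is limited by the grid resolution `n_grid`; a finer grid trades computation time for a tighter approximation of the coverage percentage.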
TABLE III
CLOSED-LOOP COSTS AND COMPUTATION TIMES FOR 10'000 NOISY TRAJECTORIES STARTING IN x_0 = [−0.9, 0.0]^⊤.

             |       Cost [-]         | Computation Time [ms]
             | Mean  | Std. Deviation | Mean  | Std. Deviation
    df-MPC   | 24.61 | 1.76           | 53.21 | 6.66
    SLTMPC   | 26.38 | 1.87           | 33.49 | 6.35
    tube-MPC | 30.44 | 1.98           |  6.90 | 1.43

ACKNOWLEDGMENT

We would like to thank Carlo Alberto Pascucci from Embotech for the insightful discussions on the topic of this work.

APPENDIX I
PROOF OF LEMMA 1

Proof: The matrices in (12) are defined as

\mathcal{A} = \begin{bmatrix} I \\ A \\ A^2 \\ \vdots \\ A^N \end{bmatrix}, \qquad
\mathcal{E} = \begin{bmatrix} 0 & \cdots & \cdots & 0 \\ I & 0 & \ddots & \vdots \\ A & I & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ A^{N-1} & \cdots & A & I \end{bmatrix}, \tag{32}

and \mathcal{B} = [\mathcal{A} \;\; \mathcal{E}] ZB. These quantities can then be used to rewrite (5) as

\Phi_x = (I - ZA)^{-1} + (I - ZA)^{-1} ZB\Phi_u
\overset{(i)}{=} [\mathcal{A} \;\; \mathcal{E}] + [\mathcal{A} \;\; \mathcal{E}] ZB\Phi_u
= [\mathcal{A} \;\; \mathcal{E}] + \mathcal{B}\Phi_u, \tag{33}

where relation (i) follows from the fact that (I − ZA)^{−1} can be represented as a finite Neumann series since ZA is nilpotent, i.e., (ZA)^{N+1} = 0, which leads to

(I - ZA)^{-1} = \sum_{k=0}^{\infty} (ZA)^k = \sum_{k=0}^{N} (ZA)^k = [\mathcal{A} \;\; \mathcal{E}].

Then, (33) proves Lemma 1.

APPENDIX II
PROOF OF COROLLARY 1

Proof: Given an admissible (Φ̃_x, Φ̃_u) with structure (21), the affine state trajectory is

\begin{bmatrix} 1 \\ x \end{bmatrix} = \big( Z\tilde{A} + Z\tilde{B}\,\tilde{\Phi}_u \tilde{\Phi}_x^{-1} \big) \begin{bmatrix} 1 \\ x \end{bmatrix} + \tilde{\delta}, \tag{34}

where

\tilde{\Phi}_u \tilde{\Phi}_x^{-1} = \begin{bmatrix} \phi_v & \Phi_K \end{bmatrix} \begin{bmatrix} 1 & 0 \\ -\Phi_e^{-1}\phi_z & \Phi_e^{-1} \end{bmatrix} = \begin{bmatrix} \phi_v - \Phi_K \Phi_e^{-1} \phi_z & \Phi_K \Phi_e^{-1} \end{bmatrix}, \tag{35}

then plugging (35) into (34) yields

x = \big( ZA + ZB\Phi_K \Phi_e^{-1} \big) x + ZB\phi_v + \delta - ZB\Phi_K \Phi_e^{-1} \phi_z. \tag{36}

Adding −(I − ZA)φ_z on both sides of (36) results in

\big( I - ZA - ZB\Phi_K \Phi_e^{-1} \big) (x - \phi_z) = -(I - ZA)\phi_z + ZB\phi_v + \delta. \tag{37}

Using (27), we conclude that −(I − ZA)φ_z + ZBφ_v = 0; hence (37) becomes (I − ZA − ZBΦ_K Φ_e^{−1}) e = δ, and thus

e = \big( I - ZA - ZB\Phi_K \Phi_e^{-1} \big)^{-1} \delta = \Phi_e\, \delta, \tag{38}

which describes the closed-loop behavior of a system with state e = x − φ_z under the control law u_e = Φ_K Φ_e^{−1} e. The dynamics (29) are directly given by (27). Therefore, (28) and (29) define decoupled dynamical systems, with the inputs u_e = Φ_K δ and u_z = φ_v.

REFERENCES

[1] A. Bemporad and M. Morari, "Robust model predictive control: A survey," in Robustness in identification and control. Springer, 1999, pp. 207–226.
[2] J. Löfberg, "Approximations of closed-loop minimax MPC," in Proc. 42nd IEEE Conf. Decis. Control, vol. 2, 2003, pp. 1438–1442.
[3] P. J. Goulart, E. C. Kerrigan, and J. M. Maciejowski, "Optimization over state feedback policies for robust control with constraints," Automatica, vol. 42, no. 4, pp. 523–533, 2006.
[4] L. Chisci, J. A. Rossiter, and G. Zappa, "Systems with persistent disturbances: predictive control with restricted constraints," Automatica, vol. 37, no. 7, pp. 1019–1028, 2001.
[5] W. Langson, I. Chryssochoos, S. Raković, and D. Q. Mayne, "Robust model predictive control using tubes," Automatica, vol. 40, no. 1, pp. 125–133, 2004.
[6] D. Q. Mayne, M. M. Seron, and S. Raković, "Robust model predictive control of constrained linear systems with bounded disturbances," Automatica, vol. 41, no. 2, pp. 219–224, 2005.
[7] J. Anderson, J. C. Doyle, S. H. Low, and N. Matni, "System level synthesis," Annu. Rev. Control, vol. 47, pp. 364–393, 2019.
[8] Y. S. Wang, N. Matni, and J. C. Doyle, "Separable and Localized System-Level Synthesis for Large-Scale Systems," IEEE Trans. Automat. Contr., vol. 63, no. 12, pp. 4234–4249, 2018.
[9] M. Kögel and R. Findeisen, "Robust MPC with Reduced Conservatism Blending Multiple Tubes," in Proc. of American Control Conference (ACC), 2020, pp. 1949–1954.
[10] S. Chen, H. Wang, M. Morari, V. M. Preciado, and N. Matni, "Robust Closed-loop Model Predictive Control via System Level Synthesis," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 2152–2159.
[11] S. Dean, S. Tu, N. Matni, and B. Recht, "Safely learning to control the constrained linear quadratic regulator," in 2019 American Control Conference (ACC), 2019, pp. 5582–5588.
[12] C. A. Alonso and N. Matni, "Distributed and Localized Closed Loop Model Predictive Control via System Level Synthesis," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 5598–5605.
[13] C. A. Alonso, N. Matni, and J. Anderson, "Explicit Distributed and Localized Model Predictive Control via System Level Synthesis," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 5606–5613.
[14] J. S. Li, C. A. Alonso, and J. C. Doyle, "Frontiers in Scalable Distributed Control: SLS, MPC, and Beyond," 2020. [Online]. Available: https://arxiv.org/abs/2010.01292
[15] J. B. Rawlings, D. Q. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design. Nob Hill Publishing, 2017.
[16] S. Raković, B. Kouvaritakis, M. Cannon, C. Panos, and R. Findeisen, "Parameterized Tube Model Predictive Control," IEEE Trans. Automat. Contr., vol. 57, no. 11, pp. 2746–2761, 2012.
[17] S.-H. Tseng and J. S. Li, "SLSpy: Python-Based System-Level Controller Synthesis Framework," 2020. [Online]. Available: https://arxiv.org/abs/2004.12565
[18] S.-H. Tseng, C. A. Alonso, and S. Han, "System Level Synthesis via Dynamic Programming," in Proc. 59th IEEE Conf. Decis. Control, 2020, pp. 1718–1725.
[19] G. W. Stewart, Matrix Algorithms: Volume 1: Basic Decompositions. SIAM, 1998.
[20] I. Kolmanovsky and E. G. Gilbert, "Theory and computation of disturbance invariant sets for discrete-time linear systems," Mathematical Problems in Engineering, vol. 4, 1998.
[21] L. Furieri, Y. Zheng, A. Papachristodoulou, and M. Kamgarpour, "Sparsity Invariance for Convex Design of Distributed Controllers," IEEE Transactions on Control of Network Systems, vol. 7, no. 4, pp. 1836–1847, 2020.
[22] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit solution of model predictive control via multiparametric quadratic programming," in 2000 American Control Conference (ACC), vol. 2. IEEE, 2000, pp. 872–876.
[23] A. Carron, J. Sieber, and M. N. Zeilinger, "Distributed Safe Learning using an Invariance-based Safety Framework," 2020. [Online]. Available: https://arxiv.org/abs/2007.00681
[24] S. Diamond and S. Boyd, "CVXPY: A Python-Embedded Modeling Language for Convex Optimization," Journal of Machine Learning Research, vol. 17, no. 83, pp. 1–5, 2016.
[25] MOSEK ApS, MOSEK Optimizer API for Python. Version 9.1., 2019. [Online]. Available: https://docs.mosek.com/9.1/pythonapi/index.html
[26] D. Limón, I. Alvarado, T. Alamo, and E. F. Camacho, "Robust tube-based MPC for tracking of constrained linear systems with additive disturbances," Journal of Process Control, vol. 20, no. 3, pp. 248–260, 2010.
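As a numerical aside to the proof of Lemma 1, the finite Neumann-series step can be checked directly: for the block-downshift structure of ZA, nilpotency makes the truncated series coincide with the exact inverse (I − ZA)^{−1}. The sketch below is purely illustrative (random A, small horizon N, and variable names of our choosing) and is not part of the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 2, 4                                # state dimension and horizon (arbitrary)
A = rng.standard_normal((n, n))

# ZA places A on the first block subdiagonal of an (N+1)x(N+1) block matrix,
# so (ZA)^{N+1} = 0, i.e., ZA is nilpotent.
ZA = np.zeros(((N + 1) * n, (N + 1) * n))
for i in range(N):
    ZA[(i + 1) * n:(i + 2) * n, i * n:(i + 1) * n] = A

I = np.eye((N + 1) * n)
# Truncated Neumann series: sum_{k=0}^{N} (ZA)^k.
neumann = sum(np.linalg.matrix_power(ZA, k) for k in range(N + 1))
# For nilpotent ZA this equals the exact inverse (I - ZA)^{-1}.
err = np.max(np.abs(neumann - np.linalg.inv(I - ZA)))
```

Note that the resulting lower block-triangular Toeplitz matrix carries I, A, …, A^N on its block diagonals, matching the structure of [𝒜 ℰ] in (32).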
