
Lowering the Upper Bounds on the Cost of Robust

Distributed Controllers Beyond Quadratic Invariance


Luca Furieri and Maryam Kamgarpour∗†

April 2, 2018

Abstract
The problem of robust distributed control arises in several large-scale systems, such as trans-
portation networks and power grid systems. In many practical scenarios controllers might not
know enough information to make globally optimal decisions in a tractable way. This paper
proposes a novel class of tractable optimization problems whose solution is a controller comply-
ing with any given information structure. The approach we suggest is based on decomposing
intractable information constraints into two subspace constraints in the disturbance feedback
domain. The resulting control policy is optimal when a condition known as Quadratic Invari-
ance (QI) holds, whereas it provides an upper bound to the minimum cost when QI does not
hold. We discuss how to perform the decomposition in an optimized way. We interpret our
theoretical results in terms of the possibility for certain controllers to share input variables and
privacy of the information known to controllers. Finally, we show that our method can lead to
improved performance guarantees with respect to other approaches, by applying the developed
techniques to the platooning of autonomous vehicles.

1 Introduction
Safe and efficient operation of large-scale systems, such as the electric power grid, digital commu-
nication networks, autonomous vehicles, and financial systems, relies on coordinating the decision
making of multiple interacting agents. In most practical scenarios these agents can only base their
decisions on partial local information due to geographic distance, privacy concerns or the high
cost of communication. Lacking full information can make the task of designing optimal decisions
significantly more challenging.
The celebrated work of [1] highlighted that optimal control policies for the Linear Quadratic
Gaussian control problem given partial information may be nonlinear. The intractability inherent
to lack of full information was further investigated in [2, 3]. The core challenges discussed in these
works motivated identifying special cases of optimal control problems with partial information for
which efficient algorithms can be used.
Several cases of tractable problems with partial output information were characterized in [4–6]
and were later generalized in [7,8], where the authors established necessary and sufficient conditions
for convexity referred to as quadratic invariance (QI).
The framework of QI was derived in the context of infinite horizon optimization, whereas sev-
eral modern control architectures are based on finite horizon optimization with constraints on the

This research was gratefully funded by the European Union ERC Starting Grant CONENE.

The authors are with the Automatic Control Laboratory, Department of Information Technology and Electrical
Engineering, ETH Zurich, Switzerland. e-mails: {furieril, mkamgar}@control.ee.ethz.ch

states and the inputs. To address state and input constraints, [9–11] have considered constrained
finite horizon optimization given predetermined sparsity patterns for the controllers. In [12] the
analysis was extended to the case where controllers are allowed to communicate and propagate the
information with delays. Necessary and sufficient conditions for convexity were derived by adapting
the QI condition to the finite horizon framework [9–12].
The conditions posed by QI can be too stringent for practical purposes. For instance, when
the dynamics of the system evolve according to a strongly connected topology, delayed information
about every output is required for each control input [12]. Sharing this information might be limited
by bandwidth and other network restrictions. Furthermore, controllers may be unable to share
information due to strict privacy requirements. For instance, this can be the case when considering
control of a power grid given an information structure. Such scenarios motivate designing control
policies which comply with non-QI information structures and state and input safety constraints.
Recent works [13, 14] showed that analysis and synthesis can be greatly simplified for positive
systems. In some applications controllers know the realization of past disturbances and this allows
for tractable computation of optimal robust distributed controllers [15,16]. However, most scenarios
involve systems which are not positive and controllers can only measure outputs of the system.
For general systems, one can only obtain approximate solutions to the control problem given an
information structure which is not QI. Computing sub-optimal static decentralized controllers for
stochastic linear systems has been addressed in [17, 18]. Another line of work has considered rank
relaxation approaches to obtain convex approximations of the generally non-convex distributed
control problem [19, 20].
Other approaches hinge on exploiting Youla parametrizations to determine possibly sub-optimal
control policies. Restricting the information structure to restore QI and obtain upper bounds on
the cost of the original problem is considered in [21]. The work in [22] introduces the concept of
optimization over QI covers in the presence of delay constraints. It is shown that the iterative pro-
cedure proposed in [22] yields globally optimal solutions in certain cases. However, the assumption
is that controllers have access to delayed measurements of all the outputs.
Inspired by the above works, we aim at characterizing tractable formulations of the generally
intractable problem which are valid for any system and information structure. The recovered
feasible solution provides an upper bound to the minimum cost when QI does not hold.
Our main contributions are as follows. First, we suggest a novel class of subspace constraints on
the disturbance feedback parameter which preserve the given information structure. The approach
is based on decomposing constraints which are quadratic in the decision variables into subspace
constraints in the disturbance feedback domain. This decomposition can be performed in multiple
ways, all of which lead to an upper bound on the cost of controllers complying with the given
information structure. Second, we determine optimized choices for the subspace constraints, with
the goal of lowering these upper bounds. Third, we provide interpretation of our theoretical results
in terms of the possibility for certain controllers to share their input variables while preserving
privacy of the information. Last, we show that the developed techniques can lead to improved
performance guarantees with respect to different approaches through a platooning example, aris-
ing in autonomous vehicles. Here, we consider information structure constraints as well as hard
constraints on inputs and states due to safety.
Section 2 sets up the problem. Section 3 contains our main results about upper bounds on the
cost of control policies complying with an information structure. The application to the platooning
of vehicles, where each vehicle only knows local information, is studied in Section 4.
Notation: Given a matrix Y ∈ R^{a×b} we refer to its scalar element located at row i and column j as Y(i, j). Given a vector v ∈ R^a we refer to its i-th entry as v_i ∈ R. The symbol I_a denotes the identity matrix of dimensions a × a, while 0_{a×b} denotes the zero matrix of dimensions a × b, for every a ∈ Z_{[1,∞)} and b ∈ Z_{[1,∞)}. Given a binary matrix X ∈ {0,1}^{a×b} we define the subspace Sparse(X) ⊆ R^{a×b} as

Sparse(X) = {Y ∈ R^{a×b} | Y(i, j) = 0, ∀i, j s.t. X(i, j) = 0} .

We define X = Struct(Y) to be the binary matrix given by

X(i, j) = 1 if Y(i, j) ≠ 0, and X(i, j) = 0 otherwise.

Let X, X′ ∈ {0,1}^{a×b} be binary matrices. Throughout the paper we adopt the following conventions: XX′ := Struct(XX′) and X^r := Struct(X^r); X ≤ X′ if and only if X(i, j) ≤ X′(i, j) ∀i, j; X < X′ if and only if X ≤ X′ and there exist indices i, j such that X(i, j) < X′(i, j); X ≰ X′ if and only if there exist indices i, j such that X(i, j) > X′(i, j).
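To make the notational conventions above concrete, the following minimal Python sketch (illustrative only, assuming NumPy arrays as containers) implements Struct(·), membership in Sparse(·), and the binary comparisons and products used throughout the paper.

```python
import numpy as np

def struct(Y, tol=0.0):
    """Binary matrix X = Struct(Y): X(i, j) = 1 iff Y(i, j) != 0."""
    return (np.abs(Y) > tol).astype(int)

def in_sparse(Y, X):
    """True iff Y lies in Sparse(X), i.e. Y(i, j) = 0 wherever X(i, j) = 0."""
    return np.all(Y[X == 0] == 0)

def leq(X, Xp):
    """Binary partial order: X <= X' entrywise."""
    return np.all(X <= Xp)

def bprod(X, Xp):
    """Convention XX' := Struct(XX') for binary matrices."""
    return struct(X @ Xp)

# quick check of the conventions
X = np.array([[1, 0], [1, 1]])
assert leq(struct(np.array([[2.0, 0.0], [0.0, 0.0]])), X)
assert bprod(X, X).tolist() == [[1, 0], [1, 1]]
```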

2 Problem Formulation
We consider a discrete time system
x_{k+1} = A x_k + B u_k + D w_k ,     (1)
y_k = C x_k + H w_k ,
where k ∈ Z[0,∞) , xk ∈ Rn , yk ∈ Rp , uk ∈ Rm , wk ∈ W and W ⊆ Rn is the set of possible
disturbances. The system starts from a known initial condition x0 ∈ Rn . Let us define a prediction
horizon of length N . Our goal is to minimize a cost function dependent on history of states and
inputs J(x0 , · · · , xN , u0 , · · · , uN -1 ). Furthermore, the states and inputs need to satisfy
 
[x_k^T u_k^T]^T ∈ Γ ⊆ R^{n+m} ,     (2)
x_N ∈ X_f ⊆ R^n ,
for all k ∈ Z[0,N -1] and for all possible sequences of disturbances taken from set W.
Each control input can depend on the history of a subset of outputs. This subset is defined
by the so-called information structure. An information structure can be time varying, in the sense
that controllers may measure, memorize or forget different outputs at different times.
The search in the class of all output feedback policies is intractable. Hence, a possible approach
is to restrict the search to the class of controllers that are affine in the history of the outputs. The
output feedback and time varying affine controller is expressed as
u_k = Σ_{j=0}^{k} L_{k,j} y_j + g_k ,     (3)

for all time instants k ∈ Z[0,N -1] .


For every j ∈ Z[0,k] , we consider binary matrices Sk,j ∈ {0, 1}m×p encoding what controllers at
time k know about the outputs at time j ≤ k, that is Sk,j (a, b) = 1 if and only if controller a at
time k knows the b-th output at time j. Let Sk,j ⊆ Rm×p denote the sparsity subspace generated
by the binary matrix Sk,j in the sense that Sk,j = Sparse(Sk,j ). The information structure on the
input can thus be equivalently formulated as

Lk,j ∈ Sk,j , gk ∈ Rm , (4)

for every k ∈ Z[0,N -1] and every j ∈ Z[0,k] .
Summarizing the above development we state the optimization problem under study.

Problem 1
min J(x0 , · · · , xN , u0 , · · · , uN -1 )
subject to (1), (2) ∀wk ∈ W ,
(3), (4) ∀k ∈ Z[0,N -1] , ∀j ∈ Z[0,k] .

In the above, the decision variables are the matrices Lk,j and the vectors gk as in (4) for all
k ∈ Z[0,N -1] and j ∈ Z[0,k] . For computational tractability we assume that J(·) is a convex function
of the disturbance-free state and input trajectories (that is, when the disturbances are assumed not
to be present) and that sets Γ, Xf are polytopes:
Γ = {(x, u) ∈ (Rn , Rm ) s.t. U x+V u ≤ b} ,
where U ∈ Rs×n , V ∈ Rs×m and b ∈ Rs , and
Xf = {x ∈ Rn s.t. Rx ≤ z} ,
where R ∈ Rr×n and z ∈ Rr . It is convenient to define the vectors of stacked variables as follows
x = [x_0^T · · · x_N^T]^T ∈ R^{n(N+1)} ,
y = [y_0^T · · · y_N^T]^T ∈ R^{p(N+1)} ,
u = [u_0^T · · · u_{N-1}^T 0_{m×1}^T]^T ∈ R^{m(N+1)} ,
w = [w_0^T · · · w_{N-1}^T 0_{n×1}^T]^T ∈ R^{n(N+1)} .
Equation (1) can be succinctly expressed as
x = A x_0 + B u + E_D w ,     (5)
y = C x + H w ,
where matrices B, ED , C and H are defined in Appendix A. Their derivation is straightforward
from the recursive application of (1). Similarly, considering (3), the control input can be expressed
as
u = Ly+g , (6)
where L ∈ Rm(N +1)×p(N +1) and g ∈ Rm(N +1) are defined in Appendix A. In order to satisfy (4)
matrix L must lie in the subspace S ⊆ Rm(N +1)×p(N +1) , where S = Sparse(S) and S is obtained
by stacking the matrices Sk,j ’s as in (17).
As was shown in [23], Problem 1 is non-convex, even though the sparsity constraints are linear in L. It is known that when an information structure is not given, parametrization of the
controller as a disturbance feedback affine policy restores tractability of Problem 1 [23]. Letting
P = CED +H we define the disturbance feedback controller as
u = QPw+v . (7)
The decision variable Q ∈ Rm(N +1)×p(N +1) is causal as in (17). It is possible to map a disturbance
feedback controller (Q, v) to the unique corresponding output feedback controller (L, g) and vice
versa as follows.
L = Q(CBQ + I_{p(N+1)})^{-1} ,
g = v − Q(CBQ + I_{p(N+1)})^{-1}(CBv + CAx_0) ,     (8)

Q = L(I_{p(N+1)} − CBL)^{-1} ,
v = L(I_{p(N+1)} − CBL)^{-1}(CBg + CAx_0) + g .     (9)
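As a quick numerical sanity check of the mappings (8) and (9), the following minimal Python sketch (assuming randomly generated stand-in matrices with the causal structure described above, not actual problem data) verifies that composing the two maps recovers (Q, v).

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, p = 3, 2, 2
dim_u, dim_y = m * (N + 1), p * (N + 1)

# Stand-in for CB: strictly block lower triangular, so that QCB is nilpotent.
CB = np.kron(np.tril(np.ones((N + 1, N + 1)), -1), rng.standard_normal((p, m)))

# A causal disturbance-feedback parameter Q (block lower triangular, last block row zero).
Q = np.kron(np.tril(np.ones((N + 1, N + 1))), np.ones((m, p))) * rng.standard_normal((dim_u, dim_y))
Q[-m:, :] = 0.0
v = rng.standard_normal(dim_u)
CA_x0 = rng.standard_normal(dim_y)   # stands in for the vector C A x0

I = np.eye(dim_y)
# Mapping (8): (Q, v) -> (L, g)
L = Q @ np.linalg.inv(CB @ Q + I)
g = v - Q @ np.linalg.inv(CB @ Q + I) @ (CB @ v + CA_x0)
# Mapping (9): (L, g) -> (Q, v)
Q_back = L @ np.linalg.inv(I - CB @ L)
v_back = L @ np.linalg.inv(I - CB @ L) @ (CB @ g + CA_x0) + g

assert np.allclose(Q, Q_back) and np.allclose(v, v_back)
```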
It is easy to show that the convex cost function J(·) computed over the disturbance-free trajectories
of states and inputs is convex in (Q, v). The state and input constraints (2) are also convex in
(Q, v) and can be expressed as

F v + max_{w ∈ W^{N+1}} (F Q P + G) w ≤ c ,

where F ∈ R(N s+r)×m(N +1) , G ∈ R(N s+r)×n(N +1) , c ∈ RN s+r are reported in Appendix A for
completeness. However, the sparsity constraint

L = Q(CBQ + I_{p(N+1)})^{-1} ∈ S ,

would be nonlinear in Q in general. Thus, we consider the following convex program:

Problem 2
min_{Q,v}   J(x_0, Q, v)
subject to   Q is causal ,
             F v + max_{w ∈ W^{N+1}} (F Q P + G) w ≤ c ,
Q ∈ R.

where R is a subspace that must be designed to preserve the sparsity of L through the mapping
(8). To simplify the notation we introduce the following definition.

Definition 1. Let X ∈ Ra×b and Y ∈ Rb×a . The closed-loop function h : Ra×b × Rb×a −→ Ra×b
is defined as h(X, Y) = −X(I_b − YX)^{-1}. Similarly, for a set 𝒳 ⊆ R^{a×b} the set h(𝒳, Y) is defined as

h(𝒳, Y) = {h(X, Y), ∀X ∈ 𝒳} .

Notice that the operator h(·) maps a disturbance feedback controller Q to a corresponding
output feedback controller L. In particular, the mappings (8) and (9) can be expressed as L =
h(−Q, CB) and Q = −h(L, CB) respectively. Accordingly, to preserve the sparsity of L through
mapping (8) we require that R is designed such that

h(R, CB) ⊆ S , (10)

where we used the fact that h(−R, CB) = h(R, CB) because R is a subspace. Whenever R
satisfies (10) we equivalently say that it is sparsity preserving. Refer to Figure 1 for a visualization
of (10). Notice that h(R, CB) is a non-convex set in general despite R being convex, because h(·)
is a nonlinear map.
We remark that designing R is not a trivial task. For instance, if we simply require Q ∈ S the
resulting L might not lie in S. Therefore, in the next section we search for subspace R so as to
formulate a tractable convex program (Problem 2) whose solution gives a feasible L for the original
problem (Problem 1). This feasible solution will correspond to an upper bound on the minimum
cost of Problem 1.

Figure 1: Sparsity preserving subspaces.

3 Solution Approach

3.1 Generalized sparsity preserving subspaces (GSS)

We describe the key idea leading to the construction of a family of sparsity preserving subspaces R as in (10). Consider equation (8) and notice that L can be equivalently written as L = (I_{m(N+1)} + QCB)^{-1} Q. By taking the power series expansion of the inverse matrix and exploiting the fact that QCB is nilpotent because its diagonal is zero by construction, we obtain

L = Σ_{i=0}^{N-1} (QCB)^i Q .     (11)

In general, L might lie in S despite all of the addends in its power series expansion (11) not lying in S. In many cases this fact makes the goal of designing subspaces satisfying (10) a challenging task.

To circumvent the difficulty highlighted above, our approach is to ensure that every addend in (11) lies in S by means of subspace constraints on Q. This can be done by decomposing the term QCBQ, which is quadratic in Q, into the factors QCB and Q, which are both linear in Q. Both linear terms can be required to have certain sparsity patterns so that all the addends of (11) in the form (QCB)^i Q lie in S for all i ∈ Z_{[0,N-1]}. This key idea leads to defining the notion of Generalized Sparsity Subspaces (GSS) as follows.

Definition 2. For any matrices T ∈ {0,1}^{a×b}, Y ∈ {0,1}^{a×a} and G ∈ R^{b×a}, the Generalized Sparsity Subspace (GSS) R_G(T, Y) ⊆ R^{a×b} is defined as

R_G(T, Y) = {Q ∈ Sparse(T) s.t. QG ∈ Sparse(Y)} .

We say that these sparsity subspaces are generalized because the constraint that Q lies in S, as commonly considered in the literature [8, 9, 12, 21], can be obtained from a specific choice of T, Y as above. In particular, by letting T = S and Y = S∆ where ∆ = Struct(CB), we have that R_CB(S, S∆) = S. This is because if Q ∈ S then QCB ∈ Sparse(S∆) by construction (see Lemma 1 ahead).

Next, we derive our main result on the conditions ensuring that a GSS is sparsity preserving.

Theorem 1. Let T, Y be binary matrices such that T ≤ S and YT ≤ T. Then, the subspace R_CB(T, Y) is sparsity preserving as per (10).

The proof of Theorem 1 relies on the following two lemmas.
Lemma 1. Let X_1 ∈ {0,1}^{a×b}, X_2 ∈ {0,1}^{b×a} and let 𝒳_1 = Sparse(X_1), 𝒳_2 = Sparse(X_2). Let Q_1 ∈ 𝒳_1 and Q_2 ∈ 𝒳_2. Then Q_1 Q_2 ∈ Sparse(X_1 X_2).

Proof. Suppose that Q_1 Q_2(i, j) ≠ 0. Then Σ_{k=1}^{b} Q_1(i, k) Q_2(k, j) ≠ 0, which implies there exists k̄ such that Q_1(i, k̄) ≠ 0 and Q_2(k̄, j) ≠ 0. Since Q_1 ∈ 𝒳_1 and Q_2 ∈ 𝒳_2, then X_1(i, k̄) = 1 and X_2(k̄, j) = 1. This implies that X_1 X_2(i, j) = 1. The same reasoning holds for all indices i, j such that Q_1 Q_2(i, j) ≠ 0. Hence, Q_1 Q_2 ∈ Sparse(X_1 X_2).

Lemma 2. Let Y, T be binary matrices of compatible dimensions such that YT ≤ T. Then Y^i T ≤ T for every positive integer i.

Proof. First, we prove that if YT ≤ T and X ≤ T, then YX ≤ T. For any i, j such that YX(i, j) = 1, there exists k such that Y(i, k) = X(k, j) = 1. Then, T(k, j) = 1 because X ≤ T and YT(i, j) = 1 because Y(i, k) = T(k, j) = 1. This implies that YX ≤ YT. We know by hypothesis that YT ≤ T, hence YX ≤ YT ≤ T. Next, we prove the statement by induction. The base case YT ≤ T holds by hypothesis. Suppose that Y^{i-1} T ≤ T. Then Y^i T = Y(Y^{i-1} T) ≤ T follows by Y^{i-1} T ≤ T and the observations above.

We are now ready to prove Theorem 1.

Proof. (Proof of Theorem 1)
Let T ≤ S and YT ≤ T, and take a generic Q ∈ R_CB(T, Y). By definition, Q ∈ Sparse(T) and QCB ∈ Sparse(Y). Consider the corresponding L = Σ_{i=0}^{N-1} (QCB)^i Q. The addend corresponding to i = 0 is Q and clearly lies in S, because T ≤ S and thus Q ∈ Sparse(T) ⊆ S. By Lemma 2, Y^i T ≤ T for every positive integer i and thus Sparse(Y^i T) ⊆ Sparse(T). Lemma 1 states that (QCB)^i Q ∈ Sparse(Y^i T) for any positive integer i. Since Sparse(Y^i T) ⊆ Sparse(T), then (QCB)^i Q ∈ Sparse(T). Hence, every addend in the power series expansion of L lies in Sparse(T) ⊆ S and we conclude L ∈ S. Since Q was a generic element of R_CB(T, Y), the statement is proved.

Remark 1. The conditions of Theorem 1 are sufficient to satisfy (10), but not necessary. This is
because L as in (11) can lie in S even if some of the addends do not lie in S.
We note that the notion of sparsity preserving GSS’s extends naturally to the infinite horizon
and unconstrained case. Indeed, the disturbance feedback parameter Q is the state-space equivalent
in finite horizon of the Youla parameter Q(s) used to address the infinite horizon case within transfer
function frameworks [8, 21]. This connection was thoroughly analyzed in [9].
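The guarantee of Theorem 1 can be illustrated numerically. The sketch below uses hypothetical data (a strictly lower triangular stand-in for CB, a non-QI causal pattern S, and a hand-picked Y with YT ≤ T), samples an element of the GSS by a null-space computation, and checks that the corresponding L as in (8) indeed lies in Sparse(S).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data (not from the paper): strictly lower triangular CB and a non-QI causal S.
CB = np.tril(rng.standard_normal((4, 4)), -1)
S = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
T = S.copy()
Y = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
assert np.all(((Y @ T) > 0).astype(int) <= T)          # hypothesis of Theorem 1: YT <= T

# Sample Q in R_CB(T, Y) = {Q in Sparse(T) : Q CB in Sparse(Y)}.
# Free variables: entries of Q where T = 1; constraints: (Q CB)(i, j) = 0 where Y(i, j) = 0.
free = np.argwhere(T == 1)
rows = []
for i, j in np.argwhere(Y == 0):
    rows.append([CB[l, j] if k == i else 0.0 for (k, l) in free])
A = np.array(rows)
_, sv, Vt = np.linalg.svd(A)
null = Vt[np.sum(sv > 1e-9):].T                         # basis of the feasible free entries
q_free = null @ rng.standard_normal(null.shape[1])
Q = np.zeros_like(CB)
Q[free[:, 0], free[:, 1]] = q_free

L = Q @ np.linalg.inv(CB @ Q + np.eye(4))               # mapping (8)
assert np.all(np.abs(L[S == 0]) < 1e-9)                 # Theorem 1: L lies in Sparse(S)
```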

3.2 Lowering the upper bounds


To obtain the least upper bound on the cost of Problem 1 we need to maximize h(R_CB(T, Y), CB) with the constraint that it still is a subset of S. Visually, this is equivalent to enlarging the (possibly non-convex) set on the right side of Figure 1 so that it still fits inside S.
Suppose that T ≤ S is fixed. By using Theorem 1, the maximization problem described above is equivalent to designing Y^T_max such that Y^T_max T ≤ T and the following holds for any Y satisfying YT ≤ T:

h(R_CB(T, Y), CB) ⊆ h(R_CB(T, Y^T_max), CB) .     (12)
Observe that for any function f : D → C and sets X1 ⊆ X2 ⊆ D we have that f (X1 ) ⊆ f (X2 ). As a
consequence, condition (12) is implied by

R_CB(T, Y) ⊆ R_CB(T, Y^T_max) .     (13)

For a fixed T, a binary matrix Y^T_max satisfies Y^T_max T ≤ T and condition (13) for all Y such that YT ≤ T if

Y^T_max T ≤ T,  and  YT ≤ T =⇒ Y ≤ Y^T_max .     (14)
Recall the definition of a GSS. Condition (14) can be interpreted as requiring that the number of
entries of the term QCB which are set to zero is kept to a minimum, while still ensuring that the
sparsity of L is preserved.
Matrix Y^T_max as in (14) is found with a simple procedure described in the following proposition, whose proof is reported in Appendix C.

Proposition 1. Fix T ≤ S. Let Y^T_max(i, j) = 0 if there exists an index k such that T(i, k) = 0 and T(j, k) = 1, and Y^T_max(i, j) = 1 otherwise. Then Y^T_max satisfies (14).
For the rest of the paper, Y^T_max will always refer to Y being designed according to Proposition 1. Summing up, the choice Y = Y^T_max ensures that R_CB(T, Y^T_max) is sparsity preserving and maximal as per (12). Hence, for a fixed T ≤ S the least upper bound on the cost of Problem 1 can be found by solving Problem 2 with R = R_CB(T, Y^T_max).
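The construction in Proposition 1 translates directly into a few lines of code. The following Python sketch (illustrative only; the random binary pattern is a hypothetical test case) builds Y^T_max and checks the first condition of (14).

```python
import numpy as np

def y_max(T):
    """Y_max^T per Proposition 1: Y(i, j) = 0 iff T(i, k) = 0 and T(j, k) = 1 for some k."""
    T = np.asarray(T, dtype=int)
    nr = T.shape[0]
    Y = np.ones((nr, nr), dtype=int)
    for i in range(nr):
        for j in range(nr):
            if np.any((T[i, :] == 0) & (T[j, :] == 1)):
                Y[i, j] = 0
    return Y

# sanity check on a random binary pattern: the first condition of (14) holds
rng = np.random.default_rng(0)
T = (rng.random((5, 8)) < 0.4).astype(int)
Y = y_max(T)
assert np.all(((Y @ T) > 0).astype(int) <= T)
```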

At this point, one may wonder why we did not restrict our attention to the case where T is maximal, that is T = S. The reason is that the definition of the GSS R_CB(·) depends on the system CB. Because of this fact it is possible that R_CB(S, Y^S_max) is a strict subset of R_CB(T, Y^T_max) for some non-maximal T < S. We show this fact through an example.
Example 1. Consider the matrices

G = [1 −2 0; 2 −3 0; 3 −1 1] ,   S = [1 1 0; 0 1 1; 1 0 0] ,   T = [1 1 0; 0 1 1; 0 0 0] ,

and the GSS's R_G(S, Y) and R_G(T, Y). Notice that T < S. By applying the maximization procedure for Y, we compute that

Y^S_max = [1 0 1; 0 1 0; 0 0 1] ,   Y^T_max = [1 0 1; 0 1 1; 0 0 1] .

Consider any Q_S ∈ R_G(S, Y^S_max) and any Q_T ∈ R_G(T, Y^T_max). By applying the definition of a GSS, it is easy to compute that all such Q_S and Q_T can be expressed in the following form

Q_S = [α −(2/3)α 0; 0 0 0; 0 0 0] ,   Q_T = [β −(2/3)β 0; 0 γ −(2/3)γ; 0 0 0] ,     (15)

for any α, β, γ ∈ R. Hence, R_G(T, Y^T_max) strictly contains R_G(S, Y^S_max), despite T being strictly sparser than S.
Improving the choice of T based on the system will be considered in future work.
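The claims of Example 1 can be verified numerically. The sketch below (illustrative only; the nonzero parameter values are arbitrary) checks that the parametrizations in (15) satisfy the defining constraints of the respective GSS's and that Q_T with γ ≠ 0 violates the constraints of R_G(S, Y^S_max), so the containment is strict.

```python
import numpy as np

G = np.array([[1., -2., 0.], [2., -3., 0.], [3., -1., 1.]])
S = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 0]])
T = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
YS = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])   # Y_max^S
YT = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 1]])   # Y_max^T

def in_gss(Q, Tb, Yb, G, tol=1e-9):
    """Membership test for R_G(Tb, Yb) = {Q in Sparse(Tb) : Q G in Sparse(Yb)}."""
    return np.all(np.abs(Q[Tb == 0]) < tol) and np.all(np.abs((Q @ G)[Yb == 0]) < tol)

alpha, beta, gamma = 1.3, -0.7, 2.1                 # arbitrary nonzero parameters
QS = np.array([[alpha, -2 / 3 * alpha, 0], [0, 0, 0], [0, 0, 0]])
QT = np.array([[beta, -2 / 3 * beta, 0], [0, gamma, -2 / 3 * gamma], [0, 0, 0]])

assert in_gss(QS, S, YS, G)        # Q_S lies in R_G(S, Y_max^S)
assert in_gss(QT, T, YT, G)        # Q_T lies in R_G(T, Y_max^T)
assert not in_gss(QT, S, YS, G)    # with gamma != 0, Q_T lies outside R_G(S, Y_max^S)
```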

3.3 Connection with quadratic invariance


In order to evaluate how far from optimality the upper bounds on the cost of Problem 1 may be,
it is also worth computing lower bounds on the cost of Problem 1. Lower bounds can be obtained
based on the notion of Quadratic Invariance (QI).

Definition 3 (Quadratic Invariance). Let K ⊆ Rb×a be a subspace and G ∈ Ra×b . Then K is
Quadratically Invariant (QI) with respect to G if K1 GK2 ∈ K for every K1 ∈ K and K2 ∈ K.
Definition 3 is the finite dimensional equivalent of QI as in [8]. We state the following result,
whose proof is reported in Appendix D.
Proposition 2. Let R be a subspace such that S ⊆ R and assume R is QI with respect to CB.
Then, Problem 2 is a relaxation of Problem 1. Moreover, Problem 2 is equivalent to Problem 1 if
and only if S is QI with respect to CB and R = S.
According to Proposition 2 lower bounds on the minimum cost of Problem 1 can be computed
by using any QI subspace which contains S. Next, we state the following result about maximal
sparsity preserving GSS’s. The proof is provided in Appendix B.
Theorem 2. R_CB(T, Y^T_max) is QI with respect to CB.
We remark that although maximal sparsity preserving GSS’s are QI subspaces, they are not
sparsity subspaces in general, in the sense that they cannot be expressed as Sparse(R) for some
binary R. On the contrary, all QI sparsity subspaces contained in S can be expressed as GSS’s.
Indeed, let T ≤ S. If Sparse(T) is QI with respect to CB then it is equivalent to RCB (T, T∆)
where ∆ = Struct(CB). This is because if Q ∈ Sparse(T) then QCB ∈ Sparse(T∆) by Lemma 1.
Therefore, the notion of sparsity preserving GSS’s is more general than the one of QI sparsity
subspaces contained in S. This fact is exploited in the next section to derive improved upper
bounds with respect to the approach in [21].
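For sparsity subspaces, Definition 3 admits a standard binary test: Sparse(T) is QI with respect to G exactly when Struct(T Struct(G) T) ≤ T. A minimal sketch (the two test patterns are illustrative assumptions) follows.

```python
import numpy as np

def struct(M, tol=0.0):
    return (np.abs(M) > tol).astype(int)

def is_qi_sparsity(T, G):
    """True iff Sparse(T) is QI with respect to G, i.e. Struct(T Struct(G) T) <= T."""
    TGT = struct(T @ struct(G) @ T)
    return bool(np.all(TGT <= T))

G = np.array([[1., 2.], [3., 4.]])
assert not is_qi_sparsity(np.eye(2, dtype=int), G)     # diagonal pattern, dense G: not QI
assert is_qi_sparsity(np.ones((2, 2), dtype=int), G)   # full pattern: always QI
```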

3.4 Independence from system dynamics


A characteristic of the proposed class of relaxations of Problem 1 is the following. For any freely
chosen T ≤ S, the corresponding lowest upper bound to Problem 1 can always be computed and the
resulting controller complies with the information structure. No further analysis on the interplay
between T and the system dynamics is needed. Instead, if we restricted to QI sparsity subsets as
in [21], such an interplay would have to be considered because the condition T∆T ≤ T would have
to be checked before being able to compute an upper bound to Problem 1.
This “freedom” in computing upper bounds comes at the following cost. Depending on the system, the set R_CB(T, Y^T_max) might collapse to a singleton containing only the zero matrix. In such cases, it is necessary to improve the choice of T.


Example 2. Consider any matrix G ∈ R^{3×3} which does not have any zero entries, S = I_3 and T = S. Then, Y^S_max = I_3. It is easy to verify that the only matrix Q ∈ Sparse(I_3) such that QG ∈ Sparse(I_3) is 0_{3×3}. Hence, R_G(I_3, I_3) is a singleton.

In such cases, an initial reasonable guess to restore feasibility is selecting a new T′ ≤ S such that Sparse(T′) is QI with respect to CB. Then R_CB(T′, Y^{T′}_max) = Sparse(T′), which is not a singleton. The choice of T′ can later be improved to another T″ ≤ S to obtain a possibly larger GSS R_CB(T″, Y^{T″}_max), which is not a sparsity subspace in general (see Example 1).

3.5 Input sharing interpretation


Sparsity preserving GSS’s are built on the idea that a given sparsity pattern for the quadratic term
QCBQ can be achieved by appropriately fixing the sparsity patterns of the linear terms QCB and
Q. This separation has an interpretation in terms of the possibility for controllers to share certain
control variables applied in the past.

Lemma 3. The input trajectory (6) can be computed as
u = Qy − QCBu+(Im(N +1) +QCB)g . (16)
Proof. Consider the input trajectory as in (6) and the mapping (8). Then
u = Ly+g = (I+QCB)−1 Qy+g ,
which yields (I+QCB)u = Qy+(I+QCB)g and then
u = Qy − QCBu+(Im(N +1) +QCB)g .
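A quick numerical check of the implementation (16): with randomly generated stand-in matrices (an illustrative sketch, assuming the causal structure of Q and CB described earlier), the input computed from y, the past inputs and g via Q and QCB coincides with the output feedback law u = Ly + g.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6                                                # stacked dimension (m = p = 1, N = 5)
CB = np.tril(rng.standard_normal((d, d)), -1)        # strictly causal stand-in for CB
Q = np.tril(rng.standard_normal((d, d)))             # causal disturbance feedback parameter
Q[-1, :] = 0.0
g = rng.standard_normal(d)
y = rng.standard_normal(d)

L = Q @ np.linalg.inv(CB @ Q + np.eye(d))            # mapping (8)
u = L @ y + g                                        # output feedback law (6)

# (16): u is reproduced from y, the past inputs and g, using only Q and QCB
u_16 = Q @ y - Q @ CB @ u + (np.eye(d) + Q @ CB) @ g
assert np.allclose(u, u_16)
```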

We interpret in the next proposition and in the following remark what it means for a GSS to be
sparsity preserving as for (10), in terms of input sharing and privacy of the information available
to controllers.
Proposition 3. If YT ≤ T and Q is required to lie in RCB (T, Y), then the following statement
holds for any a, b ∈ Z[1,m] , c ∈ Z[1,p] , s ∈ Z[0,N −1] and t, v ∈ Z[0,N −2] such that v ≤ t < s.

If u^a_s knows u^b_t and u^b_t knows y^c_v, then u^a_s knows y^c_v.

Proof. Suppose YT ≤ T. Then, Y(i, k) = T(k, j) = 1 implies T(i, j) = 1. Consider the implementation of the control law as in (6). Define the integers

a = i mod m,  b = k mod m,  c = j mod p,
s = ⌊i/m⌋,  t = ⌊k/m⌋,  v = ⌊j/p⌋ ,

where (x mod y) ∈ Z_{[1,y]} for any integers x, y is the remainder after division of x by y with the convention that y mod y = y, and ⌊f⌋ gives the greatest integer less than or equal to f ∈ R. Recalling that Q ∈ Sparse(T), QCB ∈ Sparse(Y) and the block structure of Q as in (17), we have that

1. Y(i, k) = 1 implies that u^a_s knows u^b_t.
2. T(k, j) = 1 implies that u^b_t knows y^c_v.
3. T(i, j) = 1 implies that u^a_s knows y^c_v.

The statement follows.

Remark 2. An implementation of the controller as in (16) was first proposed in [24–27] in the
context of signal information structures and later exploited in [28–30] to develop the framework
of System Level Parametrizations for synthesizing localized controllers. These works argue that it
might not be necessary for practical purposes to guarantee that the original output feedback con-
troller is sparse, since the implementation as in (16) only relies on the structure of the disturbance
feedback parameter Q and that of QCB. In this paper, instead, we worked under the assumption
that input variables can only be shared if the output information they are based on is known to the
receiving controller. This is motivated by privacy concerns in modern control systems such as the
power grids within electricity markets; it is desirable to prevent private information from being illicitly reconstructed from knowledge of the control inputs applied by other controllers. Theorem 1 and
its interpretation in Proposition 3 show that this assumption guarantees that the output feedback
controller lies in the original sparsity subspace S.

4 Application to Platooning
The problem of platooning considers control of a set of vehicles moving on a straight line in the
presence of disturbances. The difficulty in this control problem lies in the fact that each vehicle
has local information, based on measurements of the formation leader and/or the preceding vehicle
only. A review of the most prominent challenges and past solutions can be found in [31, 32]. The
problem of distributed set invariance applied to platooning was considered in [33]. Our goal here
is to show applicability and efficacy of our approach to synthesis when a predecessor-follower [31]
non-QI information structure must be complied with.
We consider the platooning of n vehicles modeled as point masses of mass m. For each vehicle
the engine thrust is modeled as a force control input acting on the point mass. The state space
representation for the system in continuous time is
ẋ(t) = Ac x(t)+Bc u(t)+w(t) ,
y(t) = Cx(t)+w(t) ,
where x(t) ∈ R2n (the first n state variables are the positions, the last n are the velocities), u(t) ∈ Rn
and w(t) ∈ W for every t ∈ R, W ⊂ R2 being a polytope of disturbances for positions and velocities.
Matrices A_c and B_c are expressed as

A_c = [0_{n×n} I_n; 0_{n×n} 0_{n×n}] ,   B_c = [0_{n×n}; (1/m) I_n] .
We assume that the leading vehicle in the formation knows its own absolute position and velocity,
while all the other vehicles can measure their relative distance to the preceding vehicle and their
own absolute velocity. Please see Figure 2.¹ Matrix C defines the outputs of the system according
to this information structure. For example
 
C = [1 0 0 0; 0 0 1 0; 1 −1 0 0; 0 0 0 1] ,

if there were only 2 vehicles.

Figure 2: The leading vehicle knows its own absolute position p1 and absolute velocity v1 . The
follower vehicles know their own absolute velocity vi and the relative distance to the preceding
vehicle di .

A first order approximation of the continuous time model with sampling time Ts = 0.2s (Euler
discretization) is obtained as
xk+1 = Axk +Buk +wk ,
yk = Cxk +wk ,
¹Image downloaded from http://carsinamerica.net/porsche-918


where x_k ∈ R^{2n}, u_k ∈ R^n, y_k ∈ R^{2n}, w_k ∈ W for every k, A = I_{2n} + A_c T_s, B = (I_{2n} + A_c T_s/2) B_c T_s.
Every vehicle can measure its own two outputs as per Figure 2. Hence, the sparsity subspace
is defined as S = Sparse(S), where each non-zero block of S according to (17) is the matrix
S = I_n ⊗ [1 1].
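For concreteness, the following sketch reconstructs the platoon model in Python under the assumptions stated above (the per-vehicle output ordering follows the 2-vehicle example of C, and the single-step QI test of Section 3.3 is reused); it is an illustration, not the authors' code.

```python
import numpy as np

n = 8                      # number of vehicles
mass, Ts = 1700.0, 0.2     # kg, s

Ac = np.block([[np.zeros((n, n)), np.eye(n)],
               [np.zeros((n, n)), np.zeros((n, n))]])
Bc = np.vstack([np.zeros((n, n)), np.eye(n) / mass])

A = np.eye(2 * n) + Ac * Ts
B = (np.eye(2 * n) + Ac * Ts / 2) @ Bc * Ts

# Outputs, two per vehicle: leader measures p1 and v1; vehicle i > 1 measures p_{i-1} - p_i and v_i.
C = np.zeros((2 * n, 2 * n))
C[0, 0] = 1.0
C[1, n] = 1.0
for i in range(1, n):
    C[2 * i, i - 1], C[2 * i, i] = 1.0, -1.0
    C[2 * i + 1, n + i] = 1.0

S = np.kron(np.eye(n, dtype=int), np.array([[1, 1]]))   # each vehicle knows its own two outputs

def struct(M):
    return (np.abs(M) > 1e-12).astype(int)

# single-step QI test: S is not QI with respect to C B
SCBS = struct(S @ struct(C @ B) @ S)
assert not np.all(SCBS <= S)
```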

4.1 Setting up the simulation


We set a horizon of N = 15 time steps. We consider the platooning of 8 “Porsche 918 Spyder”² cars with mass m = 1700 kg. These cars have the capability of accelerating from 0 m s−1 to 27 m s−1 in 2.2 s, which means that their engine can provide a thrust of approximately 20 kN. Hence, we constrain the absolute value of the force input not to exceed 20 kN.
Disturbances for positions and velocities are taken from the set W ⊆ R², defined as the square centered at the origin with edges of length 0.2 m and 0.2 m s−1 parallel to the axes. Hence, we assume that disturbances up to 0.1 m on positions and up to 0.1 m s−1 on velocities are present at each time because of sliding effects and possible unevenness of the road. We require that the vehicles maintain a safety distance of 2 m between each other at every time.
The cost function J(·) is chosen as a quadratic function of the states penalizing both the distance
of each vehicle from an assigned target position and their velocities. Without any assumptions on
statistics about the disturbances, J(·) is computed over the disturbance-free trajectory of the states.
The distances from the assigned target positions are penalized with a weight of 0.01 from time 0
to time N − 1 and with a weight of 0.2 at time N . The velocities are penalized with a weight of
0.002 from time 0 to time N − 1 and with a weight of 0.2 at time N .
It can be easily verified that S is not QI with respect to CB because SCBS ∉ Sparse(S).
Hence, there are no known approaches that can solve Problem 1 to optimality by using disturbance
feedback parametrizations. Let J⋆ be the optimal value of Problem 1. In what follows, we apply our techniques to derive upper and lower bounds on J⋆. Furthermore, we compare the performance
of our feasible controller with the one obtained using techniques from [21].


Figure 3: The trajectories marked with diamonds are obtained with the proposed feasible controller comply-
ing with the non-QI information structure. The trajectories marked with circles are obtained by considering
our information relaxation from [12]. The dashed straight lines represent the target positions for the vehicles.
Note that 15 time steps are equivalent to 3 seconds.
²https://en.wikipedia.org/wiki/Porsche_918_Spyder

4.2 Performance bounds beyond QI sparsity constraints
The work in [21] considered upper bounds on the cost of Problem 1 based on determining QI sparsity
subspaces that are subsets of S. Determining the “closest” QI sparsity subspace was shown to be
intractable [21]. For the considered example, we identify a sparsity subspace which is “close” to S.
Consider the binary matrix T < S defined as follows
   
T = [ I_3 ⊗ [1 1 0 0; 0 0 0 0],  0_{6×4} ;
      0_{2×12},  [1 1 0 0; 0 0 0 1] ] .

It can be verified that Sparse(T) is QI with respect to CA^k B for any integer k. Furthermore, T is as close as possible to S in the sense that switching any entry of T back to one (so that it is still sparser than S) would compromise QI for all k. Let us now define the corresponding stacked operators. Let T ∈ {0,1}^{m(N+1)×p(N+1)} be such that its non-zero blocks as per (17) are all equal to T. By [9, Theorem 3] we have that Sparse(T) is QI with respect to CB. Since Sparse(T) is QI with respect to CA^k B for every k and as close as possible to Sparse(S), then Sparse(T) is also a QI sparsity subspace which is close to S. By solving Problem 2 with R = Sparse(T) the upper bound J^QI = 131.14 on J⋆ was obtained.
We then solved Problem 2 with the GSS R = R_CB(S, Y^S_max) and obtained the upper bound J^GSS = 123.72 on J⋆. The feasible controller we obtained thus improves the performance bound by 1 − J^GSS/J^QI = 5.7%. This improvement was possible thanks to the fact that GSS's are significantly more general than QI sparsity subspaces, as highlighted in the closing paragraph of Section 3.3. We also remark that R_CB(S, Y^S_max) can be easily computed according to the procedure in Proposition 1, whereas determining a QI subspace which is close to S can be challenging [21].
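To illustrate the structure of this computation, the following sketch poses Problem 2 with a GSS constraint as a convex program in cvxpy. It is a heavily simplified hypothetical template (generic stacked matrices passed as arguments, a box disturbance set of radius rho, a placeholder quadratic nominal cost), not the authors' YALMIP/GUROBI implementation; it uses the fact that for a box W the inner maximization reduces to a weighted row-wise 1-norm.

```python
import numpy as np
import cvxpy as cp

def solve_problem2(F, G, P, c, Abar, Bbar, x0, T, Y, CB, rho, Wx):
    """Sketch of Problem 2 with R = R_CB(T, Y) and box disturbances |w_i| <= rho.

    For a box W, max_{w in W} (F Q P + G) w equals rho * ||row||_1 for each row
    (conservative if some stacked entries of w are structurally zero).
    T and Y are binary masks: Q in Sparse(T) (T assumed causal) and Q CB in Sparse(Y).
    Wx is a weighting matrix for the disturbance-free state trajectory (placeholder cost).
    """
    mu, py = T.shape
    Q = cp.Variable((mu, py))
    v = cp.Variable(mu)

    x_nom = Abar @ x0 + Bbar @ v                        # disturbance-free trajectory, w = 0
    cost = cp.sum_squares(Wx @ x_nom) + 1e-3 * cp.sum_squares(v)

    M = F @ Q @ P + G
    constraints = [
        cp.multiply(1 - T, Q) == 0,                     # Q in Sparse(T), includes causality
        cp.multiply(1 - Y, Q @ CB) == 0,                # Q CB in Sparse(Y)
        F @ v + rho * cp.sum(cp.abs(M), axis=1) <= c,   # robust constraint for the box W
    ]
    prob = cp.Problem(cp.Minimize(cost), constraints)
    prob.solve()
    return Q.value, v.value, prob.value
```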


As discussed in Section 3.3, it is possible to evaluate the precision of the upper bounds on J⋆ by computing lower bounds on J⋆ based on QI subspaces which are supersets of S. Let us then consider the information relaxation based on additional communication proposed in our work [12]. It can be verified that a QI subspace which is a superset of S is obtained if each vehicle propagates its information to the single vehicle following it. Using this information relaxation we obtained the
lower bound J = 119.37 on J⋆. We can thus state that the feasible control policy computed using our approach is at most (J^GSS − J)/J ≈ 3.6% sub-optimal, in the worst case where J⋆ = J.
All the instances of Problem 2 that we considered were solved with GUROBI [34], called through
MATLAB [35] via YALMIP [36], on a computer equipped with 16 GB of RAM and a 4.2 GHz quad-
core Intel i7 processor. The resulting trajectories corresponding to J GSS and J are reported in
Figure 3.

5 Conclusions
We proposed a novel technique to compute robust controllers which comply with any information
structure. Motivated by the fact that designing optimal control policies with partial information is
intractable in general, our method involves solving a tractable problem in the disturbance feedback
domain. We recover a feasible output feedback controller which provides an upper bound to the
minimum cost when the information structure is not quadratically invariant. The upper bound we
compute can be less conservative than the one found with approaches based on [21].
Our approach is based on the notion of Generalized Sparsity Subspaces (GSS). We derived
conditions so that if the disturbance feedback parameter lies in a GSS the sparsity of the original

output feedback controller is preserved. We showed how to improve the choice of a GSS to obtain
better performance guarantees. Furthermore, we provided an interpretation of GSS’s in terms of
input sharing and privacy of the information known to controllers. The efficacy of the approach
was shown by deriving performance bounds for the platooning of vehicles when an information
structure based on following the preceding vehicle is considered.
An immediate future development is investigating heuristics to tailor the choice of a GSS to any
specific dynamical system (see discussion in the last paragraph of Section 3.2). It would also be
relevant to determine other classes of sparsity preserving subspaces as per Figure 1 which cannot
be expressed as a GSS.

Appendices
A Mathematical Notation
We define the following matrices and vectors. The operator ⊗ is the Kronecker product.
 
L = [ L_{0,0}      0_{m×p}   ···   0_{m×p}
      ⋮            ⋱               ⋮
      L_{N-1,0}    ···   L_{N-1,N-1}   0_{m×p}
      0_{m×p}      ···   0_{m×p}       0_{m×p} ] ,     (17)

g = [g_0^T · · · g_{N-1}^T 0_{m×1}^T]^T .
The matrix blocks above are Lk,j ∈ Rm×p , gi ∈ Rm as in (4), and the 0m×p blocks enforce causality
of the output feedback controller.
A = [I_n  A^T  ···  (A^N)^T]^T ∈ R^{n(N+1)×n} ,

E = [ 0_{n×n}    0_{n×n}   ···   0_{n×n}   0_{n×n}
      I_n        0_{n×n}   ···   0_{n×n}   0_{n×n}
      A          I_n       ···   0_{n×n}   0_{n×n}
      ⋮          ⋮         ⋱     ⋮         ⋮
      A^{N-1}    A^{N-2}   ···   I_n       0_{n×n} ] ∈ R^{n(N+1)×n(N+1)} ,

B = E(I_{N+1} ⊗ B) ∈ R^{n(N+1)×m(N+1)} ,   E_D = E(I_{N+1} ⊗ D) ∈ R^{n(N+1)×n(N+1)} ,
C = I_{N+1} ⊗ C ∈ R^{p(N+1)×n(N+1)} ,   H = I_{N+1} ⊗ H ∈ R^{p(N+1)×n(N+1)} ,

U = [ I_N ⊗ U    0_{Ns×n}
      0_{r×nN}   R ] ∈ R^{(Ns+r)×n(N+1)} ,   V = [ I_N ⊗ V    0_{Ns×m}
                                                   0_{r×mN}   0_{r×m} ] ∈ R^{(Ns+r)×m(N+1)} ,

F = UB + V ∈ R^{(Ns+r)×m(N+1)} ,   G = UE_D ∈ R^{(Ns+r)×n(N+1)} ,

c = [ 1_N ⊗ b
      z ] − UAx_0 ∈ R^{Ns+r} .
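The stacked operators above can be assembled directly from the problem data; the sketch below (one possible Python implementation, following the dimensions of this appendix) is illustrative.

```python
import numpy as np

def stack_system(A, B, C, D, H, U, V, R, b, z, x0, N):
    """Build the stacked matrices of Appendix A for a horizon N."""
    n, m = B.shape
    p = C.shape[0]
    s, r = U.shape[0], R.shape[0]

    Abar = np.vstack([np.linalg.matrix_power(A, k) for k in range(N + 1)])
    E = np.zeros((n * (N + 1), n * (N + 1)))
    for i in range(1, N + 1):
        for j in range(i):
            E[i * n:(i + 1) * n, j * n:(j + 1) * n] = np.linalg.matrix_power(A, i - 1 - j)
    Bbar = E @ np.kron(np.eye(N + 1), B)
    ED = E @ np.kron(np.eye(N + 1), D)
    Cbar = np.kron(np.eye(N + 1), C)
    Hbar = np.kron(np.eye(N + 1), H)

    Ubar = np.block([[np.kron(np.eye(N), U), np.zeros((N * s, n))],
                     [np.zeros((r, n * N)), R]])
    Vbar = np.block([[np.kron(np.eye(N), V), np.zeros((N * s, m))],
                     [np.zeros((r, m * N)), np.zeros((r, m))]])
    F = Ubar @ Bbar + Vbar
    G = Ubar @ ED
    c = np.concatenate([np.kron(np.ones(N), b), z]) - Ubar @ Abar @ x0
    P = Cbar @ ED + Hbar
    return Abar, Bbar, ED, Cbar, Hbar, F, G, c, P
```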

B Proof of Theorem 2

First, we prove that (Y^T_max)^i = Y^T_max for every i. Suppose that there exists an index i such that Y^T_max(i, i) is 0. Then, according to the design procedure for Y^T_max, there would be a k such that T(i, k) = 0 and T(i, k) = 1, which is absurd. Hence, I_{m(N+1)} ≤ Y^T_max, which implies that Y^T_max ≤ (Y^T_max)^i for every i. Since Y^T_max T ≤ T, then by Lemma 2 we have (Y^T_max)^i T ≤ T, which by the maximality property (14) implies (Y^T_max)^i ≤ Y^T_max for every i. In summary, (Y^T_max)^i = Y^T_max for every i.

Now take Q ∈ R_CB(T, Y^T_max) and L = h(−Q, CB). By Theorem 1 (see proof of Theorem 1), L ∈ Sparse(T). Using the fact that (Y^T_max)^i = Y^T_max, the expression of L as in (11) and Lemma 1, we obtain that

LCB = Σ_{i=1}^{N-1} (QCB)^i ∈ Sparse(Y^T_max) ,

hence L ∈ R_CB(T, Y^T_max). Since Q was a generic element, this proves QI.

C Proof of Proposition 1

Suppose that Y^T_max is computed as a function of T as in the proposition. Take any indices i, k such that T(i, k) = 0. Then, Y^T_max(i, j) = 0 for every j ≠ i such that T(j, k) = 1 by construction. This implies that Y^T_max T(i, k) = ∨_{j=1}^{m(N+1)} Y^T_max(i, j) T(j, k) = 0, where “∨” indicates the logical “or” operator. Hence, Y^T_max T ≤ T. For the second condition of (14) we reason by contrapositive. Suppose that Y ≰ Y^T_max. Then, there are indices i, j such that Y(i, j) = 1 and Y^T_max(i, j) = 0. By construction, the fact that Y^T_max(i, j) = 0 implies that there exists an index k such that T(j, k) = 1 and T(i, k) = 0. As a consequence, YT(i, k) = ∨_{r=1}^{m(N+1)} Y(i, r) T(r, k) = 1 because Y(i, j) T(j, k) = 1. This implies that YT ≰ T because T(i, k) = 0.

D Proof of Proposition 2
From Lemma 2 in [12], h(R, CB) = R if and only if R is QI with respect to CB. Then, if R is
QI with respect to CB and S ⊆ R, the solution space of Problem 2 contains the solution space
of Problem 1. Hence, Problem 2 is a relaxation of Problem 1. If S is QI with respect to CB and
R = S, then h(R, CB) = S and Problem 2 is equivalent to Problem 1. If Problem 2 is equivalent
to Problem 1, then R is such that h(R, CB) = S. By Lemma 1 in [12], h(R, CB) is convex if and
only if it is equal to R. Hence, R = S and h(S, CB) = S, which implies that S is QI with respect
to CB. 

Acknowledgment
We thank Nikolai Matni for informative discussion.

References
[1] H. S. Witsenhausen, “A counterexample in stochastic optimum control,” SIAM Journal on
Control, vol. 6, no. 1, pp. 131–147, 1968.

[2] V. D. Blondel and J. N. Tsitsiklis, “A survey of computational complexity results in systems


and control,” Automatica, vol. 36, no. 9, pp. 1249–1274, 2000.

[3] C. H. Papadimitriou and J. Tsitsiklis, “Intractable problems in control theory,” SIAM journal
on control and optimization, vol. 24, no. 4, pp. 639–654, 1986.

[4] Y.-C. Ho and K.-C. Chu, “Team decision theory and information structures in optimal
control problems—Part I,” IEEE Transactions on Automatic Control, vol. 17, no. 1, pp. 15–22,
1972.

[5] P. G. Voulgaris, “A convex characterization of classes of problems in control with specific in-
teraction and communication structures,” in American Control Conference, 2001. Proceedings
of the 2001, vol. 4. IEEE, 2001, pp. 3128–3133.

[6] B. Bamieh and P. G. Voulgaris, “A convex characterization of distributed control problems


in spatially invariant systems with communication constraints,” Systems & Control Letters,
vol. 54, no. 6, pp. 575–583, 2005.

[7] M. C. Rotkowitz, Tractable problems in optimal decentralized control. Stanford University,


2005.

[8] M. Rotkowitz and S. Lall, “A characterization of convex problems in decentralized control,”


IEEE Transactions on Automatic Control, vol. 51, no. 2, pp. 274–286, 2006.

[9] L. Furieri and M. Kamgarpour, “Robust control of constrained systems given an information
structure,” in Decision and Control Conference (CDC), 2017 56th IEEE Conference on. IEEE,
2017.

[10] W. Lin and E. Bitar, “Performance bounds for robust decentralized control,” in American
Control Conference (ACC), 2016. IEEE, 2016, pp. 4323–4330.

[11] ——, “A convex information relaxation for constrained decentralized control design problems,”
arXiv preprint arXiv:1708.03991, 2017.

[12] L. Furieri and M. Kamgarpour, “The value of communication in synthesizing controllers given
an information structure,” arXiv preprint arXiv:1711.05324, 2017.

[13] A. Rantzer, “Scalable control of positive systems,” European Journal of Control, vol. 24, pp.
72–80, 2015.

[14] M. Colombino and R. S. Smith, “A convex characterization of robust stability for positive and
positively dominated linear systems,” IEEE transactions on automatic control, vol. 61, no. 7,
pp. 1965–1971, 2016.

[15] G. Darivianakis, A. Georghiou, R. S. Smith, and J. Lygeros, “A stochastic optimization ap-


proach to cooperative building energy management via an energy hub,” in Decision and Control
(CDC), 2015 IEEE 54th Annual Conference on. IEEE, 2015, pp. 7814–7819.

[16] G. Darivianakis, A. Georghiou, A. Eichler, R. S. Smith, and J. Lygeros, “Scalability through


decentralization: A robust control approach for the energy management of a building commu-
nity,” IFAC-PapersOnLine, vol. 50, no. 1, pp. 14 314–14 319, 2017.

[17] S. Fattahi and J. Lavaei, “Theoretical guarantees for the design of near globally optimal
static distributed controllers,” in Communication, Control, and Computing, 2016 54th Annual
Allerton Conference on. IEEE, 2016, pp. 582–589.

[18] ——, “On the convexity of optimal decentralized control problem and sparsity path,” in Amer-
ican Control Conference (ACC), 2017. IEEE, 2017, pp. 3359–3366.

[19] G. Fazelnia, R. Madani, A. Kalbat, and J. Lavaei, “Convex relaxation for optimal distributed
control problems,” IEEE Transactions on Automatic Control, vol. 62, no. 1, pp. 206–221, 2017.

[20] R. Arastoo, N. Motee, and M. V. Kothare, “Optimal sparse output feedback control design: a
rank constrained optimization approach,” arXiv preprint arXiv:1412.8236, 2014.

[21] M. C. Rotkowitz and N. C. Martins, “On the nearest quadratically invariant information
constraint,” IEEE Transactions on Automatic Control, vol. 57, no. 5, pp. 1314–1319, 2012.

[22] N. Matni and J. C. Doyle, “A heuristic for sub-optimal H2 decentralized control subject to
delay in non-quadratically-invariant systems,” in American Control Conference (ACC), 2013.
IEEE, 2013, pp. 5803–5808.

[23] P. J. Goulart, E. C. Kerrigan, and J. M. Maciejowski, “Optimization over state feedback


policies for robust control with constraints,” Automatica, vol. 42, no. 4, pp. 523–533, 2006.

[24] J. Gonçalves, R. Howes, and S. Warnick, “Dynamical structure functions for the reverse engi-
neering of LTI networks,” in Decision and Control, 2007 46th IEEE Conference on. IEEE,
2007, pp. 1516–1522.

[25] E. Yeung, J. Goncalves, H. Sandberg, and S. Warnick, “Network structure preserving model
reduction with weak a priori structural information,” in CDC/CCC 2009. Proceedings of the
48th IEEE Conference on. IEEE, 2009, pp. 3256–3263.

[26] E. Yeung, J. Gonçalves, H. Sandberg, and S. Warnick, “Representing structure in linear inter-
connected dynamical systems,” in Decision and Control (CDC), 2010 49th IEEE Conference
on. IEEE, 2010, pp. 6010–6015.

[27] A. Rai and S. Warnick, “A technique for designing stabilizing distributed controllers with
arbitrary signal structure constraints,” in Control Conference (ECC), 2013 European. IEEE,
2013, pp. 3282–3287.

[28] Y.-S. Wang, N. Matni, S. You, and J. C. Doyle, “Localized distributed state feedback control
with communication delays,” in American Control Conference (ACC), 2014. IEEE, 2014, pp.
5748–5755.

[29] Y. Wang, N. Matni, and J. Doyle, “System level parameterizations, constraints and synthesis,”
in American Control Conference (ACC), 2017. IEEE, 2017, pp. 1308–1315.

[30] Y.-S. Wang, N. Matni, and J. C. Doyle, “Separable and localized system level synthesis for
large-scale systems,” arXiv preprint arXiv:1701.05880, 2017.

[31] Ş. Sabău, C. Oară, S. Warnick, and A. Jadbabaie, “Optimal distributed control for platooning
via sparse coprime factorizations,” IEEE Transactions on Automatic Control, vol. 62, no. 1,
pp. 305–320, 2017.

[32] Y. Zheng, S. E. Li, K. Li, F. Borrelli, and J. K. Hedrick, “Distributed model predictive control
for heterogeneous vehicle platoons under unidirectional topologies,” IEEE Transactions on
Control Systems Technology, vol. 25, no. 3, pp. 899–910, 2017.

[33] S. Sadraddini and C. Belta, “Distributed robust set-invariance for interconnected linear sys-
tems,” arXiv preprint arXiv:1709.10036, 2017.

[34] Gurobi Optimization, Inc., “Gurobi optimizer reference manual,” 2016. [Online]. Available:
http://www.gurobi.com

[35] MATLAB, version 9.1.0 (R2016b). Natick, Massachusetts: The MathWorks Inc., 2016.

[36] J. Löfberg, “YALMIP: A toolbox for modeling and optimization in MATLAB,” in Proceedings
of the CACSD Conference, Taipei, Taiwan, 2004.

