
Tutorial on Dynamic Average Consensus
THE PROBLEM, ITS APPLICATIONS, AND THE ALGORITHMS

SOLMAZ S. KIA, BRYAN VAN SCOY, JORGE CORTÉS, RANDY A. FREEMAN, KEVIN M. LYNCH, and SONIA MARTÍNEZ

Technological advances in ad hoc networking and the availability of low-cost reliable computing, data storage, and sensing devices have made scenarios possible where the coordination of many subsystems extends the range of human capabilities. Smart grid operations, smart transportation, smart health care, and sensing networks for environmental monitoring and exploration in hazardous situations are just a few examples of such network operations. In these applications, the ability of a network system to (in a decentralized fashion) fuse information, compute common estimates of unknown quantities, and agree on a common view of the world is critical. These problems can be formulated as agreement problems on linear combinations of dynamically changing reference signals or local parameters. This dynamic agreement problem corresponds to dynamic average consensus, which, as discussed in "Summary," is the problem of interest of this article. The dynamic average consensus problem is for a group of agents to cooperate to track the average of locally available time-varying reference signals, where each agent is capable only of local computations and communicating with local neighbors (see Figure 1).

Digital Object Identifier 10.1109/MCS.2019.2900783
Date of publication: 17 May 2019



Summary
This article addresses the dynamic average consensus problem and the distributed coordination algorithms available to solve it. Such a problem arises in scenarios with multiple agents, where each one has access to a time-varying signal of interest (for example, a robot sensor sampling the position of a mobile target of interest or a distributed energy resource taking a sequence of frequency measurements in a microgrid). The dynamic average consensus problem consists of having the multiagent network collectively compute the average of the set of time-varying signals. Reasons for pursuing this objective are numerous and include data fusion, refinement of uncertainty guarantees, and computation of higher-accuracy estimates, all enabling local decision making with network-wide aggregated information. Solving this problem is challenging because the local interactions among agents involve only partial information, and the quantity that the network seeks to compute is changing as the agents run their routines. The article provides a tutorial introduction to distributed methods that solve the dynamic average consensus problem, paying special attention to the role of network connectivity and incorporating information about the nature of the time-varying signals, the performance tradeoffs regarding convergence rate, steady-state error, and memory and communication requirements, and algorithm robustness against initialization errors.

FIGURE 1 A group of communicating agents, each endowed with a time-varying reference signal $u_i(t)$.

CENTRALIZED SOLUTIONS HAVE DRAWBACKS
The difficulty of the dynamic average consensus problem is that the information is distributed across the network. A straightforward solution, termed centralized, to the dynamic average consensus problem is to gather all of the information in a single place, perform the computation (in other words, calculate the average), and then send the solution back through the network to each agent. Although simple, the centralized approach has numerous drawbacks: 1) the algorithm is not robust to failures of the centralized agent (if the centralized agent fails, then the entire computation fails), 2) the method is not scalable because the amount of communication and memory required on each agent scales with the size of the network, 3) each agent must have a unique identifier (so that the centralized agent counts each value only once), 4) the calculated average is delayed by an amount that grows with the size of the network, and 5) the reference signals from each agent are exposed over the entire network (which is unacceptable in applications involving sensitive data). The centralized solution is fragile due to the existence of a single failure point in the network. This can be overcome by having every agent act as the centralized agent. In this approach, referred to as flooding, agents transmit the values of the reference signals across the entire network until each agent knows each reference signal. This may be summarized as first do all communications and then do all computations. While flooding fixes the issue of robustness to agent failures, it is still subject to many of the drawbacks of the centralized solution. Although this approach works reasonably well for small-size networks, its communication and storage costs scale poorly in terms of the network size and may incur, depending on how it is implemented, costs of order $O(N^2)$ per agent (for instance, this is the case if each agent maintains which neighbors it has or has not sent each piece of information to). This motivates the interest in developing distributed solutions for the dynamic average consensus problem that involve only local interactions and decisions among the agents.

CHALLENGES WITH DYNAMIC PROBLEMS
The static version of the dynamic average consensus problem (commonly referred to as static average consensus) is the familiar problem in which agents seek to agree on a specific combination of fixed quantities. The static problem has been extensively studied in the literature [1]–[4], and several simple and efficient distributed algorithms exist with exact convergence guarantees. Given its mature literature, a natural approach to address the distributed solution of the dynamic average consensus problem in some literature has been to zero-order sample the reference signals and use a static average consensus algorithm between sampling times (for example, see [5] and [6]). If this were a practical approach, it would mean that there is no need to worry about designing specific algorithms to solve the dynamic average consensus problem because we could rely on the algorithmic solutions available for static average consensus.
However, this approach does not work because it would essentially need a static average consensus algorithm that is able to converge infinitely fast. In practice, some time is required for information to flow across the network, and hence the result of the repeated application of any static average consensus algorithm operates with some error whose size depends on its speed of convergence and how fast the inputs change. To illustrate this point better, we have the following numerical example.
Consider a process described by a fixed value plus a sine wave whose frequency and phase are changing randomly over time. A group of six agents with the communication topology of a directed ring monitors this process by taking synchronous samples, each according to

$$u_i(m) = a_i\bigl(2 + \sin(\omega(m)\,t(m) + \phi(m))\bigr) + b_i, \qquad m \in \mathbb{Z}_{\geq 0},$$

where $a_i$ and $b_i$ are fixed unknown error sources in the measurement of agent $i \in \{1, \dots, 6\}$. To reduce the effect of measurement errors, after each sampling, every agent wants the average of the measurements across the network before the next sampling time. For the numerical simulations, the values $\omega \sim \mathcal{N}(0, 0.25)$ and $\phi \sim \mathcal{N}(0, (\pi/2)^2)$, with $\mathcal{N}(\mu, p)$ indicating a Gaussian distribution with mean $\mu$ and variance $p$, are used. The sampling rate is set to 0.5 Hz ($\Delta t = 2$ s). For the simulation under study, $a_1 = 1.1$, $a_2 = 1$, $a_3 = 0.9$, $a_4 = 1.05$, $a_5 = 0.96$, $a_6 = 1$, $b_1 = -0.55$, $b_2 = 1$, $b_3 = 0.6$, $b_4 = -0.9$, $b_5 = -0.6$, and $b_6 = 0.4$. To obtain the average, the following two approaches are used: 1) at every sampling time $m$, each agent initializes the standard static discrete-time Laplacian average consensus algorithm

$$x_i(k+1) = x_i(k) - \delta\sum_{j=1}^{N} a_{ij}\bigl(x_i(k) - x_j(k)\bigr), \qquad i \in \{1, \dots, N\},$$

by the current sampled reference values $x_i(0) = u_i(m)$ and implements it with an admissible time step $\delta$ until just before the next sampling time $m+1$; 2) at time $m = 0$, agents start executing a dynamic average consensus algorithm [more specifically, strategy (S15), which is described in detail later]. Between sampling times $m$ and $m+1$, the reference input $u_i(k)$ implemented in the algorithm is fixed at $u_i(m)$, where $k$ is the communication time index. Figure 2 compares the tracking performance of these two approaches. It is observed that the dynamic average consensus algorithm, by keeping a memory of past actions, produces a better tracking response than the static algorithm initialized at each sampling time with the current values. This comparison serves as motivation for the need to specifically design distributed algorithms that take into account the particular features of the dynamic average consensus problem.

FIGURE 2 A comparison of performance between a static average consensus algorithm reinitialized at each sampling time versus a dynamic average consensus algorithm. The solid red curves (respectively, blue curves) represent the time history of the agreement state of each agent generated by the Laplacian static average consensus approach [respectively, the dynamic average consensus of (S15)]; the markers indicate the sampling points at $m\Delta t$, the average at $m\Delta t$, and the average of the reference signals at $k\delta$. The dynamic consensus algorithm very closely tracks the average over time, whereas the static consensus does not have enough time between sampling times to converge. This trend is preserved even if the frequency of the communication between the agents increases. In these simulations, a = b = 1 in (S15). (a) Static algorithm; three communications in $t \in [m, m+1]$. (b) Static algorithm; 20 communications in $t \in [m, m+1]$. (c) Dynamic algorithm; three communications in $t \in [m, m+1]$. (d) Dynamic algorithm; 20 communications in $t \in [m, m+1]$.
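To make the comparison concrete, the following is a minimal simulation sketch of the two approaches on a directed ring of six agents. It is not the article's code; in particular, strategy (S15) is not reproduced in this excerpt, so the "dynamic" estimator below is a simple input-driven rule (a discrete-time analog of algorithm (11) discussed later) used as an illustrative stand-in. Graph weights and step size are assumptions of the sketch.

```python
import numpy as np

# Sketch: directed ring of N = 6 agents, unit edge weights (weight balanced).
N, steps_per_sample, n_samples, delta = 6, 20, 10, 0.2
rng = np.random.default_rng(0)

A = np.zeros((N, N))
for i in range(N):                       # agent i receives information from agent i+1
    A[i, (i + 1) % N] = 1.0
L = np.diag(A.sum(axis=1)) - A           # out-Laplacian of the ring

a = np.array([1.1, 1.0, 0.9, 1.05, 0.96, 1.0])    # gain errors (from the text)
b = np.array([-0.55, 1.0, 0.6, -0.9, -0.6, 0.4])  # offset errors (from the text)

def sample(m):
    """One synchronous measurement of the process at sampling instant m."""
    omega, phi = rng.normal(0, 0.5), rng.normal(0, np.pi / 2)
    return a * (2 + np.sin(omega * 2.0 * m + phi)) + b

u_prev = sample(0)
x_static = u_prev.copy()     # reinitialized from scratch at every sampling time
x_dynamic = u_prev.copy()    # keeps memory of its past state across sampling times
for m in range(1, n_samples):
    u = sample(m)
    x_static = u.copy()                      # approach 1: restart static consensus
    x_dynamic = x_dynamic + (u - u_prev)     # approach 2: inject only the input change
    for _ in range(steps_per_sample):
        x_static -= delta * (L @ x_static)
        x_dynamic -= delta * (L @ x_dynamic)
    err_s = np.abs(x_static - u.mean()).max()
    err_d = np.abs(x_dynamic - u.mean()).max()
    print(f"m={m}: static err {err_s:.3f}, dynamic err {err_d:.3f}")
    u_prev = u
```

Because the dynamic estimator only has to average the change in the measurements at each sampling time (the large agent-to-agent offsets $b_i$ were already averaged earlier), its error between sampling times is typically much smaller than that of the restarted static scheme, mirroring the behavior in Figure 2.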



OBJECTIVES AND ARTICLE ROAD MAP
The objective of this article is to provide an overview of the dynamic average consensus problem that serves as a comprehensive introduction to the problem definition, its applications, and the distributed methods available to solve it. This article was motivated by the fact that, in the literature, many works exist that have dealt with the problem. However, there is not a tutorial reference that presents, in a unified way, the developments that have occurred over the years. "Summary" encapsulates the contents of the article, emphasizing the value and utility of its algorithms and results. The primary intention, rather than providing a full account of all of the available literature, is to introduce, in a tutorial fashion, the main ideas behind dynamic average consensus algorithms, the performance tradeoffs considered in their design, and the requirements needed for their analysis and convergence guarantees.
The article first introduces the problem definition and a set of desired properties expected from a dynamic average consensus algorithm. Next, various applications of dynamic average consensus in network systems are presented, including distributed formation, distributed state estimation, and distributed optimization problems. It is not surprising that the initial synthesis of dynamic average consensus algorithms emerged from a careful look at static average consensus algorithms. The section "A Look at Static Average Consensus Leading up to the Design of a Dynamic Average Consensus Algorithm" provides a brief review of standard algorithms for static average consensus and then builds on this discussion to describe the first dynamic average consensus algorithm. Various features of these initial algorithms are elaborated on, and their shortcomings are identified. This sets the stage in the section "Continuous-Time Dynamic Average Consensus Algorithms" to introduce various algorithms that address these shortcomings. The design of continuous-time algorithms for network systems is often motivated by the conceptual ease for design and analysis, rooted in the relatively mature theoretical basis for the control of continuous-time systems. However, the implementation of these continuous-time algorithms on cyberphysical systems may not be feasible due to practical constraints, such as limited interagent communication bandwidth. This motivates the section "Discrete-Time Dynamic Average Consensus Algorithms," which specifically discusses methods to accelerate the convergence rate and enhance the robustness of the proposed algorithms. Because the information of each agent takes some time to propagate through the network, it is expected that tracking an arbitrarily fast average signal with zero error is not feasible unless agents have some a priori information about the dynamics generating the signals. This topic is addressed in the "Perfect Tracking Using A Priori Knowledge of the Input Signals" section, which takes advantage of knowledge of the nature of the reference signals. Many other topics exist that are related to the dynamic average consensus problem but are not explored in this article. Several intriguing pointers for such topics are in "Further Reading." Throughout the article, unless otherwise noted, network systems are considered whose communication topology is described by strongly connected and weight-balanced directed graphs. In only a few specific cases, the discussion focuses on the setup of undirected graphs, and these cases are explicitly mentioned.

REQUIRED MATHEMATICAL BACKGROUND AND AVAILABLE RESOURCES FOR IMPLEMENTATION
Graph theory plays an essential role in the design and performance analysis of dynamic consensus algorithms. "Basic Notions From Graph Theory" provides a brief overview of the relevant graph-theoretic concepts, definitions, and notation used in this article. Dynamic average consensus algorithms are linear time-invariant (LTI) systems in which the reference signals of the agents enter the system as an external input, in contrast to the (Laplacian) static average consensus algorithm, where the reference signals enter as initial conditions. Thus, in addition to the internal stability analysis (which is sufficient for the static average consensus algorithm), the input-to-state stability (ISS) of the algorithms must be assessed. A brief overview of the ISS analysis of LTI systems is provided in "Input-to-State Stability of Linear Time-Invariant Systems." All of the algorithms described can be implemented using such modern computing languages as C and Matlab. Matlab provides functions for the simple construction, modification, and visualization of graphs.

DYNAMIC AVERAGE CONSENSUS: PROBLEM FORMULATION
Consider a group of $N$ agents where each agent is capable of 1) sending and receiving information with other agents, 2) storing information, and 3) performing local computations. For example, the agents may be cooperating robots or sensors in a wireless sensor network. The communication topology among the agents is described by a fixed digraph (see "Basic Notions From Graph Theory"). Suppose that each agent has a local scalar reference signal, denoted $u_i(t): [0, \infty) \to \mathbb{R}$ in continuous time and $u_i(k): \mathbb{N} \to \mathbb{R}$ in discrete time. This signal may be the output of a sensor located on the agent, or it could be the output of another algorithm that the agent is running. The dynamic average consensus problem then consists of designing an algorithm that allows individual agents to track the time-varying average of the reference signals, given by

$$\text{continuous time: } u_{\rm avg}(t) := \frac{1}{N}\sum_{i=1}^{N} u_i(t), \qquad \text{discrete time: } u_{\rm avg}(k) := \frac{1}{N}\sum_{i=1}^{N} u_i(k).$$
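For concreteness, a minimal snippet for generating a set of time-varying reference signals and the average that every agent is asked to track might look as follows; the particular signals are illustrative choices, not taken from the article.

```python
import numpy as np

N = 6
t = np.linspace(0.0, 20.0, 401)              # simulation horizon

# Illustrative reference signals: a common component plus agent-specific terms.
phase = 2 * np.pi * np.arange(N) / N
u = 2.0 + np.sin(t[None, :] + phase[:, None]) + 0.1 * np.arange(N)[:, None]

u_avg = u.mean(axis=0)                       # u_avg(t) = (1/N) * sum_i u_i(t)
print(u.shape, u_avg.shape)                  # (6, 401) (401,)
```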



For discrete-time signals and algorithms, for any variable $p$ sampled at time $t_k$, the shorthand notation $p(k)$ or $p_k$ is used to refer to $p(t_k)$. For reasons specified later, the design of distributed algorithms is of specific interest, meaning that, to obtain the average, the policy that each agent implements depends only on its own variables (represented by $\mathcal{J}_i$, which includes its own reference signal) and those of its out-neighbors (represented by $\{\mathcal{I}_j\}_{j \in \mathcal{N}_i^{\rm out}}$).
In continuous time, a driving command $c_i(\mathcal{J}_i(t), \{\mathcal{I}_j(t)\}_{j \in \mathcal{N}_i^{\rm out}}) \in \mathbb{R}$ is sought for each agent $i \in \{1, \dots, N\}$ such that (with an appropriate initialization) a local state $x_i(t)$ (which is referred to as the agreement state of agent $i$) converges to the average $u_{\rm avg}(t)$ asymptotically. Formally, for

$$\text{continuous time: } \dot{x}_i = c_i\bigl(\mathcal{J}_i(t), \{\mathcal{I}_j(t)\}_{j \in \mathcal{N}_i^{\rm out}}\bigr), \qquad i \in \{1, \dots, N\}, \qquad (1)$$

with proper initialization if necessary, $x_i(t) \to u_{\rm avg}(t)$ as $t \to \infty$.

Further Reading
Numerous works have studied the robustness of dynamic average consensus algorithms against a variety of disturbances and sources of error present in practical scenarios. These include fixed communication delays [S1], additive input disturbances [S2], time-varying communication graphs [S3], and driving command saturation [19]. Variations of the dynamic average consensus problem explore scenarios where the algorithm design depends on the specific agent dynamics [S4], [S5], [71] or incorporates different agent roles, such as in leader–follower networks of mobile agents [15], [S6], [S7].
When dealing with directed agent interactions, a common assumption in solving the average consensus problem is that the communication graph is weight balanced, which is equivalent to the graph consensus matrix $W := I - L$ being doubly stochastic. In [S8], it is shown that calculating an average over a network requires either explicit or implicit use of either 1) the out-degree of each agent, 2) global node identifiers, 3) randomization, or 4) asynchronous updates with specific properties. In particular, the balanced assumption is necessary for scalable, deterministic, synchronous algorithms. In general, agents may not have access to their out-degree (for example, agents that use local broadcast communication). If each agent knows its out-degree, however, then distributed algorithms may be used to generate weight-balanced and doubly stochastic digraphs [S9], [S10].
Another approach is to explicitly use the out-degree in the algorithm by having agents share their out-weights and use them to adjust for the imbalances in the graph. This approach is referred to as the push-sum protocol and has been applied to the static average consensus problem (see [S11]–[S14]). Both of these approaches of dealing with unbalanced graphs require each agent to know its out-degree. Furthermore, when communication links are time varying, these approaches work only if the time-varying graph remains weight balanced (see [19] and [S15]). If communication failures caused by limited communication ranges or external events, such as obstacle blocking, destroy the weight-balanced character of the graph, then it is still possible to solve the dynamic average consensus problem if the expected graph is balanced [S3]. Another set of works has explored the question of how to optimize the graph topology to endow consensus algorithms with better properties. These include designing the network topology in the presence of random link failures [S16] and optimizing the edge weights for fast consensus [S17], [7].

REFERENCES
[S1] H. Moradian and S. S. Kia, "On robustness analysis of a dynamic average consensus algorithm to communication delay," IEEE Trans. Control Netw. Syst., Aug. 6, 2018. doi: 10.1109/TCNS.2018.2863568.
[S2] G. Shi and K. H. Johansson, "Robust consensus for continuous-time multiagent dynamics," SIAM J. Control Optim., vol. 51, no. 5, pp. 3673–3691, 2013.
[S3] B. Van Scoy, R. A. Freeman, and K. M. Lynch, "Asymptotic mean ergodicity of average consensus estimators," in Proc. American Control Conf., 2014, pp. 4696–4701.
[S4] F. Chen, G. Feng, L. Liu, and W. Ren, "Distributed average tracking of networked Euler–Lagrange systems," IEEE Trans. Autom. Control, vol. 60, no. 2, pp. 547–552, 2015.
[S5] S. Ghapania, W. Ren, F. Chen, and Y. Song, "Distributed average tracking for double-integrator multi-agent systems with reduced requirement on velocity measurements," Automatica, vol. 81, no. 7, pp. 1–7, 2017.
[S6] G. Shi, Y. Hong, and K. H. Johansson, "Connectivity and set tracking of multi-agent systems guided by multiple moving leaders," IEEE Trans. Autom. Control, vol. 57, no. 3, pp. 663–676, 2012.
[S7] Z. Meng, D. V. Dimarogonas, and K. H. Johansson, "Leader-follower coordinated tracking of multiple heterogeneous Lagrange systems using continuous control," IEEE Trans. Robot., vol. 30, no. 3, pp. 739–745, 2014.
[S8] J. M. Hendrickx and J. N. Tsitsiklis, "Fundamental limitations for anonymous distributed systems with broadcast communications," in Proc. Allerton Conf. Communication, Control, and Computing, 2015, pp. 9–16.
[S9] B. Gharesifard and J. Cortés, "Distributed strategies for generating weight-balanced and doubly stochastic digraphs," Eur. J. Control, vol. 18, no. 6, pp. 539–557, 2012.
[S10] A. Rikos, T. Charalambous, and C. N. Hadjicostis, "Distributed weight balancing over digraphs," IEEE Trans. Control Netw. Syst., vol. 1, no. 2, pp. 190–201, 2014.
[S11] F. Bénézit, V. Blondel, P. Thiran, J. Tsitsiklis, and M. Vetterli, "Weighted gossip: Distributed averaging using non-doubly stochastic matrices," in Proc. IEEE Int. Symp. Information Theory, 2010, pp. 1753–1757.
[S12] A. D. Domínguez-García and C. N. Hadjicostis, "Distributed matrix scaling and application to average consensus in directed graphs," IEEE Trans. Autom. Control, vol. 58, no. 3, pp. 667–681, 2013.
[S13] A. Nedic and A. Olshevsky, "Distributed optimization over time-varying directed graphs," IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 601–615, 2015.
[S14] P. Rezaienia, B. Gharesifard, T. Linder, and B. Touri, "Push-sum on random graphs," 2017. [Online]. Available: arXiv:1708.00915
[S15] S. S. Kia, J. Cortés, and S. Martínez, "Distributed event-triggered communication for dynamic average consensus in networked systems," Automatica, vol. 59, pp. 112–119, Sept. 2015.
[S16] S. Kar and J. M. F. Moura, "Sensor networks with random links: Topology design for distributed consensus," IEEE Trans. Signal Process., vol. 56, no. 7, pp. 3315–3326, 2008.
[S17] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," in Proc. IEEE Conf. Decision and Control, 2003, pp. 4997–5002.
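As a purely illustrative reading of the information structure in (1), the sketch below wraps a driving command in a minimal agent object: each agent keeps its local information $\mathcal{J}_i$, receives the broadcast information $\mathcal{I}_j$ of its out-neighbors, and applies a user-supplied update rule $c_i$. The class and method names (Agent, step, broadcast) are assumptions of this sketch, not from the article, and the example rule is a simple stand-in for the algorithms developed later.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Agent:
    index: int
    out_neighbors: List[int]            # indices j in N_i^out
    x: float                            # agreement state x_i
    driving_command: Callable[[float, float, Dict[int, float]], float]

    def broadcast(self) -> float:
        # I_i: the quantity made available to agents that list i as an out-neighbor.
        return self.x

    def step(self, u_i: float, received: Dict[int, float]) -> float:
        """One local update of the agreement state from J_i and {I_j}."""
        self.x = self.driving_command(self.x, u_i, received)
        return self.x

# Example driving command: Laplacian-type correction plus the reference increment
# (an illustrative stand-in, not one of the article's named algorithms).
def make_command(delta: float):
    last_u = {"value": None}
    def c_i(x_i, u_i, received):
        du = 0.0 if last_u["value"] is None else u_i - last_u["value"]
        last_u["value"] = u_i
        return x_i - delta * sum(x_i - x_j for x_j in received.values()) + du
    return c_i
```

A network simulation would then repeatedly collect each agent's broadcast() value and call step() on every agent, mirroring the three-step structure of Algorithm 1 given later in the article.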



The driving command $c_i$ can be a memoryless function or an output of a local internal dynamics. Note that, by using the out-neighbors, the convention is made that information flows in the direction opposite to that specified by a directed edge (there is no loss of generality in doing so, and the alternative convention of using in-neighbors instead would be equally valid).
Dynamic average consensus can also be accomplished using discrete-time dynamics, especially when the time-varying inputs are sampled at discrete times. In such a case, a driving command is sought for each agent $i \in \{1, \dots, N\}$ so that

$$\text{discrete time: } x_i(t_{k+1}) = c_i\bigl(\mathcal{J}_i(t_k), \{\mathcal{I}_j(t_k)\}_{j \in \mathcal{N}_i^{\rm out}}\bigr), \qquad i \in \{1, \dots, N\}, \qquad (2)$$

under proper initialization if necessary, accomplishes $x_i(t_k) \to u_{\rm avg}(k)$ as $t_k \to \infty$. Algorithm 1 illustrates how a discrete-time dynamic average consensus algorithm can be executed over a network of $N$ communicating agents.
Also, consider a third class of dynamic average consensus algorithms in which the dynamics at the agent level are in continuous time, but the communication among the agents, because of the restrictions of the wireless communication devices, takes place in discrete time:

$$\text{continuous time–discrete time: } \dot{x}_i(t) = c_i\bigl(\mathcal{J}_i(t), \{\mathcal{I}_j(t_{k_j}^{\,j})\}_{j \in \mathcal{N}_i^{\rm out}}\bigr), \qquad i \in \{1, \dots, N\}, \qquad (3)$$

such that $x_i(t) \to u_{\rm avg}(t)$ as $t \to \infty$. Here, $t_{k_j}^{\,j} \in \mathbb{R}$ is the $k_j$th transmission time of agent $j$, which is not necessarily synchronous with the transmission times of other agents in the network.

Basic Notions From Graph Theory
The communication network of a multiagent cooperative system can be modeled by a directed graph, or digraph. Here, we briefly review some basic concepts from graph theory following [S18]. A digraph is a pair $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \dots, N\}$ is the node set and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the edge set. An edge from $i$ to $j$, denoted by $(i, j)$, means that agent $j$ can send information to agent $i$. For an edge $(i, j) \in \mathcal{E}$, $i$ is called an in-neighbor of $j$, and $j$ is called an out-neighbor of $i$. We denote the set of out-neighbors of each agent $i$ by $\mathcal{N}_i^{\rm out}$. A graph is undirected if $(i, j) \in \mathcal{E}$ any time $(j, i) \in \mathcal{E}$ (see Figure S1).
A weighted digraph is a triplet $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $(\mathcal{V}, \mathcal{E})$ is a digraph and $A \in \mathbb{R}^{N \times N}$ is a weighted adjacency matrix with the property that $a_{ij} > 0$ if $(i, j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. A weighted digraph is undirected if $a_{ij} = a_{ji}$ for all $i, j \in \mathcal{V}$. The weighted out-degree and weighted in-degree of a node $i$ are, respectively, $d^{\rm out}(i) = \sum_{j=1}^{N} a_{ij}$ and $d^{\rm in}(i) = \sum_{j=1}^{N} a_{ji}$. Let $d^{\rm out}_{\max} = \max_{i \in \{1,\dots,N\}} d^{\rm out}(i)$ denote the maximum weighted out-degree. A digraph is weight balanced if, at each node $i \in \mathcal{V}$, the weighted out-degree and weighted in-degree coincide (although they might be different across different nodes). The out-degree matrix $D^{\rm out}$ is the diagonal matrix with entries $D^{\rm out}_{ii} = d^{\rm out}(i)$, for all $i \in \mathcal{V}$. The (out-)Laplacian matrix is $L = D^{\rm out} - A$. Note that $L\mathbf{1}_N = 0$. A weighted digraph $\mathcal{G}$ is weight balanced if and only if $\mathbf{1}_N^{\top}L = 0$. Based on the structure of $L$, at least one of the eigenvalues of $L$ is zero and the rest of them have nonnegative real parts. Denote the eigenvalues of $L$ by $\lambda_i$, $i \in \{1, \dots, N\}$, where $\lambda_1 = 0$ and $\operatorname{Re}(\lambda_i) \leq \operatorname{Re}(\lambda_j)$ for $i < j$. For strongly connected digraphs, $\operatorname{rank}(L) = N - 1$. For strongly connected and weight-balanced digraphs, denote the eigenvalues of $\operatorname{Sym}(L) = (L + L^{\top})/2$ by $\hat{\lambda}_1, \dots, \hat{\lambda}_N$, where $\hat{\lambda}_1 = 0$ and $\hat{\lambda}_i \leq \hat{\lambda}_j$ for $i < j$. For strongly connected and weight-balanced digraphs,

$$0 < \hat{\lambda}_2 I \leq R^{\top}\operatorname{Sym}(L)\,R \leq \hat{\lambda}_N I, \qquad (S1)$$

where $R \in \mathbb{R}^{N \times (N-1)}$ satisfies $\bigl[\tfrac{1}{\sqrt{N}}\mathbf{1}_N \;\; R\bigr]\bigl[\tfrac{1}{\sqrt{N}}\mathbf{1}_N \;\; R\bigr]^{\top} = \bigl[\tfrac{1}{\sqrt{N}}\mathbf{1}_N \;\; R\bigr]^{\top}\bigl[\tfrac{1}{\sqrt{N}}\mathbf{1}_N \;\; R\bigr] = I_N$. Note that for connected undirected graphs, $\operatorname{Sym}(L) = L$, and consequently $\lambda_i = \hat{\lambda}_i$ for all $i \in \mathcal{V}$.
Intuitively, the Laplacian matrix can be viewed as a diffusion operator over the graph. To illustrate this, suppose each agent $i \in \mathcal{V}$ has a scalar variable $x_i \in \mathbb{R}$. Stacking the variables into a vector $x$, multiplication by the Laplacian matrix gives the weighted sum

$$[Lx]_i = \sum_{j \in \mathcal{V}} a_{ij}\,(x_i - x_j), \qquad (S2)$$

where $a_{ij}$ is the weight of the link between agents $i$ and $j$.

FIGURE S1 Examples of directed and undirected graphs. (a) A strongly connected, weight-balanced digraph with adjacency and Laplacian matrices
$$A = \begin{bmatrix} 0 & 0 & 1 & 0\\ 1 & 0 & 0 & 1\\ 0 & 2 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad L = \begin{bmatrix} 1 & 0 & -1 & 0\\ -1 & 2 & 0 & -1\\ 0 & -2 & 2 & 0\\ 0 & 0 & -1 & 1 \end{bmatrix}.$$
(b) A connected graph with unit edge weights and
$$A = \begin{bmatrix} 0 & 1 & 1 & 0\\ 1 & 0 & 1 & 1\\ 1 & 1 & 0 & 1\\ 0 & 1 & 1 & 0 \end{bmatrix}, \qquad L = \begin{bmatrix} 2 & -1 & -1 & 0\\ -1 & 3 & -1 & -1\\ -1 & -1 & 3 & -1\\ 0 & -1 & -1 & 2 \end{bmatrix}.$$

REFERENCE
[S18] F. Bullo, J. Cortés, and S. Martínez, Distributed Control of Robotic Networks (Applied Mathematics Series). Princeton, NJ: Princeton Univ. Press, 2009.
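As a quick numerical companion to this sidebar (a sketch, not code from the article), the following verifies the properties above for the digraph of Figure S1(a): it forms the out-Laplacian $L = D^{\rm out} - A$, checks $L\mathbf{1}_N = 0$ and the weight-balance condition $\mathbf{1}_N^{\top}L = 0$, and computes $\hat{\lambda}_2$, the smallest nonzero eigenvalue of $\operatorname{Sym}(L)$, which reappears in the convergence results later in the article.

```python
import numpy as np

# Adjacency matrix of the digraph in Figure S1(a): a_ij > 0 iff (i, j) is an edge.
A = np.array([[0., 0., 1., 0.],
              [1., 0., 0., 1.],
              [0., 2., 0., 0.],
              [0., 0., 1., 0.]])

d_out = A.sum(axis=1)                      # weighted out-degrees
d_in = A.sum(axis=0)                       # weighted in-degrees
L = np.diag(d_out) - A                     # out-Laplacian L = D_out - A

ones = np.ones(A.shape[0])
print("L 1_N = 0:", np.allclose(L @ ones, 0))             # holds for any digraph
print("weight balanced (1_N^T L = 0):", np.allclose(ones @ L, 0))

sym_L = (L + L.T) / 2
eigs = np.sort(np.linalg.eigvalsh(sym_L))
print("eigenvalues of Sym(L):", np.round(eigs, 3))
print("hat_lambda_2 =", round(eigs[1], 3))                 # smallest nonzero eigenvalue
```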



Input-to-State Stability of Linear Time-Invariant Systems
For a linear time-invariant (LTI) system

$$\dot{x} = Ax + Bu, \qquad x \in \mathbb{R}^n,\; u \in \mathbb{R}^m, \qquad (S3)$$

the solution for $t \in \mathbb{R}_{\geq 0}$ can be written as

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,\mathrm{d}\tau. \qquad (S4)$$

For a Hurwitz matrix $A$, by using the bound

$$\|e^{At}\| \leq \kappa e^{-\mu t}, \qquad t \in \mathbb{R}_{\geq 0}, \qquad (S5)$$

for some $\kappa, \mu \in \mathbb{R}_{>0}$, an upper bound on the norm of the trajectories of (S4) is established as

$$\|x(t)\| \leq \kappa e^{-\mu t}\|x(0)\| + \int_0^t \kappa e^{-\mu(t-\tau)}\|B\|\,\|u(\tau)\|\,\mathrm{d}\tau \leq \kappa e^{-\mu t}\|x(0)\| + \frac{\kappa\|B\|}{\mu}\sup_{0\leq\tau\leq t}\|u(\tau)\|, \qquad \forall t \in \mathbb{R}_{\geq 0}. \qquad (S6)$$

The bound shows that the zero-input response decays to zero exponentially fast, whereas the zero-state response is bounded for every bounded input, indicating an input-to-state stability behavior. Note that the ultimate bound on the system state is proportional to the bound on the input.
Next, how to compute the parameters $\kappa, \mu \in \mathbb{R}_{>0}$ in (S5) is shown. Recall that [S19, Fact 11.15.5], for any matrix $A \in \mathbb{R}^{n \times n}$,

$$\|e^{At}\| \leq e^{\lambda_{\max}(\operatorname{Sym}(A))\,t}, \qquad \forall t \in \mathbb{R}_{\geq 0}, \qquad (S7)$$

where $\operatorname{Sym}(A) = \tfrac{1}{2}(A + A^{\top})$. Therefore, for a Hurwitz matrix $A$ whose $\operatorname{Sym}(A)$ is also Hurwitz, the exponential bound parameters can be set to

$$\mu = -\lambda_{\max}(\operatorname{Sym}(A)), \qquad \kappa = 1. \qquad (S8)$$

A tighter exponential bound of

$$\mu = \mu^{*}, \qquad \kappa = \sqrt{\sigma_{\max}(P^{*})/\sigma_{\min}(P^{*})}, \qquad (S9)$$

can also be obtained for any Hurwitz system matrix $A$, according to [S20, Prop. 5.5.33], from the convex linear matrix inequality optimization problem

$$(\mu^{*}, P^{*}) = \operatorname{argmin}\; \mu \quad \text{subject to} \qquad (S10a)$$
$$PA + A^{\top}P \leq -2\mu P, \qquad P > 0, \quad \mu > 0. \qquad (S10b)$$

Similarly, the state of the discrete-time LTI system

$$x_{k+1} = Ax_k + Bu_k, \qquad x_k \in \mathbb{R}^n,\; u_k \in \mathbb{R}^m \qquad (S11)$$

with initial condition $x_0 \in \mathbb{R}^n$ satisfies the bound

$$\|x_k\| \leq \sqrt{\frac{\sigma_{\max}(P)}{\sigma_{\min}(P)}}\left(\rho^{k}\|x_0\| + \|B\|\,\frac{1-\rho^{k}}{1-\rho}\,\sup_{0\leq j<k}\|u_j\|\right), \qquad (S12)$$

where $P \in \mathbb{R}^{n \times n}$ and $\rho \in \mathbb{R}$ satisfy

$$A^{\top}PA - \rho^{2}P \leq 0, \qquad P > 0, \quad \rho \geq 0. \qquad (S13)$$

REFERENCES
[S19] D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Application to Linear System Theory. New York: Springer-Verlag, 2005.
[S20] D. Hinrichsen and A. J. Pritchard, Mathematical Systems Theory I: Modeling, State Space Analysis, Stability and Robustness. Princeton, NJ: Princeton Univ. Press, 2005.

ALGORITHM 1 The execution of a discrete-time dynamic average consensus algorithm at each agent $i \in \{1, \dots, N\}$.
Input: $\mathcal{J}_i(k)$ and $\{\mathcal{I}_j(k)\}_{j \in \mathcal{N}_i^{\rm out}}$
Output: $x_i(k+1)$, $\mathcal{J}_i(k+1)$, and $\mathcal{I}_i(k+1)$
Step 1. $x_i(k+1) \leftarrow c_i\bigl(\mathcal{J}_i(t_k), \{\mathcal{I}_j(t_k)\}_{j \in \mathcal{N}_i^{\rm out}}\bigr)$
Step 2. Generate $\mathcal{J}_i(k+1)$ and $\mathcal{I}_i(k+1)$
Step 3. Broadcast $\mathcal{I}_i(k+1)$

The consideration of simple dynamics of the form in (1)–(3) is motivated by the fact that the state of the agents does not necessarily correspond to some physical quantity but, instead, to some logical variable on which agents perform computation and processing. Agreement on the average is also of relevance in scenarios where the agreement state is a physical state with more complex dynamics, for example, the position of a mobile agent in a robotic team. In such cases, this discussion can be leveraged by, for instance, having agents compute reference signals that are to be tracked by the states with more complex dynamics. See "Further Reading" for a list of relevant literature on dynamic average consensus problems for higher-order dynamics.
Given the drawbacks of centralized solutions, several desirable properties when designing algorithmic solutions to the dynamic average consensus problem are identified:
»» scalability, so that the amount of computations and resources required on each agent does not grow with the network size
»» robustness to the disturbances present in practical scenarios, such as communication delays and packet drops, agents entering/leaving the network, and noisy measurements
»» correctness, meaning the algorithm converges to the exact average or, alternatively, a formal guarantee can be given about the distance between the estimate and the exact average.
Regarding the last property, to achieve agreement, network connectivity must be such that information about the local reference input of each agent reaches other agents frequently enough. As the information of each agent takes some time to propagate through the network, tracking an arbitrarily fast average signal with zero error is not feasible unless agents have some a priori information about the dynamics generating the signals. A recurring theme throughout the article is how the convergence guarantees of dynamic average consensus algorithms depend on the network connectivity and the rate of change of the reference signal of each agent.

APPLICATIONS OF DYNAMIC AVERAGE CONSENSUS IN NETWORK SYSTEMS
The ability to compute the average of a set of time-varying reference signals is useful in numerous applications, which explains why distributed algorithmic solutions have found their way into many seemingly different problems involving the interconnection of dynamical systems. This section provides a selected overview of problems to motivate further research on dynamic average consensus algorithms and illustrate their range of applicability. Other applications of dynamic average consensus can be found in [7]–[13].

Distributed Formation Control
Autonomous networked mobile agents are playing an increasingly important role in coverage, surveillance, and patrolling applications in both commercial and military domains. The tasks accomplished by mobile agents often require dynamic motion coordination and formation among team members. Consensus algorithms have been commonly used in the design of formation control strategies [14]–[16]. These algorithms have been used, for instance, to arrive at agreement on the geometric center of the formation so that the formation can be achieved by spreading the agents in the desired geometry about this center (see [1]). However, most of the existing results are for static formations. Dynamic average consensus algorithms can effectively be used in dynamic formation control, where quantities of interest such as the geometric center of the formation change with time. Figure 3 depicts an example scenario in which a group of mobile agents tracks a team of mobile targets. Each agent monitors a mobile target with location $x_i^T$. The objective is for the agents to follow the team of mobile targets by spreading out in a prespecified formation, which consists of each agent being positioned at a relative vector $b_i$ with respect to the time-varying geometric center of the target team. A two-layer approach can be used to accomplish the formation and tracking objectives in this scenario: a dynamic consensus algorithm in the cyber layer that computes the geometric center in a distributed manner and a physical layer that tracks this average plus $b_i$. Note that dynamic average consensus algorithms can also be employed to compute the time-varying variance of the positions of the mobile targets with respect to the geometric center, and this can help the mobile agents adjust the scale of the formation to avoid collisions with the target team. Examples of the use of dynamic consensus algorithms in this two-layer approach with multiagent systems with first-order, second-order, or higher-order dynamics can be found in [17]–[19].

FIGURE 3 A two-layer consensus-based formation for tracking a team of mobile targets. The larger triangular robots are the mobile agents, and the smaller round robots are the moving targets. The physical layer shows the spatial distribution of the mobile agents and the moving targets; each mobile agent $i$ monitors target $i$ to take the measurement $x_i^T(t)$, and the objective is $x_i \to \frac{1}{N}\sum_{i=1}^{N}x_i^T(t) + b_i$, where $x_i$ is the location of agent $i$ and $b_i$ is its relative location with respect to the geometric center. The cyber layer, which computes $\frac{1}{N}\sum_{i=1}^{N}x_i^T(t)$, shows which mobile agent has a computational capability and the interagent communication topology.
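The two-layer idea can be sketched in a few lines of code. In the sketch below (illustrative only; the consensus update is a simple input-driven Laplacian rule standing in for the article's algorithms, and the graph, gains, and target motion are made-up choices), each agent runs a local estimator of the moving centroid of the target positions and steers its own position toward that estimate plus its formation offset $b_i$.

```python
import numpy as np

# Illustrative two-layer formation tracking: N agents, each measuring one target.
N, T, dt = 4, 400, 0.02
A = np.array([[0, 1, 0, 1],               # undirected cycle (weight balanced)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

b = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)  # formation offsets

def targets(t):
    """Mobile target positions x_i^T(t): a drifting cluster (illustrative)."""
    base = np.array([0.5 * t, np.sin(0.5 * t)])
    offsets = np.array([[np.cos(t + k), np.sin(t + k)] for k in range(N)])
    return base + 0.3 * offsets

u_prev = targets(0.0)
z = u_prev.copy()            # cyber layer: each agent's estimate of the centroid
p = z + b                    # physical layer: agent positions start on formation
for k in range(1, T):
    u = targets(k * dt)                            # new local measurements x_i^T(t)
    z = z - dt * (L @ z) + (u - u_prev)            # dynamic consensus on the centroid
    p = p + dt * 2.0 * (z + b - p)                 # physical layer tracks z_i + b_i
    u_prev = u

centroid = u.mean(axis=0)
print("final centroid-estimate spread:", round(float(np.abs(z - centroid).max()), 3))
print("final formation deviation     :", round(float(np.abs(p - (centroid + b)).max()), 3))
```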



Distributed State Estimation
Wireless sensors with embedded computing and communication capabilities play a vital role in provisioning real-time monitoring and control in many applications, such as environmental monitoring, fire detection, object tracking, and body area networks. Consider a model of the process of interest given by

$$x(k+1) = A(k)x(k) + B(k)\omega(k),$$

where $x \in \mathbb{R}^n$ is the state, $A(k) \in \mathbb{R}^{n \times n}$ and $B(k) \in \mathbb{R}^{n \times m}$ are known system matrices, and $\omega \in \mathbb{R}^m$ is the white Gaussian process noise with $\mathrm{E}[\omega(k)\omega^{\top}(k)] = Q > 0$. Let the measurement model at each sensor station $i \in \{1, \dots, N\}$ be

$$z_i(k+1) = H_i(k+1)x(k+1) + \nu_i,$$

where $z_i \in \mathbb{R}^q$ is the measurement vector, $H_i \in \mathbb{R}^{q \times n}$ is the measurement matrix, and $\nu_i \in \mathbb{R}^q$ is the white Gaussian measurement noise with $\mathrm{E}[\nu_i(k)\nu_i(k)^{\top}] = R_i > 0$. If all of the measurements are transmitted to a fusion center, a Kalman filter can be used to obtain the minimum-variance estimate of the state of the process of interest as follows (see Figure 4):
»» propagation stage

$$\hat{x}^{-}(k+1) = A(k)\,\hat{x}^{+}(k), \qquad (4a)$$
$$P^{-}(k+1) = A(k)P^{+}(k)A(k)^{\top} + B(k)Q(k)B(k)^{\top}, \qquad (4b)$$
$$Y^{-}(k+1) = P^{-}(k+1)^{-1}, \qquad (4c)$$
$$y^{-}(k+1) = Y^{-}(k+1)\,\hat{x}^{-}(k+1); \qquad (4d)$$

»» update stage

$$Y_i(k+1) = H_i(k+1)^{\top}R_i(k+1)^{-1}H_i(k+1), \qquad i \in \{1, \dots, N\},$$
$$y_i(k+1) = H_i(k+1)^{\top}R_i(k+1)^{-1}z_i(k+1), \qquad i \in \{1, \dots, N\},$$
$$P^{+}(k+1) = \Bigl(Y^{-}(k+1) + \sum_{i=1}^{N}Y_i(k+1)\Bigr)^{-1},$$
$$\hat{x}^{+}(k+1) = P^{+}(k+1)\Bigl(y^{-}(k+1) + \sum_{i=1}^{N}y_i(k+1)\Bigr).$$

Despite its optimality, this implementation is not desirable in many sensor network applications due to the existence of a single point of failure at the fusion center and the high cost of communication between the sensor stations and the fusion center. An alternative that has previously gained interest [5], [20]–[23] is to employ distributed algorithmic solutions that have each sensor station maintain a local filter to process its local measurements and fuse them with the estimates of its neighbors. Some work [20], [24], [25] employs dynamic average consensus to synthesize distributed implementations of the Kalman filter. For instance, one of the early solutions for distributed minimum-variance estimation has each agent maintain a local copy of the propagation filter (4) and employ a dynamic average consensus algorithm to generate the coupling time-varying terms $\frac{1}{N}\sum_{i=1}^{N}y_i(k+1)$ and $\frac{1}{N}\sum_{i=1}^{N}Y_i(k+1)$. If agents know the size of the network, then they can duplicate the update equation locally.

FIGURE 4 A networked smart camera system that monitors and estimates the position of moving targets.
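To make the structure of the coupling terms explicit, here is a compact centralized sketch of the information-form filter above; it is not the article's code, and the model matrices are illustrative. The two sums formed in the update step are exactly the network-wide quantities $\sum_i y_i$ and $\sum_i Y_i$ that a dynamic average consensus algorithm would let each sensor estimate locally (up to the known factor $N$).

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, N = 2, 1, 5                                    # state dim, meas dim, sensors
A = np.array([[1.0, 0.1], [0.0, 1.0]])               # illustrative constant model
B = np.array([[0.005], [0.1]])
Q = np.array([[0.2]])
H = [rng.normal(size=(q, n)) for _ in range(N)]      # one measurement matrix per sensor
R = [np.array([[0.5]]) for _ in range(N)]

x_true = np.zeros(n)
x_hat, P = np.zeros(n), np.eye(n)                    # updated estimate and covariance
for k in range(50):
    # simulate the process and the N sensor measurements
    x_true = A @ x_true + (B @ rng.normal(size=1)).ravel()
    z = [H[i] @ x_true + rng.multivariate_normal(np.zeros(q), R[i]) for i in range(N)]

    # propagation stage (4)
    x_minus = A @ x_hat
    P_minus = A @ P @ A.T + B @ Q @ B.T
    Y_minus = np.linalg.inv(P_minus)
    y_minus = Y_minus @ x_minus

    # local information terms: each sensor can compute these on its own
    Y_i = [H[i].T @ np.linalg.inv(R[i]) @ H[i] for i in range(N)]
    y_i = [H[i].T @ np.linalg.inv(R[i]) @ z[i] for i in range(N)]

    # update stage: the two sums are the coupling terms the network must agree on
    P = np.linalg.inv(Y_minus + sum(Y_i))
    x_hat = P @ (y_minus + sum(y_i))

print("final estimation error:", np.round(np.linalg.norm(x_hat - x_true), 3))
```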



Distributed Unconstrained Convex Optimization
The control literature has introduced numerous distributed algorithmic solutions [26]–[33] to solve unconstrained convex optimization problems over networked systems. In a distributed unconstrained convex optimization problem, a group of $N$ communicating agents, each with access to a local convex cost function $f_i: \mathbb{R}^n \to \mathbb{R}$, $i \in \{1, \dots, N\}$, seeks to determine the minimizer of the joint global optimization problem

$$x^{*} = \operatorname*{argmin}_{x}\; \frac{1}{N}\sum_{i=1}^{N} f_i(x) \qquad (5)$$

by local interactions with their neighboring agents. This problem appears in network system applications, such as multiagent coordination, distributed state estimation over sensor networks, or large-scale machine-learning problems. Some of the algorithmic solutions for this problem are developed using agreement algorithms to compute global quantities that appear in existing centralized algorithms. For example, a centralized solution for (5) is the Nesterov gradient descent algorithm [34] described by

$$x(k+1) = y(k) - \eta\Bigl(\frac{1}{N}\sum_{i=1}^{N}\nabla f_i\bigl(y(k)\bigr)\Bigr), \qquad (6a)$$
$$v(k+1) = y(k) - \frac{\eta}{a_k}\Bigl(\frac{1}{N}\sum_{i=1}^{N}\nabla f_i\bigl(y(k)\bigr)\Bigr), \qquad (6b)$$
$$y(k+1) = (1 - a_{k+1})\,x(k+1) + a_{k+1}\,v(k+1), \qquad (6c)$$

where $x(0), y(0), v(0) \in \mathbb{R}^n$, and $\{a_k\}_{k=0}^{\infty}$ is defined by an arbitrarily chosen $a_0 \in (0, 1)$ and the update equation $a_{k+1}^{2} = (1 - a_{k+1})\,a_k^{2}$, where $a_{k+1}$ always takes the unique solution in $(0, 1)$. If all $f_i$, $i \in \{1, \dots, N\}$, are convex, differentiable, and have $L$-Lipschitz gradients, then every trajectory $k \mapsto x(k)$ of (6) converges to the optimal solution $x^{*}$ for any $0 < \eta < 1/L$.
Note that, in (6), the cumulative gradient term $\frac{1}{N}\sum_{i=1}^{N}\nabla f_i(y(k))$ is a source of coupling among the computations performed by each agent. It does not seem reasonable to halt the execution of this algorithm at each step until the agents have determined the value of this term. Instead, dynamic average consensus can be employed in conjunction with (6): the dynamic average consensus algorithm estimates the coupling term, and this estimate is employed in executing (6), which in turn changes the value of the coupling term being estimated. This approach is taken in [33] to solve the optimization problem (5) over connected graphs and is also pursued in other implementations of distributed convex or nonconvex optimization algorithms (see, for instance, [35]–[39]).
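The sketch below illustrates the idea of running a consensus-based estimate of the coupling term alongside the optimization update. For simplicity it uses plain gradient descent rather than the accelerated scheme (6), together with a dynamic average consensus estimate of $\frac{1}{N}\sum_i \nabla f_i$, a combination often referred to as gradient tracking in the literature; the graph, step sizes, and cost functions are illustrative choices, not taken from the article.

```python
import numpy as np

# Illustrative gradient tracking on quadratics f_i(x) = 0.5*q_i*(x - c_i)^2.
N, eta, delta, iters = 5, 0.1, 0.3, 300
q = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
c = np.array([-2.0, 1.0, 4.0, 0.0, 2.5])
x_star = (q * c).sum() / q.sum()                      # minimizer of (1/N) sum_i f_i

A = np.zeros((N, N))                                  # undirected ring (weight balanced)
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
L = np.diag(A.sum(axis=1)) - A

grad = lambda x: q * (x - c)                          # local gradients, elementwise

x = np.zeros(N)                                       # each agent's copy of the decision
g = grad(x)                                           # each agent's estimate of avg grad
g_prev_local = grad(x)
for k in range(iters):
    x = x - delta * (L @ x) - eta * g                 # consensus on x plus descent step
    new_local = grad(x)
    g = g - delta * (L @ g) + (new_local - g_prev_local)  # dynamic consensus on grads
    g_prev_local = new_local

print("agents' final values:", np.round(x, 3))
print("true minimizer      :", round(x_star, 3))
```

The key point is that each agent only ever adds its own gradient increment and exchanges estimates with neighbors, yet the estimate g converges toward the global average gradient, so all agents drift toward the common minimizer.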



Distributed Resource Allocation
In optimal resource allocation, a group of agents works cooperatively to meet a demand in an efficient way (see Figure 5). Each agent incurs a cost for the resources it provides. Let the cost of each agent $i \in \{1, \dots, N\}$ be modeled by a convex and differentiable function $f_i: \mathbb{R} \to \mathbb{R}$. The objective is to meet the demand $d \in \mathbb{R}$ so that the total cost $f(x) = \sum_{i=1}^{N} f_i(x_i)$ is minimized. Each agent $i \in \{1, \dots, N\}$, therefore, seeks to find the $i$th element of $x^{*}$ given by

$$x^{*} = \operatorname*{argmin}_{x \in \mathbb{R}^N}\; \sum_{i=1}^{N} f_i(x_i), \qquad \text{subject to } x_1 + \cdots + x_N - d = 0.$$

FIGURE 5 A network of five generators with a connected undirected topology works together to meet a demand of $x_1 + x_2 + x_3 + x_4 + x_5 = d$ in a manner in which the overall cost $\sum_{i=1}^{5} f_i(x_i)$ for the group is minimized.

This problem appears in many optimal decision-making tasks, such as optimal dispatch in power networks [40], [41], optimal routing [42], and economic systems [43]. For instance, the group of agents could correspond to a set of flexible loads in a microgrid that receive a request from a grid operator to collectively adjust their power consumption to provide a desired amount of regulation to the bulk grid. In this demand-response scenario, $x_i$ corresponds to the amount of deviation from the nominal consumption of load $i$, the function $f_i$ models the amount of discomfort caused by deviating from it, and $d$ is the amount of regulation requested by the grid operator.
A centralized algorithmic solution is given by the popular saddle-point or primal-dual dynamics [44], [45] associated with the optimization problem,

$$\dot{\nu}(t) = x_1(t) + \cdots + x_N(t) - d, \qquad \nu(0) \in \mathbb{R}, \qquad (7a)$$
$$\dot{x}_i(t) = -\nabla f_i\bigl(x_i(t)\bigr) - \nu(t), \qquad i \in \{1, \dots, N\},\; x_i(0) \in \mathbb{R}. \qquad (7b)$$

If the local cost functions are strictly convex, then every trajectory $t \mapsto x(t)$ converges to the optimal solution $x^{*}$. The source of coupling in (7) is the demand mismatch that appears on the right-hand side of (7a). However, dynamic average consensus can be employed to estimate this quantity online and feed it back into the algorithm. This approach is taken in [46] and [47]. This can be accomplished, for instance, by having agent $i$ use the reference signal $x_i(t) - d/N$ (this assumes that every agent knows the demand and the number of agents in the network, but other reference signals are also possible) in a dynamic consensus algorithm coupled with the execution of (7).
with x (0) = u. When the communication graph is fixed, this


system is LTI and can be analyzed using standard time-
1 domain and frequency-domain techniques in control. Spe-
s IN x(t ) cifically, the frequency-domain representation of the static
− average consensus algorithm output signal is given by
x(0)
continuous time: X (s) = [sI + L] -1 x (0) = [sI + L] -1 U (s),
(10a)
L
discrete time: X (z) = [zI - (I - L)] -1 U (z), (10b)
(a)
where X (s) and U (s), respectively, denote the Laplace
1 xk transform of x (t) and u, while X (z) and U (z), respectively,
I
z–1 N
denote the z-transform of X k and u. For static signals,

x0 U (s) = u and U (z) = u.
The block diagram of these static average consensus algo-
rithms is shown in Figure 6. The dynamics of these algorithms
L consists of a negative feedback loop, where the feedback term
(b) is composed of the Laplacian matrix and an integrator [1/s
in continuous time and 1/ (z - 1) in discrete time]. For the
FIGURE 6 A block diagram of the static average consensus algorithms static average consensus algorithms, the reference signal
(9). The input signals are assigned to the initial conditions, that is, enters the system as the initial condition of the integrator
x (0) = u (in continuous time) or x 0 = u (in discrete time). The feed-
back loop consists of the Laplacian matrix of the communication state. Under certain conditions on the communication graph,
graph and an integrator [1/s in continuous time and 1/ (z - 1) in dis- the error of these algorithms can be shown to converge to
crete time]. (a) Continuous time. (b) Discrete time. zero, as stated next.



Theorem 1: Convergence Guarantees of the Continuous-Time and Discrete-Time Static Average Consensus Algorithms (8) [1]
Suppose that the communication graph is a constant, strongly connected, and weight-balanced digraph and that the reference signal $u_i$ at each agent $i \in \{1, \dots, N\}$ is a constant scalar. Then the following convergence results hold for the continuous-time and discrete-time static average consensus algorithms (8):
»» continuous time: As $t \to \infty$, every agreement state $x_i(t)$, $i \in \{1, \dots, N\}$, of the continuous-time static average consensus algorithm (8a) converges to $u_{\rm avg}$ with an exponential rate no worse than $\hat{\lambda}_2$, the smallest nonzero eigenvalue of $\operatorname{Sym}(L)$.
»» discrete time: As $k \to \infty$, every agreement state $x_k^i$, $i \in \{1, \dots, N\}$, of the discrete-time static average consensus algorithm (8b) converges to $u_{\rm avg}$ with an exponential rate no worse than $\rho \in (0, 1)$, provided that the Laplacian matrix satisfies $\rho = \|I_N - L - \mathbf{1}_N\mathbf{1}_N^{\top}/N\|_2 < 1$.
Note that, given a weighted graph with Laplacian matrix $L$, the graph weights can be scaled by a nonzero constant $\delta \in \mathbb{R}$ to produce a scaled Laplacian matrix $\delta L$ (see "Basic Notions From Graph Theory"). This extra scaling parameter can then be used to produce a Laplacian matrix that satisfies the conditions in Theorem 1.

A First Design for Dynamic Average Consensus
Because the reference signals enter the static average consensus algorithms (8) as initial conditions, they cannot track time-varying signals. Looking at the frequency-domain representation in Figure 6 of the static average consensus algorithms (8), it is clear that what is needed instead is to continuously inject the signals as inputs into the dynamical system. This allows the system to naturally respond to changes in the signals without any need for reinitialization. This basic observation is made in [49], resulting in the systems shown in Figure 7.

FIGURE 7 A block diagram of the continuous-time dynamic average consensus algorithms (11) and (16). Whereas the reference signals are applied as initial conditions for the static consensus algorithms, the reference signals are applied here as inputs to the system. Although both systems are equivalent, system (a) is in the form (3) and explicitly requires the derivative of the reference signals, and system (b) does not require differentiating the reference signals. (a) Dynamic average consensus algorithm (11). (b) Dynamic average consensus algorithm (16).

More precisely, [49] argues that, considering the static inputs as a dynamic step function, the algorithm

$$\dot{x}(t) = -Lx(t) + \dot{u}(t), \qquad x_i(0) = u_i(0), \quad u_i(t) = u_i\,h(t), \quad i \in \{1, \dots, N\},$$

in which the reference value of the agents enters the dynamics as an external input, results in the same frequency representation (10a) [here, $h(t)$ is the Heaviside step function]. Therefore, convergence to the average of the reference values is guaranteed. Based on this observation, [49] proposes one of the earliest algorithms for dynamic average consensus:

$$\dot{x}_i(t) = -\sum_{j=1}^{N} a_{ij}\bigl(x_i(t) - x_j(t)\bigr) + \dot{u}_i(t), \qquad i \in \{1, \dots, N\}, \qquad (11a)$$
$$x_i(0) = u_i(0). \qquad (11b)$$

Using a Laplace-domain analysis, [49] shows that, if each input signal $u_i$, $i \in \{1, \dots, N\}$, has a Laplace transform with all poles in the left-half plane and at most one zero pole (such signals are asymptotically constant), then all of the agents implementing (11) over a connected graph track $u_{\rm avg}(t)$ with zero error asymptotically. As shown later, the convergence properties of (11) can be described more comprehensively using time-domain ISS analysis.
Define the tracking error of agent $i$ by

$$e_i(t) = x_i(t) - u_{\rm avg}(t), \qquad i \in \{1, \dots, N\}.$$

To analyze the system, the error is decomposed into the consensus direction (the direction $\mathbf{1}_N$) and the disagreement directions (the directions orthogonal to $\mathbf{1}_N$). To this end, define the transformation matrix $T = \bigl[\tfrac{1}{\sqrt{N}}\mathbf{1}_N \;\; R\bigr]$, where $R \in \mathbb{R}^{N \times (N-1)}$ is such that $T^{\top}T = TT^{\top} = I_N$, and consider the change of variables

$$\bar{e} = \begin{bmatrix} \bar{e}_1 \\ \bar{e}_{2:N} \end{bmatrix} = T^{\top}e. \qquad (12)$$

In the new coordinates, (11) takes the form

$$\dot{\bar{e}}_1 = 0, \qquad \bar{e}_1(t_0) = \frac{1}{\sqrt{N}}\sum_{j=1}^{N}\bigl(x_j(t_0) - u_j(t_0)\bigr), \qquad (13a)$$
$$\dot{\bar{e}}_{2:N} = -R^{\top}LR\,\bar{e}_{2:N} + R^{\top}\dot{u}, \qquad \bar{e}_{2:N}(t_0) = R^{\top}x(t_0), \qquad (13b)$$

where $t_0$ is the initial time. Using the ISS bound on the trajectories of LTI systems (see "Input-to-State Stability of Linear Time-Invariant Systems"), the tracking error of
each agent $i \in \{1, \dots, N\}$ while implementing (11) over a strongly connected and weight-balanced digraph is seen in (14), for all $t \in [t_0, \infty)$,

$$|e_i(t)| \;\leq\; \sqrt{\|\bar{e}_{2:N}(t)\|^2 + |\bar{e}_1(t)|^2} \;\leq\; \sqrt{\Bigl(e^{-\hat{\lambda}_2(t - t_0)}\|\Pi x(t_0)\| + \frac{\sup_{t_0 \leq \tau \leq t}\|\Pi\dot{u}(\tau)\|}{\hat{\lambda}_2}\Bigr)^{2} + \frac{\Bigl(\sum_{j=1}^{N}\bigl(x_j(t_0) - u_j(t_0)\bigr)\Bigr)^{2}}{N}}, \qquad (14)$$

where $\hat{\lambda}_2$ is the smallest nonzero eigenvalue of $\operatorname{Sym}(L)$ and $\Pi = I_N - \tfrac{1}{N}\mathbf{1}_N\mathbf{1}_N^{\top}$.
The tracking error bound (14) reveals several interesting facts. First, it highlights the necessity for the special initialization $\sum_{j=1}^{N}x_j(t_0) = \sum_{j=1}^{N}u_j(t_0)$ [(11b) satisfies this initialization condition]. Without it, a fixed offset from perfect tracking is present regardless of the type of reference input signals. Instead, it is expected that a proper dynamic consensus algorithm should be capable of perfectly tracking static reference signals. Next, (14) shows that (11) renders perfect asymptotic tracking not only for reference input signals with decaying rate but also for unbounded reference signals whose uncommon parts asymptotically converge to a constant value. This is due to the ISS tracking bound depending on $\|(I_N - \tfrac{1}{N}\mathbf{1}_N\mathbf{1}_N^{\top})\dot{u}(\tau)\|$ rather than on $\|\dot{u}(\tau)\|$. Note that, if the reference signal of each agent $i \in \{1, \dots, N\}$ can be written as $u_i(t) = u(t) + \tilde{u}_i(t)$, where $u(t)$ is the (possibly unbounded) common part and $\tilde{u}_i(t)$ is the uncommon part of the reference signal, then

$$\Bigl(I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^{\top}\Bigr)\dot{u}(t) = \Bigl(I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^{\top}\Bigr)\bigl(\dot{u}(t)\mathbf{1}_N + \dot{\tilde{u}}(t)\bigr) = \Bigl(I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^{\top}\Bigr)\dot{\tilde{u}}(t).$$

This demonstrates that (11) properly uses the local knowledge of the unbounded but common part of the dynamic reference signals to compensate for the tracking error that would be induced due to the natural lag in the diffusion of information across the network for dynamic signals. Finally, the tracking error bound (14) shows that, as long as the uncommon part of the reference signals has a bounded rate, (11) tracks the average with some bounded error. For convenience, the convergence guarantees of (11) are summarized in the following result.

Theorem 2: Convergence of (11) Over a Strongly Connected and Weight-Balanced Digraph
Let $\mathcal{G}$ be a strongly connected and weight-balanced digraph. Let

$$\sup_{\tau \in [t, \infty)}\Bigl\|\Bigl(I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^{\top}\Bigr)\dot{u}(\tau)\Bigr\| = \gamma(t) < \infty.$$

The trajectories of (11) are then bounded and satisfy

$$\lim_{t \to \infty}|x_i(t) - u_{\rm avg}(t)| \leq \frac{\gamma(\infty)}{\hat{\lambda}_2}, \qquad i \in \{1, \dots, N\}, \qquad (15)$$

provided $\sum_{j=1}^{N}x_j(t_0) = \sum_{j=1}^{N}u_j(t_0)$. The convergence rate to this error bound is no worse than $\operatorname{Re}(\lambda_2)$. Moreover, $\sum_{j=1}^{N}x_j(t) = \sum_{j=1}^{N}u_j(t)$ for $t \in [t_0, \infty)$.
The explicit expression (15) for the tracking error performance is of value for designers. The smallest nonzero eigenvalue $\hat{\lambda}_2$ of the symmetric part of the graph Laplacian is a measure of the connectivity of a graph [50], [51]. For highly connected graphs (those with large $\hat{\lambda}_2$), it is expected that the diffusion of information across the graph is faster. Therefore, the tracking performance of a dynamic average consensus algorithm over such graphs should be better. Alternatively, when the graph connectivity is low, the opposite effect is expected. The ultimate tracking bound (15) highlights this inverse relationship between graph connectivity and steady-state tracking error. Given this inverse relationship, a designer can decide on the communication range of the agents and the expected tracking performance. Various upper bounds on $\hat{\lambda}_2$ that are a function of other graph invariants (such as the graph degree or the network size for special families of graphs [51], [52]) can be exploited to design the agents' interaction topology and yield an acceptable tracking performance.

Implementation Challenges and Solutions
Next, some of the features of (11) are discussed from an implementation perspective. First, note that, although (11) tracks $u_{\rm avg}(t)$ with a steady-state error (15), the error can be made infinitesimally small by introducing a high gain $\beta \in \mathbb{R}_{>0}$ to write $L$ as $\beta L$. By doing this, the tracking error becomes $\lim_{t\to\infty}|x_i(t) - u_{\rm avg}(t)| \leq \gamma(\infty)/(\beta\hat{\lambda}_2)$, $i \in \{1, \dots, N\}$. However, for scenarios where the agents are first-order physical systems $\dot{x}_i = c_i(t)$, the introduction of this high gain results in an increase of the control effort $c_i(t)$. To address this, a balance between the control effort and the tracking error margin can be achieved by introducing a two-stage algorithm in which an internal dynamics creates the average using a high-gain dynamic consensus algorithm and feeds the agreement state of the dynamic consensus algorithm as a reference signal to the physical dynamics. This approach is discussed further in the section "Controlling the Rate of Convergence."
A concern that may exist with (11) is that it requires explicit knowledge of the derivative of the reference signals. In applications where the input signals are measured online, computing the derivative can be costly and prone to error.
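As a rough numerical illustration of the bound (15) (a sketch with illustrative signals and an Euler discretization, not code from the article), the snippet below simulates (11) on the weight-balanced digraph of Figure S1(a) with sinusoidal inputs, estimates the steady-state tracking error, and compares it with $\gamma(\infty)/\hat{\lambda}_2$.

```python
import numpy as np

# Digraph of Figure S1(a) and its out-Laplacian.
A = np.array([[0., 0., 1., 0.],
              [1., 0., 0., 1.],
              [0., 2., 0., 0.],
              [0., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A
N = A.shape[0]
hat_lambda2 = np.sort(np.linalg.eigvalsh((L + L.T) / 2))[1]

dt, T = 0.001, 40.0
steps = int(T / dt)
t = np.arange(steps) * dt
phase = np.array([0.0, 1.0, 2.0, 3.0])
u = lambda tau: 1.0 + np.sin(tau + phase)            # illustrative reference signals
udot = lambda tau: np.cos(tau + phase)

Pi = np.eye(N) - np.ones((N, N)) / N
gamma_inf = max(np.linalg.norm(Pi @ udot(tau)) for tau in t)   # sup of ||Pi u_dot||

x = u(0.0)                                           # initialization (11b): x_i(0)=u_i(0)
worst_err = 0.0
for k in range(1, steps):
    x = x + dt * (-(L @ x) + udot(k * dt))           # Euler step of (11a)
    if k * dt > T / 2:                               # look at the error after transients
        worst_err = max(worst_err, np.abs(x - u(k * dt).mean()).max())

print(f"observed steady-state error : {worst_err:.3f}")
print(f"bound gamma_inf/hat_lambda2 : {gamma_inf / hat_lambda2:.3f}")
```

The observed error stays below the ISS bound, and repeating the experiment with scaled weights (that is, replacing L by beta*L) shrinks both numbers roughly in proportion to 1/beta, in line with the high-gain discussion above.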



The other concern is the particular initialization condition requiring ∑_{i=1}^N x^i(t_0) = ∑_{i=1}^N u^i(t_0). To comply with this condition in a distributed setting, agents must initialize with x^i(t_0) = u^i(t_0). If the agents are acquiring their signal u^i from measurements or the signal is the output of a local process, any perturbation in u^i(t_0) results in a steady-state error in the tracking process. Moreover, if an agent (agent N) leaves the operation permanently at any time t̄, then ∑_{i=1}^{N−1} x^i is no longer equal to ∑_{i=1}^{N−1} u^i after t̄. Therefore, the remaining agents (without reinitialization) carry over a steady-state error in their tracking signal.

Interestingly, all of these concerns except for the one regarding an agent's permanent departure can be resolved by a change of variables, corresponding to an alternative implementation of (11). Let p^i = u^i − x^i for i ∈ {1, ..., N}. Equation (11) may then be written in the equivalent form

\[
\dot{p}^i(t) = \sum_{j=1}^N a_{ij}\big(x^i(t) - x^j(t)\big), \qquad \sum_{j=1}^N p^j(t_0) = 0, \qquad i \in \{1,\dots,N\}, \tag{16a}
\]
\[
x^i(t) = u^i(t) - p^i(t). \tag{16b}
\]

Doing so eliminates the need to know the derivative of the reference signals and generates the same trajectories t ↦ x^i(t) as (11). We note that the initialization condition ∑_{i=1}^N p^i(t_0) = 0 can be easily satisfied if each agent i ∈ {1, ..., N} starts at p^i(0) = 0. Note that this requirement is mild because p^i is an internal state for agent i and, therefore, is not affected by communication errors. This initialization condition, however, limits the use of (16) in applications where agents join or permanently leave the network at different points in time. To demonstrate the robustness of (16) to measurement disturbances, note that any bounded perturbation in the reference input does not affect the initialization condition but, rather, appears as an additive disturbance in the communication channel. In particular, observe the following:

\[
\text{(11a)} \;\Rightarrow\; \sum_{i=1}^N \dot{x}^i(t) = \sum_{i=1}^N \dot{u}^i(t)
\;\Rightarrow\; \sum_{i=1}^N x^i(t) = \sum_{i=1}^N u^i(t) + \Big( \sum_{i=1}^N x^i(t_0) - \sum_{i=1}^N u^i(t_0) \Big), \tag{17a}
\]
\[
\text{(16a)} \;\Rightarrow\; \sum_{i=1}^N \dot{p}^i(t) = 0 \;\Rightarrow\; \sum_{i=1}^N p^i(t) = \sum_{i=1}^N p^i(t_0). \tag{17b}
\]

As seen in (17a), if ∑_{i=1}^N x^i(t_0) ≠ ∑_{i=1}^N u^i(t_0), then ∑_{i=1}^N x^i(t) ≠ ∑_{i=1}^N u^i(t) persists in time. Therefore, if the perturbation on the reference input measurement is removed, then (11) still inherits the adverse effect of the initialization error. Instead, as (17b) shows for the case of the alternative algorithm (16), ∑_{i=1}^N p^i(t) = 0 is preserved in time as long as the algorithm is initialized such that ∑_{i=1}^N p^i(t_0) = 0, which can be easily done by setting p^i(t_0) = 0 for i ∈ {1, ..., N}. Consequently, when the perturbations are removed, then (16) recovers the convergence guarantee of the perturbation-free case. Following steps similar to the ones leading to the bound (14), the effect of the additive bounded reference signal measurement perturbation on the convergence of (16) is summarized in the next result.

Lemma 1: Convergence of (16) Over a Strongly Connected and Weight-Balanced Digraph in the Presence of Additive Reference Input Perturbations
Let G be a strongly connected and weight-balanced digraph. Suppose w^i(t) is an additive perturbation on the measured reference input signal u^i(t). Let sup_{τ∈[t,∞)} ‖(I_N − (1/N)1_N 1_N^⊤) u̇(τ)‖ = γ(t) < ∞ and sup_{τ∈[t,∞)} ‖(I_N − (1/N)1_N 1_N^⊤) ẇ(τ)‖ = ω(t) < ∞. Then, the trajectories of (16) are bounded and satisfy

\[
\lim_{t\to\infty} |x^i(t) - u_{\mathrm{avg}}(t)| \le \frac{\gamma(\infty) + \omega(\infty)}{\hat{\lambda}_2}, \qquad i \in \{1,\dots,N\},
\]

provided ∑_{j=1}^N p^j(t_0) = 0. The convergence rate to this error bound is no worse than Re(λ₂). Moreover, ∑_{j=1}^N p^j(t) = 0 for t ∈ [t_0, ∞).

The perturbation w^i in Lemma 1 can also be regarded as a bounded communication perturbation. Therefore, (16) [and similarly (11)] is considered naturally robust to bounded communication error.

From an implementation perspective, it is also desirable that a distributed algorithm is robust to changes in the communication topology that may arise as a result of unreliable transmissions, limited communication/sensing range, network rerouting, or the presence of obstacles. To analyze this aspect, consider a time-varying digraph G(V, E(t), A_{σ(t)}), where the nonzero entries of the adjacency matrix A(t) are uniformly lower and upper bounded [in other words, a_ij(t) ∈ [a̲, ā], where 0 < a̲ ≤ ā if (j, i) ∈ E(t), and a_ij = 0 otherwise]. Here, σ : [0, ∞) → P = {1, ..., m} is a piecewise constant signal, meaning that it has only a finite number of discontinuities in any finite time interval and is constant between consecutive discontinuities. Intuitively, consensus in switching networks occurs if there is occasionally enough flow of information from every node in the network to every other node.

Formally, an admissible switching set S_admis is a set of piecewise constant switching signals σ : [0, ∞) → P with some dwell time t_L (in other words, t_{k+1} − t_k > t_L > 0, for all k = 0, 1, ...), such that
» G(V, E(t), A_{σ(t)}) is weight balanced for t ≥ t_0.
» The number of contiguous, nonempty, uniformly bounded time intervals [t_{i_j}, t_{i_{j+1}}), j = 1, 2, ..., starting at t_{i_1} ≥ t_0, with the property that the union of G(V, E(t), A_{σ(t)}) over t ∈ [t_{i_j}, t_{i_{j+1}}) is a jointly strongly connected digraph, goes to infinity as t → ∞.

When the switching signal belongs to the admissible set S_admis, [19] shows that there always exist λ ∈ R_{>0} and ℓ ∈ R_{≥1} such that ‖e^{−R^⊤ L_σ R t}‖ ≤ ℓ e^{−λt} for t ∈ R_{≥0}.

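The robustness of (16) to an additive measurement perturbation (Lemma 1) is easy to check numerically. The sketch below simulates the derivative-free form (16) with a forward-Euler step; the ring graph, the reference signals, and the specific perturbation on agent 1 are illustrative assumptions. The internal states keep ∑_i p^i = 0 throughout, and the estimates recover after the perturbation is removed.

```python
import numpy as np

# Sketch of the derivative-free form (16): p_dot = L x, x = u_meas - p,
# with p(0) = 0.  A bounded perturbation corrupts agent 1's input on
# [0, 20] s; per Lemma 1 the estimates recover once it is removed.
N, dt, T = 4, 1e-3, 60.0
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

u = lambda t: np.array([(t / 20.0) ** 2 + np.sin(0.3 * t + k) for k in range(N)])
p = np.zeros(N)                          # sum(p) = 0 by construction
for k in range(int(T / dt)):
    t = k * dt
    w1 = 2.0 * np.cos(t) if t < 20.0 else 0.0     # perturbation on agent 1
    u_meas = u(t) + np.array([w1, 0.0, 0.0, 0.0])
    x = u_meas - p
    p = p + dt * (L @ x)                 # (16a); note sum(L @ x) = 0
print("sum of internal states p:", p.sum())        # stays (numerically) zero
print("final tracking errors   :", np.round(np.abs(x - u(T).mean()), 3))
```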


Implementing the change of variables (12), it is shown that the trajectories of (16) satisfy (14), with λ̂₂ replaced by λ and ‖Π x(t_0)‖ and ‖Π u̇(τ)‖ being multiplied by ℓ. This statement is formalized as follows.

Lemma 2: Convergence of (11) Over Switching Graphs
Let the communication topology be G(V, E(t), A_{σ(t)}), where σ ∈ S_admis. Let sup_{τ∈[t,∞)} ‖(I_N − (1/N)1_N 1_N^⊤) u̇(τ)‖ = γ(t) < ∞. The trajectories of (11) are bounded and satisfy

\[
\lim_{t\to\infty} |x^i(t) - u_{\mathrm{avg}}(t)| \le \frac{\ell\, \gamma(\infty)}{\lambda}, \qquad i \in \{1,\dots,N\},
\]

provided ∑_{j=1}^N x^j(t_0) = ∑_{j=1}^N u^j(t_0). The convergence rate to this error bound is no worse than λ. Moreover, we have ∑_{j=1}^N x^j(t) = ∑_{j=1}^N u^j(t) for t ∈ [t_0, ∞).

Example: Distributed Formation Control Revisited
We revisit one of the scenarios discussed in the "Applications of Dynamic Average Consensus in Network Systems" section to illustrate the properties of (11) and its alternative implementation (16). Consider a group of four mobile agents (depicted as the triangle robots in Figure 8) whose communication topology is described by a fixed, connected, undirected ring. The objective of these agents is to follow a set of moving targets (depicted as the round robots in Figure 8) in a containment fashion (that is, ensuring that they are surrounded as they move around the environment). Let

\[
x_T^l(t) = \Big(\frac{t}{20}\Big)^2 + 0.5 \sin\Big( (0.35 + 0.05\,l)\, t + \frac{(5-l)\pi}{5} \Big) + 4 - 2(l-1), \qquad l \in \{1,2,3,4\}, \tag{18}
\]

be the horizontal position of the set of moving targets (each mobile agent tracks one moving target). The term (t/20)² in the reference signals (18) represents the component with an unbounded derivative that is common to all agents.

To achieve their objective, the group of agents seeks to compute on the fly the geometric center x̄_T(t) = (1/N) ∑_{l=1}^N x_T^l(t) and the associated variance (1/N) ∑_{l=1}^N (x^l(t) − x̄_T(t))² determined by the time-varying position of the moving targets. The agents implement two distributed dynamic average consensus algorithms: one for computing the center and the other for computing the variance (as shown in Figure 9).

FIGURE 8 A simple dynamic average consensus-based containment and tracking of a team of mobile targets. (a) The triangle robots cooperatively want to contain the moving round robots by making a formation around the geometric center of the round robots that they are observing. At (b) (after, for example, 10 s from the start of the operation), one of the triangle robots leaves the team. At (c) (after, for example, 20 s from the start of the operation), a new triangle robot joins the group to take over tracking the abandoned round robot.

[Figure 9 depicts, for each agent i, a first dynamic consensus block driven by x_T^i(t) whose output x^i(t) tracks (1/N) ∑_{i=1}^N x_T^i(t); the squared difference (x_T^i(t) − x^i(t))² then drives a second dynamic consensus block whose output y^i(t) tracks the variance.]

FIGURE 9 A group of N mobile agents uses a set of dynamic consensus algorithms to asymptotically track the geometric center x_T^avg(t) = (1/N) ∑_{i=1}^N x_T^i(t) and its variance (1/N) ∑_{i=1}^N (x^i(t) − x_T^avg(t))². In this setup, each mobile agent i ∈ {1, ..., N} is monitoring its respective target's position x_T^i(t).

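The two-stage pipeline of Figure 9 can be sketched with a few lines of NumPy. The snippet below cascades two Euler-discretized copies of the derivative-free estimator (16): the first tracks the center of the target positions (18), and the second tracks the variance formed from each agent's own center estimate. The graph, the time step, and the horizon are illustrative assumptions.

```python
import numpy as np

# Sketch of the two-stage estimator of Figure 9: one dynamic consensus loop
# tracks the center (1/N) sum_i xT_i(t); each agent then feeds
# (xT_i - x_i)^2 to a second loop that tracks the variance.  Both loops are
# Euler discretizations of the derivative-free form (16) with step dt.
N, dt, T = 4, 1e-3, 40.0
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def targets(t):                          # reference inputs (18), l = 1..4
    l = np.arange(1, N + 1)
    return (t / 20.0) ** 2 + 0.5 * np.sin((0.35 + 0.05 * l) * t
                                          + (5 - l) * np.pi / 5) + 4 - 2 * (l - 1)

p_c = np.zeros(N)                        # internal state, center loop
p_v = np.zeros(N)                        # internal state, variance loop
for k in range(int(T / dt)):
    xT = targets(k * dt)
    x_center = xT - p_c                  # each agent's estimate of the center
    y_var = (xT - x_center) ** 2 - p_v   # each agent's estimate of the variance
    p_c += dt * (L @ x_center)
    p_v += dt * (L @ y_var)

print("center estimates  :", np.round(x_center, 3), " true:", round(xT.mean(), 3))
print("variance estimates:", np.round(y_var, 3),
      " true:", round(((xT - xT.mean()) ** 2).mean(), 3))
```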


To illustrate the properties discussed in this section, consider that agent 4 (the green triangle in Figure 8) leaves the network 10 s after the beginning of the simulation. After 10 s, a new agent, labeled 5 (the red triangle in Figure 8), joins the network and starts monitoring the target that agent 4 was in charge of. For simplicity, the simulation is focused on the calculation of the geometric center. For this computation, agents implement (16) with reference input u^i(t) = x_T^i(t), i ∈ {1, 2, 3, 4}. Figure 10 shows the algorithm performance for various operational scenarios. As forecast by the discussion of this section, the tracking error vanishes in the presence of perturbations in the input signals available to the individual agents and switching topologies, and it exhibits only partial robustness to agent arrivals, departures, and initialization errors, with a constant bias with respect to the correct average. Additionally, it is worth noticing in Figure 10 that all of the agents exhibit convergence with the same rate.

The introduction of (16) serves as preparation for a more in-depth treatment of the design of dynamic average consensus algorithms. This includes a discussion of the issues of correct initialization [the steady-state error depends on the initial condition x(0) or x_0], adjusting the convergence rate of the agents, and the limitation of tracking (with zero steady-state error) only constant reference signals (and therefore with small steady-state error for slowly time-varying reference signals). To improve clarity, continuous-time and discrete-time strategies are discussed separately.

[Figure 10 shows four simulation plots of the agents' estimates x^i together with u_avg(t): (a) agent departure and arrival over [0, 30] s, (b) perturbation of input signals, (c) initialization error, and (d) switching topology, for which the active graph over the intervals [0,1), [2,3), [3,4), [4,5), and [5,10) s is indicated on the plot.]
FIGURE 10 A performance evaluation of dynamic average consensus algorithms (11) and (16) for a group of four agents with reference inputs (18) for the tracking scenario described in Figure 8. In the simulation in plot (a), the agents are using (16). As guaranteed in Lemma 1 [under proper initialization p^i(0) = 0, i ∈ {1, 2, 3, 4}], during the time interval [0, 10] s the agents are able to track x_T^avg(t) with a small error. The challenge presents itself when agent 4 leaves the operation at t = 10 s. Because after agent 4 leaves ∑_{i=1}^3 p^i(10⁺) ≠ 0, the remaining agents fail to follow the average of their reference values, which now is (1/3) ∑_{l=1}^3 u^l. Similarly, even with initialization of p⁵(20) = 0 for the new agent 5, because p¹(20) + p²(20) + p³(20) + p⁵(20) ≠ 0, the agents track the average (1/4) ∑_{l=1}^4 x_T^l(t) with a steady-state error. In the simulation in plot (b), the agents are using (16), and in the time interval [0, 10] s, agent 1's reference input is subject to a measurement perturbation according to u¹(t) = x_T¹(t) + w¹(t), where w¹(t) = −4 cos(t) for t ∈ [0, 2] and t ∈ [3, 5], and w¹(t) = 0 at other times. As guaranteed by Lemma 1, despite the perturbation, including the initial measurement error of u¹(0) = x_T¹(0) − 4, (16) has robustness to the measurement perturbation and recovers its performance after the perturbation is removed. A large perturbation error was used so that its effect is observed more visibly in the simulation plots. In the simulation in plot (c), the agents are using (11). Agent 1's reference input has an initial measurement error of u¹(0) = x_T¹(0) − 4. Because the measurement error directly affects the initialization condition of the algorithm, it fails to preserve ∑_{l=1}^4 x^l(t) = ∑_{l=1}^4 u^l(t). As a result, the effect of the initialization error persists, and the algorithm maintains a significant tracking error. In the simulation in plot (d), the agents are using (16) [similar results are also obtained for (11)]. The network communication topology is a switching graph, where the graph topology at different time intervals is shown on the plot. Because the switching signal σ belongs to S_admis, as predicted by Lemma 2, the trajectories of the algorithm stay bounded, and once the topology becomes fully connected, the agents follow their respective dynamic average closely. (a) Agent departure and arrival. (b) Perturbation of input signals. (c) Initialization error. (d) Switching topology.

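The switching-topology scenario of Figure 10(d) and Lemma 2 can also be reproduced with a short simulation. In the sketch below, the graph alternates between two disconnected subgraphs whose union is a connected ring, so the switching signal is jointly connected in the sense discussed above; the specific dwell time, graph, and reference signals are illustrative assumptions.

```python
import numpy as np

# Sketch of (16) under a switching topology (compare Figure 10(d)): the
# graph alternates between two spanning subgraphs of a ring; their union is
# connected, so the estimates remain bounded and track the average with a
# bounded error.
N, dt, T = 4, 1e-3, 20.0
def laplacian(edges):
    A = np.zeros((N, N))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

L1 = laplacian([(0, 1), (2, 3)])         # two disjoint edges
L2 = laplacian([(1, 2), (3, 0)])         # the complementary ring edges
u = lambda t: np.array([np.sin(t + k) for k in range(N)])

p = np.zeros(N)
for k in range(int(T / dt)):
    t = k * dt
    L = L1 if int(t) % 2 == 0 else L2    # dwell time of 1 s per subgraph
    x = u(t) - p
    p += dt * (L @ x)
print("tracking errors:", np.round(np.abs(x - u(T).mean()), 3))
```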


CONTINUOUS-TIME DYNAMIC AVERAGE .
x = -a (x - u) - L p x - L <I #t t LI x(x) dx + L <I q(t 0) + u. .
CONSENSUS ALGORITHMS 0

This section discusses various continuous-time dynamic


average consensus algorithms and their performance and Using a time-domain analysis similar to that employed
robustness guarantees. Table 1 summarizes the arguments of for (11), the ultimate tracking behavior of (19) is character-
the driving command of these algorithms in (1) and their spe- ized. Consider the change of variables (12) and
cial initialization requirements. Some of these algorithms,
w=; E = T <q,      
when cast in the form of (1), require access to the derivative of w1
(21a)
the reference signals. Similar to (11), however, this require- w 2:N
-1 < .
ment can be eliminated using alternative implementations. y = w 2: N - a (R L I R) R u , (21b)
< <

Robustness to Initialization to write (20) in the equivalent form


and Permanent Agent Dropout
To eliminate the special initialization requirement and      wo 1 = 0, (22a)
induce robustness with respect to algorithm initialization, .
y 0 0 - R< LI R y
[53] proposes the following alternative dynamic average
> ero 1 H = > 0 -a 0 H > er 1 H
consensus algorithm:
ero 2 : N R < L <I R 0 - aI - R < L p R e 2: N
14444444424444444 43
A
N 
qo i (t) = - / b ij (x i - x j ), (19a) -a (R < L <I R)-1
+> HR < u.
j=1 .
N N 0 (22b)
xo i = -a (x i - u i ) - / a ij (x i - x j ) + / b ji (q i - q j ) + u. i, (19b) I
14444244443
j=1 j=1 B
q i (t 0), x i (t 0) ! R, i ! {1, f, N }, (19c)
Let the communication ranges of the agents be such that
. i
where a ! R 2 0 . Here, u is added to (19b) to allow agents they can establish adjacency relations [a ij] and [b ij] so that
to track reference inputs whose derivatives have unbounded the corresponding L I and L P are Laplacian matrices of
common components. The necessity of having explicit strongly connected and weight-balanced digraphs. Invok-
knowledge of the derivative of reference signals can be ing [53, Lemma 9], matrix A in (22b) is shown to be Hur-
removed by using the change of variables p i = x i - u i, witz. Therefore, using the ISS bound on the trajectories of
i ! {1, f, N } . In (19), the agents are allowed to use two dif- LTI systems (see “Input-to-State Stability of Linear Time-
ferent adjacency matrices, [a ij] N # N and [b ij] N # N, so that Invariant Systems”), the tracking error of each agent
they have an extra degree of freedom to adjust the track- i ! {1, f, N} while implementing (19) over a strongly con-
ing performance of the algorithm. The Laplacian matrices nected and weight-balanced digraph is
associated with adjacency matrices [a ij] and [b ij] are repre-
sented by, respectively, L p (labeled as proportional Lapla-
|e i (t)|# l e - m (t - t0) ; E
w 2: N (t 0)
cian) and L I (labeled as integral Laplacian). The compact er (t 0)

sup ` I N - 1 1 N 1 N< j u (x) ,
representation of (19) is l <B< .
+
m t0 # x # t N (23)
.
q = -L I x,           (20a)
. .
x = -a (x - u) - L p x + L q + u , (20b)
<
I where (l, m) are given by (S5) for matrix A of (22b) and can
be computed from (S9). It is shown that both l and m
which also reads as depend on the smallest nonzero eigenvalues of Sym (L I)
and Sym (L p) as well as a. Therefore, the
tracking performance of (19) depends on
TABLE 1 The arguments of the driving command in (1) for the reviewed both the magnitude of the derivative of
continuous-time dynamic average consensus algorithms together with their
initialization requirements. reference signals and the connectivity of
the communication graph. From this error
Algorithm (11) (19) (24) (25) bound, it is observed that, for bounded
i i . i i i i i i i
dynamic signals with bounded rate, (19) is
J (t) {x (t), u (t)} {x (t), v (t), u(t)} {x (t), z (t), v (t), {x (t), v (t),
. . guaranteed to track the dynamic aver-
u (t), u (t)} u (t), u (t)}
age with an ultimately bounded error.
{I j (t)} j ! N {x j (t)} j ! N
i
out {x j (t), v j (t)} j ! N
i
out {z j (t), v j (t)} j ! N
i
out {v i (t)} j ! N
i
out Moreover, this algorithm does not need
i
out

N any special initialization. The robustness


Initialization x i (0) = u i (0) None None / v j (0) = 0 to initialization can be observed on the
requirement j=1
block diagram representation of (19),



shown in Figure 11(a). For reference, the convergence guar- .
u(t )
antees of algorithm (19) are summarized next.
u(t ) 1 x(t )
αI
s+α I
Theorem 3: Convergence of (19) −
Let L P and L I be Laplacian matrices corresponding to
Lp
strongly connected and weight-balanced digraphs. Let
.
sup x ! [t, 3) < (I N - (1/N ) 1 N 1 <N) u (x) < = c (t) 1 3. Starting from
any x i (t 0), q(t 0) ! R , for any a ! R 2 0 the trajectories of algo- LI 1 LI
s
rithm (25) satisfy
l < B < c (3) (a)
lim|x i (t) - u avg (t)|# , i ! {1, f, N }, .
t"3 m u(t )
u(t ) x(t )
where l, m ! R 2 0 satisfy < e At < # l e - mt [ A and B are given in αI 1
s I
(22b)]. Moreover, − − −

βL
e / x j (t 0) - / u j (t 0) o,
N N N N
/ x j(t) = / u j (t) + e - a (t - t 0)
q(t )
j=1 j=1 j=1 j=1
αβ
L
s
for t ! 6t 0, 3) .   q(0)
Figure 12 shows the performance of (19) in the distributed
formation control scenario represented in Figure 8. This plot (b)
illustrates how the property of robustness to the initializa- FIGURE 11 A block diagram of continuous-time dynamic average con-
tion error of (19) allows it to accommodate the addition and sensus algorithms. These dynamic algorithms naturally adapt to
deletion of agents with satisfactory tracking performance. changes in the reference signals, which are applied as inputs to the
Although the convergence guarantees of (19) are valid for system. Continuous-time algorithm (19) is robust to initialization. To see
why the algorithm is robust, consider multiplying the input signal on the
strongly connected and weight-balanced digraphs, from
left in plot (a) by 1 <N . The output of the integrator block (1/s) is multiplied
an implementation perspective, the use of this strategy over by zero (because L I 1 N = 0) and therefore does not affect the output.
directed graphs may not be feasible. In fact, the presence of Although the output is affected by the initial state of the 1/ (s + a) block,
the transposed integral Laplacian L <I in (20b) requires each this term decays to zero and therefore does not affect the steady state.
agent i ! {1, f, N } to know not only the entries in row i but Also, the requirement of needing the derivative of the input uo (t) can be
removed by a change of variable. The continuous-time algorithm in
also the column i of L I and receive information from the cor-
(25) is not robust to initialization. In this algorithm, the parameter b
responding agents. However, for undirected graph topolo- may be used to control the tracking error size, and a may be used to
gies, this requirement is satisfied trivially as LI< = L I . control the rate of convergence. Furthermore, this algorithm is robust
to reference signal measurement perturbations and naturally pre-
Controlling the Rate of Convergence serves the privacy of the input signals against adversaries [19]. (a)
Continuous-time algorithm (19). (b) Continuous-time algorithm (25).
A common feature of the dynamic average consensus
algorithms presented in the “A First Design for Dynamic
Average Consensus” and “Robustness to Initialization and 4
Permanent Agent Dropout” sections is that the rate of conver-
gence is the same for all agents and dictated by network topol- 2
ogy as well as some algorithm parameters [see (14) and (23)].
xi

However, in some applications, the task is not just to obtain the 0


average of the dynamic inputs but rather to physically track
this value, possibly with limited control authority. To allow the –2 uavg(t) x1 x2 x3 x4 x5
network to prespecify its desired worst rate of convergence b,
0 5 10 15 20 25 30
[54] proposes dynamic average consensus algorithms whose
t
design incorporates two time scales. The first-order-input
FIGURE 12 The performance of dynamic average consensus algo-
dynamic consensus (FOI-DC) algorithm is described as rithm (19) in the distributed formation control scenario of Figure 8. A
group of four mobile agents acquires reference inputs (18) corre-
Z N sponding to the time-varying position of a set of moving targets. The
] eqo i = - / b ij (z i - z j), algorithm convergence properties are not affected by initialization
] j=1 errors, as stated in Theorem 3. This property also makes it robust to
[ N N (24a)
] ezo = - (z i + bu i + uo i) - / a ij (z i - z j) + / b ji (q i - q j),
i agent arrivals and departures. In this simulation, agent 4 leaves the
] network at time t = 10 s , and a new agent 5 joins the network at t = 20 s.
j=1 j=1
\ In contrast to what was observed for (16) and (19) in Figure 10, the
xo i = - bx i - z i, i ! {1, f, N} . (24b) execution recovers its tracking performance after a transient.



The fast dynamics is (24a) and employs a small value of the interaction topology. Moreover, b can be used to
for e ! R 2 0. The fast dynamics, which builds on the propor- regulate the control effort of the integrator dynamics
tional-integral (PI) algorithm (19), is intended to generate xo i = c i (t), i ! {1, f, N} while maintaining a good tracking
the average of the sum of the dynamic input and its first error via the use of small e ! R 2 0 .
derivative. The slow dynamics (24b) then uses the signal
generated by the fast dynamics to track the average of the An Alternative Algorithm for Directed Graphs
reference signal across the network at a prespecified smaller As observed, (19) is not implementable over directed graphs
rate b ! R 2 0 . The novelty is that these slow and fast dynam- because it requires information exchange with both in- and
ics are running simultaneously, and thus, there is no need to out-neighbors, and these sets are generally different. In
wait for convergence of the fast dynamics and then take [19], the authors proposed a modified proportional and
slow steps toward the input average. integral agreement feedback dynamic average consensus
Similar to the dynamic average consensus algorithm (19), algorithm whose implementation does not require the
(24) does not require any specific initialization. The technical agents to know their respective columns of the Laplacian.
approach used in [54] to study the convergence of (24) is based This algorithm is
on the singular perturbation theory [55, Ch. 11], which results
N
in a guaranteed convergence to an e -neighborhood of u avg (t) qo i = ab / a ij (x i - x j), (25a)
for small values of e ! R 2 0. Using time-domain analysis, j=1

information about the ultimate tracking behavior of (19) can N


xo i = - a (x i - u i) - b / a ij (x i - x j) - q i + u , (25b)
.i

be made more precise. For convenience, the changes of vari- j=1
.
ables (12) and (21a) with y = w 2: N - (R T L TI R) -1 R T ( bu + u) N

a nd e z = T T ^z + ^ b N h R j = 1 u j 1 N + ^1 N h R j = 1 uo j 1 N h are ap­­
N N x i (t 0), q i (t 0) ! R s.t. / q j (t 0) = 0, (25c)
j=1
plied to write the FOI-DC algorithm as
i ! {1, f, N}, where a, b ! R 2 0 . Equation (25) in compact
wo 1 = 0, form can be equivalently written as
0 60 –R T L I R@
= . G = e -1 > H; E
.
y y
x = - a (x - u) - bLx - ab # Lx (x) dx - q (t 0) + u,
. t .
; T T E ; E ez
0 –1 0
ez T
R L I R 0 –I –R L p R
t0

144444442 44444443
r A which demonstrates the proportional and integral agree-
– (R T L TI R) –1 60 (N–1) # 1 I N–1@
+> Hf (t),
ment feedback structure of this algorithm. As was done
61 0 1 # N–1@
= G for (11), a change of variables p i = u i - x i can be used to
0 (N–1) # N
14444444244444443 write this algorithm in a form whose implementation
r
B
. does not require the knowledge of the derivative of the
e = –be – e z,
reference signals.
Note an interesting connection between (25) and (16).
. ..
where f (t) = T T (b u + u). Using the ISS bound on the trajec- Writing the transfer function from the reference input to the
tories of LTI systems (see “Input-to-State Stability of Linear tracking error state (25), there is a pole-zero cancellation that
Time-Invariant Systems”), the tracking error of each agent reduces (25) to (11) and (16). Despite this close relationship,
i ! {1, f, N} while implementing the FOI-DC algorithm there are some subtle differences. For example, unlike (11),
with e ! R 2 0 is, (25) enjoys robustness to reference signal measurement per-
turbations and naturally preserves the privacy of the input
e i (t) # e - b (t - t0) e i (t 0) of each agent against adversaries. Specifically, an adversary
+ l sup e e - e m (t - t 0) ; E
-1 y (t 0) with access to the time history of all network communica-
b t0 # x # t e z (t 0) tion messages cannot uniquely reconstruct the reference
r signal of any agent [19], which is not the case for (16).
+ e B sup b u (x) + u (x) m ,
. ..
m t0 # x # t Figure 11(b) shows the block diagram representation of
this algorithm. The next result states the convergence prop-
r
where e A t # le - m t. From this error bound, it is observed erties of (25). See [19] for the proof of this statement, which
that, for dynamic signals with bounded first and second is established using the time-domain analysis implemented
derivatives, the FOI-DC algorithm is guaranteed to track to analyze the algorithms reviewed so far.
the dynamic average with an ultimately bounded error.
This tracking error can be made small using a small Theorem 4: Convergence of (25) Over Strongly Connected and
e ! R 2 0 . Use of small e ! R 2 0 also results in dynamics Weight-Balanced Digraphs for Dynamic Input Signals [19]
(25a) to have a higher decay rate. Therefore, the domi- Let G be a strongly connected and weight-balanced digraph.
nant rate of convergence of the FOI-DC algorithm is Let sup x ! [t, 3) < ^I N - (1/N ) 1 N 1 <N h uo (x) < = c (t) 1 3. For any
determined by b, which can be prespecified regardless a, b ! R 2 0, the trajectories of (25) satisfy



c (3) are unknown, it also suffices to have lower and upper bounds,
lim x i (t) - u avg (t) # , i ! {1, f, N}, (26)
t"3 bmt 2 respectively, on m 2 and m N) .

provided R Nj = 1 q j (t 0) = 0. The convergence rate to the error Nonrobust Dynamic Average Consensus Algorithms
bound is min {a, b Re (m 2)} .   First consider the discretized version of the continuous-time
The inverse relation between b and the tracking error in dynamic average consensus algorithm in (16) (“Euler Dis-
(26) indicates that the parameter b can be used to control cretizations of Continuous-Time Dynamic Average Consen-
the tracking error size, and a can be used to control the rate sus Algorithms” elaborates on the method for discretization
of convergence. and the associated range of admissible step sizes). This algo-
rithm has the iterations
DISCRETE-TIME DYNAMIC N
AVERAGE CONSENSUS ALGORITHMS p ik + 1 = p ik + k I / a ij (x ik - x kj ), p i0 ! R,
i ! {1, f, N}, (27a)
Although the continuous-time dynamic average consensus j=1

algorithms described in the previous section are amenable x ik = u ik - p ik, (27b)


to elegant and relatively simple analysis, implementing
these algorithms on practical cyberphysical systems requires where k I ! R is the step size. The block diagram is pro-
continuous communication between agents. This require- vided in Figure 13(a).
ment is not feasible in practice due to constraints on the com- For discrete-time LTI systems, the convergence rate is
munication bandwidth. To address this issue, the discrete-time given by the maximum magnitude of the system poles. The
dynamic average consensus algorithms where the commu- poles are the roots of the characteristic equation, which for
nication among agents occurs only at discrete-time steps the dynamic average consensus algorithm in Figure 13(a) is
are studied.
0 = zI - (I - k I L) .
The main difference between continuous-time and
discrete-time dynamic average consensus algorithms is If the Laplacian matrix can be diagonalized, then the system
the rate at which their estimates converge to the average can be separated according to the eigenvalues of L and
of the reference signals. In continuous time, the parame- each subsystem analyzed separately. The characteristic equa-
ters may be scaled to achieve any desired convergence tion corresponding to the eigenvalue m of L is then
rate, whereas in discrete time, the parameters must be
carefully chosen to ensure convergence. The problem of 0 = 1 + m k I . (28)
z-1
optimizing the convergence rate has received significant
attention in the literature [56]–[65]. Here, a simple method To observe how the pole moves as a function of the Lapla-
using root locus techniques for choosing the parameters cian eigenvalue, root locus techniques from LTI systems
to optimize the convergence rate is provided. It is also theory can be used. Figure 14(a) shows the root locus of (28)
shown how to further accelerate the convergence by as a function of m. The dynamic average consensus algo-
introducing extra dynamics into the dynamic average rithm poles are then the points on the root locus at gains
consensus algorithm. m i for i ! {1, f, N}, where m i are the eigenvalues of the
The convergence rate of four discrete-time dynamic graph Laplacian. To optimize the convergence rate, the
average consensus algorithms is analyzed in this section, system is designed to minimize t, where all poles corre-
beginning with the discretized version of the continuous- sponding to disagreement directions (that is, those orthog-
time algorithm (16). It is then shown how to use extra onal to the consensus direction 1 N) are inside the circle
dynamics to accelerate the convergence rate and/or obtain centered at the origin of radius t. Because the pole starts at
robustness to initial conditions. Table 2 summarizes the z = 1 and moves left as m increases, the convergence rate is
arguments of the driving command of
these algorithms in (2) and their special
initialization requirements. TABLE 2 The arguments of the driving command in (2) for the reviewed
For simplicity of exposition, assume the discrete-time dynamic average consensus algorithms together with their
communication graph is constant, con- initialization requirements.
nected, and undirected. The Laplacian
Algorithm (27) (29) (30) (31)
matrix is then symmetric and therefore has
i i i i i i i i
real eigenvalues. Because the graph is con-
i
J (t) {u , p }
k k {u , p , p
k k k-1} {u , p , q }
k k k {u ik, p ik, p ik - 1, q ik, q ik - 1}
nected, the smallest eigenvalue is m 1 = 0, {I j (t)} j ! N iout
j
{x k} j ! N iout
j
{x k} j ! N iout,
j j
{x k, p k} j ! N iout
j j
{x k, p k} j ! N iout
and all other eigenvalues are strictly posi- N N
tive, that is, m 2 2 0. Furthermore, assume Initialization / p 0j = 0 / p 0j = 0 None None
that the smallest and largest nonzero eigen- requirement j=1 j=1

values are known (if the exact eigenvalues



Euler Discretizations of Continuous-Time Dynamic Average Consensus Algorithms

T he continuous-time algorithms described in the article can where x ! R n and u ! R m are, respectively, state and input
also give rise to discrete-time strategies. Here, we describe vectors, and d ! R 2 0 is the discretization step size. Let the
how to discretize them so that they are implementable over system matrix A = [a ij] ! R n # n be a Hurwitz matrix with eigen-
wireless communication channels. This can be done by using values " n i ,in= 1, and the difference of the input signal be bound-
the (forward) Euler discretization of the derivatives ed, Du 1 t 1 3. For any d ! (0, dr ) where
x (k + 1) - x (k) n
dr = min ) - 2 2 3
. Re (n i)
x (t) . , , (S17)
d ni i=1

where d ! R 2 0 is the step size. To illustrate the discussion, we the eigenvalues of (I + dA) are all located inside the until circle
develop this approach for (25) over a connected graph topol- in the complex plane. Moreover, starting from any x (0) ! R n,
ogy. The following discussion can also be extended to include the trajectories of (S16) satisfy
iterative forms of the other continuous-time algorithms studied
lt B
in the article. Using the Euler discretization in (25) leads to lim x (k + 1) # , (S18)
k"3 1-~
N
v i (k + 1) = v i (k) + dab / a ij (x i (k) - x j (k)), (S14a)
k
where ~ ! (0, 1), and l ! R 2 0 such that I + dA # l~ k . 
j=1 k
The bounds ~ ! (0, 1) and l ! R 2 0 in I + dA # l~ k when
x i (k + 1) = x i (k) + Du i (k) - da (x i (k) - u i (k)) all the eigenvalues of I + dA are located in the unit circle of
N

- db / a ij (x i (k) - x j (k)) - dv i (k), (S14b) the complex plane can be obtained from the following linear
j=1
matrix inequality optimization problem (see [S21, Theorem
where Du i (k) = u i (k + 1) - u i (k). To implement this iterative form 23.3] for details):
at each time step k, access to the future value of the reference
(~, l, Q) = argmin ~ 2, subject to (S19)
input at time step k + 1 is needed. Such a requirement is not
1 I # Q # I, 0 1 ~ 2 1 1, l 2 1,
practical when the reference input is sampled from a physical l
process or is a result of another online algorithm. This require- (I + dA) < Q (I + dA) - Q # - (1 - ~ 2) I.
ment can be circumvented using a backward Euler discretiza-
Building on Lemma S1, the next result characterizes the
tion, but the resulting algorithm tracks the reference dynamic
admissible discretization step size for (S15) and its ultimate
average with one-step delay. A practical solution that avoids
tracking behavior.
requiring the future values of the reference input is obtained
by introducing an intermediate variable z i (k) = x i (k) - u i (k) and
Theorem S2: Convergence of (S15) Over
representing the iterative algorithm (S14) in the form
Connected Graphs [19]
N
v i (k + 1) = v i (k) + dab / a ij (x i (k) - x j (k)), (S15a) Let G be a connected, undirected graph. Assume that the
j=1 differences of the inputs of the network satisfy max k ! Z $ 0 < (I -
N
z (k + 1) = z (k) - daz (k) - db / a ij (x i(k) - x j(k)) - dv i(k), (S15b)
i i i (1/N ) 1 N 1 <N) Du (k) < = c 1 3. Then, for any a, b 2 0, (S15) over
j=1
G initialized at z i (0) ! R and v i (0) ! R such that R Ni = 1 v j (0) = 0
  x i (k) = z i (k) + u i (k), (S15c)
has bounded trajectories that satisfy
for i ! " 1, f, N , . Equation (S15) is then implementable without
, i ! " 1, f, N , (S20)
dlc
lim x i (k) - u avg (k) #
the use of future inputs. k"3 1-~
The question then is to characterize the adequate step siz-
provided d ! ^0, min " a -1, 2b -1 (m N) -1 ,h . Here, m N is the largest
es that guarantee that the convergence properties of the con-
eigenvalue of the Laplacian, and ~ ! (0, 1) and l ! R 2 0 satisfy
tinuous-time algorithm are retained by its discrete implemen- k
I - d bR < LR # l~ k, k ! Z $ 0.  
tation. Intuitively, the smaller the step size, the better for this
Note that the characterization of the step size requires knowl-
purpose. However, this also requires more communication. To
edge of the largest eigenvalue m N of the Laplacian. Because such
ascertain this issue, the following result is particularly useful.
knowledge is not readily available to the network unless dedicated
distributed algorithms are introduced to compute it, [19] provides
Lemma S1 : Admissible Step Size for the Euler Discretized
the sufficient characterization d ! ^0, min " a -1, b -1 (d max out )
-1
,h
Form of Linear Time-Invariant Systems and a Bound on
along with the ultimate tracking bound
Their Trajectories
, i ! " 1, f, N , .
Consider dc
lim x i (k) - u avg (k) #
k"3 bm 2
.
x = Ax + Bu, t ! R $ 0,

and its Euler discretized iterative form REFERENCE


[S21] W. J. Rugh, Linear Systems Theory. Englewood Cliffs, NJ: Pren-
x (k + 1) = (I + dA) x (k) + dBu (k), k ! Z $ 0, (S16) tice Hall, 1993.

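The Euler-discretized form (S15) is straightforward to transcribe into code. The sketch below keeps, per agent, the integral state v and the offset state z, communicates x = z + u once per step, and uses hand-picked gains together with a step size δ that satisfies δ < min{1/α, 2/(βλ_N)} from Theorem S2; the graph and the slowly varying inputs are illustrative assumptions.

```python
import numpy as np

# Sketch of the Euler-discretized algorithm (S15): each agent keeps an
# integral state v_i (with sum_i v_i(0) = 0) and an offset state z_i, and
# communicates x_i(k) = z_i(k) + u_i(k) once per step.
N, alpha, beta, delta, steps = 4, 1.0, 1.0, 0.1, 400
A = np.zeros((N, N))
for i in range(N):                       # undirected ring, lambda_N = 4
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

u = lambda k: np.array([np.sin(0.02 * k + i) for i in range(N)])  # slow inputs
v = np.zeros(N)                          # sum(v) = 0
z = np.random.randn(N)                   # arbitrary initialization is fine
for k in range(steps):
    x = z + u(k)                                          # (S15c)
    v, z = (v + delta * alpha * beta * (L @ x),           # (S15a)
            z - delta * alpha * z - delta * beta * (L @ x) - delta * v)  # (S15b)
x = z + u(steps)
print("estimates:", np.round(x, 3), " average:", round(u(steps).mean(), 3))
```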


[Figure 13 shows the four discrete-time block diagrams: in (a) and (b) the feedback path around the graph Laplacian L uses the integrator k_I/(z−1), with (b) adding the proportional path k_p/(z−ρ); in (c) and (d) the integrator is replaced by the second-order block k_I z/((z−ρ²)(z−1)), with (d) adding the path k_p z/(z−ρ)².]

FIGURE 13 A block diagram of discrete-time dynamic average consensus algorithms. The algorithms in (b) and (d) use proportional-integral (PI) dynamics to obtain robustness to initial conditions, whereas those in (c) and (d) use extra dynamics to accelerate the convergence rate. When the graph is connected and balanced and upper and lower bounds on the nonzero eigenvalues of the graph Laplacian are known, closed-form solutions for the parameters that optimize the convergence rate are known (see Theorem 5). (a) The nonrobust, nonaccelerated dynamic average consensus algorithm (27). (b) The robust, nonaccelerated, PI dynamic average consensus algorithm (30). (c) The nonrobust, accelerated, dynamic average consensus algorithm (29). (d) The robust, accelerated, PI dynamic average consensus algorithm (31).

optimized when there is a pole at z = ρ when λ = λ₂ and at z = −ρ when λ = λ_N, that is,

\[
0 = 1 + \frac{\lambda_2\, k_I}{\rho - 1} \qquad\text{and}\qquad 0 = 1 + \frac{\lambda_N\, k_I}{-\rho - 1}.
\]

Solving these conditions for k_I and ρ gives

\[
k_I = \frac{2}{\lambda_2 + \lambda_N} \qquad\text{and}\qquad \rho = \frac{\lambda_N - \lambda_2}{\lambda_N + \lambda_2}.
\]

While the previous choice of parameters optimizes the convergence rate, even faster convergence can be achieved by introducing extra dynamics into the dynamic average consensus algorithm. Consider the accelerated dynamic average consensus algorithm in Figure 13(c), given by

\[
p^i_{k+1} = (1+\rho^2)\, p^i_k - \rho^2\, p^i_{k-1} + k_I \sum_{j=1}^N a_{ij}\,(x^i_k - x^j_k), \qquad p^i_0 \in \mathbb{R},\ i \in \{1,\dots,N\}, \tag{29a}
\]
\[
x^i_k = u^i_k - p^i_k. \tag{29b}
\]

Instead of a simple integrator, the transfer function in the feedback loop now has two poles (one of which is still at z = 1). To implement the dynamic average consensus algorithm, each agent must track two internal state variables (p^i_k and p^i_{k−1}). This small increase in memory, however, can result in a significant improvement in the rate of convergence, as discussed next.

Once again, root locus techniques can be used to design the parameters to optimize the convergence rate. Figure 14(b) shows the root locus of the accelerated dynamic average consensus algorithm (29). By adding an open-loop pole at z = ρ² and zero at z = 0, the root locus now goes around the ρ circle. Similar to the previous case, the convergence rate is optimized when there is a repeated pole at z = ρ when λ = λ₂ and a repeated pole at z = −ρ when λ = λ_N. This gives the optimal parameter k_I and convergence rate ρ given by

\[
k_I = \frac{4}{\big(\sqrt{\lambda_2} + \sqrt{\lambda_N}\big)^2} \qquad\text{and}\qquad \rho = \frac{\sqrt{\lambda_N} - \sqrt{\lambda_2}}{\sqrt{\lambda_N} + \sqrt{\lambda_2}}.
\]

FIGURE 14 The root locus design of dynamic average consensus algorithms. The dynamic average consensus algorithm poles are the points on the root locus at gains λ_i for i ∈ {1, ..., N}, where λ_i are the eigenvalues of the graph Laplacian. To optimize the convergence rate, the parameters are chosen to minimize ρ such that all poles corresponding to eigenvalues λ_i for i ∈ {2, ..., N} are inside the circle centered at the origin of radius ρ. The dynamic average consensus algorithm then converges linearly with rate ρ. (a) The nonaccelerated dynamic average consensus algorithm in Figure 13(a). (b) The accelerated dynamic average consensus algorithm in Figure 13(c).

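The effect of the extra dynamics is easy to see numerically. The sketch below compares the nonaccelerated algorithm (27) with the accelerated algorithm (29), each using the optimal parameters derived above, for constant references on a path graph; the path graph is an illustrative choice made so that λ₂/λ_N is small and the acceleration is clearly visible.

```python
import numpy as np

# Comparison of (27) and (29) with their optimal parameters for constant
# reference signals.  Both are initialized with zero integrator states, so
# the consensus component of the estimates is exact from the start.
N = 8
A = np.zeros((N, N))
for i in range(N - 1):                     # path graph
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
lam = np.sort(np.linalg.eigvalsh(L))
lam2, lamN = lam[1], lam[-1]

u = np.linspace(-3.0, 3.0, N)              # constant reference signals

def run_standard(steps):                   # algorithm (27), k_I = 2/(l2+lN)
    kI = 2.0 / (lam2 + lamN)
    p = np.zeros(N)
    for _ in range(steps):
        p = p + kI * (L @ (u - p))
    return np.max(np.abs((u - p) - u.mean()))

def run_accelerated(steps):                # algorithm (29), optimal k_I, rho
    kI = 4.0 / (np.sqrt(lam2) + np.sqrt(lamN)) ** 2
    rho = (np.sqrt(lamN) - np.sqrt(lam2)) / (np.sqrt(lamN) + np.sqrt(lam2))
    p_prev, p = np.zeros(N), np.zeros(N)
    for _ in range(steps):
        p, p_prev = (1 + rho ** 2) * p - rho ** 2 * p_prev + kI * (L @ (u - p)), p
    return np.max(np.abs((u - p) - u.mean()))

for steps in (25, 50, 100):
    print(f"k = {steps:3d}:  error (27) = {run_standard(steps):.2e}, "
          f"error (29) = {run_accelerated(steps):.2e}")
```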


Laplacian blocks, resulting in a quadratic dependence
1 on the eigenvalues. Instead of a linear root locus, the
Convergence Rate ρ

0.8
design involves a quadratic root locus. Although this
complicates the design process, closed-form solutions
0.6 for the algorithm parameters can still be found [57],
0.4 even for the accelerated version using extra dynamics,
given by
0.2
N
0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 q ik + 1 = 2tq ik - t 2 q ik - 1 + k p / a ij ((x ik - x kj) + (p ik - p kj)), (31a)
j=1
λ 2/λN
N

Figure 13(a) (Nonaccelerated, Nonrobust) p i


k+1
i
= (1 + t 2) p - t 2 p
k
i
k-1 + kI / a ij (x ik - x kj), (31b)
j=1
Figure 13(b) (Nonaccelerated, Robust)
x ik = u ik - q ik, p i0, q i0 ! R, i ! {1, f, N}, (31c)
Figure 13(c) (Accelerated, Nonrobust)
Figure 13(d) (Accelerated, Robust)
whose block diagram is in Figure 13(d). The resulting con-
vergence rate is plotted in Figure 15. Although the conver-
FIGURE 15 The convergence rate t as a function of m 2 /m N for the
gence rates of the standard and accelerated PI dynamic
dynamic average consensus algorithms in Figure 13. The acceler-
ated dynamic average consensus algorithms (dashed lines) use average consensus algorithms are slower than those of (27)
extra dynamics to enhance the convergence rate, as opposed to and (29), respectively, they have the additional advantage of
the nonaccelerated algorithms (solid lines). Also, the robust algo- being robust to initial conditions.
rithms (green) use the proportional-integral structure to obtain The following result summarizes the parameter choices
robustness to initial conditions as opposed to the nonrobust algo-
that optimize the convergence rate for each of the discrete-
rithms (blue). The graph is assumed to be constant, connected,
and undirected with Laplacian eigenvalues m i for i ! {1, f, N}. time dynamic average consensus algorithms in Figure 13.
Closed-form expressions for the rates and algorithm parameters The results for the first two algorithms follow from the pre-
are provided in Theorem 5. vious discussion, whereas details of the results for the last
two algorithms can be found in [57].

The convergence rates of both the standard (27) and Theorem 5: Optimal Convergence Rates of
accelerated (29) versions of the dynamic average consen- Discrete-Time Dynamic Average Consensus Algorithms
sus algorithm are plotted in Figure 15 as a function of the Let G be a connected, undirected graph. Suppose the refer-
ratio m 2 /m N . ence signal u i at each agent i ! {1, f, N} is a constant scalar.
Consider the dynamic average consensus algorithms in
Robust Dynamic Average Consensus Algorithms Figure 13, with the parameters chosen according to Table 3
Although the previous dynamic average consensus algo- [the algorithms in Figure 13(a) and (c) are initialized such
rithms are not robust to initial conditions, root locus that the average of the initial integrator states is zero, that
techniques can also be used to optimize the convergence is, R Ni = 1 p i0 = 0@ . The agreement states x ik, i ! {1, f, N} con-
rate of dynamic average consensus algorithms that are verge to u avg exponentially with rate t.  
robust to initial conditions. Consider the discrete-time
version of the PI estimator from (19), whose iterations are PERFECT TRACKING USING A PRIORI KNOWLEDGE
given by OF THE INPUT SIGNALS
The design of the dynamic average consensus algorithms
N described in the discussion so far does not require prior
q i
k+1
i
= tq + k p
k / a ij ((x i
k
j
k
i
k
j
- x ) + (p - p )), (30a)
k
knowledge of the reference signals and is therefore
j=1
N broadly applicable. This also comes at a cost. The conver-
p ik + 1 = p ik + k I / a ij (x ik - x kj), (30b) gence guarantees of these algorithms are strong only
j=1
when the reference signals are constant or slowly varying.
x ik = u ik - q ik, p i0, q i0 ! R, i ! {1, f, N}, (30c)
The error of such algorithms can be large, however, when
the reference signals change quickly in time. This section
with parameters t, k p, k I ! R . The block diagram of this describes dynamic average consensus algorithms, which
algorithm is shown in Figure 13(b). are capable of tracking fast time-varying signals with
Because the dynamic average consensus algorithms either zero or small steady-state error. In each case, their
(27) and (29) have only one Laplacian block in the block design assumes some specific information about the
diagram, the resulting root loci are linear in the Lapla- nature of the reference signals. In particular, consider refer-
cian eigenvalues. For the PI dynamic average consensus ence signals that 1) have a known model, 2) are band lim-
algorithm, however, the block diagram contains two ited, or 3) have bounded derivatives.



TABLE 3  The parameter selection for the dynamic average consensus algorithms of Figure 13 as a function of the minimum
and maximum nonzero Laplacian eigenvalues m 2 and m N , respectively, with m r : = m 2 /m N . N/A: not applicable.

t kI kp
Figure 13(a) mN - m2 2 N/A
mN + m2 m2 + mN
Z 1-t
Figure 13(b)
2
] 8 - 8m r + m r , 1 t (1 - t) m r
] 8 - m r2 0 1 mr # 3 - 5 m2 mN t + mr - 1
[ 2
] (1 - m r) (4 + m r (5 - m2 r)) - m r (1 - m r) , 3 - 5 1 m r # 1
] 2 (1 + m r )
\
Figure 13(c) mN - m2 4 N/A
mN + m2 ( m2 + mN ) 2
Z (1 - t) 2 (2 + 2 1 - m r - m r) k I
Figure 13(d) ] 6 - 2 1 - m r + m r - 4 2 - 2 1 - m r + m r) , 0 1 m # 2 ( 2 - 1)
] 2 + 2 1 - mr - mr
r m2
[
] - 3 - 2 1 - m r + m r + 2 2 + 2 1 - m r - m r , 2 ( 2 - 1) 1 m r # 1
] - 1 - 2 1 - mr + mr
\

Signals With a Known Model (Discrete Time) where the mth divided difference is defined recursively
The discrete-time dynamic average consensus algorithms as D (m) u ik = D (m - 1) u ik - D (m - 1) u ik - 1 for m $ 2 with D (1) u ik =
discussed previously are designed with the idea of tracking u ik - u ik - 1 . The estimate of the average, however, is delayed
constant reference signals with zero steady-state error. To do by m iterations due to the transfer function having a factor
this, the algorithms contain an integrator in the feedback of z -m between the input and output. This problem is fixed
loop. This concept generalizes to time-varying signals by the dynamic average consensus algorithm in Figure 16(b),
with a known model using the internal model principle. given by
Consider reference signals whose z-transform has the form
N
u i (z) = n i (z) /d (z), where n i (z) and d (z) are polynomials in p i1, k + 1 = p i1, k + / a ij ((u ik - u kj ) - (p i1,k - p 1j ,k )), (33a)
z for i ! {1, f, N} . Dynamic average consensus algorithms j=1

can be designed to have zero steady-state error for such sig- N


p 2i , k + 1 = p 2i , k + / a ij ((u ik - u k ) - (p i1, k - p 1, k) - (p i2, k - p 2, k )),
j j j
nals by placing the model of the input signals [that is, d (z)] in
j=1
the feedback loop. Some common examples of models are (33b)
h
N m

d (z ) = )
(z - 1) m, polynomial of degree m - 1 p im, k + 1 = p im, k + / a ij ((u ik - u kj) - / (p ,i,k - p ,j,k)), (33c)
z 2 - 2z cos (~) + 1, sinusoid with frequency ~. m
j=1 ,=1

x =u -/p ,
i
k
i
k
i
,, k p i
,, 0 ! R, , ! {1, f, m}, i ! {1, f, N},
This section focuses on dynamic average consensus algo-  ,=1
(33d)
rithms that track degree m - 1 polynomial reference sig-
nals with zero steady-state error. which tracks degree m - 1 polynomial reference signals
Consider the dynamic average consensus algorithms in with zero steady-state error without delay. Note, however,
Figure 16. The transfer function of each algorithm has m that the communication graph is assumed to be constant to
zeros at z = 1, so the algorithms track degree m - 1 polyno- use frequency-domain arguments; although the output of
mial references signals with zero steady-state error. The the dynamic average consensus algorithm in Figure 16(a) is
time-domain equations for the dynamic average consensus delayed, it also has nice tracking properties when the com-
algorithm in Figure 16(a) are munication graph is time varying, whereas the dynamic
average consensus algorithm in Figure 16(b) does not.
N To track degree m - 1 polynomial reference signals,
x i1, k + 1 = x i1, k - / a ij (x i1,k - x 1j ,k ) + D (m) u k, (32a) each dynamic average consensus algorithm in Figure 16
j=1
N cascades m dynamic average consensus algorithms, each
x i
2, k + 1 =x i
2, k - / a ij (x 2i ,k - x 2j ,k) + x i1,k, (32b) with a pole at z = 1 in the feedback loop. The dynamic
j=1
average consensus algorithm (27) is cascaded in Figure 16(b),
h
N
but any of the dynamic average consensus algorithms
x im, k + 1 = x im, k - / a ij (x im,k - x mj ,k) + x im - 1,k, (32c) from the previous section could also be used. For example,
j=1
the PI dynamic average consensus algorithm could be cas-
x ik = x im, k, x i,, 0 ! R, , ! {1, f, m}, i ! {1, f, N}, (32d) caded m times to track degree m - 1 polynomial reference



signals with zero steady-state error independent of the ini- signals are band limited. In this case, feedforward dynamic
tial conditions. average consensus algorithm designs can be used to achieve
In general, reference signals with model d (z) can be arbitrarily small steady-state error.
tracked with zero steady-state error by cascading simple For this discussion, assume that the reference signals
dynamic average consensus algorithms, each of which tracks are band limited with known cutoff frequency. In particu-
a factor of d (z) . In particular, suppose d (z) = d 1 (z) d 2 (z) f lar, let U i (z) be the z-transform of the ith reference signal
d m (z) . Then, m dynamic average consensus algorithms can {u ik} . Then U i (z) is band limited with cutoff frequency i c if
be cascaded, where the ith component contains the model U i (exp ( ji)) = 0 for all i ! (i c, r] .
d i (z) for i = 1, f, m. Alternatively, a single dynamic average Consider the dynamic average consensus algorithm in
consensus algorithm can be designed that contains the entire Figure 17. The reference signals are passed through a prefil-
model d (z) . This approach using an internal model version ter h (z) and then multiplied m times by the consensus
of the PI dynamic average consensus algorithm is designed matrix I - L with a delay between each (to allow time for
in [66] in both continuous time and discrete time. communication). The transfer function from the input U (z)
In many practical applications, the exact model of the to the output X (z) is
reference signals is unknown. However, it is shown in [67]
that a frequency estimator can be used in conjunction with H (z, L) = h (z) 1m (I - L) m.
z
an internal model dynamic average consensus algorithm to
still achieve zero steady-state error. In particular, the fre- For the tracking error to be small, h (z) must approximate
quency of the reference signals is estimated such that the z m for all i ! [0, i c], where z = exp (ji) and i c is the cutoff
estimate converges to the actual frequency. This time-vary- frequency. In this case, the transfer function in the pass-
ing estimate of the frequency is then used in place of the band is approximately
true frequency to design the feedback dynamic average
consensus algorithm [67]. H (z, L) . (I - L) m,

Band-Limited Signals (Discrete Time) so the error can be made small by making m large enough
To use algorithms designed using the model of the reference (so long as L is scaled such that < I - L - 11 T /N < 2 1 1) .
signals, the signals must be composed of a finite number of Specifically, the prefilter is designed such that h (z) is
known frequencies. When either the frequencies are unknown proper and h (z) . z m for z = exp (ji) for all i ! [0, i c] [note
or there are infinitely many frequencies, dynamic average that h (z) = z m cannot be used because it is not causal]. An
consensus algorithms can still be designed if the reference m-step filter can be obtained by cascading a one-step filter m


FIGURE 16 A block diagram of dynamic average consensus algorithms that track polynomial signals of degree m - 1 with zero steady-state
error when initialized correctly (neither algorithm is robust to initial conditions). The indicated section is repeated in series m times. (a) The
performance of (32) does not degrade when the graph is time varying, but the estimate is delayed by m iterations. Furthermore, the algo-
rithm is numerically unstable when m is large and eventually diverges from tracking the average when implemented using finite precision
arithmetic. (b) The estimate of the average by (33) is not delayed, and the algorithm is numerically stable, but the tracking performance
degrades when the communication graph is time varying. (a) Dynamic average consensus algorithm (32) in [76] where D (m) = (1 - z -1) m is
the mth divided difference (see also [77] for a step-size analysis). (b) Dynamic average consensus algorithm (33), which is the algorithm
in [53] cascaded in series m times.

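The cascade structure of Figure 16(b) is compact to simulate. The sketch below runs the algorithm (33) with m = 2 stages on per-agent ramp (degree-1 polynomial) references and shows that the steady-state error goes to zero, as claimed above; the graph weights, ramp rates, and number of steps are illustrative assumptions.

```python
import numpy as np

# Sketch of the cascaded algorithm (33) with m = 2 stages, which tracks
# degree-1 (ramp) reference signals with zero steady-state error.  Each
# stage is a copy of the basic consensus estimator; the graph weights are
# chosen so that all nonzero Laplacian eigenvalues lie in (0, 2).
N, m, steps = 4, 2, 300
A = np.zeros((N, N))
for i in range(N):                         # ring with edge weights 0.25
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 0.25
L = np.diag(A.sum(axis=1)) - A             # eigenvalues 0, 0.5, 0.5, 1

slope  = np.array([1.0, -0.5, 2.0, 0.3])   # per-agent ramp rates
offset = np.array([0.0, 4.0, -1.0, 2.5])
u = lambda k: offset + slope * k

p = np.zeros((m, N))                       # internal states, all start at 0
for k in range(steps):
    uk = u(k)
    x = uk - p.sum(axis=0)                 # (33d): estimate at step k
    cum = np.cumsum(p, axis=0)             # partial sums p_1 + ... + p_l
    p = p + np.array([L @ (uk - cum[l]) for l in range(m)])   # (33a)-(33c)
print("final errors:", np.round(np.abs(x - u(steps - 1).mean()), 6))
```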


times in series. In other words, let h (z) = [zf (z)] m, where be used in the feedback loop to provide infinite loop gain
f (z) is strictly proper and approximates unity in the over all frequencies, so no model of the reference signals is
passband. Because f (z) must approximate unity in both required. Furthermore, such continuous-time dynamic
magnitude and phase, a standard lowpass filter cannot be average consensus algorithms are capable of achieving zero
used. Instead, set error tracking in finite time as opposed to the exponential
convergence achieved by discrete-time dynamic average
g ( z)
f (z ) = 1 - , consensus algorithms. One such algorithm is described
lim g (z )
z"3
in [69] as
where g (z) is a proper high-pass filter with cutoff fre-
quency i c . Then f (z) is strictly proper (due to the normal-
.i
xo i = u - k p / sgn (x i (t) - x j (t)), i ! {1, f, N},
izing constant in the denominator) and approximates i
j ! N out
(34a)
unity in the band [0, i c] [because g (z) is high pass]. N N
/ x j (0) = / u j (0) . (34b)
Therefore, a prefilter h (z) that approximates z m in the j=1 j=1
passband can be designed using a standard high-pass
filter g (z) . The block diagram representation in Figure 18(a) indicates
Using such a prefilter, [68] makes the error of the that this algorithm applies sgn in the feedback loop. Under
dynamic average consensus algorithm in Figure 17 arbi- the given assumptions, using a sliding mode argument, the
trarily small if 1) the graph is connected and balanced feedback gain k p can be selected to guarantee zero error
at each time step (in particular, it need not be constant), tracking in finite time, provided that an upper bound c
.
2) L is scaled such that < I - L - 11 T / N < 2 1 1, 3) the of the form sup x ! [0, 3) < u (x) < = c 1 3 is known [69]. The
number of stages m is made large enough, 4) the pre- dynamic consensus algorithm (34) can also be implemented
filter can approximate z m arbitrarily closely in the pass- without derivative information of the reference signals in
band, and 5) exact arithmetic is used. Note that exact an equivalent way as
arithmetic is required for arbitrarily small errors because
rounding errors cause high-frequency components in the N

reference signals.
po i = k p / sgn (x i - x j), / p j (0) = 0, (35a)
j ! N iout j=1

i i i
x = u - p , i ! {1, f, N} . (35b)
Signals With Bounded Derivatives (Continuous Time)
Stronger tracking results can be obtained using algorithms
implemented in continuous time. Here, a number of contin- T he cor respondi ng block diag ra m is show n i n
uous-time dynamic average consensus algorithms are pre- Figure 18(b).
sented that are capable of tracking time-varying reference It is simple to see from the block diagram of Figure 18(a)
signals whose derivatives are bounded with zero error in why (34) is not robust to initial conditions; the integrator
finite time. For simplicity, assume that the communication state is directly connected to the output and therefore
graph is constant, connected, and undirected. Also, the affects the steady-state output in the consensus direction.
reference signals are assumed to be differentiable with This issue is addressed by the dynamic average consensus
bounded derivatives. algorithm in Figure 18(c), given by
In discrete time, zero steady-state error is obtained by
placing the internal model of the reference signals in the po i = k p sgn c / ^x i - x jh m , p i (0) ! R, (36a)
feedback loop. This provides infinite loop gain at the fre- j ! N iout

quencies contained in the reference signals. In continuous xi = ui - / (p i - p j), i ! {1, f, N}, (36b)
time, however, the discontinuous signum function sgn can j ! N iout


FIGURE 17 The feedforward dynamic average consensus algorithm for tracking the average of band-limited reference signals. The prefilter
h (z) is applied to the reference signals before passing through the graph Laplacian. For an appropriately designed prefilter, the dynamic
average consensus algorithm can track band-limited reference signals with arbitrarily small steady-state error when using exact arithmetic
(and small error for finite precision) [68].

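The prefilter construction described above can be checked numerically before deploying the feedforward scheme of Figure 17. The sketch below builds f(z) = 1 − g(z)/lim_{z→∞} g(z) from a high-pass g(z) and evaluates how closely h(z) = [z f(z)]^m approximates z^m in the passband; the Butterworth high-pass, its order, and the cutoff choices are illustrative assumptions, not prescribed by the original design.

```python
import numpy as np
from scipy import signal

# Check of the prefilter h(z) = [z f(z)]^m with f(z) = 1 - g(z)/lim g(z),
# where g(z) is a high-pass filter whose stopband covers the passband
# [0, theta_c] of the band-limited reference signals.
m = 3
theta_c = 0.1 * np.pi                      # passband edge (rad/sample)
b, a = signal.butter(6, 0.3, btype='highpass')   # g(z), cutoff above theta_c
g_inf = b[0] / a[0]                        # lim_{z -> inf} g(z)

theta = np.linspace(1e-4, np.pi, 2000)
_, g = signal.freqz(b, a, worN=theta)
f = 1.0 - g / g_inf                        # strictly proper, ~1 in passband
h = (np.exp(1j * theta) * f) ** m          # frequency response of [z f(z)]^m

passband = theta <= theta_c
err = np.abs(h - np.exp(1j * m * theta))
print("max |h - z^m| in passband :", err[passband].max())
print("max |h - z^m| overall     :", err.max())
```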


It is simple to see from the block diagram of Figure 18(a) why (34) is not robust to initial conditions; the integrator state is directly connected to the output and therefore affects the steady-state output in the consensus direction. This issue is addressed by the dynamic average consensus algorithm in Figure 18(c), given by

$$\dot{p}^i = k_p \operatorname{sgn}\Big( \sum_{j \in \mathcal{N}_i^{\text{out}}} (x^i - x^j) \Big), \quad p^i(0) \in \mathbb{R}, \tag{36a}$$
$$x^i = u^i - \sum_{j \in \mathcal{N}_i^{\text{out}}} (p^i - p^j), \quad i \in \{1, \dots, N\}, \tag{36b}$$

which moves the integrator before the Laplacian in the feedback loop. However, this dynamic average consensus algorithm has two Laplacian blocks directly connected, which means that it requires two-hop communication to implement. In other words, two sequential rounds of communication are required at each time instant. In the time domain, each agent must perform the following (in order) at each time $t$: 1) communicate $p^i(t)$, 2) calculate $x^i(t)$, 3) communicate $x^i(t)$, and 4) update $p^i(t)$ using the derivative $\dot{p}^i(t)$. To require only one-hop communication, the dynamic average consensus algorithm in Figure 18(d), given by

$$\dot{q}^i = -\alpha q^i + x^i, \tag{37a}$$
$$\dot{p}^i = k_p \operatorname{sgn}\Big( \sum_{j \in \mathcal{N}_i^{\text{out}}} (q^i - q^j) \Big), \tag{37b}$$
$$x^i = u^i - \sum_{j \in \mathcal{N}_i^{\text{out}}} (p^i - p^j), \quad p^i(0), q^i(0) \in \mathbb{R}, \quad i \in \{1, \dots, N\}, \tag{37c}$$

places a strictly proper transfer function in the path between the Laplacian blocks. The extra dynamics, however, cause the output to converge exponentially instead of in finite time [70].
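To make the communication count concrete, here is a minimal Python sketch (an illustration under assumed data structures, not the article's implementation) of one Euler step of (37); a single exchange of the pair $(p^j, q^j)$ from each neighbor suffices, in contrast with (36), where the outputs must be recomputed and re-communicated within the same time instant.

    import numpy as np

    def step_37(p, q, u_t, A, alpha, kp, dt):
        # One forward-Euler step of algorithm (37). One communication round is
        # enough: each agent broadcasts (p^i, q^i) once and then updates locally.
        x = u_t - (A * (p[:, None] - p[None, :])).sum(axis=1)           # (37c)
        dq = -alpha * q + x                                             # (37a), purely local
        dp = kp * np.sign((A * (q[:, None] - q[None, :])).sum(axis=1))  # (37b)
        return p + dt * dp, q + dt * dq, x

    # By contrast, a step of (36) needs two sequential rounds: the neighbors' p^j
    # are required to form x^i, and the freshly computed x^j must then be
    # communicated before sgn(sum_j (x^i - x^j)) can be evaluated.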


FIGURE 18 A block diagram of discontinuous dynamic average consensus algorithms in continuous time. In each case, the communication graph is assumed to be constant, connected, and balanced with Laplacian matrix $L = BB^T$. Furthermore, the reference signals are assumed to have bounded derivatives. The dynamic average consensus algorithm (34), shown in (a), achieves perfect tracking in finite time and uses one-hop communication, but it is not robust to initial conditions [that is, the steady-state error is zero only if $\mathbf{1}_N^T x(0) = \mathbf{1}_N^T u(0)$]. Furthermore, the derivative of the reference signals is required (see [69]). The dynamic average consensus algorithm (35) shown in (b) is equivalent to the algorithm in (a), although this form does not require the derivative of the reference signals. In this case, the requirement on the initial conditions is $\mathbf{1}_N^T p(0) = 0$. The dynamic average consensus algorithm (36) shown in (c) converges to zero error in finite time and is robust to initial conditions but requires two-hop communication (in other words, two rounds of communication are performed at each time instant) (see [70]). The dynamic average consensus algorithm (37) shown in (d) is robust to initial conditions and uses one-hop communication but converges to zero error exponentially instead of in finite time (see [70]). The dynamic average consensus algorithm (38) shown in (e) is robust to initial conditions and uses one-hop communication, although the error converges to zero exponentially instead of in finite time (see [71]). (a) The dynamic average consensus algorithm (34). (b) The dynamic average consensus algorithm (35). (c) The dynamic average consensus algorithm (36). (d) The dynamic average consensus algorithm (37). (e) The dynamic average consensus algorithm (38).



Alternatively, under the given assumptions, a sliding-mode-based dynamic average consensus algorithm with zero error tracking, which can be arbitrarily initialized, is provided in [71] as

$$\dot{x}^i = \dot{u}^i - (x^i - u^i) - k_p \sum_{j \in \mathcal{N}_i^{\text{out}}} \operatorname{sgn}(x^i - x^j), \quad x^i(0) \in \mathbb{R}, \quad i \in \{1, \dots, N\},$$

or equivalently,

$$\dot{p}^i = -p^i + k_p \sum_{j \in \mathcal{N}_i^{\text{out}}} \operatorname{sgn}(x^i - x^j), \quad p^i(0) \in \mathbb{R}, \tag{38a}$$
$$x^i = u^i - p^i, \quad i \in \{1, \dots, N\}. \tag{38b}$$

However, this algorithm requires both the reference signals and their derivatives to be bounded with known values $\gamma_1$ and $\gamma_2$: $\sup_{\tau \in [0,\infty)} \|u(\tau)\| = \gamma_1 < \infty$ and $\sup_{\tau \in [0,\infty)} \|\dot{u}(\tau)\| = \gamma_2 < \infty$. These values are required to design the proper sliding mode gain $k_p$.
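As a hedged illustration of this boundedness requirement (the signals below are hypothetical and are not the article's examples), the constants $\gamma_1$ and $\gamma_2$ can be estimated numerically by sampling candidate reference signals and their derivatives over a long horizon before selecting the sliding-mode gain $k_p$ according to [71].

    import numpy as np

    t = np.linspace(0.0, 100.0, 200_001)
    # Hypothetical reference signals u^i(t) and their analytic derivatives.
    u = np.vstack([np.sin(0.8 * t), 0.5 * np.cos(0.6 * t) + 1.0, np.arctan(0.5 * t)])
    du = np.vstack([0.8 * np.cos(0.8 * t), -0.3 * np.sin(0.6 * t), 0.5 / (1.0 + (0.5 * t) ** 2)])

    gamma1 = np.max(np.linalg.norm(u, axis=0))    # numerical estimate of sup_t ||u(t)||
    gamma2 = np.max(np.linalg.norm(du, axis=0))   # numerical estimate of sup_t ||du/dt||
    print(f"gamma_1 ~ {gamma1:.3f}, gamma_2 ~ {gamma2:.3f}")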
The continuous-time finite-time algorithms described in this section exhibit a sliding mode behavior. In fact, they converge in finite time to the agreement manifold and then slide on it by switching continuously at an infinite frequency between two system structures. This phenomenon is called chattering. Recall that commutation at infinite frequency between two subsystems is called the Zeno phenomenon in the literature on hybrid systems. See [72] for a detailed discussion of the relation between first-order sliding mode chattering and the Zeno phenomenon. From a practical perspective, chattering is undesirable and leads to excessive control energy expenditure [73]. A common approach to eliminate chattering is to smooth out the control discontinuity in a thin boundary layer around the switching surface. However, this approach leads to a tracking error that is proportional to the thickness of the boundary layer. Another approach to address chattering is the use of higher-order sliding mode control; see [74] for details. To the best of our knowledge, higher-order sliding mode control has not been used in the context of dynamic average consensus, although there exist results for other networked agreement problems [75].
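One way to implement the boundary-layer smoothing mentioned above is to replace sgn with a saturation; the sketch below (illustrative only, with an assumed layer width phi) shows a drop-in substitution for the Euler sketches given earlier.

    import numpy as np

    def sat(z, phi=0.05):
        # Boundary-layer approximation of sgn: linear for |z| <= phi, saturated outside.
        return np.clip(z / phi, -1.0, 1.0)

    # Hypothetical usage, for example in the Euler step of (35a):
    #   p += dt * kp * (A * sat(diff)).sum(axis=1)
    # A smaller phi behaves more like the discontinuous law (smaller tracking error,
    # more chattering); a larger phi smooths the input but leaves a steady-state
    # error proportional to the layer thickness.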
CONCLUSIONS AND FUTURE DIRECTIONS
This article has provided an overview of the state of the art on the available distributed algorithmic solutions to tackle the dynamic average consensus problem. It begins by exploring several applications of dynamic average consensus in cyberphysical systems, including distributed formation control, state estimation, convex optimization, and optimal resource allocation. Using dynamic average consensus as a backbone, these advanced distributed algorithms enable groups of agents to coordinate to solve complex problems. Starting from the static consensus problem, we then derived dynamic average consensus algorithms for various scenarios. Continuous-time algorithms are first introduced (along with simple techniques for analyzing them) using block diagrams to provide intuition for the algorithmic structure. To reduce the communication bandwidth, how to choose the step sizes to optimize the convergence rate when implemented in discrete time is shown, along with how to accelerate the convergence rate by introducing extra dynamics. Finally, how to use a priori information about the reference signals to design algorithms with improved tracking performance is shown.

The goal is that the article helps the reader obtain an overview of the progress and intricacies of this topic and appreciate the design tradeoffs faced when balancing desirable properties for large-scale interconnected systems, such as convergence rate, steady-state error, robustness to initial conditions, internal stability, amount of memory required on each agent, and amount of communication between neighboring agents. Given the importance of the ability to track the average of time-varying reference signals in network systems, it is expected that the number and breadth of applications for dynamic average consensus algorithms will continue to increase in such areas as the smart grid, autonomous vehicles, and distributed robotics.

Many interesting questions and avenues for further research remain open. For instance, the emergence of opportunistic state-triggered ideas in the control and coordination of networked cyberphysical systems presents exciting opportunities for the development of novel solutions to the dynamic average consensus problem. The underlying theme of this effort is to abandon the paradigm of periodic or continuous sampling/control in exchange for deliberate, opportunistic aperiodic sampling/control to improve efficiency. Beyond the brief incursion on this topic in "Dynamic Average Consensus Algorithms with Continuous-Time Evolution and Discrete-Time Communication," further research is needed on synthesizing triggering criteria for individual agents that prescribe when information is to be shared with or acquired from neighbors, which lead to convergence guarantees and are amenable to the characterization of performance improvements over periodic discrete-time implementations.

The use of event triggering also opens up the way to employ other interesting forms of communication and computation among the agents when solving the dynamic average consensus problem, such as, for instance, the cloud. In cloud-based coordination, instead of direct peer-to-peer communication, agents interact indirectly by opportunistically communicating with the cloud to leave messages for other agents. These messages can contain information about their current estimates, future plans, or fallback strategies. The use of the cloud also opens the possibility of network agents with limited capabilities taking advantage of high-performance computation capabilities to deal with complex processes. The time-varying nature of the signals available at the individual agents in the dynamic average consensus problem raises many interesting challenges that must be addressed to take advantage of this approach. Related to the focus of this effort on the communication aspects, the development of initialization-free dynamic average consensus algorithms over directed graphs is also another important line of research.



Dynamic Average Consensus Algorithms With Continuous-Time
Evolution and Discrete-Time Communication

We discuss here an alternative to the discretization route explained in "Euler Discretizations of Continuous-Time Dynamic Average Consensus Algorithms" to produce implementable strategies from the continuous-time algorithms described in the article. This approach is based on the observation that, when implementing the algorithms over digital platforms, computation can still be reasonably approximated by continuous-time evolution (given the ever-growing capabilities of modern embedded processors and computers), whereas communication is a process that still requires proper acknowledgment of its discrete-time nature. The basic idea is to opportunistically trigger, based on the network state, the times for information sharing among agents to take place and allow individual agents to determine these autonomously. This has the potential to result in more efficient algorithm implementations because performing communication usually requires more energy than computation [S22]. In addition, the use of fixed communication step sizes can lead to a wasteful use of the network resources because of the need to select it, taking into account worst-case scenarios. These observations are aligned with the ongoing research activity [S23], [S24] on event-triggered control and aperiodic sampling for controlled dynamical systems that seeks to trade computation and decision making for less communication, sensing, or actuator effort while still guaranteeing a desired level of performance. The surveys [S25], [S26] describe how these ideas can be employed to design event-triggered communication laws for static average consensus.

Motivated by these observations, [S15] investigates a discrete-time communication implementation of the continuous-time algorithm (25) for dynamic average consensus. Under this strategy, the algorithm becomes

$$\dot{v}^i = \alpha \beta \sum_{j=1}^{N} a_{ij} (\hat{x}^i - \hat{x}^j), \tag{S21a}$$
$$\dot{x}^i = \dot{u}^i - \alpha (x^i - u^i) - \beta \sum_{j=1}^{N} a_{ij} (\hat{x}^i - \hat{x}^j) - v^i, \tag{S21b}$$

for each $i \in \{1, \dots, N\}$, where $\hat{x}^i(t) = x^i(t_k^i)$ for $t \in [t_k^i, t_{k+1}^i)$, with $\{t_k^i\} \subset \mathbb{R}_{\ge 0}$ denoting the sequence of times at which agent $i$ communicates with its in-neighbors. The basic idea is that agents share their information with neighbors when the uncertainty in the outdated information is such that the monotonic convergent behavior of the overall network can no longer be guaranteed. The design of such triggers is challenging because of the following requirements: 1) triggers need to be distributed so that agents can check them with the information available to them from their out-neighbors, 2) they must guarantee the absence of Zeno behavior (the undesirable situation where an infinite number of communication rounds are triggered in a finite amount of time), and 3) they have to ensure the network achieves dynamic average consensus, although agents operate with outdated information while inputs are changing with time.

Consider the following event-triggered communication law [S15]: each agent is to communicate with its in-neighbors at times $\{t_k^i\}_{k \in \mathbb{N}} \subset \mathbb{R}_{\ge 0}$, starting at $t_1^i = 0$, determined by

$$t_{k+1}^i = \operatorname{argmax} \big\{ t \in [t_k^i, \infty) \;\big|\; |x^i(t_k^i) - x^i(t)| \le \epsilon^i \big\}. \tag{S22}$$

Here, $\epsilon^i \in \mathbb{R}_{>0}$ is a constant value that each agent chooses locally to control its inter-event times and avoid Zeno behavior. Specifically, the interexecution times of each agent $i \in \{1, \dots, N\}$ employing (S22) are lower bounded by

$$\tau^i = \frac{1}{\alpha} \ln\Big( 1 + \frac{\alpha \epsilon^i}{\gamma^i} \Big), \tag{S23}$$

where $\gamma^i$ and $\eta$ are positive real numbers that depend on the initial conditions and network parameters (we omit their specific form for simplicity, but see [S15] for the explicit expressions). The lower bound (S23) shows that, for a positive nonzero $\epsilon^i$, the interexecution times are bounded away from zero, and it is guaranteed that, for networks with a finite number of agents, the implementation of (S21) with the communication trigger law (S22) is Zeno free.
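As a rough illustration of how the trigger (S22) is checked locally (the graph, gains, and reference signals below are assumptions, not those of [S15]), each agent stores its last broadcast value $\hat{x}^i$ and transmits again only when its current state has drifted from that value by more than $\epsilon^i$:

    import numpy as np

    # Illustrative event-triggered Euler simulation of (S21) with trigger (S22);
    # all parameters, the graph, and the reference signals are assumptions.
    N, alpha, beta, dt, T = 4, 1.0, 4.0, 1e-3, 20.0
    eps = 0.1 * np.ones(N)
    A = np.ones((N, N)) - np.eye(N)             # complete (hence weight-balanced) graph

    def u(t):
        return np.array([np.sin(0.5 * t + i) for i in range(N)])

    def du(t):
        return np.array([0.5 * np.cos(0.5 * t + i) for i in range(N)])

    x, v = u(0.0), np.zeros(N)                  # sum_j v^j(0) = 0, as required
    xhat = x.copy()                             # last value broadcast by each agent
    events = np.zeros(N, dtype=int)

    for k in range(int(T / dt)):
        t = k * dt
        trig = np.abs(xhat - x) > eps           # local check of the condition in (S22)
        xhat[trig] = x[trig]                    # triggered agents broadcast their state
        events += trig
        lap = (A * (xhat[:, None] - xhat[None, :])).sum(axis=1)
        v = v + dt * alpha * beta * lap                             # (S21a)
        x = x + dt * (du(t) - alpha * (x - u(t)) - beta * lap - v)  # (S21b)

    print("communication rounds per agent:", events)
    print("tracking errors at the final time:", x - u(T).mean())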
The following result formally describes the convergence behavior of (S21) under (S22) when the interaction topology is modeled by a strongly connected and weight-balanced digraph.

Theorem S3: Convergence of (S21) Over Strongly Connected and Weight-Balanced Digraph with Asynchronous Distributed Event-Triggered Communication [S15]
Let $\mathcal{G}$ be a strongly connected and weight-balanced digraph. Assume the reference signals satisfy $\sup_{t \in [0,\infty)} |\dot{u}^i(t)| = l^i < \infty$, for $i \in \{1, \dots, N\}$, and $\sup_{t \in [0,\infty)} \|\Pi_N \dot{u}(t)\| = \gamma < \infty$. For any $\alpha, \beta \in \mathbb{R}_{>0}$, (S21) over $\mathcal{G}$ starting from $x^i(0) \in \mathbb{R}$ and $v^i(0) \in \mathbb{R}$ with $\sum_{j=1}^{N} v^j(0) = 0$, where each agent $i \in \{1, \dots, N\}$ communicates with its neighbors at times $\{t_k^i\}_{k \in \mathbb{N}} \subset \mathbb{R}_{\ge 0}$, starting at $t_1^i = 0$, determined by (S22) with $\epsilon \in \mathbb{R}^N_{>0}$, satisfies

$$\limsup_{t \to \infty} |x^i(t) - u^{\text{avg}}(t)| \le \frac{\gamma + \beta \|L\| \|\epsilon\|}{\beta \hat{\lambda}_2}, \tag{S24}$$

for $i \in \{1, \dots, N\}$ with an exponential rate of convergence of $\min\{\alpha, \beta \hat{\lambda}_2\}$. Furthermore, the interexecution times of agent $i \in \{1, \dots, N\}$ are lower bounded by (S23). □



The expected tradeoff between the desire for longer interevent time and the adverse effect on systems convergence and performance is captured in (S23) and (S24). The lower bound $\tau^i$ in (S23) on the inter-event times allows a designer to compute bounds on the maximum number of communication rounds (and associated energy spent) by each agent $i \in \{1, \dots, N\}$ (and hence the network) during any given time interval. It is interesting to analyze how this lower bound depends on the various problem ingredients: $\tau^i$ is an increasing function of $\epsilon^i$ and a decreasing function of $\alpha$ and $\gamma^i$. Through the latter variable, the bound also depends on the graph topology and the design parameter $\beta$. Given the definition of $\gamma^i$, we can deduce that the faster an input of an agent is changing (larger $l^i$) or the farther the agent initially starts from the average of the inputs, the more often that agent would need to trigger communication. The connection between the network performance and the communication overhead can also be observed here. Increasing $\beta$ or decreasing $\epsilon^i$ to improve the ultimate tracking error bound (S24) results in smaller inter-event times. Given that the rate of convergence of (S21) under (S22) is $\min\{\alpha, \beta \hat{\lambda}_2\}$, decreasing $\alpha$ to increase the inter-event times slows down the convergence.

When the interaction topology is a connected graph, the properties of the Laplacian enable the identification of an alternative event-triggered communication law that, compared to (S22), has a longer inter-event time but similar dynamic average tracking performance. Consider the sequence of communication times $\{t_k^i\}_{k \in \mathbb{N}}$ determined by

$$t_{k+1}^i = \operatorname{argmax} \Big\{ t \in [t_k^i, \infty) \;\Big|\; |\hat{x}^i(t) - x^i(t)|^2 \le \frac{1}{4 d_i^{\text{out}}} \sum_{j=1}^{N} a_{ij}(t) \, |\hat{x}^i(t) - \hat{x}^j(t)|^2 + \frac{1}{4 d_i^{\text{out}}} (\epsilon^i)^2 \Big\}. \tag{S25}$$

Compared to (S22), the extra term $\frac{1}{4 d_i^{\text{out}}} \sum_{j=1}^{N} a_{ij}(t) |\hat{x}^i(t) - \hat{x}^j(t)|^2$ in the communication law (S25) allows agents to have longer inter-event times. Formally, the interexecution times of agent $i \in \{1, \dots, N\}$ implementing (S25) are lower bounded by

$$\tau^i = \frac{1}{\alpha} \ln\Big( 1 + \frac{\alpha \epsilon^i}{2 \bar{\gamma}^i d_i^{\text{out}}} \Big), \tag{S26}$$

for positive constants $\bar{\gamma}^i$; see [S15] for explicit expressions. Numerical examples in [S15] show that the implementation of (S25) for connected graphs results in inter-event times longer than the ones of the event-triggered law (S22). Figure S2 shows one of those examples. Similar results can also be derived for time-varying, jointly connected graphs (see [S15] for a complete exposition).

FIGURE S2 A comparison between the event-triggered algorithm (S21) employing the event-triggered communication law (S25) and the Euler discretized implementation of (25), as described in (S15) with fixed step size [S15]. Both of these algorithms use $\alpha = 1$ and $\beta = 4$. The network is a weight-balanced digraph of five agents with unit weights. The inputs are $r^1(t) = 0.5 \sin(0.8t)$, $r^2(t) = 0.5 \sin(0.7t) + 0.5 \cos(0.6t)$, $r^3(t) = \sin(0.2t) + 1$, $r^4(t) = \operatorname{atan}(0.5t)$, and $r^5(t) = 0.1 \cos(2t)$. In plot (a), which shows the tracking errors $x^i - \frac{1}{N}\sum_{j=1}^{N} r^j$, the black (respectively, gray) lines correspond to the tracking error of the event-triggered algorithm (S21) employing event-triggered law (S25) with $\epsilon^i/(2 d_i^{\text{out}}) = 0.1$ [respectively, the Euler discretized algorithm (S15) with fixed step size $\delta = 0.12$]. Recall from "Euler Discretizations of Continuous-Time Dynamic Average Consensus Algorithms" that convergence for (S15) is guaranteed if $\delta \in (0, \min\{\alpha^{-1}, \beta^{-1} (d_{\max}^{\text{out}})^{-1}\})$, which, for this example, results in $\delta \in (0, 0.125)$. The horizontal blue lines show the $\pm 0.05$ error bound for reference. Part (a) shows that both algorithms exhibit comparable tracking performance. Part (b) shows the communication times of each agent using the event-triggered strategy. The number of times that agents $\{1, \dots, 5\}$ communicate in the time interval $[0, 20]$ is (39, 40, 42, 40, 39), respectively, when implementing event-triggered communication (S25). These numbers are significantly smaller than the communication rounds required by each agent in the Euler discretized algorithm (S15) ($20/0.12 \approx 166$ rounds).

REFERENCES
[S22] H. Karl and A. Willig, Protocols and Architectures for Wireless Sensor Networks. Hoboken, NJ: Wiley, 2005.
[S23] W. P. M. H. Heemels, K. H. Johansson, and P. Tabuada, "An introduction to event-triggered and self-triggered control," in Proc. IEEE Conf. Decision and Control, 2012, pp. 3270–3285.
[S24] L. Hetel et al., "Recent developments on the stability of systems with aperiodic sampling: An overview," Automatica, vol. 76, pp. 309–335, Feb. 2017.
[S25] C. Nowzari, E. García, and J. Cortés. Event-triggered control and communication of network systems for multi-agent consensus. 2017. [Online]. Available: arXiv: 1712.00429
[S26] L. Ding, Q. L. Han, X. Ge, and X. M. Zhang, "An overview of recent advances in event-triggered consensus of multiagent systems," IEEE Trans. Cybern., vol. 48, no. 4, pp. 1110–1123, 2018.



We believe that the interconnection of dynamic average consensus algorithms with other coordination layers in network systems is a fertile area for both research and applications. Dynamic average consensus algorithms are a versatile tool in interconnected scenarios where it is necessary to compute changing estimates of quantities that are employed by other coordination algorithms and whose execution in turn affects the time-varying signals available to the individual agents. This was illustrated in the "Applications of Dynamic Average Consensus in Network Systems" section, which described how (in resource allocation problems) a group of distributed energy resources can collectively estimate the mismatch between the aggregated power injection and the desired load using dynamic average consensus. The computed mismatch, in turn, informs the distributed energy resources in their decision-making process seeking to determine the power injections that optimize their generation cost, which, in turn, changes the mismatch computed by the dynamic average consensus algorithm. The fact that the time-varying nature of the signals is driven by a dynamic process that itself uses the output of the dynamic average consensus algorithms opens the way for the use of many concepts germane to systems and control, including stable interconnections, ISS, and passivity. Along these lines, we could also think of self-tuning mechanisms embedded within dynamic average consensus algorithmic solutions that tune the algorithm execution based on the evolution of the time-varying signals.

Another interesting topic for future research is the privacy preservation of the signals available to the agents in the dynamic average consensus problem. Protecting the privacy and confidentiality of data is a critical issue in emerging distributed automated systems deployed in a variety of scenarios, including power networks, smart transportation, the Internet of Things, and manufacturing systems. In such scenarios, the ability of a network system to optimize its operation, fuse information, compute common estimates of unknown quantities, and agree on a common worldview while protecting sensitive information is crucial. In this respect, the design of privacy-preserving dynamic average consensus algorithms is in its infancy. Interestingly, the dynamic nature of the problem might offer advantages in this regard with respect to the static average consensus problem. For instance, in differential privacy, where the designer makes it provably difficult for an adversary to make inferences about individual records from published outputs or even detect the presence of an individual in the data set, it is known that privacy guarantees weaken as more queries are made to the same database. However, if the database is changing, this limitation no longer applies, and this opens the way to studying how privacy guarantees change with the rate of variation of the time-varying signals in the dynamic average consensus problem.

ACKNOWLEDGMENTS
The work of S.S. Kia was supported by National Science Foundation awards ECCS-1653838 and IIS-SAS-1724331. The work of J. Cortés was supported by NSF award CNS-1446891 and Air Force Office of Scientific Research award FA9550-15-1-0108. The work of S. Martinez was supported by the Air Force Office of Scientific Research award FA9550-18-1-0158 and Defense Advanced Research Projects Agency (Lagrange) award N66001-18-2-4027.

AUTHOR INFORMATION
Solmaz S. Kia (solmaz@uci.edu) is an assistant professor of mechanical and aerospace engineering at the University of California, Irvine (UCI). She received the Ph.D. degree in mechanical and aerospace engineering from UCI in 2009 and the M.Sc. and B.Sc. degrees in aerospace engineering from the Sharif University of Technology, Tehran, Iran, in 2004 and 2001, respectively. She was a senior research engineer at SySense Inc., El Segundo, California, from June 2009 to September 2010. She held postdoctoral positions in the Department of Mechanical and Aerospace Engineering at the University of California, San Diego, and UCI. She was a recipient of the University of California President's Postdoctoral Fellowship from 2012 to 2014 and National Science Foundation CAREER Award in 2017. Her main research interests, in a broad sense, include distributed optimization/coordination/estimation, nonlinear control theory, and probabilistic robotics.

Bryan Van Scoy is a postdoctoral researcher at the University of Wisconsin–Madison. He received the Ph.D. degree in electrical engineering and computer science from Northwestern University, Evanston, Illinois, in 2017 and the B.S. and M.S. degrees in applied mathematics along with the B.S.E. in electrical engineering from the University of Akron, Ohio, in 2012. His research interests include distributed algorithms for multiagent systems and the analysis and design of optimization algorithms.

Jorge Cortés is a professor of mechanical and aerospace engineering at the University of California, San Diego. He received the Licenciatura degree in mathematics from the Universidad de Zaragoza, Spain, in 1997 and the Ph.D. degree in engineering mathematics from the Universidad Carlos III de Madrid, Spain, in 2001. He held postdoctoral positions with the University of Twente, The Netherlands, and the University of Illinois at Urbana-Champaign. He was an assistant professor with the Department of Applied Mathematics and Statistics, University of California, Santa Cruz, from 2004 to 2007. He is the author of Geometric, Control and Numerical Aspects of Nonholonomic Systems (Springer-Verlag, 2002) and coauthor (together with F. Bullo and S. Martínez) of Distributed Control of Robotic Networks (Princeton University Press, 2009). He was an IEEE Control Systems Society Distinguished Lecturer (2010–2014) and is an IEEE Fellow. His current research interests include cooperative control; network science; game theory; multiagent coordination in robotics, power systems, and neuroscience; geometric and distributed optimization; nonsmooth analysis; and geometric mechanics and control.



Randy A. Freeman is a professor of electrical engineering and computer science at Northwestern University, Evanston, Illinois. He received the Ph.D. degree in electrical engineering from the University of California, Santa Barbara, in 1995. He has been associate editor of IEEE Transactions on Automatic Control. His research interests include nonlinear systems, distributed control, multiagent systems, robust control, optimal control, and oscillator synchronization. He received the National Science Foundation CAREER Award in 1997. He has been a member of the IEEE Control System Society Conference Editorial Board since 1997 and has served on program and operating committees for the American Control Conference and the IEEE Conference on Decision and Control.

Kevin M. Lynch is a professor and chair of the Mechanical Engineering Department, Northwestern University, Evanston, Illinois. He received the B.S.E.E. degree in electrical engineering from Princeton University, New Jersey, in 1989 and the Ph.D. degree in robotics from Carnegie Mellon University, Pittsburgh, Pennsylvania, in 1996. He is a member of the Neuroscience and Robotics Laboratory and the Northwestern Institute on Complex Systems. His research interests include dynamics, motion planning, and control for robot manipulation and locomotion; self-organizing multiagent systems; and functional electrical stimulation for restoration of human function. He is a coauthor of the textbooks Principles of Robot Motion (MIT Press, 2005), Embedded Computing and Mechatronics (Elsevier, 2015), and Modern Robotics: Mechanics, Planning, and Control (Cambridge University Press, 2017).

Sonia Martínez is a professor of mechanical and aerospace engineering at the University of California, San Diego. She received the Ph.D. degree in engineering mathematics from the Universidad Carlos III de Madrid, Spain, in May 2002. Following a year as a visiting assistant professor of applied mathematics at the Technical University of Catalonia, Spain, she obtained a Postdoctoral Fulbright Fellowship and held appointments at the Coordinated Science Laboratory of the University of Illinois at Urbana-Champaign during 2004 and the Center for Control, Dynamical Systems and Computation of the University of California, Santa Barbara, during 2005. In a broad sense, her main research interests include the control of network systems, multiagent systems, nonlinear control theory, and robotics. For her work on the control of underactuated mechanical systems, she received the Best Student Paper Award at the 2002 IEEE Conference on Decision and Control. She was the recipient of a National Science Foundation CAREER Award in 2007. For the paper "Motion Coordination with Distributed Information," coauthored with Jorge Cortés and Francesco Bullo, she received the 2008 Control Systems Magazine Outstanding Paper Award. She has served on the editorial boards of European Journal of Control (2011–2013) and currently serves on the editorial board of Journal of Geometric Mechanics and IEEE Transactions on Control of Network Systems.

REFERENCES
[1] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proc. IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[2] W. Ren and R. W. Beard, Distributed Consensus in Multi-Vehicle Cooperative Control (Communications and Control Engineering). New York: Springer-Verlag, 2008.
[3] W. Ren and Y. Cao, Distributed Coordination of Multi-Agent Networks (Communications and Control Engineering). New York: Springer-Verlag, 2011.
[4] W. Ren, R. W. Beard, and E. M. Atkins, "Information consensus in multivehicle cooperative control: Collective group behavior through local interaction," IEEE Control Syst. Mag., vol. 27, no. 2, pp. 71–82, 2007.
[5] A. T. Kamal, J. A. Farrell, and A. K. Roy-Chowdhury, "Information weighted consensus filters and their application in distributed camera networks," IEEE Trans. Autom. Control, vol. 58, no. 12, pp. 3112–3125, 2013.
[6] D. Tian, J. Zhou, and Z. Sheng, "An adaptive fusion strategy for distributed information estimation over cooperative multi-agent networks," IEEE Trans. Inf. Theory, vol. 63, no. 5, pp. 3076–3091, 2017.
[7] P. Yang, R. Freeman, and K. Lynch, "Optimal information propagation in sensor networks," in Proc. 2006 IEEE Int. Conf. Robotics and Automation, pp. 3122–3127.
[8] K. M. Lynch, I. B. Schwartz, P. Yang, and R. A. Freeman, "Decentralized environmental modeling by mobile sensor networks," IEEE Trans. Robot., vol. 24, no. 3, pp. 710–724, 2008.
[9] P. Yang, R. A. Freeman, and K. M. Lynch, "Multi-agent coordination by decentralized estimation and control," IEEE Trans. Autom. Control, vol. 53, no. 11, pp. 2480–2496, 2008.
[10] P. Yang, R. A. Freeman, G. J. Gordon, K. M. Lynch, S. S. Srinivasa, and R. Sukthankar, "Decentralized estimation and control of graph connectivity for mobile sensor networks," Automatica, vol. 46, no. 2, pp. 390–396, 2010.
[11] R. Aragüés, J. Cortés, and C. Sagüés, "Distributed consensus on robot networks for dynamically merging feature-based maps," IEEE Trans. Robot., vol. 28, no. 4, pp. 840–854, 2012.
[12] S. Das and J. M. F. Moura, "Distributed Kalman filtering with dynamic observations consensus," IEEE Trans. Signal Process., vol. 63, no. 17, pp. 4458–4473, 2015.
[13] F. Chen and W. Ren, "A connection between dynamic region-following formation control and distributed average tracking," IEEE Trans. Cybern., vol. 48, no. 6, pp. 1760–1772, 2018.
[14] J. A. Fax and R. M. Murray, "Information flow and cooperative control of vehicle formations," IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1465–1476, 2004.
[15] W. Ren, "Multi-vehicle consensus with a time-varying reference state," Syst. Control Lett., vol. 56, no. 2, pp. 474–483, 2007.
[16] K. D. Listmann, M. V. Masalawala, and J. Adamy, "Consensus for formation control of nonholonomic mobile robots," in Proc. IEEE Int. Conf. Robotics and Automation, 2009, pp. 3886–3891.
[17] M. Porfiri, G. D. Roberson, and D. J. Stilwell, "Tracking and formation control of multiple autonomous agents: A two-level consensus approach," Automatica, vol. 43, no. 8, pp. 1318–1328, 2007.
[18] S. Ghapani, S. Rahili, and W. Ren, "Distributed average tracking for second-order agents with nonlinear dynamics," in Proc. American Control Conf., 2016, pp. 4636–4641.
[19] S. S. Kia, J. Cortés, and S. Martínez, "Dynamic average consensus under limited control authority and privacy requirements," Int. J. Robust Nonlinear Control, vol. 25, no. 13, pp. 1941–1966, 2015.
[20] R. Olfati-Saber, "Distributed Kalman filter with embedded consensus filters," in Proc. IEEE Conf. Decision and Control, 2005, pp. 8179–8184.
[21] R. Olfati-Saber, "Kalman-consensus filter: Optimality, stability, and performance," in Proc. IEEE Conf. Decision and Control, 2009, pp. 7036–7042.
[22] W. Qi, P. Zhang, and Z. Deng, "Robust sequential covariance intersection fusion Kalman filtering over multi-agent sensor networks with measurement delays and uncertain noise variances," Acta Autom. Sin., vol. 40, no. 11, pp. 2632–2642, 2014.
[23] G. Wang, N. Li, and Y. Zhang, "Diffusion distributed Kalman filter over sensor networks without exchanging raw measurements," Signal Process., vol. 132, pp. 1–7, Mar. 2017.
[24] R. Aragues, C. Sagues, and Y. Mezouar, "Feature-based map merging with dynamic consensus on information increments," Autonomous Robots, vol. 38, no. 3, pp. 243–259, 2015.
[25] W. Ren and U. M. Al-Saggaf, "Distributed Kalman–Bucy filter with embedded dynamic averaging algorithm," IEEE Syst. J., vol. 12, no. 2, pp. 1722–1730, 2018.



[26] A. Nedic and A. Ozdaglar, "Distributed subgradient methods for multi-agent optimization," IEEE Trans. Autom. Control, vol. 54, no. 1, pp. 48–61, 2009.
[27] B. Johansson, M. Rabi, and M. Johansson, "A randomized incremental subgradient method for distributed optimization in networked systems," SIAM J. Control Optim., vol. 20, no. 3, pp. 1157–1170, 2009.
[28] J. Wang and N. Elia, "A control perspective for centralized and distributed convex optimization," in Proc. IEEE Conf. Decision and Control, 2011, pp. 3800–3805.
[29] M. Zhu and S. Martínez, "On distributed convex optimization under inequality and equality constraints," IEEE Trans. Autom. Control, vol. 57, no. 1, pp. 151–164, 2012.
[30] J. Lu and C. Y. Tang, "Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case," IEEE Trans. Autom. Control, vol. 57, no. 9, pp. 2348–2354, 2012.
[31] B. Gharesifard and J. Cortés, "Distributed continuous-time convex optimization on weight-balanced digraphs," IEEE Trans. Autom. Control, vol. 59, no. 3, pp. 781–786, 2014.
[32] S. S. Kia, J. Cortés, and S. Martínez, "Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication," Automatica, vol. 55, pp. 254–264, May 2015.
[33] G. Qu and N. Li, "Accelerated distributed Nesterov gradient descent for convex and smooth functions," in Proc. IEEE Conf. Decision and Control, 2017, pp. 2260–2267.
[34] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87. New York: Springer-Verlag, 2014.
[35] R. Carli, G. Notarstefano, L. Schenato, and D. Varagnolo, "Analysis of Newton–Raphson consensus for multi-agent convex optimization under asynchronous and lossy communication," in Proc. IEEE Conf. Decision and Control, 2015, pp. 418–424.
[36] J. Xu, S. Zhu, Y. C. Soh, and L. Xie, "Augmented distributed gradient methods for multi-agent optimization under uncoordinated constant step sizes," in Proc. IEEE Conf. Decision and Control, 2015, pp. 2055–2060.
[37] D. Varagnolo, F. Zanella, P. G. A. Cenedese, and L. Schenato, "Newton–Raphson consensus for distributed convex optimization," IEEE Trans. Autom. Control, vol. 61, no. 4, pp. 994–1009, 2016.
[38] P. D. Lorenzo and G. Scutari, "NEXT: In-network nonconvex optimization," IEEE Trans. Signal Inf. Process. Netw., vol. 2, no. 2, pp. 120–136, 2016.
[39] A. Nedic, A. Olshevsky, and W. Shi, "Achieving geometric convergence for distributed optimization over time-varying graphs," SIAM J. Optim., vol. 27, no. 4, pp. 2597–2633, 2017.
[40] A. J. Wood, F. Wollenberg, and G. B. Sheble, Power Generation, Operation and Control, 3rd ed. Hoboken, NJ: Wiley, 2013.
[41] A. Cherukuri and J. Cortés, "Distributed generator coordination for initialization and anytime optimization in economic dispatch," IEEE Trans. Control Netw. Syst., vol. 2, no. 3, pp. 226–237, 2015.
[42] R. Madan and S. Lall, "Distributed algorithms for maximum lifetime routing in wireless sensor networks," IEEE Trans. Wireless Commun., vol. 5, no. 8, pp. 2185–2193, 2006.
[43] G. M. Heal, "Planning without prices," Rev. Economic Stud., vol. 36, no. 3, pp. 347–362, 1969.
[44] K. J. Arrow, L. Hurwicz, and H. Uzawa, Studies in Linear and Nonlinear Programming. Stanford, CA: Stanford Univ. Press, 1958.
[45] A. Cherukuri, B. Gharesifard, and J. Cortés, "Saddle-point dynamics: Conditions for asymptotic stability of saddle points," SIAM J. Control Optim., vol. 55, no. 1, pp. 486–511, 2017.
[46] A. Cherukuri and J. Cortés, "Initialization-free distributed coordination for economic dispatch under varying loads and generator commitment," Automatica, vol. 74, pp. 183–193, Dec. 2016.
[47] S. S. Kia, "Distributed optimal in-network resource allocation algorithm design via a control theoretic approach," Syst. Control Lett., vol. 107, pp. 49–57, Sept. 2017.
[48] J. Tsitsiklis, "Problems in decentralized decision making and computation," Ph.D. dissertation, MIT, Cambridge, MA, 1984.
[49] D. P. Spanos, R. Olfati-Saber, and R. M. Murray, "Dynamic consensus for mobile networks," in Proc. IFAC World Congr., 2005, Art. no. Mo-A09-TO/5.
[50] M. Fiedler, "Algebraic connectivity of graphs," Czechoslovak. Math. J., vol. 23, no. 2, pp. 298–305, 1973.
[51] N. M. M. de Abreu, "Old and new results on algebraic connectivity of graphs," Linear Algebra Its Applicat., vol. 423, no. 1, pp. 53–73, 2007.
[52] R. Carli, F. Fagnani, A. Speranzon, and S. Zampieri, "Communication constraints in the average consensus problem," Automatica, vol. 44, no. 3, pp. 671–684, 2008.
[53] R. A. Freeman, P. Yang, and K. M. Lynch, "Stability and convergence properties of dynamic average consensus estimators," in Proc. IEEE Conf. Decision and Control, 2006, pp. 398–403.
[54] S. S. Kia, J. Cortés, and S. Martínez, "Singularly perturbed filters for dynamic average consensus," in Proc. European Control Conf., 2013, pp. 1758–1763.
[55] H. K. Khalil, Nonlinear Systems, 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 2002.
[56] B. Van Scoy, R. A. Freeman, and K. M. Lynch, "Optimal worst-case dynamic average consensus," in Proc. American Control Conf., 2015, pp. 5324–5329.
[57] B. Van Scoy, R. A. Freeman, and K. M. Lynch, "Design of robust dynamic average consensus estimators," in Proc. IEEE Conf. Decision and Control, 2015, pp. 6269–6275.
[58] B. Van Scoy, R. A. Freeman, and K. M. Lynch, "Exploiting memory in dynamic average consensus," in Proc. 53rd Annu. Allerton Conf. Communication, Control, and Computing, 2015, pp. 258–265.
[59] B. N. Oreshkin, M. J. Coates, and M. G. Rabbat, "Optimization and analysis of distributed averaging with short node memory," IEEE Trans. Signal Process., vol. 58, no. 5, pp. 2850–2865, 2010.
[60] T. Erseghe, D. Zennaro, E. Dall'Anese, and L. Vangelista, "Fast consensus by the alternating direction multipliers method," IEEE Trans. Signal Process., vol. 59, no. 11, pp. 5523–5537, 2011.
[61] E. Kokiopoulou and P. Frossard, "Polynomial filtering for fast convergence in distributed consensus," IEEE Trans. Signal Process., vol. 57, no. 1, pp. 342–354, 2009.
[62] Y. Yuan, J. Liu, R. M. Murray, and J. Gonçalves, "Decentralised minimal-time dynamic consensus," in Proc. American Control Conf., 2012, pp. 800–805.
[63] E. Montijano, J. I. Montijano, and C. Sagüés, "Chebyshev polynomials in distributed consensus applications," IEEE Trans. Signal Process., vol. 61, no. 3, pp. 693–706, 2013.
[64] M. L. Elwin, R. A. Freeman, and K. M. Lynch, "A systematic design process for internal model average consensus estimators," in Proc. IEEE Conf. Decision and Control, 2013, pp. 5878–5883.
[65] M. L. Elwin, R. A. Freeman, and K. M. Lynch, "Worst-case optimal average consensus estimators for robot swarms," in Proc. 2014 IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2014, pp. 3814–3819.
[66] H. Bai, R. A. Freeman, and K. M. Lynch, "Robust dynamic average consensus of time-varying inputs," in Proc. IEEE Conf. Decision and Control, 2010, pp. 3104–3109.
[67] H. Bai, "Adaptive motion coordination with an unknown reference velocity," in Proc. American Control Conf., 2015, pp. 5581–5586.
[68] B. Van Scoy, R. A. Freeman, and K. M. Lynch, "Feedforward estimators for the distributed average tracking of bandlimited signals in discrete time with switching graph topology," in Proc. IEEE Conf. Decision and Control, 2016, pp. 4284–4289.
[69] F. Chen, Y. Cao, and W. Ren, "Distributed average tracking of multiple time-varying reference signals with bounded derivatives," IEEE Trans. Autom. Control, vol. 57, no. 12, pp. 3169–3174, 2012.
[70] J. George, R. A. Freeman, and K. M. Lynch, "Robust dynamic average consensus algorithm for signals with bounded derivatives," in Proc. American Control Conf., 2017, pp. 352–357.
[71] S. Rahili and W. Ren, "Heterogeneous distributed average tracking using nonsmooth algorithms," in Proc. American Control Conf., 2017, pp. 691–696.
[72] L. Yu, J. P. Barbot, D. Benmerzouk, D. Boutat, T. Floquet, and G. Zheng, Discussion About Sliding Mode Algorithms, Zeno Phenomena and Observability. New York: Springer-Verlag, 2012, pp. 199–219.
[73] J.-J. E. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice Hall, 1991.
[74] L. Fridman and A. Levant, Higher Order Sliding Modes. Boca Raton, FL: CRC, 2002, pp. 53–101.
[75] E. G. Rojo-Rodriguez, E. J. Ollervides, J. G. Rodriguez, E. S. Espinoza, P. Zambrano-Robledo, and O. Garcia, "Implementation of a super twisting controller for distributed formation flight of multi-agent systems based on consensus algorithms," in Proc. Int. Conf. Unmanned Aircraft Systems, Miami, FL, 2017, pp. 1101–1107.
[76] M. Zhu and S. Martínez, "Discrete-time dynamic average consensus," Automatica, vol. 46, no. 2, pp. 322–329, 2010.
[77] E. Montijano, J. I. Montijano, C. Sagüés, and S. Martínez, "Step size analysis in discrete-time dynamic average consensus," in Proc. American Control Conf., 2014, pp. 5127–5132.

