
Stability and Convergence Properties of Dynamic Average Consensus Estimators

Randy A. Freeman, Peng Yang, Kevin M. Lynch

This work was supported in part by NSF grants IIS-0308224, ECS-0115317, and ECS-0601661. Randy Freeman is with the Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA (freeman@ece.northwestern.edu). Peng Yang and Kevin Lynch are with the Department of Mechanical Engineering, Northwestern University, Evanston, IL 60208, USA (p-yang@northwestern.edu, kmlynch@northwestern.edu).

Abstract— We analyze two different estimation algorithms for dynamic average consensus in sensing and communication networks, a proportional algorithm and a proportional-integral algorithm. We investigate the stability properties of these estimators under changing inputs and network topologies as well as their convergence properties under constant or slowly-varying inputs. In doing so, we discover that the more complex proportional-integral algorithm has performance benefits over the simpler proportional algorithm.

I. INTRODUCTION

We are pursuing a framework for systematic design of emergent behaviors in sensing and communication networks of mobile agents. The desired emergent behavior is encoded in an objective function of the states of the agents in the network. Each agent, through communication with its neighbors, simultaneously estimates the global performance of the system and controls its own motion to improve this performance. For some problems, the distributed system can be guaranteed to converge to a global performance optimum if each agent implements a version of a simple gradient-following control, despite the nonlinear coupling of the estimators and controllers, changing network topologies, and addition and deletion of agents. This approach is applied to formation control in [1].

To implement this approach to distributed estimation and control, each agent must maintain an estimate of the global performance of the system. As each agent is estimating the same quantity, this is a consensus estimation problem. One useful form of this problem is the average consensus problem, where each agent estimates the average of inputs to all the agents in the system. The inputs could be sensor readings such as agent position, local temperature, distance to a target, etc. In static average consensus, a snapshot of the input vector is used to initialize the estimator states, after which the input is ignored. Each agent communicates its estimate, or decision variable, to its neighbors in the communication graph, and each agent uses a linear weighting of the information it receives to update its own estimate [2], [3], [4], [5]. In dynamic average consensus, the inputs continually drive the estimators, and the goal of the distributed estimators is to track the average of the changing inputs [6]. Of course, if these inputs are changing too rapidly, then we cannot expect accurate tracking as it takes time for information to flow across the network. Nevertheless, we will still require bounded behavior of the estimators in the presence of rapidly changing inputs.

In [1], we introduced two algorithms for dynamic average consensus: proportional (or P) dynamic average consensus (closely related to the algorithm in [6]) and proportional-integral (or PI) dynamic average consensus. In both algorithms, each agent implements a linear estimator with internal state that receives input from neighbors and the environment. In this paper we examine the performance of these algorithms under general assumptions on the weighting each agent applies to the information it receives from its neighbors. For example, as shown in [3], [7], the network may converge more quickly if agents are allowed to apply a negative weight to information from their neighbors. In addition to allowing negative weights, we allow two communicating agents to weight each other's information differently. We provide conditions on these weights guaranteeing input-to-state stability (ISS) and convergence of the estimates to their correct values. The ISS property was also used in [8] in the analysis of a different class of consensus problems.

In Section II we define classes of graph Laplacians induced by different weighting schemes between agents in a network, and we provide some useful relationships between these classes. Sections III and IV contain our analyses of the proportional and proportional-integral estimators, respectively. In these sections, we account for changing network topologies as agents enter or leave the network or gain or lose connections with neighbors as time progresses. We discover that the PI estimator, although more complex than the P estimator, generates a better steady-state response to constant or slowly-varying inputs. Furthermore, the P estimator has unity high-frequency gain, whereas the PI estimator has zero high-frequency gain and thus provides additional filtering of any noise present in the input signal. Finally, we present some simulations in Section V and concluding remarks in Section VI. The appendix contains three technical lemmas used in the proofs in Section IV.
II. GRAPH LAPLACIANS

We consider the collection G of weighted directed graphs having n ≥ 2 nodes labeled 1 through n such that each ordered pair (i, j) of nodes is connected by a single directed arc with weight aij ∈ R (here a weight of zero represents the absence of an arc, and we explicitly allow negative weights). The matrix A = [aij] ∈ R^(n×n) represents the adjacency matrix for such a graph, and its Laplacian is the matrix L = diag(A1) − A, where 1 ∈ R^n denotes the vector of n ones and diag(A1) is the diagonal matrix whose n diagonal entries are the n elements of the vector A1. Hence the set of all possible Laplacians for such graphs is precisely the set

  L = { L ∈ R^(n×n) : L1 = 0 } = { L ∈ R^(n×n) : LΠ = L },   (1)

where Π ∈ R^(n×n) denotes the projection matrix

  Π = I − 11ᵀ/n,   (2)

which satisfies Π = Πᵀ = Π² ≥ 0 and Π1 = 0. We have not assumed that the diagonal weights aii are zero, but because these diagonal weights do not affect the Laplacian, we are free to ignore them. In fact, if we ignore these diagonal weights (or assume they are zero), then we obtain a one-to-one correspondence between the graphs in G and the Laplacians in L.

If the weights between distinct nodes of the graph are all positive (or zero), then the off-diagonal entries ℓij of the Laplacian L will be all nonpositive, so we define

  Lpos = { L ∈ L : ℓij ≤ 0 for i ≠ j }.   (3)

We define the set of symmetric Laplacians as

  Lsym = { L ∈ L : L = Lᵀ },   (4)

whereas

  Lbal = { L ∈ L : Lᵀ1 = 0 }   (5)
       = { L ∈ R^(n×n) : ΠL = LΠ = L }   (6)

represents the set of Laplacians for balanced graphs [2]. Note that the set Lsym ∩ Lpos coincides with the set of weighted Laplacians as defined in [9]. Because every Laplacian has an eigenvalue at zero with eigenvector 1, the highest possible rank of a Laplacian is n − 1, and we define

  Lrnk = { L ∈ L : rank(L) = n − 1 }.   (7)

An undirected path on a graph in G is a finite sequence of nodes i1, i2, . . . , ip such that either the weight a_{i_j i_{j+1}} or the weight a_{i_{j+1} i_j} is nonzero for 1 ≤ j < p. Likewise, a directed path is a finite sequence of nodes i1, i2, . . . , ip such that a_{i_j i_{j+1}} ≠ 0 for 1 ≤ j < p. A directed cycle is a directed path i1, i2, . . . , ip for which i1 = ip. A graph in G is weakly connected when every pair of distinct nodes lies on some undirected path, and it is strongly connected when every pair of distinct nodes lies on some directed cycle. We let Lwk and Lstr denote the sets of Laplacians for all weakly and strongly connected graphs in G, respectively. Finally, for each real parameter ε we define

  Lε = { L ∈ L : Π(L + Lᵀ)Π ≥ 2εΠ },   (8)

and we let L+ denote the union

  L+ = ∪_{ε>0} Lε.   (9)
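To make these definitions concrete, the following short Python sketch (using numpy; the adjacency matrix and its weights are arbitrary illustrative values, not taken from the paper) builds L = diag(A1) − A and Π and tests membership in several of the classes defined above, including the largest ε for which L ∈ Lε.

import numpy as np

# Arbitrary illustrative weights a_ij (one of them negative); the diagonal is
# irrelevant because it cancels in L = diag(A1) - A.
A = np.array([[0.0,  1.0, 0.5, 0.0],
              [2.0,  0.0, 0.0, 0.0],
              [0.0, -0.5, 0.0, 1.0],
              [1.0,  0.0, 0.0, 0.0]])
n = A.shape[0]
ones = np.ones(n)

L = np.diag(A @ ones) - A                        # Laplacian, so that L1 = 0
Pi = np.eye(n) - np.outer(ones, ones) / n        # projection matrix (2)

off_diag = L[~np.eye(n, dtype=bool)]
print("L1 = 0 (member of L):      ", np.allclose(L @ ones, 0.0))
print("L^T 1 = 0 (member of Lbal):", np.allclose(L.T @ ones, 0.0))
print("L = L^T (member of Lsym):  ", np.allclose(L, L.T))
print("off-diag <= 0 (Lpos):      ", bool(np.all(off_diag <= 0.0)))
print("rank(L) = n-1 (Lrnk):      ", np.linalg.matrix_rank(L) == n - 1)

# Largest eps with Pi (L + L^T) Pi >= 2 eps Pi, i.e. half the smallest eigenvalue
# of L + L^T restricted to the subspace orthogonal to the ones vector (cf. (8)).
S = np.linalg.svd(Pi)[0][:, :n - 1]              # orthonormal basis of span{1}-perp
eps = 0.5 * float(np.linalg.eigvalsh(S.T @ (L + L.T) @ S).min())
print("largest eps with L in L_eps:", eps)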
We next establish some relationships between these various subsets of L. These relationships can be used to help verify the conditions given in the next sections which guarantee the stability and convergence properties of our estimators. Due to space constraints, their proofs are omitted.

Lemma 1: The following statements are true:
(a) Lsym ⊂ Lbal, and if L ∈ Lbal then L + Lᵀ ∈ Lsym,
(b) L+ ⊂ L0 ⊂ ∪ε Lε = L,
(c) Lrnk ∪ Lstr ⊂ Lwk,
(d) Lsym ∩ Lstr = Lsym ∩ Lwk,
(e) Lbal ∩ Lpos ∩ Lstr = Lbal ∩ Lpos ∩ Lwk,
(f) Lpos ∩ Lstr ⊂ Lrnk,
(g) Lsym ∩ Lstr ⊄ Lrnk,
(h) Lbal ∩ Lpos ⊂ L0,
(i) Lsym ∩ L0 ∩ Lrnk = Lsym ∩ L+,
(j) Lbal ∩ L0 ∩ Lrnk ⊄ L+, and
(k) Lbal ∩ Lpos ∩ Lrnk = Lbal ∩ Lpos ∩ L+.

If we want to guarantee that a Laplacian L for a strongly connected graph belongs to Lrnk, then it would appear from Lemma 1 parts (f) and (g) that we should constrain the graph to have only positive weights. However, the following result implies that the cases in which L belongs to Lstr but not to Lrnk are pathological and will disappear under small perturbations of the weights. In other words, if L ∈ Lstr then we are practically guaranteed that L ∈ Lrnk, even when the graph has negative weights.

Lemma 2: Let L : U → L be a real analytic function from an open subset U ⊂ R^p to L such that L(U) ∩ Lrnk ≠ ∅. Then L^(−1)(Lrnk) is open and dense in U.

III. PROPORTIONAL DYNAMIC CONSENSUS

Suppose each agent implements a proportional dynamic consensus estimator of the form

  ẇi(t) = −γwi(t) − ∑_{j≠i} aij(t) (xi(t) − xj(t))   (10)
  xi(t) = wi(t) + ui(t),   (11)

where ui(t) ∈ R is the input, xi(t) ∈ R is the decision variable, wi(t) ∈ R is the internal estimator state, γ ≥ 0 is a global estimator parameter, and aij(t) are piecewise-continuous, time-varying estimator gains. We impose the constraint that aij(t) = aji(t) = 0 if agents i and j cannot communicate with each other at time t. We may write the collection of these n estimators in vector form as

  ẇ(t) = −γw(t) − L(t)x(t) = −(γI + L(t))w(t) − L(t)u(t)   (12)
  x(t) = w(t) + u(t),   (13)

where L(t) is the Laplacian for the network graph. We are interested in achieving average consensus, namely, we would like the error vector

  ex(t) = x(t) − (11ᵀ/n) u(t)   (14)

to approach zero as t → ∞.
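As an illustration of (12)–(13), here is a minimal forward-Euler simulation sketch of the P estimator; the graph, the parameter values, and the sinusoidal inputs are assumptions chosen for the example, not values used in the paper.

import numpy as np

# Fixed symmetric ring graph on four nodes (illustrative).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A

n, gamma, dt, T = 4, 0.05, 1e-3, 40.0
phase = np.array([0.0, 1.3, 2.1, 4.0])

def u(t):
    # Slowly varying individual inputs; the estimators should track their average.
    return 1.0 + np.sin(0.2 * t + phase)

w = np.zeros(n)                                  # internal states w_i
for k in range(int(T / dt)):
    x = w + u(k * dt)                            # decision variables, eq. (13)
    w = w + dt * (-gamma * w - L @ x)            # eq. (12)

x = w + u(T)
print("final estimates:   ", x)
print("true input average:", u(T).mean())
# The residual error is bounded as in (16) below; it is not exactly zero
# because the inputs keep changing.
print("tracking error:    ", np.abs(x - u(T).mean()).max())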
However, we will first show that the state equation (12) is ISS under appropriate assumptions.

Theorem 3: Let L : R → Lbal be a piecewise-continuous, time-varying, uniformly bounded, balanced Laplacian matrix, suppose there exists ε ∈ R such that L(t) ∈ Lε for all t ∈ R, and suppose both γ ≥ 0 and γ + ε > 0. Then the state equation (12) is ISS.

The proof of this result, omitted here, is based on the ISS Lyapunov function candidate V(t) = |Πw(t)|². Note that if γ + ε > 0 but γ = 0 in Theorem 3, then 1ᵀw(t) is a constant, and if we exclude this uncontrollable constant state then the remaining dynamics are still ISS.

We next consider convergence, namely, whether or not the error vector ex(t) in (14) approaches zero. We cannot expect to achieve small steady-state errors when the input vector u(t) is changing too rapidly, because it takes time for the effects of u(t) to flow across the network. For this reason we will assume that both u(t) and its derivative u̇(t) are bounded.

Theorem 4: Let L : R → Lbal be a piecewise-continuous, time-varying, balanced Laplacian matrix, and suppose there exist ε, t0 ∈ R such that L(t) ∈ Lε for all t ≥ t0. Let γ ≥ 0 be such that γ + ε > 0, and suppose there exists µ ≥ 0 such that the input vector u(t) is absolutely continuous and satisfies

  |γu(t) + u̇(t)| ≤ µ   (15)

for almost all t ≥ t0. Then the consensus estimator (12)–(13) is such that the error vector ex(t) in (14) satisfies

  |ex(t)| ≤ (1/√n) |1ᵀw(t0)| e^(−γ(t−t0)) + |Πx(t0)| e^(−(1/2)(γ+ε)(t−t0)) + µ/(γ+ε)   (16)

for all t ≥ t0.

The proof of this result, omitted here, is based on examining the dynamics of the disagreement vector δ(t) = Πx(t) as done in [2]. There is nothing in this proof which requires the parameter ε to be positive; in particular, the estimate (16) holds even when L(t) ≡ 0, namely, when there is no communication at all between the agents. However, in this case the steady-state value of the bound (16) on the error |ex(t)| will be no smaller than |u| for a constant input u (take µ = γ|u|). To achieve small steady-state errors for constant (or slowly varying) inputs, we need ε > 0 and γ ≪ ε. Furthermore, we see from (8) that ε scales with the estimator weights, so when ε > 0 we can simply increase these weights to achieve arbitrarily small steady-state errors for given values of γ and µ. However, in doing so we also increase the maximum eigenvalue of the graph Laplacian and thereby reduce the robustness of the estimator to communication delays between agents [2]. Hence the presence of any such delays will limit the bound on the derivative u̇(t) under which we can achieve accurate tracking.
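Restating the constant-input case of this bound as a worked inequality (this is only a rearrangement of (15)–(16), not an additional result):

For a constant input $u$, (15) holds with $\mu = |\gamma u + \dot u| = \gamma|u|$, so the steady-state value of the bound (16) is
\[
  \limsup_{t \to \infty} |e_x(t)| \;\le\; \frac{\mu}{\gamma + \varepsilon}
  \;=\; \frac{\gamma\,|u|}{\gamma + \varepsilon},
\]
which equals $|u|$ when $\varepsilon = 0$ and becomes arbitrarily small when $\varepsilon > 0$ and $0 < \gamma \ll \varepsilon$.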
The choice γ = 0 is appealing for the case ε > 0 because then for constant inputs we have δ(t) → 0 as t → ∞ (take µ = 0). However, in this case 1ᵀw(t) is constant and thus ex(t) → 11ᵀw(t0)/n as t → ∞. In other words, only a correct estimator initialization (say w(t0) = 0) will lead to zero steady-state error. One consequence of this property is that if an agent enters or leaves the network, a simultaneous reinitialization of the estimators will be required to guarantee zero steady-state error for the new consensus system. In contrast, choosing a small but positive γ would introduce a small steady-state error but would also allow the network to slowly "forget" any larger errors introduced by incorrect estimator initializations. To eliminate steady-state errors entirely without the need for correct initializations, at least for the case of constant inputs and Laplacians, we introduce an integral term in the estimator as described in the next section.

IV. PROPORTIONAL-INTEGRAL DYNAMIC CONSENSUS

Suppose each agent implements a proportional-integral dynamic consensus estimator of the form

  ẋi(t) = −γxi(t) − ∑_{j≠i} aij(t) (xi(t) − xj(t)) + ∑_{j≠i} bji(t) (wi(t) − wj(t)) + γui(t)   (17)
  ẇi(t) = −∑_{j≠i} bij(t) (xi(t) − xj(t)),   (18)

where ui(t) ∈ R is the input, xi(t) ∈ R is the decision variable, [xi(t) wi(t)]ᵀ ∈ R² is the internal estimator state, γ > 0 is a global estimator parameter, and aij(t) and bij(t) are piecewise-continuous time-varying estimator gains. We impose the constraint that aij(t) = aji(t) = bij(t) = bji(t) = 0 if agents i and j cannot communicate with each other at time t. We may write the collection of these n estimators in vector form as

  [ẋ(t); ẇ(t)] = [−γI − LP(t), LIᵀ(t); −LI(t), 0] [x(t); w(t)] + [γI; 0] u(t),   (19)

where LP(t) is the proportional Laplacian (constructed from the weights aij) and LI(t) is the integral Laplacian (constructed from the weights bij). Note the presence of the transposed integral Laplacian in the estimator: to implement this scheme, agent i must know not only the entries in row i of LI(t) (which are its own weights on its neighbors' data) but also those in column i of LI(t) (which are its neighbors' weights on its own data). Thus weight information must be communicated between agents in addition to the estimator state values. Of course, such additional communication is unnecessary when all agents weight their incoming data the same way, or more generally when LI(t) ∈ Lsym, but imposing this restriction in the design might adversely constrain the achievable performance of the estimator. Otherwise, for asymmetric LI(t), this weight communication effectively creates the balancing required for the correct convergence of the estimates.

In contrast to the P estimator in (10)–(11), the PI estimator in (17)–(18) has no direct feedthrough from the inputs ui(t) to the decision variables xi(t); in other words, the estimator has zero high-frequency gain. For this reason we would expect the PI estimator to provide better filtering of noise in the inputs ui(t).
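As a concrete illustration of the vector form (19), the following sketch simulates the PI estimator with one fixed, randomly generated pair (LP, LI) and a constant input, and reports the resulting consensus error; all numerical choices are assumptions for the example, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)
n, gamma, dt, T = 4, 1.0, 1e-3, 80.0

# Illustrative Laplacians: LP symmetric with nonnegative weights (so eps >= 0),
# LI generic (possibly unbalanced, with some negative weights) of rank n-1.
AP = np.array([[0., 1., 0., 1.],
               [1., 0., 1., 0.],
               [0., 1., 0., 1.],
               [1., 0., 1., 0.]])
LP = np.diag(AP.sum(axis=1)) - AP
AI = rng.uniform(-0.2, 1.0, (n, n))
np.fill_diagonal(AI, 0.0)
LI = np.diag(AI.sum(axis=1)) - AI
assert np.linalg.matrix_rank(LI) == n - 1        # LI in Lrnk (holds generically)

u = np.array([3.0, -2.0, -3.0, 2.0])             # constant input vector
x = rng.standard_normal(n)                       # arbitrary initial states
w = rng.standard_normal(n)

for _ in range(int(T / dt)):
    dx = -gamma * x - LP @ x + LI.T @ w + gamma * u   # first block row of (19)
    dw = -LI @ x                                      # second block row of (19)
    x, w = x + dt * dx, w + dt * dw

print("decision variables x(T):", x)
print("input average:          ", u.mean())
print("consensus error |e_x|:  ", np.abs(x - u.mean()).max())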
We are again interested in achieving average consensus, namely, we would like the error vector ex(t) in (14) to approach zero as t → ∞. We first examine the case in which the proportional and integral Laplacians are constant.

Theorem 5: Fix LI ∈ Lrnk and LP ∈ L, let ε ∈ R be such that LP ∈ Lε, and suppose the estimator parameter γ > 0 is chosen such that γ + ε > 0. Then for any constant input u ∈ R^n and any initial states x(t0), w(t0) ∈ R^n, the trajectories of the system

  [ẋ(t); ẇ(t)] = [−γI − LP, LIᵀ; −LI, 0] [x(t); w(t)] + [γI; 0] u   (20)

are such that x(t) and w(t) converge to constant vectors and ex(t) → 0 exponentially as t → ∞.

Proof: Let r = 1/√n and ℓ ∈ R^n be right and left unit eigenvectors of LI (respectively) corresponding to its eigenvalue at zero, so that LI r = LIᵀ ℓ = 0. Let Q, S ∈ R^(n×(n−1)) be such that [ℓ Q] and [r S] are orthogonal matrices, and consider the state coordinate change

  x(t) = [r S] z(t),   z(t) = [rᵀ; Sᵀ] x(t)   (21)
  w(t) = [ℓ Q] y(t),   y(t) = [ℓᵀ; Qᵀ] w(t).   (22)

In these new state coordinates, our system becomes

  ż(t) = [−γ, −rᵀLPS; 0, −γI − SᵀLPS] z(t) + [0, 0; 0, SᵀLIᵀQ] y(t) + [γrᵀ; γSᵀ] u   (23)
  ẏ(t) = [0, 0; 0, −QᵀLIS] z(t)   (24)
  ex(t) = [r S] z(t) − (11ᵀ/n) u.   (25)

The first element y1 of the vector y represents an uncontrollable, unobservable state with dynamics ẏ1(t) = 0. If we drop this state and define y⋆(t) = [y2(t) ... yn(t)]ᵀ = Qᵀw(t), then we can write the remaining dynamics as

  [ż(t); ẏ⋆(t)] = A [z(t); y⋆(t)] + B u   (26)
  ex(t) = C [z(t); y⋆(t)] + D u,   (27)

where

  A = [−γ, −rᵀLPS, 0; 0, −γI − SᵀLPS, SᵀLIᵀQ; 0, −QᵀLIS, 0],   (28)
  B = [γrᵀ; γSᵀ; 0],   C = [r S 0],   D = −11ᵀ/n.   (29)

Now it follows from Lemma 10 that QᵀLIS is invertible, so we may define the constant vectors

  z̄ = [rᵀu; 0; ⋯; 0],   ȳ⋆ = −γ (SᵀLIᵀQ)^(−1) Sᵀu,   (30)

which have the property that

  [A, B; C, D] [z̄; ȳ⋆; u] = 0.   (31)

Next we shift the origin in the state space by defining

  ζ(t) = [z(t); y⋆(t)] − [z̄; ȳ⋆],   (32)

which from (31) satisfies

  [ζ̇(t); ex(t)] = [A, B; C, D] [ζ(t); 0] = [A; C] ζ(t).   (33)

We have left to show that A is Hurwitz, because in this case both ζ(t) and ex(t) will converge to zero exponentially as t → ∞. We observe from (28) that A is block upper triangular, with a scalar block of −γ in the upper left corner and the (2n − 2) × (2n − 2) block

  F = [−γI − SᵀLPS, SᵀLIᵀQ; −QᵀLIS, 0]   (34)

in the remaining lower right corner. Because ΠS = S we have

  Sᵀ(LP + LPᵀ)S = SᵀΠ(LP + LPᵀ)ΠS ≥ 2εSᵀΠS = 2εSᵀS = 2εI,   (35)

and it follows from Lemma 9, Lemma 10, and the fact that γ + ε > 0 that this matrix F in (34) is Hurwitz.

Neither LP nor LI is required to belong to Lbal or even to Lpos in Theorem 5. Furthermore, if it is known that ε ≥ 0 (which can be guaranteed via the simplification LP = 0), then any value of γ > 0 will satisfy γ + ε > 0. Hence the primary assumption is that LI ∈ Lrnk, which from Lemma 1 part (c) implies a weakly connected network.
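The conclusion of Theorem 5 can also be checked numerically. The sketch below (with randomly generated, purely illustrative Laplacians) verifies that the state matrix in (20) has a single eigenvalue at zero—the uncontrollable mode ℓᵀw—while every other eigenvalue lies in the open left half plane.

import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_laplacian():
    # Generic weights, possibly negative and unbalanced; rank n-1 holds
    # generically (cf. Lemma 2).
    A = rng.uniform(-0.4, 1.0, (n, n))
    np.fill_diagonal(A, 0.0)
    return np.diag(A.sum(axis=1)) - A

LP, LI = random_laplacian(), random_laplacian()
assert np.linalg.matrix_rank(LI) == n - 1             # LI in Lrnk

# eps such that LP is in L_eps (computed on the subspace orthogonal to 1),
# then gamma chosen so that gamma + eps > 0 as Theorem 5 requires.
Pi = np.eye(n) - np.ones((n, n)) / n
S = np.linalg.svd(Pi)[0][:, :n - 1]
eps = 0.5 * float(np.linalg.eigvalsh(S.T @ (LP + LP.T) @ S).min())
gamma = 0.5 + max(0.0, -eps)

M = np.block([[-gamma * np.eye(n) - LP, LI.T],
              [-LI, np.zeros((n, n))]])               # state matrix in (20)
eig = np.linalg.eigvals(M)
idx = np.argsort(eig.real)
print("eigenvalue nearest zero:          ", eig[np.argmin(np.abs(eig))])
print("largest real part among the rest: ", eig.real[idx][-2])   # should be < 0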
We next turn our attention to the case of time-varying Laplacians. First, we show that under mild assumptions, there is a finite uniform L2-gain from the input u(t) to the disagreement vector δ(t) = Πx(t).

Theorem 6: Let LP, LI : R → L be piecewise-continuous, time-varying Laplacian matrices, let ε, t0 ∈ R be such that LP(t) ∈ Lε for all t ≥ t0, and suppose γ + ε > 0. Then the disagreement vector δ(t) = Πx(t) for the system (19) satisfies

  ∫_{t0}^{t} |δ(τ)|² dτ ≤ (1/(γ+ε)) [ |δ(t0)|² + |w(t0)|² ] + (γ²/(γ+ε)²) ∫_{t0}^{t} |Πu(τ)|² dτ   (36)

for all t ≥ t0.

The proof of this theorem, omitted here, is based on the storage function V(t) = |δ(t)|² + |w(t)|².

Next, we consider the case in which the integral Laplacian LI(t) remains balanced even as it changes in time.
Theorem 7: Let LP, LI : R → L be piecewise-continuous, time-varying Laplacian matrices, and suppose there exist ε ∈ R and ρ > 0 such that LP(t) ∈ Lε and LI(t) ∈ Lbal ∩ Lρ for all t ∈ R. Suppose further that ‖LP(t)‖ and ‖LI(t)‖ are uniformly bounded in t, and let γ > 0 be such that γ + ε > 0. Then excluding a single uncontrollable scalar state which remains constant, the remaining dynamics of the system (19) are ISS. If in addition ε > 0 and LP(t) ∈ Lbal for all t ∈ R, then for any β > 0 we can choose a sufficiently small γ > 0 such that for any constant input u and any initial state we have |ex(t)| ≤ β|u| for sufficiently large t.

Proof: Because LI(t) ∈ Lbal, we can choose ℓ = r and Q = S in the coordinate change (21)–(22). In the new coordinates we again obtain ẏ1(t) = 0 together with (26)–(27), where now the matrix A in (28) depends on time, A = A(t), through the Laplacians. Introducing z⋆(t) = [z2(t) ... zn(t)]ᵀ = Sᵀx(t), we can write these dynamics as

  ż1(t) = −γz1(t) − rᵀLP(t)S z⋆(t) + γrᵀu(t)   (37)
  [ż⋆(t); ẏ⋆(t)] = F(t) [z⋆(t); y⋆(t)] + [γSᵀ; 0] u(t),   (38)

where F(t) is the matrix (34), which is now time-varying through the Laplacians. We conclude from Lemma 11 with ε2 = γ + ε > 0 and ρ1 = ρ that these dynamics are ISS.

Next suppose u is constant and LP(t) ∈ Lbal for all t ∈ R. Then (37) becomes

  ż1(t) = −γz1(t) + γrᵀu,   (39)

which implies

  z1(t) = e^(−γ(t−t0)) (z1(t0) − rᵀu) + rᵀu = e^(−γ(t−t0)) rᵀ(x(t0) − u) + rᵀu   (40)

and therefore

  ex(t) = e^(−γ(t−t0)) (11ᵀ/n) (x(t0) − u) + S z⋆(t).   (41)

If ε > 0 then we can apply Lemma 11 to (38) with ε2 = ε and ρ1 = ρ, and we conclude that |Sz⋆(t)| = |z⋆(t)| converges to a ball whose size is proportional to γ|u| (with the constant of proportionality being independent of γ). Thus by choosing γ sufficiently small we can arbitrarily reduce the influence of the input u on the steady-state value of |z⋆(t)| and thus also on |ex(t)| through (41).

We see in Theorem 7 that by choosing γ to be small relative to ε and ρ, we can guarantee small steady-state errors for constant (or slowly varying) inputs, even for arbitrarily fast time-varying Laplacians. However, the error ex(t) in (41) has a term which decays slowly when γ is small. We can avoid exciting this slow mode by choosing the initial estimator states as x(t0) = u, but initialization errors would be only slowly forgotten.

Finally, we examine the case in which LI(t) is time-varying but unbalanced; in this case we provide conditions under which the zero state of the system (19) under zero input u(t) ≡ 0 is exponentially stable (and hence the system with input is ISS). In particular, there will no longer be a single uncontrollable scalar state which remains constant—all states will converge to zero exponentially under zero input. Let us suppose for the moment that LI(t) is piecewise-C¹ and such that LI(t) ∈ Lrnk for all t ∈ R. It follows from [10, Theorem 2.4] that there exists a piecewise-C¹ unit vector ℓ(t) such that LIᵀ(·)ℓ(·) ≡ 0. Here ℓ(·) is a left unit eigenvector of the single zero eigenvalue of LI(·) and hence the matrix ℓ(·)ℓᵀ(·) is uniquely determined by LI(·), namely, ℓ(·)ℓᵀ(·) ≡ I − LI(·)LI^+(·), where LI^+ represents the Moore-Penrose pseudoinverse of LI. The following theorem, whose proof we omit due to space constraints, is a straightforward consequence of the results in [11].

Theorem 8: Let LP : R → L be a piecewise-continuous, time-varying Laplacian matrix such that ‖LP(·)‖ is uniformly bounded, let ε ∈ R be such that LP(t) ∈ Lε for all t ∈ R, and let γ > 0 be such that γ + ε > 0. Let LI : R → L be a piecewise-C¹ time-varying Laplacian matrix such that both ‖LI(·)‖ and ‖LI^+(·)‖ are uniformly bounded, and suppose LI(t) ∈ Lrnk for all t ∈ R. Suppose ‖L̇I(·)‖ is uniformly bounded over its intervals of existence, and suppose the lengths of these intervals are bounded away from zero. Finally, suppose there exist T > 0 and µ < 1 such that

  (1/T) ∫_t^(t+T) ℓ(τ)ℓᵀ(τ) dτ ≤ µI   (42)

for all t ∈ R, where ℓ(·) is a unit vector with LIᵀ(·)ℓ(·) ≡ 0. Then under zero input u(·) ≡ 0, the zero equilibrium state (x(·), w(·)) ≡ (0, 0) of the system (19) is exponentially stable. In particular, the system (19) is ISS.

V. SIMULATIONS

We first considered a constant graph with Laplacian

  L = [ 2  −2   0   0
       −1   1   0   0
        0  −1   2  −1
        0   0   1  −1 ] .   (43)

This graph is unbalanced and weakly connected with both positive and negative weights, and its Laplacian L belongs to Lrnk ∩ Lε with ε = −0.77. For γ = 1.5 and a constant input vector u = [3 −2 −3 2]ᵀ, the P estimator converged to a steady-state value of ex = [0.8 −0.9 0.1 5.8]ᵀ from a zero initial state, whereas the PI estimator with LP = LI = L converged to ex = 0 from any initial state.
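The properties claimed for this example can be recomputed directly; the following sketch uses the Laplacian as printed in (43) (substitute the entries if they are read differently) and reports its row and column sums, its rank, and the largest ε with L ∈ Lε, which the text gives as −0.77.

import numpy as np

# Laplacian (43) as printed above; the single positive off-diagonal entry in the
# last row corresponds to the negative arc weight a_43 = -1.
L = np.array([[ 2., -2.,  0.,  0.],
              [-1.,  1.,  0.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0.,  1., -1.]])
n = L.shape[0]
ones = np.ones(n)

print("row sums (zero, so L is a Laplacian):", L @ ones)
print("column sums (nonzero, so unbalanced):", L.T @ ones)
print("rank (n-1, so L is in Lrnk):         ", np.linalg.matrix_rank(L))

Pi = np.eye(n) - np.ones((n, n)) / n
S = np.linalg.svd(Pi)[0][:, :n - 1]                   # basis of span{1}-perp
eps = 0.5 * float(np.linalg.eigvalsh(S.T @ (L + L.T) @ S).min())
print("largest eps with L in L_eps (text reports -0.77):", eps)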
We next ran a simulation of the P and PI estimators for a network with five nodes whose graph Laplacians switch at random times, with the last switch at time t = 100 and with 20 other switches before that. At each switch, the new Laplacians were randomly generated subject to the following constraints. The Laplacian L(t) for the P estimator in (12)–(13) was chosen identical to the proportional Laplacian LP(t) for the PI estimator in (19). This Laplacian was constrained to belong to the set Lsym ∩ Lpos ∩ Lrnk for each t. The integral Laplacian LI(t) in (19) was constrained only to the set Lrnk for each t (and it was usually unbalanced with both positive and negative weights). We chose γ = 0.1 and the same constant input vector u in each estimator, and the results from a nonzero initial state are shown in Fig. 1. Observe that after the last switch, the PI estimator error goes to zero but the P estimator exhibits nonzero steady-state error (note the logarithmic scale on the vertical axis in Fig. 1).

[Fig. 1. Comparison of |ex(t)| for the P and PI estimators — semilog plot of |ex| versus t for the two estimators; figure not reproduced.]

VI. CONCLUDING REMARKS

We have provided stability and convergence analyses for two different estimation algorithms for dynamic average consensus in sensing and communication networks. For constant, strongly connected graphs, the PI estimator solves the consensus problem under constant (or slowly-varying) inputs for virtually any choices of the estimator weights. Under more restrictive assumptions on these choices, both the P and the PI estimators remain stable in the presence of changing graph topologies and inputs. More work is needed to fully investigate the noise and robustness properties of these estimators. In addition, it would be useful to have heuristics for decentralized weight selection for the purpose of improving estimator performance.

APPENDIX

Due to space constraints, we omit the proofs of the following lemmas.

Lemma 9: Suppose matrices A, B ∈ R^(p×p) are such that A + Aᵀ < 0 and B is invertible. Then the matrix

  F = [A, Bᵀ; −B, 0]   (44)

is Hurwitz.

Note that the conclusion of Lemma 9 fails to hold in general if A is merely Hurwitz.
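A quick numerical illustration of Lemma 9 (random matrices, not part of the original appendix): construct A with A + Aᵀ < 0 and an invertible B, and check that F in (44) has all eigenvalues in the open left half plane.

import numpy as np

rng = np.random.default_rng(3)
p = 6
G = rng.standard_normal((p, p))
A = 0.5 * (G - G.T) - np.eye(p)     # A + A^T = -2I < 0, but A is not symmetric
B = rng.standard_normal((p, p))     # invertible with probability one
F = np.block([[A, B.T],
              [-B, np.zeros((p, p))]])
print("max Re eig(F):", np.linalg.eigvals(F).real.max())   # negative => Hurwitz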
Lemma 10: Suppose the matrix A ∈ R^(p×p) has rank p − 1, and let ℓ, r ∈ R^p be left and right eigenvectors (respectively) of its zero eigenvalue. Let B ∈ R^(p×(p−1)) be a matrix whose columns form a basis for span{ℓ}⊥, and let C ∈ R^(p×(p−1)) be a matrix whose columns form a basis for span{r}⊥. Then BᵀAC is invertible.

A time-varying extension of Lemma 9 is as follows:

Lemma 11: Let A, B : R → R^(p×p) be piecewise-continuous and such that

  ‖A(t)‖² ≤ ε1,   A(t) + Aᵀ(t) ≤ −2ε2 I,   and   2ρ1 I ≤ B(t) + Bᵀ(t) ≤ ρ2 I   (45)

for all t ∈ R and for some constants ε1, ε2, ρ1, ρ2 > 0. If σ is a constant such that 0 < σ < 1 and

  σ ≤ ρ1 ε2 / (ε1 + ρ1 ρ2),   (46)

then the matrices

  F(t) = [A(t), Bᵀ(t); −B(t), 0],   P = [I, −σI; −σI, I],   and   Q = [ε2 I, 0; 0, σρ1 I]   (47)

satisfy

  (1 − σ)I ≤ P ≤ (1 + σ)I   (48)
  PF(t) + Fᵀ(t)P + Q ≤ 0   (49)

for all t ∈ R. Furthermore, for any t0 ∈ R, the state χ(t) of the system

  χ̇(t) = F(t)χ(t) + [I; 0] ν(t)   (50)

with bounded, piecewise-C⁰ input ν : R → R^p satisfies

  |χ(t)|² ≤ ((1 + σ)/(1 − σ)) [ e^(−α(t−t0)) |χ(t0)|² + (1/(ακ)) ‖ν‖∞² ]   (51)

for all t ≥ t0, where κ can be any constant chosen such that 0 < κ < min{ε2, ρ1}, and

  α = (1/(1 + σ)) min{ε2 − κ, σ(ρ1 − κ)}.   (52)

In particular, the system (50) is ISS.

REFERENCES

[1] R. A. Freeman, P. Yang, and K. M. Lynch, "Distributed estimation and control of swarm formation statistics," in Proceedings of the 2006 American Control Conference, Minneapolis, MN, pp. 749–755, June 2006.
[2] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Trans. Automat. Contr., vol. 49, pp. 1520–1533, Sept. 2004.
[3] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," Syst. Contr. Lett., vol. 53, pp. 65–78, Sept. 2004.
[4] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Int. Conf. Info. Processing in Sensor Networks, pp. 63–70, Apr. 2005.
[5] W. Ren, R. W. Beard, and E. M. Atkins, "A survey of consensus problems in multi-agent coordination," in Proceedings of the 2005 American Control Conference, Portland, Oregon, pp. 1859–1864, June 2005.
[6] D. P. Spanos, R. Olfati-Saber, and R. M. Murray, "Dynamic consensus for mobile networks," in Proceedings of the 2005 IFAC World Congress, Prague, 2005.
[7] P. Yang, R. A. Freeman, and K. M. Lynch, "Optimal information propagation in sensor networks," in IEEE Int. Conf. Robotics and Automation, Orlando, Florida, pp. 3122–3127, May 2006.
[8] D. B. Kingston, W. Ren, and R. W. Beard, "Consensus algorithms are input-to-state stable," in Proceedings of the 2005 American Control Conference, Portland, Oregon, pp. 1686–1690, June 2005.
[9] C. Godsil and G. Royle, Algebraic Graph Theory, vol. 207 of Graduate Texts in Mathematics. New York: Springer-Verlag, 2001.
[10] J.-L. Chern and L. Dieci, "Smoothness and periodicity of some matrix decompositions," SIAM J. Matrix Anal. Appl., vol. 22, no. 3, pp. 772–792, 2000.
[11] A. P. Morgan and K. S. Narendra, "On the stability of nonautonomous differential equations ẋ = [A + B(t)]x, with skew symmetric matrix B(t)," SIAM J. Control Optim., vol. 15, pp. 163–176, Jan. 1977.