
FrB10.5

Proceedings of the
46th IEEE Conference on Decision and Control
New Orleans, LA, USA, Dec. 12-14, 2007

Distributed Kalman Filtering for Sensor Networks
R. Olfati-Saber

Abstract— In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices, which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.

Index Terms— sensor networks, distributed Kalman filtering, consensus filtering, sensor fusion

I. INTRODUCTION

Distributed estimation and tracking is one of the most fundamental collaborative information processing problems in wireless sensor networks (WSNs). Multi-sensor fusion and tracking problems have a long history in signal processing, control theory, and robotics [1], [2], [3], [6], [16]. Moreover, estimation issues in wireless networks with packet-loss have been the center of much attention lately [19], [7], [18].

Decentralized Kalman filtering [22], [15] involves state estimation using a set of local Kalman filters that communicate with all other nodes. The information flow is all-to-all with a communication complexity of O(n²), which is not scalable for WSNs. Here, we focus on scalable, or distributed, Kalman filtering algorithms in which each node only communicates messages with its neighbors on a network.

Control-theoretic consensus algorithms have proven to be effective tools for performing network-wide distributed computation tasks such as computing aggregate quantities and functions over networks [13], [12], [17], [8], [23], [24], [21], [5]. These algorithms are closely related to gossip-based algorithms in the computer science literature [9], [4].

Recently in [10], the author introduced a distributed Kalman filtering (DKF) algorithm that uses dynamic consensus algorithms [14], [20]. The DKF algorithm consists of a network of micro-Kalman filters (MKFs), each embedded with a low-pass and a band-pass consensus filter. The role of the consensus filters is fusion of the sensor and covariance data obtained at each node.

The existing DKF algorithm of the author suffers from a key weakness: the algorithm is only valid for sensors with identical sensing models. In other words, it is not applicable to heterogeneous multi-sensor fusion. To be more precise, let

    z_i(k) = H_i(k)x(k) + v_i(k)

be the sensing model of node i in a sensor network. Here, x(k) denotes the state of a dynamic process

    x(k+1) = A_k x(k) + B_k w(k)

driven by zero-mean white Gaussian noise w(k). Then, the DKF algorithm in [10] is applicable to sensors with identical H_i's. The reason is that under the assumption of identical observation matrices, or H_i = H, ∀i, we get

    z_i(k) = H(k)x(k) + v_i(k) = s(k) + v_i(k)

and the z_i's possess the structure necessary for the distributed averaging feature of the low-pass consensus filter in [14].

This limitation of the existing DKF algorithm motivates us to develop novel distributed Kalman filtering algorithms for sensor networks that have a broader range of applications. In particular, we are interested in DKF algorithms capable of performing the following tracking and estimation tasks:

1) The H_i's are different across the entire network¹ and the process with state x(k) is collectively observable by all the sensors.
2) Extended Kalman filtering (EKF) for sensors with different nonlinear sensing models

    z_i(k) = h_i(x(k)) + v_i(k),

which after linearization leads to the case above (up to trivial terms) with the H_i's being the Jacobians of the partial derivatives of h_i(x) w.r.t. x.

We refer to each node of the distributed Kalman filter that provides a state estimate as a Microfilter. A DKF is a networked system (or swarm) of interacting microfilters with identical architectures. Each microfilter is computationally implemented as an embedded module in a sensor.

The main results of this paper are as follows: i) resolving the limitation of the existing DKF by introducing a novel microfilter architecture with identical high-pass consensus filters — the revised DKF algorithm is applicable to sensors with different H_i's; ii) presenting an alternative distributed Kalman filtering strategy which does not involve consensus filtering and instead uses consensus on state estimates; iii) introducing a continuous-time distributed Kalman filter; and iv) providing a base performance standard for distributed estimation and comparison between DKF algorithms on target tracking.

The outline of the paper is as follows: a DKF algorithm with identical consensus filters is introduced in Section II. Our main results, including the DKF algorithms with consensus on state estimates, are presented in Section III. Simulation results are given in Section IV. Finally, concluding remarks are made in Section V.

Reza Olfati-Saber is an Assistant Professor at Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, email:
¹ Some of the H_i's could be equal, but all of them are not the same.
1-4244-1498-9/07/$25.00 ©2007 IEEE.

II. CONSENSUS-BASED FUSION OF SENSORY DATA

Similar to the problem set up in [10], the objective is to perform distributed state estimation (or tracking) for a process/target that evolves according to

    x(k+1) = A_k x(k) + B_k w(k),   x(0) ~ N(x̄(0), P_0).   (1)

The sensing model of the ith sensor is

    z_i(k) = H_i(k)x(k) + v_i(k),   z_i ∈ R^p,   (2)

and we assume the H_i's are different (see Section I). Both w_k and v_k are zero-mean white Gaussian noise (WGN) and x(0) ∈ R^m is the initial state of the target. The statistics of the process and measurement noise are given by

    E[w(k)w(l)^T] = Q(k)δ_kl,
    E[v_i(k)v_j(l)^T] = R_i(k)δ_kl δ_ij,   (3)

where δ_kl = 1 if k = l, and δ_kl = 0, otherwise.

Now, consider a sensor network with an ad hoc topology G = (V, E) with n nodes, V = {1, ..., n} and E ⊂ V × V. Let N_i = {j : (i, j) ∈ E} be the set of neighbors of node i on graph G. The graph G is undirected. Moreover, let L = D − A be the Laplacian matrix of G and let λ₂ = λ₂(L) denote its algebraic connectivity.

Given the information Z_k = {z(0), z(1), ..., z(k)}, one can define central state estimates associated with the data z(k):

    x̂_k = E(x_k | Z_k),   x̄_k = E(x_k | Z_{k−1}).   (4)

Let z(k) = col(z₁(k), ..., z_n(k)) ∈ R^{np} be the collective sensor data of the entire sensor network at time k. Defining an output matrix H = col(H₁, H₂, ..., H_n), the covariance matrix of the noise v = col(v₁, ..., v_n) is R = diag(R₁, ..., R_n) (the time indices are dropped). Then, the estimates of the state of the process can be expressed in the form

    x̂(k) = x̄(k) + K_k (z(k) − H x̄(k)),
    x̄(k+1) = A_k x̂(k).   (5)

For the central Kalman filter, P_k = Σ_{k|k−1} and M_k = Σ_{k|k},   (6) where Σ_{k|k−1} and Σ_{k|k} are the estimation error covariance matrices and Σ_{0|−1} = P_0.

Let us define two network-wide aggregate quantities: the fused inverse-covariance matrices

    S(k) = (1/n) Σ_{i=1}^n H_i^T(k) R_i^{−1}(k) H_i(k)   (8)

and the fused sensor data

    y(k) = (1/n) Σ_{i=1}^n H_i^T(k) R_i^{−1}(k) z_i(k).   (9)

Theorem 1. (Micro-KF iterations, [10]) Suppose every node of the network applies the following iterations

    M_i(k) = (P_i(k)^{−1} + S(k))^{−1},
    x̂_i(k) = x̄_i(k) + M_i(k)[y(k) − S(k) x̄_i(k)],   (10)
    P_i(k+1) = A_k M_i(k) A_k^T + B_k Q_i(k) B_k^T,
    x̄_i(k+1) = A_k x̂_i(k),

where Q_i(k) = nQ(k) and P_i(0) = nP_0. Then, the local and central state estimates for all nodes are the same, i.e. x̂_i(k) = x̂(k) for all i.

The above theorem holds regardless of how the network-wide fusion task necessary to compute y(k) and S(k) is performed, i.e. if one can (approximately) compute the averages y(k) and S(k), a distributed Kalman filtering algorithm emerges.

To compute y(k) and S(k) in a distributed way, we propose to use a high-gain version of the dynamic consensus algorithm of Spanos et al. [20] to perform distributed averaging — hence the name "consensus filter." The high-pass consensus filter is a linear system in the form

    q̇_i = β Σ_{j∈N_i} (q_j − q_i) + β Σ_{j∈N_i} (u_j − u_i),   β > 0,
    y_i = q_i + u_i,   (11)

where u_i is the input of node i, q_i is the state of the consensus filter, and y_i is its output. The gain β > 0 is relatively large (β ∼ O(1/λ₂)) for randomly generated ad hoc topologies that are rather sparse.

The key modification is to replace the low-pass and band-pass consensus filters in the architecture of the microfilter in [10] by high-gain versions of the high-pass consensus filter in [20]. The resulting microfilter is shown in Fig. 1.

Fig. 1. The architecture of the microfilter of the type-I DKF algorithm: a Micro Kalman Filter at node i fed by two identical high-pass consensus filters, one for the sensor data and one for the covariance data.
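Theorem 1 can be checked numerically. The sketch below (toy dimensions, matrices and a noise-free run are our own assumptions, not the paper's setup) runs the micro-KF iterations of one node using the exact averages S(k) and y(k), with Q_i = nQ and P_i(0) = nP_0, alongside a central information-form Kalman filter that stacks all n measurements; the two estimates coincide at every step.

```python
import numpy as np

# Numerical check of Theorem 1 (a sketch with assumed toy parameters).
rng = np.random.default_rng(0)
n, m = 4, 2                                  # nodes, state dimension
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B, Q, P0 = np.eye(m), 0.01 * np.eye(m), np.eye(m)
H = [rng.standard_normal((1, m)) for _ in range(n)]   # different 1x2 H_i's
Rinv = [np.eye(1) for _ in range(n)]

x = np.array([1.0, -1.0])                    # true state (noise-free run)
xbar_c, Sig = np.zeros(m), P0.copy()         # central prior
xbar_i, Pi = np.zeros(m), n * P0             # micro-KF prior: P_i(0) = n P_0

for k in range(20):
    z = [H[i] @ x for i in range(n)]
    S = sum(H[i].T @ Rinv[i] @ H[i] for i in range(n)) / n   # fused covariance
    y = sum(H[i].T @ Rinv[i] @ z[i] for i in range(n)) / n   # fused data
    # central information-form update: n*S = H^T R^-1 H and n*y = H^T R^-1 z
    M = np.linalg.inv(np.linalg.inv(Sig) + n * S)
    xhat_c = xbar_c + M @ (n * y - n * S @ xbar_c)
    Sig = A @ M @ A.T + B @ Q @ B.T
    xbar_c = A @ xhat_c
    # micro-KF iterations of Theorem 1 with Q_i = n Q
    Mi = np.linalg.inv(np.linalg.inv(Pi) + S)
    xhat_i = xbar_i + Mi @ (y - S @ xbar_i)
    Pi = A @ Mi @ A.T + B @ (n * Q) @ B.T
    xbar_i = A @ xhat_i
    x = A @ x                                # propagate target (no noise)
```

The scaling P_i = nΣ is what makes the local gain M_i = nM absorb the 1/n factors hidden in y(k) and S(k).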

The collective dynamics of this CF is given by

    q̇ = −βL̂q − βL̂u,
    p = q + u,   (12)

where L̂ = L ⊗ I_m is the m-dimensional graph Laplacian. In [20], the equation of the dynamic consensus algorithm is given as ṗ = −Lp + u̇, which reduces to (12) for m = 1 by defining q = p − u and assuming the graph is weighted with weights in {0, β}. For a connected network, under mild connectivity conditions, p_i(t) asymptotically converges to (1/n) Σ_i u_i(t) as t → ∞ (see [20]).

In implementation, we use the discrete-time versions of both consensus filters. Expressing this filter in discrete-time is trivial (see [12] for hints on the right choice of the step-size). Node i uses the inputs u_i(k) = H_i^T(k)R_i^{−1}(k)z_i(k) and U_i(k) = H_i^T(k)R_i^{−1}(k)H_i(k) with zero initial states q_i(0) = 0 and X_i(0) = 0, and locally computes the outputs y_i(k) and S_i(k).

Proposition 1. The outputs y_i(k) and S_i(k) of the high-pass consensus filters asymptotically converge to the network-wide averages y(k) and S(k) (up to minor conditions given in [20]).

Algorithm 1 is the new type-I DKF algorithm with identical consensus filters.

Algorithm 1 Distributed Kalman filtering algorithm with two identical high-pass consensus filters
1: Initialization: q_i = 0, X_i = 0_{m×m}, P_i = nP_0, x̄_i = x(0)
2: while new data exists do
3:   Update the state of the data CF:
       u_j = H_j^T R_j^{−1} z_j, ∀j ∈ N_i ∪ {i}
       q_i ← q_i + β Σ_{j∈N_i} [(q_j − q_i) + (u_j − u_i)]
       y_i = q_i + u_i
4:   Update the state of the covariance CF:
       U_j = H_j^T R_j^{−1} H_j, ∀j ∈ N_i ∪ {i}
       X_i ← X_i + β Σ_{j∈N_i} [(X_j − X_i) + (U_j − U_i)]
       S_i = X_i + U_i
5:   Estimate the target state using the Micro-KF:
       M_i = (P_i^{−1} + S_i)^{−1}
       x̂_i = x̄_i + M_i (y_i − S_i x̄_i)
6:   Update the state of the Micro-KF:
       P_i ← A M_i A^T + nBQB^T
       x̄_i ← A x̂_i
7: end while

Remark 1. According to Algorithm 1, node i sends the message msg_i = (q_i(k), X_i(k), u_i, U_i) to all of its neighbors. The message size is O(m(m+1)), with m being the dimension of the state x of the process/target, and the information in each message can be contained in one or multiple packets (in case the message size is large). This communication scheme is fully compatible with packet-based communication in broadcast mode in real-world wireless sensor networks. Packet-loss can be treated as loss of a communication link; under mild connectivity conditions, packet-loss will not affect the consensus algorithms.
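The steps of Algorithm 1 can be sketched by simulating all n microfilters in one synchronous loop. The path-graph topology, gain β, noise levels and the static target (A = I) below are illustrative assumptions, not the simulation setup of Section IV.

```python
import numpy as np

# Multi-node sketch of Algorithm 1 (assumed toy parameters).
rng = np.random.default_rng(1)
n, m = 4, 2
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}    # path graph (assumed)
A, B = np.eye(m), np.eye(m)
Q, P0 = 0.01 * np.eye(m), np.eye(m)
H = [np.array([[1.0, 0.0]]) if i % 2 == 0 else np.array([[0.0, 1.0]])
     for i in range(n)]                               # x- or y-position sensors
Rinv = [np.eye(1) for _ in range(n)]
beta = 0.2                                            # consensus-filter gain

q = [np.zeros(m) for _ in range(n)]                   # data-CF states
X = [np.zeros((m, m)) for _ in range(n)]              # covariance-CF states
P = [n * P0.copy() for _ in range(n)]
xbar = [np.zeros(m) for _ in range(n)]
xhat = [np.zeros(m) for _ in range(n)]
x = np.array([1.0, -1.0])                             # static target (A = I)

for k in range(50):
    z = [H[i] @ x + 0.1 * rng.standard_normal(1) for i in range(n)]
    u = [H[i].T @ Rinv[i] @ z[i] for i in range(n)]
    U = [H[i].T @ Rinv[i] @ H[i] for i in range(n)]
    # steps 3-4: one synchronous update of both high-pass consensus filters
    q = [q[i] + beta * sum((q[j] - q[i]) + (u[j] - u[i]) for j in neighbors[i])
         for i in range(n)]
    X = [X[i] + beta * sum((X[j] - X[i]) + (U[j] - U[i]) for j in neighbors[i])
         for i in range(n)]
    y = [q[i] + u[i] for i in range(n)]
    S = [X[i] + U[i] for i in range(n)]
    # steps 5-6: micro-KF estimate and update at every node
    for i in range(n):
        M = np.linalg.inv(np.linalg.inv(P[i]) + S[i])
        xhat[i] = xbar[i] + M @ (y[i] - S[i] @ xbar[i])
        P[i] = A @ M @ A.T + n * B @ Q @ B.T
        xbar[i] = A @ xhat[i]
```

Even though each node alone observes only one coordinate, every node's estimate approaches the full state once the consensus filters have mixed the data and covariance information across the network.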

III. DISTRIBUTED KALMAN FILTER, TYPE II: CONSENSUS ON ESTIMATES

In this section, we discuss an alternative approach to distributed Kalman filtering that relies on communicating state estimates between neighboring nodes. We refer to this second class of DKF algorithms as type-II algorithms. Before presenting the type-II DKF algorithms, we first need to discuss a more primitive DKF algorithm that involves local Kalman filtering and forms the basis of our main results.

A. Local Kalman Filtering

Assume that each node i of the sensor network can communicate its measurement z_i, covariance information R_i, and output matrix H_i with its neighbors N_i. Then, node i can use a central Kalman filter that only utilizes the observations and output matrices of the nodes in J_i = N_i ∪ {i}, its set of inclusive neighbors. Let N_i^c = V \ J_i be the set of non-neighboring nodes of node i. Should one avoid using any form of consensus (i.e. without further information exchange regarding states/estimates), what is the optimal state estimate by each node? The answer to this question is rather simple: node i can assume that no nodes other than its neighbors N_i exist, as the information flow from non-neighboring nodes to node i is prohibited in this case. The local Kalman filtering (LKF) iterations for node i are in the form

    y_i(k) = Σ_{j∈J_i} H_j^T(k) R_j^{−1}(k) z_j(k),
    S_i(k) = Σ_{j∈J_i} H_j^T(k) R_j^{−1}(k) H_j(k),
    M_i(k) = (P_i(k)^{−1} + S_i(k))^{−1},   (13)
    x̂_i(k) = x̄_i(k) + M_i(k)[y_i(k) − S_i(k) x̄_i(k)],
    P_i(k+1) = A_k M_i(k) A_k^T + B_k Q(k) B_k^T,
    x̄_i(k+1) = A_k x̂_i(k).

This leads to a primitive DKF algorithm with no consensus on data/states/estimates. The resulting algorithm can act as a base (or minimum) performance standard for any current (or future) DKF algorithm for sensor networks. In fact, local Kalman filtering does not perform well for a minority of nodes, and their neighbors, that make relatively poor observations due to environmental or geometric conditions.

According to this LKF algorithm, there is no guarantee that the state estimates of different nodes remain cohesive (or close to each other). This form of group disagreement regarding the state estimates is highly undesirable for a peer-to-peer network of estimators. Let L̂ = L ⊗ I_m be the m-dimensional Laplacian of G and let x̂ = col(x̂₁, ..., x̂_n) denote the vector of all the estimates of the nodes. The disagreement potential [13] of the state estimates is defined as

    Ψ_G(x̂) = x̂^T L̂ x̂ = (1/2) Σ_{(i,j)∈E} ||x̂_j − x̂_i||².

Algorithm 2 is a type-II DKF algorithm that attempts to reduce the disagreement regarding the state estimates in local Kalman filtering using an ad hoc approach: it implements a consensus step right after the estimation step.

Algorithm 2 Distributed Kalman filtering algorithm with an ad hoc consensus step on estimates
1: Initialization: P_i = P_0, ξ_i = x(0)
2: while new data exists do
3:   Locally aggregate data and covariance matrices:
       J_i = N_i ∪ {i}
       u_j = H_j^T R_j^{−1} z_j, ∀j ∈ J_i;   y_i = Σ_{j∈J_i} u_j
       U_j = H_j^T R_j^{−1} H_j, ∀j ∈ J_i;   S_i = Σ_{j∈J_i} U_j
4:   Compute the intermediate Kalman estimate of the target state:
       M_i = (P_i^{−1} + S_i)^{−1}
       ϕ_i = ξ_i + M_i (y_i − S_i ξ_i)
5:   Estimate the target state after a consensus step:
       x̂_i = ϕ_i + ε Σ_{j∈N_i} (ϕ_j − ϕ_i)
     {This is equivalent to "moving towards the average intermediate estimate of the neighbors."}
6:   Update the state of the local Kalman filter:
       P_i ← A M_i A^T + BQB^T
       ξ_i ← A x̂_i
7: end while

Later, we provide a rigorous way of performing this "consensus on estimates."

B. Continuous-Time Distributed Kalman Filter

In the following, we provide a more rigorous derivation of a type-II DKF algorithm that aims at reducing the disagreement of estimates due to local Kalman filtering over the set of inclusive neighbors J_i. Our algorithm design method relies on solving the problem in continuous-time and then "inferring" the discrete-time version of the distributed Kalman filter with cohesive estimates.

Consider a continuous-time (CT) linear system representing a target,

    ẋ = A(t)x + B(t)w,

and a sensing model

    z = H(t)x + v.

The noise statistics are analogous to the discrete-time problem setup. The continuous-time Kalman filter is in the form²

    x̂˙ = A x̂ + K(z − H x̂),
    K = P H^T R^{−1},   (14)
    Ṗ = AP + PA^T + BQB^T − P H^T R^{−1} H P.

Lemma 1. Let η = x − x̂ denote the estimation error of the Kalman filter in continuous-time. The estimation error dynamics is in the form

    η̇ = (A − KH)η + w_e   (15)

with the input noise w_e = Bw + Kv. The error dynamics without noise is a stable linear system with a Lyapunov function

    V(η) = η^T P(t)^{−1} η.   (16)

Proof: By direct differentiation, we have

    V̇ = η̇^T P^{−1} η + η^T P^{−1} η̇ − η^T P^{−1} Ṗ P^{−1} η
       = η^T [(A − KH)^T P^{−1} + P^{−1}(A − KH)] η − η^T [P^{−1}A + A^T P^{−1} + P^{−1} BQB^T P^{−1} − H^T R^{−1} H] η
       = −η^T [H^T R^{−1} H + P^{−1} BQB^T P^{−1}] η < 0

for all η ≠ 0. Thus, η = 0 is globally asymptotically stable and V(η) is a valid Lyapunov function for the error dynamics.

² For simplicity of notation, the time-dependence of many time-varying matrices is dropped.
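Lemma 1 lends itself to a quick numerical check: integrating the Riccati equation and the noise-free error dynamics η̇ = (A − KH)η with forward Euler (the toy system below is an assumption), the value of V(η) = η^T P^{−1} η should decrease monotonically.

```python
import numpy as np

# Numerical sanity check of Lemma 1 on an assumed toy system.
A = np.array([[0.0, -1.0], [1.0, 0.0]])   # marginally stable rotation (assumption)
B = np.eye(2)
Q = 0.1 * np.eye(2)
H = np.eye(2)
Rinv = np.eye(2)
P = np.eye(2)
eta = np.array([1.0, 1.0])                # initial estimation error
dt = 1e-3

V = [eta @ np.linalg.inv(P) @ eta]
for _ in range(2000):
    K = P @ H.T @ Rinv                    # K = P H^T R^-1
    eta = eta + dt * (A - K @ H) @ eta    # noise-free error dynamics (15)
    # Riccati equation: P_dot = AP + PA^T + BQB^T - P H^T R^-1 H P
    P = P + dt * (A @ P + P @ A.T + B @ Q @ B.T - K @ H @ P)
    V.append(eta @ np.linalg.inv(P) @ eta)
```

With a sufficiently small step size the discretized V inherits the strict decrease V̇ < 0 established in the proof.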

Consider a network of n sensors with the following continuous-time sensing model:

    z_i(t) = H_i(t)x + v_i,   (17)

with E[v_i(t) v_i^T(s)] = R_i δ(t − s). Assume the pair (A, H) with H = col(H₁, ..., H_n) is observable. Here is our main result:

Proposition 2. (Kalman-Consensus filter) Consider a sensor network with the continuous-time sensing model in (17). Suppose each node applies the following distributed estimation algorithm,

    x̂˙_i = A x̂_i + K_i (z_i − H_i x̂_i) + γ P_i Σ_{j∈N_i} (x̂_j − x̂_i),
    K_i = P_i H_i^T R_i^{−1},   γ > 0,   (18)
    Ṗ_i = A P_i + P_i A^T + BQB^T − K_i R_i K_i^T,

a Kalman-Consensus estimator with initial conditions P_i(0) = P_0 and x̂_i(0) = x(0). Then, the collective dynamics of the estimation errors η_i = x − x̂_i (without noise) is a stable linear system with Lyapunov function V(η) = Σ_{i=1}^n η_i^T P_i^{−1} η_i. Furthermore, x̂_i = x, ∀i as t → ∞, i.e. asymptotically all estimators agree: x̂₁ = ··· = x̂_n = x.

Proof: Define η = col(η₁, ..., η_n) and x = col(x, ..., x) ∈ R^{mn}, and let x̂ = col(x̂₁, ..., x̂_n) be the vector of all the estimates of the nodes. Noting that x̂_j − x̂_i = η_i − η_j, the estimator dynamics can be written as

    x̂˙_i = A x̂_i + K_i (z_i − H_i x̂_i) − γ P_i Σ_{j∈N_i} (η_j − η_i),

which gives the following error dynamics for the ith node:

    η̇_i = (A − K_i H_i) η_i + γ P_i Σ_{j∈N_i} (η_j − η_i) = F_i η_i + γ P_i Σ_{j∈N_i} (η_j − η_i)

with F_i = A − K_i H_i. By calculating V̇(η), we get

    V̇ = Σ_i [ η_i^T P_i^{−1} η̇_i + η̇_i^T P_i^{−1} η_i − η_i^T P_i^{−1} Ṗ_i P_i^{−1} η_i ].

But

    η_i^T P_i^{−1} η̇_i = η_i^T P_i^{−1} F_i η_i + γ η_i^T Σ_{j∈N_i} (η_j − η_i),

and after transposition,

    η̇_i^T P_i^{−1} η_i = η_i^T F_i^T P_i^{−1} η_i + γ η_i^T Σ_{j∈N_i} (η_j − η_i).

Note that the evolution of P_i is the same as for a standard Kalman filter (only the estimator is modified). Adding the three terms in V̇(η) and using Lemma 1 gives

    V̇ = −Σ_i η_i^T [H_i^T R_i^{−1} H_i + P_i^{−1} BQB^T P_i^{−1}] η_i + 2γ Σ_i Σ_{j∈N_i} η_i^T (η_j − η_i).

For undirected graphs, the second term is proportional to the negative of the disagreement function, −Ψ_G(η), in consensus theory [13]. Hence,

    V̇ = −η^T Λ η − 2γ η^T L̂ η ≤ −2γ Ψ_G(η) ≤ 0,   (19)

where Λ is a positive definite block-diagonal matrix with diagonal blocks H_i^T R_i^{−1} H_i + P_i^{−1} BQB^T P_i^{−1}. Due to the fact that V̇(η) = 0 implies that all η_i's are equal and η = 0, asymptotically all estimators agree: x̂₁ = ··· = x̂_n = x.

Remark 2. All H_i's in Proposition 2 are, in fact, H_{i,l} = col{H_j}_{j∈J_i} (l means "local"), meaning that the aggregate observations of all sensors in J_i are used as z_i. Interestingly, H_i can also be the observation matrix of node i alone, and the result will still hold.

C. Iterative Kalman-Consensus Filter

From the continuous-time DKF algorithm in Proposition 2, Algorithm 3 can be inferred. This is our main type-II distributed Kalman filtering algorithm.

Algorithm 3 Kalman-Consensus filter: DKF algorithm with an estimator that has a rigorously derived consensus term
1: Initialization: P_i = P_0, x̄_i = x(0)
2: while new data exists do
3:   Locally aggregate data and covariance matrices:
       J_i = N_i ∪ {i}
       u_j = H_j^T R_j^{−1} z_j, ∀j ∈ J_i;   y_i = Σ_{j∈J_i} u_j
       U_j = H_j^T R_j^{−1} H_j, ∀j ∈ J_i;   S_i = Σ_{j∈J_i} U_j
4:   Compute the Kalman-Consensus estimate:
       M_i = (P_i^{−1} + S_i)^{−1}
       x̂_i = x̄_i + M_i (y_i − S_i x̄_i) + M_i Σ_{j∈N_i} (x̄_j − x̄_i)
5:   Update the state of the Kalman-Consensus filter:
       P_i ← A M_i A^T + BQB^T
       x̄_i ← A x̂_i
6: end while

In Algorithm 3, the message broadcasted to all the neighbors by node i is msg_i = (u_i, U_i, x̄_i), i.e. the nodes communicate their predictions x̄_i as well as their sensed data. Algorithms 2 and 3 can be effectively applied to mobile sensor networks by assuming time-dependence of the set of neighbors N_i(t), as in [11]. In the simulation results, we will see that Algorithm 3 has the best performance on a tracking task.
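A minimal Euler-integration sketch of the Kalman-Consensus filter (18) follows, under assumed parameters: each node observes a single coordinate of a static target (A = 0, a simplifying assumption), so no node is individually observable, yet the consensus coupling γ P_i Σ_j (x̂_j − x̂_i) lets every local estimate converge toward the true state.

```python
import numpy as np

# Euler sketch of the Kalman-Consensus filter (18) on an assumed toy network.
n, m = 4, 2
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}     # path graph (assumed)
A = np.zeros((m, m))                                   # static target (assumption)
B, Q = np.eye(m), 0.05 * np.eye(m)
H = [np.array([[1.0, 0.0]]) if i % 2 == 0 else np.array([[0.0, 1.0]])
     for i in range(n)]                                # one coordinate per node
Rinv = [np.eye(1) for _ in range(n)]
x = np.array([1.0, -1.0])                              # true (constant) state
xhat = [np.zeros(m) for _ in range(n)]
P = [np.eye(m) for _ in range(n)]
gamma, dt = 1.0, 0.005

for _ in range(4000):                                  # integrate to t = 20
    new = []
    for i in range(n):
        K = P[i] @ H[i].T @ Rinv[i]                    # K_i = P_i H_i^T R_i^-1
        cons = sum(xhat[j] - xhat[i] for j in neighbors[i])
        new.append(xhat[i] + dt * (A @ xhat[i]
                                   + K @ (H[i] @ x - H[i] @ xhat[i])
                                   + gamma * P[i] @ cons))
        # P_i_dot = A P_i + P_i A^T + B Q B^T - K_i R_i K_i^T  (here R_i = I)
        P[i] = P[i] + dt * (A @ P[i] + P[i] @ A.T + B @ Q @ B.T - K @ K.T)
    xhat = new                                         # synchronous update
```

Note the P_i recursion is the ordinary per-node Riccati equation; only the estimator carries the consensus term, exactly as in Proposition 2.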

IV. SIMULATION RESULTS

Consider a target with dynamics ẋ = A₀x + B₀w, with

    A₀ = [ 0  −1 ; 1  0 ],   B₀ = c_w² I₂,   c_w = 5,

i.e. a point moving on noisy circular trajectories. We use the discrete-time model x(k+1) = Ax(k) + Bw(k) of this target with parameters

    A = I₂ + εA₀ + (ε²/2)A₀² + (ε³/6)A₀³,   B = εB₀.

The step-size is ε = 0.015 (≈ 70 Hz), and the initial conditions are x₀ = (15, −10)^T and P₀ = 10 I₂.

A sensor network with n = 50 randomly located nodes is used in this experiment (see Fig. 2). The nodes make noisy measurements of the position of the target either along the x-axis or along the y-axis,

    z_i = H_i x + v_i,

where either H_i = H_x = (1, 0) or H_i = H_y = (0, 1). Half of the nodes sense along the x-axis and the other half sense along the y-axis, with R_i = c_v² i for i = 1, ..., 50 and c_v = 30. Clearly, the target is not observable by individual sensors, but it is observable by all the sensors. For Algorithms 2 and 3, we assume that each set of inclusive neighbors J_i of node i contains nodes with observation matrices H_x and H_y. For Algorithm 1, we use β = 7; larger values of β force a smaller step-size ε.

Fig. 2. A sensor network with 50 nodes and 242 links.

Let us refer to local Kalman filtering with no exchange of states or estimates as Algorithm 0 (or A0). We compare four algorithms: A0, A1, A2, and A3. In a peer-to-peer estimation architecture, no leaders or fusion centers exist, and every node is supposed to know the estimate of the state. Therefore, estimation error by itself is no longer an important measure of performance in distributed estimation in sensor networks. To measure the disagreement of the estimates independent of the network topology, we use ||δ|| = (Σ_{i=1}^n δ_i²)^{1/2} with δ_i = x̂_i − μ and μ = (1/n) Σ_i x̂_i. One can also utilize Ψ_G(x̂) instead.

Fig. 3 demonstrates the comparison of the four distributed Kalman filtering algorithms. A quick look at Fig. 3 reveals that the pairs of algorithms (A0, A1) and (A2, A3) behave in a similar manner (have comparable performances). Clearly, both A2 and A3 perform significantly better than A0 and A1, and A3 performs better than all other algorithms for a relatively long period. This result demonstrates the importance of having a base performance standard (A0) that can be employed to judge the usefulness of more complex distributed estimation algorithms that involve further information exchange in terms of states/estimates.

Fig. 3. Comparison of the performance of the distributed Kalman filtering algorithms: (a) average estimation error per node and (b) disagreement of the estimates ||δ||. (Each curve is obtained by averaging over 10 random runs of each algorithm.)

Fig. 4 compares the estimates of all nodes by A1 vs. A3. The estimates of A1 are somewhat dispersed, whereas A3 has a superior performance and provides cohesive estimates. A relatively high disagreement in the estimates of the different nodes goes against the purpose of performing distributed estimation without leaders or fusion centers; this is a critical weakness of the high-pass consensus filter and thus of Algorithm 1.
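The discretization of the target model above can be reproduced directly. The comparison against the exact rotation matrix and the rank check of the observation matrices are our additions: A is the third-order truncation of exp(εA₀), and while each of H_x, H_y alone measures only one coordinate, the stacked pair spans the full state space.

```python
import numpy as np

# Truncated-series discretization of the rotation dynamics of Section IV.
eps = 0.015                                   # step-size from the paper
A0 = np.array([[0.0, -1.0], [1.0, 0.0]])
A = (np.eye(2) + eps * A0
     + (eps**2 / 2) * np.linalg.matrix_power(A0, 2)
     + (eps**3 / 6) * np.linalg.matrix_power(A0, 3))

# exp(eps * A0) is an exact rotation by eps radians; the truncation error
# of the cubic series is O(eps^4 / 24), i.e. about 2e-9 here.
exact = np.array([[np.cos(eps), -np.sin(eps)],
                  [np.sin(eps), np.cos(eps)]])

Hx = np.array([[1.0, 0.0]])                   # x-axis sensors
Hy = np.array([[0.0, 1.0]])                   # y-axis sensors
H = np.vstack([Hx, Hy])                       # collective observation matrix
```

Here the rank of H_x alone is 1 while the stacked H has full rank 2, mirroring the statement that the target is only observable by the sensors collectively.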

Fig. 4. Comparison between the estimates of all nodes: (a)-(d) Algorithm 1 and (e)-(h) Algorithm 3.

V. CONCLUSIONS

Three novel distributed Kalman filtering algorithms were introduced. Algorithm 1 is a modification of a previous DKF algorithm [10]. A continuous-time DKF algorithm was rigorously derived and analyzed; this led to the addition of a consensus term in the estimator of Algorithm 3 and of an ad hoc consensus step in Algorithm 2. Two Kalman-Consensus filtering algorithms in discrete-time were inspired by this continuous-time DKF algorithm. The objective in type-II DKF algorithms is to reduce the disagreement of the estimates by further information exchange in terms of states/estimates. On a tracking task, both A2 and A3 perform better than A1 and a primitive local Kalman filtering algorithm (A0), and Algorithm 3 has the best overall performance.

ACKNOWLEDGMENT

Many thanks go to Jeff Shamma for several useful discussions on distributed estimation and for suggesting Lemma 1.

REFERENCES

[1] B. D. O. Anderson and J. B. Moore. Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ, 1979.
[2] Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Academic Press, 1988.
[3] Y. Bar-Shalom and X. R. Li. Multitarget-Multisensor Tracking: Principles and Techniques. YBS Publishing, Storrs, CT, 1995.
[4] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Gossip algorithms: design, analysis and applications. Proc. of the 24th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '05), pages 1653–1664, March 2005.
[5] J. Cortés. Distributed algorithms for reaching consensus on arbitrary functions. Automatica (submitted), 2006.
[6] D. Fox, J. Hightower, L. Liao, D. Schulz, and G. Borriello. Bayesian filtering for location estimation. IEEE Pervasive Computing, 2(3):24–33, July–Sept. 2003.
[7] V. Gupta, D. Spanos, B. Hassibi, and R. M. Murray. On LQG control across a stochastic packet-dropping link. Proc. of the 2005 American Control Conference, 2005.
[8] Y. Hatano and M. Mesbahi. Agreement over random networks. IEEE Trans. on Automatic Control, 50(11):1867–1872, Nov. 2005.
[9] D. Kempe, A. Dobra, and J. Gehrke. Gossip-based computation of aggregate information. Proc. of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS '03), 2003.
[10] R. Olfati-Saber. Distributed Kalman filter with embedded consensus filters. 44th IEEE Conference on Decision and Control, 2005 and 2005 European Control Conference (CDC-ECC '05), pages 8179–8184, Dec. 2005.
[11] R. Olfati-Saber. Distributed tracking for mobile sensor networks with information-driven mobility. Proc. of the 2007 American Control Conference, July 2007.
[12] R. Olfati-Saber, J. A. Fax, and R. M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, Jan. 2007.
[13] R. Olfati-Saber and R. M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. on Automatic Control, 49(9):1520–1533, Sep. 2004.
[14] R. Olfati-Saber and J. S. Shamma. Consensus filters for sensor networks and distributed sensor fusion. 44th IEEE Conference on Decision and Control, 2005 and 2005 European Control Conference (CDC-ECC '05), pages 6698–6703, Dec. 2005.
[15] B. S. Rao and H. F. Durrant-Whyte. A fully decentralized multi-sensor system for tracking and surveillance. Int. Journal of Robotics Research, 12(1):20–44, Feb. 1993.
[16] D. Reid. An algorithm for tracking multiple targets. IEEE Trans. on Automatic Control, 24(6):843–854, Dec. 1979.
[17] W. Ren and R. W. Beard. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. on Automatic Control, 50(5):655–661, May 2005.
[18] L. Schenato, B. Sinopoli, M. Franceschetti, K. Poolla, and S. Sastry. Foundations of control and estimation over lossy networks. Proceedings of the IEEE, 95(1):163–187, Jan. 2007.
[19] B. Sinopoli, L. Schenato, M. Franceschetti, K. Poolla, M. Jordan, and S. Sastry. Kalman filtering with intermittent observations. IEEE Trans. on Automatic Control, 49(9):1453–1464, Sep. 2004.
[20] D. P. Spanos, R. Olfati-Saber, and R. M. Murray. Dynamic consensus on mobile networks. The 16th IFAC World Congress, Prague, Czech Republic, 2005.
[21] D. P. Spanos, R. Olfati-Saber, and R. M. Murray. Approximate distributed Kalman filtering in sensor networks with quantifiable performance. Fourth International Symposium on Information Processing in Sensor Networks (IPSN '05), pages 133–139, April 2005.
[22] J. L. Speyer. Computation and transmission requirements for a decentralized linear-quadratic-Gaussian control problem. IEEE Trans. on Automatic Control, 24(2):266–269, Feb. 1979.
[23] J. N. Tsitsiklis. Problems in Decentralized Decision Making and Computation. PhD thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1984.
[24] J. N. Tsitsiklis, D. P. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. on Automatic Control, 31(9):803–812, Sep. 1986.