To appear in Proc. IEEE INFOCOM 2019.

Abstract—The last decade has witnessed a rising surge of interest in gossip protocols in distributed systems. In particular, as soon as there is a need to disseminate events, they become a key functional building block due to their scalability, robustness and fault tolerance under high churn. However, gossip protocols are known to be bandwidth intensive. Many algorithms have been studied to limit the number of exchanged messages using different combinations of push/pull approaches. We revisit the state of the art by applying Random Linear Network Coding (RLNC) to further increase performance. In particular, the originality of our approach is to combine sparse vector encoding, to send our network coding coefficients, with Lamport timestamps, to split messages into generations, in order to provide efficient gossiping. Our results demonstrate that we drastically reduce the bandwidth overhead and the delay compared to the state of the art.

Index Terms—network coding, gossip-based dissemination, peer-to-peer network, epidemic algorithm, rumor spreading

I. INTRODUCTION

Distributed systems are becoming increasingly large and complex, with an ever-growing shift in their scale and dynamics. This trend leads to revisiting well-known challenges such as inconsistencies across distributed systems due to an inherent increase of unpredictable failures. To be highly resilient to such failures, an efficient and robust data dissemination protocol is the cornerstone of any distributed system. Over the last decade, gossip protocols, also named epidemic protocols, have been widely adopted as a key functional building block to build distributed systems. For instance, gossip protocols have been used for overlay construction [1], [2], membership and failure detection [3]–[6], data aggregation [7] and live streaming [8], [9]. This wide adoption comes from the resilience of gossip protocols while being simple and naturally distributed [10] [3] [11]. Gossip-based dissemination can be simply represented as the random phone call problem: at the beginning, someone learns a rumor and calls a set of random friends to propagate it. As soon as someone learns a new rumor, in turn, she randomly propagates it to her own set of friends, and so on recursively. Further, depending on whether there are one or more sources of rumors (i.e., dissemination of one or multiple messages to all nodes), gossip protocols may be either single- or multi-source. In both cases, randomness and recursive probabilistic exchanges provide scalability, robustness and fault tolerance under high churn to disseminate data while staying simple.

However, due to its probabilistic nature, gossip-based dissemination implies high redundancy, with nodes receiving the same message several times. Many algorithms have been studied to limit the number of messages exchanged to disseminate data, using different combinations of approaches such as push (a node pushes a message it knows to its neighbors), pull (a node pulls a message it does not know from its neighbors) or push-pull (a mix of both), for either single- or multi-source gossip protocols [12] [13] [14].

In this paper, we make a significant step beyond these protocols, and provide better performance with respect to the state of the art of multi-source gossip protocols. The key principle of our approach is to consider redundancy as a key advantage rather than as a shortcoming, by leveraging Random Linear Network Coding (RLNC) techniques to provide efficient multi-source gossip-based dissemination. Indeed, it has been shown that RLNC improves the theoretical stopping time, i.e., the number of rounds until protocol completeness, by sending linear combinations of several messages instead of a given plain message, which increases the probability of propagating something new to recipients [15], [16].

Unfortunately, applying RLNC to multi-source gossip protocols is not without issues, and three key challenges remain open. First, existing approaches suppose that a vector, where each index identifies a message with its associated coefficient as a value, is disseminated. This approach implies a small namespace. In a multi-source context, the only option is to choose a random identifier over a sufficiently large namespace to have a negligible collision probability; however, this does not scale. Some algorithms provide a mechanism to rename messages into a smaller namespace [17], but this kind of technique is not applicable to gossip protocols, as it would substantially increase the number of exchanged messages and, inherently, the delay. Second, to reduce the complexity of the decoding process, messages are split into groups named generations. Existing rules to create generations require having only one sender, which is impractical in the context of multiple sources. Third, the use of RLNC implies linear combinations of multiple messages. This leads to potential partial knowledge of the messages received, making precise message pull requests useless, and breaks pull frequency adjustment based on the count of missing messages.

In this paper, we introduce CHEPIN, a CHeaper EPidemic dissemINation approach for multi-source gossip protocols.
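The intuition that a linear combination is more likely to carry something new can be illustrated with a toy example over the binary field, where combining messages is a bitwise XOR (this simplification is ours; the actual protocols use a larger field and random coefficients):

```python
# Toy illustration (our example, binary field GF(2)): a node holding both
# M1 and M2 sends the single combination M1 XOR M2; a peer that already
# has either plain message recovers the other one from the same packet.

M1, M2 = 0b0101, 0b0011
combo = M1 ^ M2               # one coded packet instead of two plain ones

# Peer D already has M1, peer E already has M2:
recovered_by_D = combo ^ M1   # D obtains M2
recovered_by_E = combo ^ M2   # E obtains M1
```

The same coded packet is useful to both recipients, even though they miss different messages.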
To the best of our knowledge, our approach is the first one to apply RLNC to multi-source gossip protocols, while overcoming all the inherent challenges involved by the use of network-coding techniques.

More precisely, we make the following contributions.
• We solve the identifier namespace size problem via the use of sparse vectors, so that the data sent over the network stays reasonable.
• We create generations for messages from multiple sources by leveraging Lamport timestamps: all messages sharing the same clock are in the same generation, whatever their source.
• We overcome the issue of partial message knowledge by providing an adaptation of push, pull and push-pull gossip protocols: we pull specific generations instead of specific messages.
• We introduce updated algorithms to make our approach applicable to the current state of the art of multi-source gossip protocols.
• Finally, we evaluate CHEPIN thoroughly by simulation. We show that our solution reduces the bandwidth overhead by 25% and the delivery delay by 18% with respect to Pulp [14], while keeping the same properties.

II. RELATED WORK

Epidemic protocols were introduced in 1988 by Xerox researchers working on replicated databases [18]. They introduced three protocols to exchange rumors: push, pull and push-pull.

a) Push protocols: to transmit rumors, push-based protocols imply that informed nodes relay the message to their neighbors. Some protocols are active, as they have a background thread that regularly retransmits received rumors, like balls and bins [12]. But push can also be implemented as a reactive protocol, where rumors are directly forwarded to the node's neighbors upon reception, like the infect-and-die and infect-forever protocols [13]. Push protocols are particularly efficient to quickly reach most of the network; however, reaching all the nodes takes more time and involves significant redundancy, and thus bandwidth consumption.

b) Pull protocols: nodes that miss messages ask other nodes for them. As a consequence, pull protocols reach the last nodes of the network more efficiently, as inherently they get messages with higher probability. However, they require sending more messages over the network: (i) one to ask for a missing message, and (ii) another one for the reply that contains the missing message. Furthermore, a mechanism or a rule is needed to know which messages are missing and should be pulled, which explains why these protocols are generally used in conjunction with a push phase.

c) Push-pull protocols: the aim is to conciliate the best of push and pull protocols by reaching as many nodes as possible without redundancy during the push phase. Then, the nodes that have not received the message send pull requests to other nodes of the network. By ordering messages, Interleave [19] proposes a solution to discover the missing messages in the pull phase, but works only with a single source. Instead of ordering messages, Pulp [14] piggybacks a list of recently received message identifiers in every sent message, allowing multiple sources.

Gossip-based disseminations are characterized by the reception of many redundant messages in the push and pull phases in order to receive every message with high probability. By using Random Linear Network Coding, we aim at increasing the usefulness of exchanged messages.

d) Random Linear Network Coding: To improve dissemination, some protocols use erasure coding [20] [19] or Random Linear Network Coding [21] but need encoding at the source or message ordering, which limits these techniques to single-source scenarios. Theoretical bounds have also been studied for multi-sender scenarios [15] [16] but they do not consider generations and suppose that messages are previously ordered. Network coding is also used in wireless networks [22] [23] but with different system models, as emitted messages can be received by every node within range.

Applying RLNC gossip in a multi-sender scenario implies determining to which generation a message will belong without additional coordination, and finding a way to link network coding coefficients to their respective original messages inside a generation.

III. BACKGROUND ON NETWORK CODING AND CHALLENGES

[Figure 1: A and B send the plain messages M1 and M2; C forwards random linear combinations of M1 and M2 to D and E, which also hold a plain message each.]
Fig. 1: With RLNC, C can send useful information to D and E without knowing what they have received.

RLNC [24] provides a way to combine different messages on a network to improve their dissemination speed by increasing the chance that receiving nodes learn something new. In Figure 1, node C cannot know what D and E have received. By sending a linear combination of M1 and M2, nodes D and E can respectively recover M2 and M1 with the help of the plain message they also received. Without RLNC, node C would have to send both M1 and M2 to D and E, involving two more messages. Every message must have the same size, defined as L bits hereafter. To handle messages of different sizes, it is possible to split or pad a message to reach a final size of L bits.

The message content has to be split into symbols over a field F_{2^n}. The field F_{2^8} has interesting properties when doing RLNC. First, a byte can be represented as a symbol in this field. Moreover, this field is small enough to speed up some computations with discrete logarithm tables and at the same time sufficiently large to guarantee linear independence of the random coefficients with very high probability.
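To make the field arithmetic concrete, here is a minimal sketch of F_{2^8} operations in Python. The choice of the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) is our assumption, as the paper does not specify which irreducible polynomial is used; a table-driven implementation would replace the shift-and-reduce loop with log/antilog lookups, as the text suggests.

```python
# Sketch of GF(2^8) symbol arithmetic for RLNC. Assumption: the AES
# reduction polynomial 0x11B (the paper does not name a polynomial).

def gf_add(a: int, b: int) -> int:
    # Addition (and subtraction) in GF(2^8) is bitwise XOR.
    return a ^ b

def gf_mul(a: int, b: int) -> int:
    # Carry-less "Russian peasant" multiplication, reduced modulo 0x11B.
    r = 0
    while b:
        if b & 1:
            r ^= a          # add current power-of-two multiple of a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B      # reduce back into 8 bits
        b >>= 1
    return r
```

With these two primitives, every coefficient and payload byte of a coded packet can be combined symbol by symbol.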
Encoded messages are linear combinations over F_{2^n} of multiple messages. This linear combination is not a concatenation: if the original messages are of size L, the encoded messages are of size L too. An encoded message carries a part of the information of all the original messages, but not enough to recover any original message. After receiving enough encoded messages, the original messages become decodable.

To perform the encoding, the sources must know n original messages defined as M_1, ..., M_n. Each time a source wants to create an encoded message, it randomly chooses a sequence of coefficients c_1, ..., c_n, and computes the encoded message X as follows: X = Σ_{i=1}^{n} c_i M_i. An encoded message thus consists of a sequence of coefficients and the encoded information: (c, X).

Every participating node can recursively encode new messages from the ones it received, including messages that have not been decoded. A node that received the encoded messages (c^1, X^1), ..., (c^m, X^m) can encode a new message (c', X') by choosing a random set of coefficients d_1, ..., d_m, computing the new encoded information X' = Σ_{j=1}^{m} d_j X^j and computing the new sequence of coefficients c'_i = Σ_{j=1}^{m} d_j c_i^j.

An original message M_i can be considered as an encoded message with the coefficient vector (0, ..., 1, ..., 0), where the 1 is at the i-th position. The encoding of a message can therefore be considered as a special case of the recursive encoding technique.

Even if there is no theoretical limit on the number n of messages that can be encoded together, there are two reasons to limit it. First, Gauss-Jordan elimination has an O(n^3) complexity, which rapidly becomes too expensive to compute. Second, the more messages are encoded together, the bigger the sequence of coefficients, while the encoded information remains stable in size. In extreme cases, this can result in sending mainly coefficients over the network instead of information. To encode more data, splitting messages into groups named generations solves both problems, as only messages of the same generation are encoded together.

However, applying network coding to epidemic dissemination raises several challenges.

a) Assigning a message to a generation: Generations often consist of integers attached to messages. Messages with the same generation value are considered to be in the same generation and can be encoded together. The value must be assigned in such a way that enough messages are in the same generation to benefit from the RLNC properties, but not too many, to keep the decoding complexity sufficiently low and to limit the size of the coefficients sent over the network. In a single-source scenario, the size of the generation is a parameter of the protocol, and is determined by counting the number of messages sent in a given generation. However, with multiple sources, there is no way to know how many messages have been sent in a given generation.

b) Sending coefficients on the network: Coefficients are generally sent in the form of a dense vector over the network. Each value in the vector is linked to a message. In a single-source scenario this is not a problem: the source knows in advance how many messages it will send, so it can assign each message a position in the vector and start creating random linear combinations. In the case of multiple sources, the number of messages is not available, and waiting to have enough messages to create a generation could delay message delivery; above all, the network traffic required to order the messages in the dense vector would ruin the benefits of network coding.

c) Pulling with RLNC: When doing pull-based rumor mongering, a node must have a way to ask for the rumors it needs. Without network coding, it simply sends a message identifier to ask for a message. But sending a message identifier in the case of network coding raises several questions: does the node answer only if it has decoded the message? Or if it can generate a linear combination containing this message?

d) Estimating the number of missing packets: Some algorithms need to estimate the number of missing messages. Without network coding, the number of missing packets corresponds to the number of missing messages. But with RLNC, it is possible to hold some linear combinations of a given set of messages without being able to decode them. All these messages are counted as missing, yet one packet could be enough to decode everything.

IV. CONTRIBUTION

A. System model

Our model consists of a network of n nodes that all run the same program. These nodes communicate over a unicast, unreliable and fully connected medium, such as UDP over the Internet. Nodes can join and leave at any moment; as no graceful leave is needed, crashes are handled like departures. We consider that each node can obtain the addresses of some other nodes of the service via a Random Peer Sampling service [25]. There is no central authority to coordinate nodes: all operations are fully decentralized and all exchanges are asynchronous. We use the term message to denote the payload that must be disseminated to every node of the network, and the term packet to denote the content exchanged between two nodes. We consider the exchange of multiple messages, and any node of the network can inject messages into the network without any prior coordination between nodes (messages are not pre-labeled).

B. Solving RLNC challenges

Due to our multiple independent sources model and our goal to cover push and pull protocols, we propose new solutions to the previously stated challenges.

a) Assigning generations with Lamport timestamps: With multiple senders, we need a rule that is applicable with the knowledge of a single node when assigning a generation, as relying on the network would be costly, slow and unreliable. We chose Lamport timestamps [26] to delimit generations, by grouping all messages with the same clock in the same generation. This method does not involve sending new packets, as the clock is piggybacked on every network-coded packet. When a node wants to disseminate a message,
it appends its local clock to the packet and updates it; when it receives a message, it uses its local clock and the message clock to update its clock. This efficiently associates messages that are disseminated at the same time with the same generation.

b) Sending coefficients in sparse vectors: When multiple nodes can send independent messages, they have no clue about which identifiers are assigned by the other nodes. Consequently, they can only rely on their local knowledge to choose their identifiers. Choosing identifiers randomly in a namespace where all possible identifiers are used would lead to conflicts. That is why we decided to use a bigger namespace, where conflict probabilities are negligible when identifiers are chosen randomly. With a namespace of this size, it is impossible to send a dense vector over the network; however, we can send a sparse vector: instead of sending a vector c_1, ..., c_n, we send m tuples corresponding to the known messages, each containing a message identifier and its coefficient: (id(M_{i1}), c_{i1}), ..., (id(M_{im}), c_{im}).

c) Pulling generations instead of messages: A node sends a list of the generations that it has not fully decoded to its neighbors. The target node answers with one of the generations it knows. To determine whether the information will be redundant, there is no other solution than asking the target node to generate a linear combination and trying to add it to the node's local matrix. During our tests, it appeared that blindly asking for generations did not increase the number of redundant packets compared to a traditional push-pull algorithm asking for a message identifier list, while greatly decreasing message sizes.

d) Counting needed independent linear combinations: To provide adaptiveness, some protocols need to estimate the number of useful packets required to receive all the missing messages. Without network coding, the number of needed packets corresponds to the number of missing messages. With network coding, partially decoded packets are also counted as missing messages, but decoding them requires fewer packets than there are missing messages. In this case, the number of useful packets needed corresponds to the number of independent linear combinations needed.

C. CHEPIN

To ease the integration of RLNC in gossip-based dissemination algorithms, we encapsulated some common logic in Algorithm 1. We represent a network-coded packet by a triplet ⟨g, c, e⟩, where g is the generation number, c an ordered set containing the network coding coefficients and e the encoded payload.

We define three global sets: rcv, ids and dlv. rcv contains a list of network-coded packets as described before; they are modified each time a new one is received so as to stay linearly independent until all messages are decoded. ids contains a list of known message identifiers in the form ⟨g, gid⟩, where g is the generation and gid is the identifier of the message inside the generation. By using this tuple as a unique identifier, we can reduce the number of bytes of gid, as the probability of collision inside a generation is lower than in the whole system. Finally, dlv contains a list of message identifiers similar to ids, but contains only the identifiers of decoded messages.

Algorithm 1 Process RLNC Packets
1: g ← 0 ▷ Encoding generation
2: rcv ← Set() ▷ Received packets
3: ids ← OrderedSet() ▷ Known message identifiers
4: dlv ← OrderedSet() ▷ Delivered message identifiers
5: function ProcessPacket(p)
6:   if p = ∅ then
7:     return False
8:   ⟨g1, c1, _⟩ ← p
9:   oldRank ← Rank(g1, rcv)
10:  rcv ← Solve(g1, rcv ∪ {p})
11:  if oldRank = Rank(g1, rcv) then
12:    return False ▷ Packet was useless
13:  for all ⟨id, _⟩ ∈ c1 do
14:    ids ← ids ∪ {⟨g1, id⟩} ▷ Register new identifiers
15:  for all ⟨g2, c2, e⟩ ∈ rcv do
16:    ⟨id, _⟩ ← c2[0]
17:    if g1 = g2 ∧ len(c2) = 1 ∧ ⟨g2, id⟩ ∉ dlv then
18:      dlv ← dlv ∪ {⟨g2, id⟩}
19:      Deliver(e) ▷ New decoded message
20:  if g1 > g ∨ Rank(g, rcv) ≥ 1 then
21:    g ← max(g1, g) + 1 ▷ Update Lamport clock
22:  return True ▷ Packet was useful

The presented procedure relies on some primitives. Rank returns the rank of a generation, by counting the number of packets associated with it. Solve returns a new list of packets after applying a Gaussian elimination on the given generation and removing redundant packets. Deliver is called to notify the node of a message (if the same message is received multiple times, it is delivered only once). The procedure updates the three global sets previously defined, delivers decoded messages and returns the usefulness of the given packet. Internally, the node starts by adding the packet to the matrix and performing a Gaussian elimination on the packet's generation (line 10); if adding the packet did not increase the matrix rank, the packet was useless and the processing stops there. Otherwise, the node adds the unknown message identifiers from the packet's coefficient list to the set of known identifiers. After that, the node delivers all the messages decoded thanks to the received packet and stores their identifiers in dlv. Finally, the node checks whether the clock must be updated.

Algorithms 2 and 3 show how the above procedure can be used to implement push and pull gossip protocols. For push, we do not directly forward the received packet, but
instead forward a linear combination of the received packet's generation after adding it to our received packet list. For pull, we request generations instead of messages. Like existing protocols, we keep a rotation variable that rotates the set of missing identifiers, allowing missing generations to be generated in a different order on the next execution of the code block.

Algorithm 2 Push
1: k, ittl ← ... ▷ Push fanout and initial TTL
2: dottl, dodie ← ... ▷ Push strategy
3: function SendToNeighbours(h, headers)
4:   for k times do
5:     p ← Recode(h, rcv)
6:     Send(PUSH, p, headers)

Algorithm 3 Pull
1: rotation ← 0 ▷ Rotation position
2: function NCPullThread(headers)
3:   ask ← OrderedSet()
4:   rotation ← (rotation + 1) mod |ids \ dlv|
5:   for all m ∈ Rotate(ids \ dlv, rotation) do
6:     ask ← ask ∪ Gen(m, rcv)
7:   Send(PULL, ask, headers)
8: function NCPull(asked, headers)
9:   p ← ∅
10:  if ∃ g ∈ asked, Rank(g, rcv) > 0 then
11:    p ← Recode(g, rcv)
12:  Send(PULLREPLY, p, headers)
13: upon receive PULL(asked, headers)
14:  NCPull(asked, GetHeaders())
15: upon receive PULLREPLY(p, headers)
16:  ids ← ids ∪ headers.tw
17:  NCPullReply(p)
18: thread every ∆pull
19:  NCPullThread(GetHeaders())

[Figure 2: grids of values over fanout (2–6) and TTL (2–6), panels (a) and (b), three grids per panel.]
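The Recode primitive used at line 5 of Algorithm 2 and line 11 of Algorithm 3 can be sketched as follows. This is our illustrative toy, written over the binary field for brevity (a random combination is then the XOR of a random non-empty subset of stored rows), whereas the real protocol draws random GF(2^8) coefficients:

```python
import random

# Toy sketch of Recode(g, rcv): emit a fresh random combination of the
# rows stored for generation g. rcv holds triplets (g, coeffs, payload),
# where coeffs is a set of message ids and payload an integer.

def recode(generation, rcv, rng=random):
    rows = [(set(c), x) for g, c, x in rcv if g == generation]
    if not rows:
        return None
    picked = [r for r in rows if rng.random() < 0.5]
    if not picked:                    # never send the empty combination
        picked = [rng.choice(rows)]
    coeffs, payload = set(), 0
    for c, x in picked:
        coeffs ^= c                   # XOR of coefficient rows (GF(2))
        payload ^= x                  # XOR of payloads
    return (generation, coeffs, payload)

rcv = [(0, {1}, 0b0101), (0, {2}, 0b0011)]
packet = recode(0, rcv)
```

Each call produces a different combination, so a node can keep sending useful packets for a generation without knowing what its neighbors already hold.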
[Figure 3: for each of (a) and (b), three panels over 0–10 s: packet rate (packet/s), bandwidth (KB/s) and delay (sec) vs. send time; (a) packet ratio 2.698, bandwidth ratio 2.120, delay max 3.6 s, mean 0.67 s; (b) packet ratio 2.267, bandwidth ratio 1.842, delay max 3 s, mean 0.55 s.]
Fig. 3: Comparison of exchanged packet rate, used bandwidth and message delay for 3a the original Pulp (k = 6, ttl = 4) and 3b our algorithm (k = 5, ttl = 4)
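Returning to the Solve and Rank primitives of Algorithm 1, the elimination over sparse coefficient vectors can be sketched as below. This is our toy version over the binary field (so row combination is XOR) rather than the GF(2^8) field actually used, and all names are illustrative:

```python
# Toy sketch of Solve/Rank with sparse coefficients over GF(2). A row is
# (coeffs, payload): coeffs is a set of message ids, payload the XOR of
# the corresponding original messages (integers here). basis maps each
# pivot id to one independent row.

def add_packet(basis, coeffs, payload):
    """Incremental Gaussian elimination; returns True iff the rank grew."""
    coeffs = set(coeffs)
    while coeffs:
        pivot = min(coeffs)
        if pivot not in basis:
            basis[pivot] = (coeffs, payload)  # new independent row
            return True
        b_coeffs, b_payload = basis[pivot]
        coeffs = coeffs ^ b_coeffs            # cancel the pivot
        payload ^= b_payload
    return False  # packet was a combination of already-known rows

def decoded(basis):
    """Back-substitute and return {message id: payload} for decoded rows."""
    rows = {p: (set(c), x) for p, (c, x) in basis.items()}
    changed = True
    while changed:
        changed = False
        for p, (c, x) in rows.items():
            for q in list(c - {p}):
                qc, qx = rows.get(q, (set(), 0))
                if qc == {q}:                 # q fully decoded: substitute
                    c.discard(q)
                    x ^= qx
                    rows[p] = (c, x)
                    changed = True
    return {p: x for p, (c, x) in rows.items() if c == {p}}

basis = {}
add_packet(basis, {1, 2}, 0b0101 ^ 0b0011)  # combination of M1 and M2
add_packet(basis, {2}, 0b0011)              # plain M2: now M1 is decodable
```

The rank of a generation is simply the number of basis rows, and a packet whose row reduces to zero is the "useless" case of line 12 of Algorithm 1.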
[Figure 4: three stacked time series over 0–10 s: bytes sent, avg. pull period (sec) and avg. missing messages.]
Fig. 4: How adaptiveness algorithms impact the protocol efficiency

[Figure 5: bytes sent as a function of message rate (50–300 msg/sec), latency multiplier (0.0–2.0) and number of nodes (0–1000).]
Fig. 5: Variation of network configuration

when the number of messages per second decreases, and so less useful.

At an extreme, when we have only one message per generation, we have the same model as Pulp: asking for a list of generation identifiers is similar to asking for a list of message identifiers in Pulp. The network-coded packets will contain a generation identifier, a sparse vector containing only one message identifier and one coefficient, and the message. As a generation identifier plus a message identifier of our algorithm has the same size as a message identifier in Pulp, the only overhead is the one-byte coefficient value.

We use an Overnet trace to simulate churn [28]. The trace contains more than 600 active nodes out of 900, with continuous churn of around 0.14143 events per second.

We replay this trace at different speeds to evaluate the impact of churn on the delivery delays of our messages, as plotted in Figure 6. We chose to re-execute the trace at different speeds: 500, 1000 and 2000 times faster, for respectively 71, 141 and 283 churn events per second. We see that the original Pulp algorithm is not affected by churn, as the average and maximum delivery delays stay stable and similar to those without churn. Considering the average delay, this is also the case for our algorithm, whose average delay did not evolve. The maximum delay does not evolve significantly either. However
[Figure 6: delivery delay vs. send time, showing the max, 95th, 75th, 50th, 25th and 5th percentiles, for Pulp and our network-coded (NC) algorithm under churn replayed at x500, x1000 and x2000; Pulp averages 0.67, 0.66 and 0.67 sec, NC averages 0.55 sec in all three cases.]

REFERENCES

[1] M. Jelasity and Ö. Babaoglu, "T-Man: Gossip-based overlay topology management," in Engineering Self-Organising Systems, Third International Workshop, ESOA 2005, Utrecht, The Netherlands, July 25, 2005, Revised Selected Papers, 2005, pp. 1–15.
[2] S. Bouget, Y.-D. Bromberg, A. Luxey, and F. Taïani, "Pleiades: Distributed Structural Invariants at Scale," in DSN 2018. Luxembourg, Luxembourg: IEEE, Jun. 2018, pp. 1–12.
[3] A. Lakshman and P. Malik, "Cassandra: A Decentralized Structured Storage System," SIGOPS Oper. Syst. Rev., vol. 44, no. 2, pp. 35–40, Apr. 2010.
[4] G. DeCandia, D. Hastorun, M. Jampani et al., "Dynamo: Amazon's highly available key-value store," in Proceedings of Twenty-first ACM SIGOPS Symposium on Operating Systems Principles, ser. SOSP '07. New York, NY, USA: ACM, 2007, pp. 205–220.
[5] A. Dadgar, J. Phillips, and J. Currey, "Lifeguard: SWIM-ing with situational awareness," CoRR, vol. abs/1707.00788, 2017.
[6] A. Das, I. Gupta, and A. Motivala, "SWIM: Scalable weakly-consistent infection-style process group membership protocol," in Proc. 2002 Intl. Conf. on Dependable Systems and Networks (DSN '02), 2002, pp. 303–312.
[7] M. Jelasity, A. Montresor, and O. Babaoglu, "Gossip-based aggregation in large dynamic networks," ACM Trans. Comput. Syst., vol. 23, no. 3, pp. 219–252, Aug. 2005.