
Eclipse Attacks on Overlay Networks: Threats and Defenses
Atul Singh∗ , Tsuen-Wan “Johnny” Ngan∗ , Peter Druschel† , and Dan S. Wallach∗
∗ Department of Computer Science, Rice University
† Max Planck Institute for Software Systems

Abstract— Overlay networks are widely used to deploy func- The overlay’s integrity depends on the ability of correct nodes
tionality at edge nodes without changing network routers. Each to communicate with each other over a sequence of overlay
node in an overlay network maintains connections with a number links. In an Eclipse attack [5], [37], a modest number of
of peers, forming a graph upon which a distributed application or
service is implemented. In an “Eclipse” attack, a set of malicious, malicious nodes conspire to fool correct nodes into adopting
colluding overlay nodes arranges for a correct node to peer the malicious nodes as their peers, with the goal of dominating
only with members of the coalition. If successful, the attacker the neighbor sets of all correct nodes. If successful, an Eclipse
can mediate most or all communication to and from the victim. attack enables the attacker to mediate most overlay traffic and
Furthermore, by supplying biased neighbor information during effectively “eclipse” correct nodes from each others’ view. In
normal overlay maintenance, a modest number of malicious
nodes can eclipse a large number of correct victim nodes. the extreme, an Eclipse attack allows the attacker to control
This paper studies the impact of Eclipse attacks on structured all overlay traffic, enabling arbitrary denial of service or
overlays and shows the limitations of known defenses. We censorship attacks.
then present the design, implementation, and evaluation of a The Eclipse attack is closely related to the Sybil attack [14],
new defense, in which nodes anonymously audit each other’s where a single malicious node assumes a large number of
connectivity. The key observation is that a node that mounts an
Eclipse attack must have a higher than average node degree.
different identities in the overlay. Clearly, a successful Sybil
We show that enforcing a node degree limit by auditing is an attack can be used to induce an Eclipse attack. However,
effective defense against Eclipse attacks. Furthermore, unlike Eclipse attacks are possible even in the presence of an effective
most existing defenses, our defense leaves flexibility in the defense against Sybil attacks, such as certified node identi-
selection of neighboring nodes, thus permitting important overlay ties [5]. In a decentralized overlay, nodes periodically discover
optimizations like proximity neighbor selection (PNS).
new neighbors by consulting the neighbor sets of existing
neighbors. Malicious nodes can exploit this by advertising
neighbor sets that consist of only other malicious nodes. Thus,
Overlay networks facilitate the deployment of distributed a small number of malicious nodes with legitimate identities
application functionality at edge nodes without the need to is sufficient to carry out an Eclipse attack.
modify existing network infrastructure. Overlays serve as a Castro et al. identify the Eclipse attack as a threat in
platform for many popular applications, including content structured overlay networks [5]. To defend against this attack,
distribution networks like BitTorrent, CoDeeN, and Coral [10], they propose the use of Constrained Routing Tables (CRT),
[16], [40], file-sharing systems like Gnutella, KaZaA, and which imposes strong structural constraints on neighbor sets.
Overnet/eDonkey [18], [23], [30] and end-system multicast In this defense, nodes have certified, random identifiers and
systems like ESM, Overcast, NICE, and CoolStreaming [1], a node’s neighbor set contains nodes with identifiers closest
[8], [22], [42]. Moreover, a large number of research projects to well-defined points in the identifier space. The certified
explore the use of overlays to provide decentralized network identifiers prevent Sybil attacks, and the CRTs thwart Eclipse
services [25], [31], [33], [38], [43]. attacks. However, this defense leaves no flexibility in neighbor
Robust overlays must tolerate participating nodes that devi- selection and therefore prevents optimizations like proximity
ate from the protocol. One reason is that the overlay member- neighbor selection (PNS) [6], [20], an important and widely
ship is often open or only loosely controlled. Even with tightly used technique to improve overlay efficiency.
controlled membership, some nodes may be compromised due This paper presents a defense against Eclipse attacks based
to vulnerabilities in operating systems or other node soft- on anonymous auditing of nodes’ neighbor sets [35]. If a node
ware [44]. To deal with these threats, overlay applications can has significantly more links than the average, it might be
rely on replication, self-authenticating content [26], Byzantine mounting an Eclipse attack. When all nodes in the network
quorums [24], or Byzantine state machines [7] to mask the perform this auditing routinely, attackers are discovered and
failure or corruption of some overlay nodes. can be removed from the neighbor sets of correct nodes.
In an overlay network, each node maintains links to a The defense is applicable to homogeneous structured overlays;
relatively small set of peers called neighbors. All commu- experimental results indicate that it is highly effective and
nication within the overlay, be it related to maintaining the efficient for overlays with low to moderate membership churn,
overlay or to application processing, occurs on these links. i.e., with session times on the order of hours.

The rest of this paper is organized as follows. In Section II, we provide some background on overlay networks and their vulnerability to the Eclipse attack, and we present existing defense mechanisms. Section III describes our proposed defense. In Section IV, we discuss the auditing technique necessary to implement our defense. Section V presents experimental results on the impact of Eclipse attacks, the limitations of existing defenses, and the effectiveness of our auditing technique. Section VI discusses the results, Section VII covers related work, and Section VIII concludes.

II. BACKGROUND

In this section, we provide some background on overlay networks, and we discuss prior work in making overlays robust to Eclipse attacks. Overlay networks consist of a set of nodes connected by a physical network like the Internet. Each node in an overlay maintains a neighbor set, consisting of a set of peer nodes with which the node maintains connections. The union of the neighbor sets among all participating nodes forms the overlay graph.

A. Unstructured Overlays

Unstructured overlays (e.g., [18], [23]) impose no constraints on neighbor selection. Typically, a joining node starts from a bootstrap node and discovers additional neighbors by performing random walks in the overlay. Malicious nodes can trivially bias the neighbor selection of correct nodes, for instance, by steering random walks towards other malicious nodes. Thus, unstructured overlays are vulnerable to Eclipse attacks.

B. Structured Overlays

Structured overlay networks (e.g., [25], [31], [33], [38], [43]) maintain a specific graph structure that enables reliable object location within a bounded number of routing hops. Each node is assigned a unique, pseudo-random identifier from a large numeric space, e.g., the set of positive 160-bit integers. The identifier of a node determines its eligibility as a neighbor for any given other node: a node selects its neighbors from the set of nodes whose identifiers satisfy certain constraints relative to its own identifier. For instance, the neighbor sets in Tapestry and Pastry form a matrix, where the ith row refers to nodes whose ids share a prefix of i digits, and the jth column contains nodes whose (i + 1)th digit is j. Structured overlays also have a mechanism to locate the live node whose id is closest to a desired point in the id space. This mechanism ensures that a query for a randomly chosen id has about the same chance of yielding a malicious node as the probability that a randomly selected node is malicious.

The constraints differ for different neighbor set slots, and some are more restrictive than others. Generally, nodes are more flexible in choosing neighbors for the low rows, since they require shorter prefix matches. In decentralized overlays, nodes receive membership information from their peers, and correct nodes will in turn pass along this information to their own peers. If a malicious node advertises a neighbor set that consists only of other malicious nodes, correct nodes will thus propagate this biased membership information and, over time, the malicious nodes attract a large number of correct neighbors. It is this flexibility in the choice of neighbors that malicious nodes can exploit to mount an Eclipse attack on structured overlays, unless an appropriate defense is in place.

C. Existing Defenses

Eclipse attacks could easily be prevented by using a trusted, centralized membership service (e.g., the tracker in BitTorrent [10]). Such a service keeps track of the overlay membership and offers unbiased referrals among nodes that wish to acquire overlay neighbors. Obviously, such a service requires dedicated resources and raises concerns about scalability, reliability, and availability, and it is undesirable in many environments. Decentralized defenses against the Eclipse attack have therefore been proposed that place additional constraints on neighbor selection. They fall into two categories: structural constraints and proximity constraints.

1) Stronger structural constraints: Overlays like CAN [31], the original Chord [38], and Pastry with a constrained routing table (CRT) [5] impose strong structural constraints on the neighbor sets. Each neighbor set member is defined to be the overlay node with identifier closest to a particular point in the identifier space. This constraint defeats Eclipse attacks under two conditions. First, each node must have exactly one unique, unforgeable identifier, issued by a trusted, off-line authority that issues cryptographically signed identifiers [5], thereby preventing Sybil attacks. Second, the nodes that satisfy the structural constraints for a given neighbor set slot must be an unbiased sample of the overlay population; since a small number of malicious nodes cannot place themselves closest to all the required points in the identifier space, it is difficult for them to mount an Eclipse attack. Castro et al. [5] described a secure routing primitive that builds on this defense through a combination of trusted id certification, id density tests, and redundant routing probes using the constrained routing tables. However, the secure routing primitive has significant overhead. Moreover, the main drawback of using strong structural constraints to defend against Eclipse attacks is that it removes flexibility in neighbor selection, preventing important overlay optimizations like PNS.

2) Proximity constraints: Hildrum and Kubiatowicz [21] describe a different defense against the Eclipse attack based on proximity neighbor selection. Each node selects as its neighbors the nodes with minimal network delay. Since a small number of malicious nodes cannot be within a low network delay of all correct nodes, it is difficult for them to mount an Eclipse attack. This defense assumes that the delay measurements cannot be manipulated by the attacker. Moreover, it is effective only if pairs of nodes are sufficiently separated in the delay space. It was observed, however, that from the perspective of a typical Internet host, a large number of other nodes appear within a narrow band of delay [20]. For example, if the resolution of delay measurements is 1ms and a large fraction of overlay nodes are within 1ms of each other, then the defense is not effective, as our experimental results in Section V-B show.
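The prefix-based row/column constraint on Pastry-style routing tables described in this section can be illustrated with a short Python sketch. This is our illustration, not code from the paper; `routing_slot` is a hypothetical helper name:

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common digit prefix of two node identifiers."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def routing_slot(my_id: str, other_id: str):
    """Row/column a candidate neighbor occupies in a Pastry-style
    routing table: row i is the shared prefix length, and the column
    is the candidate's (i + 1)th digit."""
    i = shared_prefix_len(my_id, other_id)
    if i == len(my_id):
        return None  # identical id: not a routing table entry
    return i, int(other_id[i], 16)

# Node ids as hexadecimal digit strings (16 columns per row).
print(routing_slot("65a1fc", "65b2d0"))  # (2, 11): shared prefix "65", next digit 'b'
```

The longer the required shared prefix (higher rows), the fewer candidate nodes qualify for a slot, which is why the low rows leave the most freedom of choice for neighbor selection.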

In summary, we observe that maintaining strict structural constraints provides an effective defense against Eclipse attacks, but it introduces additional overhead and prevents important performance optimizations like PNS. Network proximity based defenses, on the other hand, depend on high-resolution delay measurements, and they may be effective only for small overlays; our experimental results show that with realistic delay values measured in the Internet, the effectiveness of the PNS-based defense diminishes with increasing overlay size.

III. ENFORCING DEGREE BOUNDS

In this section, we describe a new defense against Eclipse attacks based on enforcing a node degree bound. The defense is based on a very simple observation: during an Eclipse attack, the in-degree of attacker nodes in the overlay graph must be much higher than the average in-degree of correct nodes. One way to prevent an Eclipse attack is therefore for correct nodes to choose neighbors whose in-degree is not significantly above average. However, it is not sufficient to bound node in-degrees. An Eclipse attack is most effective when most of the out-degree of correct nodes is consumed by malicious nodes; malicious nodes could try to consume all the in-degree of correct nodes, thereby preventing correct nodes from choosing other correct nodes as their neighbors. To prevent an attacker from consuming the in-degree of correct nodes, it is also necessary to bound the out-degree of nodes. Thus, correct nodes choose as neighbors only nodes whose in-degree and out-degree are below a certain threshold, among the set of nodes that satisfy any structural constraints imposed by the overlay protocol. Since malicious nodes seek to attract as many correct neighbors as possible, the degree of such nodes naturally tends towards the allowed bound.

A. Analysis

In this section, we show that bounding node degrees bounds the expected fraction of malicious entries in the neighbor sets of correct nodes. Let f be the fraction of malicious nodes in the overlay, so that an overlay of N nodes contains N·f malicious nodes and N(1−f) correct nodes. Let the expected out-degree of correct nodes be O_exp, and assume that we can successfully bound the in-degree of every node to I_max = t·O_exp, for some constant t ≥ 1. The total out-degree of the correct nodes is N(1−f)·O_exp, and the total in-degree of the malicious nodes is at most N·f·I_max. Let f' be the fraction of the out-degree of correct nodes consumed by malicious nodes. Then

  f'·N(1−f)·O_exp ≤ N·f·I_max,  and therefore  f' ≤ f·t/(1−f).

If we bound the out-degree and in-degree of every node to the expected size of the neighbor set (i.e., t = 1), then the expected fraction of malicious entries in the neighbor sets of correct nodes is bounded by f/(1−f).

B. Mechanisms to Enforce Degree Bounds

The next important question is how to enforce the degree bound. The obvious solution of a centralized service that keeps track of each overlay member's degree suffers from the same problems as the centralized membership service discussed in Section II. A practical alternative is a distributed mechanism, where participating nodes are responsible for monitoring each other's degree. Such a mechanism is the subject of the next section.

IV. ANONYMOUS AUDITING

We enforce the degree bound through distributed auditing. In this section, we describe the auditing technique and a mechanism to preserve the anonymity of the auditor, a necessary building block of our defense, and we analyze the effectiveness of anonymous auditing.

A. Overview

Each node in the system periodically audits neighboring nodes to ensure compliance with the degree bound. Our technique requires that each participating node carries a certificate binding its node identifier to a public key. It is further assumed that the overlay supports a secure routing primitive using a constrained routing table (CRT), as described in Section II-C.

Each node x in the overlay is required to maintain a list of all the nodes that have x in their neighbor set. We refer to this list as the backpointer set of x. Periodically, x anonymously challenges each member of its neighbor set by asking it for its backpointer set. If the number of entries in the returned backpointer set is greater than the in-degree bound, or if x does not appear in the returned set, then the auditee has failed the test. When an auditing node finds one of its neighbors not in compliance with the degree limit, it immediately drops the connection to that neighbor and removes that member from its neighbor set. In the general case of an overlay where the neighbor relation is not symmetric, a similar auditing procedure is performed to ensure that the members of a node's backpointer set maintain a neighbor set of the appropriate size.

To ensure that replies to an audit challenge are fresh and authentic, x includes a random nonce in the challenge. The auditee includes the nonce in its reply and digitally signs the response. When x receives the reply, it checks the signature and the nonce before accepting the reply. Asserting freshness and authenticity ensures that a correct node cannot be framed by a malicious node that fabricates or replays audit responses.

For the auditing to be effective, it is essential that the identity of the auditor remains hidden from the auditee; otherwise, a malicious auditee could easily produce a fake response of the allowed size that includes the auditor. We therefore need a communication channel that hides the auditor's identity. Although there are many general-purpose anonymous channel mechanisms for overlay networks [13], [17], [27], [32], [45], the anonymity requirements of our auditing mechanism are weaker than those provided by these mechanisms.
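The audit exchange can be sketched in a few lines of Python. This is a toy illustration of the protocol steps, not the authors' implementation; `sign` is a keyed-hash stand-in for the real digital signatures that the certified public keys would provide, and names like `Node.backpointers` are our assumptions:

```python
import hashlib
import os

def sign(key: bytes, msg: bytes) -> bytes:
    # Keyed-hash stand-in for a real digital signature; the paper
    # assumes certified public keys, which we elide for brevity.
    return hashlib.sha256(key + msg).digest()

class Node:
    def __init__(self, node_id: str, in_degree_bound: int):
        self.id = node_id
        self.key = os.urandom(16)
        self.bound = in_degree_bound
        self.backpointers = set()   # nodes that list us as a neighbor

    def answer_challenge(self, nonce: bytes):
        """Reply with the backpointer set, echoing the nonce (freshness)
        and signing the response (authenticity)."""
        msg = nonce + b"|" + ",".join(sorted(self.backpointers)).encode()
        return self.backpointers, sign(self.key, msg)

def audit(auditor_id: str, auditee: Node) -> bool:
    """One (anonymously relayed) audit challenge; True iff the auditee
    passes. Verification here reuses auditee.key; a real deployment
    would verify with the auditee's certified public key."""
    nonce = os.urandom(16)
    bp_set, sig = auditee.answer_challenge(nonce)
    msg = nonce + b"|" + ",".join(sorted(bp_set)).encode()
    if sig != sign(auditee.key, msg):   # stale or forged reply
        return False
    if len(bp_set) > auditee.bound:     # in-degree bound violated
        return False
    return auditor_id in bp_set         # auditor must appear in the list

z = Node("z", in_degree_bound=4)
z.backpointers = {"x", "w"}
print(audit("x", z))   # True: x is listed and the set is within the bound
print(audit("v", z))   # False: v does not appear in z's backpointer set
```

Note that the last check is exactly why auditor anonymity matters: a cheating auditee that knew the challenger's identity could return a truncated set that always includes the auditor.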

In particular, it is sufficient to ensure that the identity of the challenger is hidden only most of the time: an occasional failure of sender anonymity can only moderately increase the probability that a correct node is considered malicious in an individual audit. We designed an anonymous channel that exploits these weaker requirements to reduce overhead. To obscure the sender's identity, each challenge is relayed through an intermediate node: an auditor x picks a random node y, called an anonymizer, to relay a challenge to the auditee z. The mechanism used to choose such an anonymizer node is described in Section IV-C.

B. Analysis

Next, we determine the probability that a malicious node is detected and the probability that a correct node is falsely blamed. We consider a worst-case adversarial model in which all malicious nodes collude to defeat the auditing, and the adversary seeks to maximize the in-degree of its nodes. Let r be the overload ratio, i.e., the ratio of the true size of the audited set to the maximal allowed size. We further assume that the intermediate nodes used by an auditor to relay challenges are malicious with probability f.

Suppose a correct node x wants to audit a node z in its routing table through an anonymizer y. Consider a simple test where x marks node z suspicious if either (a) it did not receive any response after a sufficient timeout, (b) the returned set does not contain x, or, finally, (c) the returned set size is greater than the degree bound. There are four cases to consider:

• Case 1: z is malicious, y is correct: z does not know the auditor's identity, so z can either guess or not respond at all. A malicious node can respond with a random subset of the maximal allowed size, drawn from its true set. The probability that this randomly selected subset passes the audit, i.e., contains the auditor, is 1/r.
• Case 2: z is malicious, y is malicious: y colludes with z, allowing z to craft a custom response for x. Thus, x will not detect that z is not in compliance.
• Case 3: z is correct, y is malicious: y can drop the challenge or the response, causing x to falsely suspect z.
• Case 4: z is correct, y is correct: z is in compliance and the audit always succeeds.

Intuitively, cases 1 and 2 suggest that auditing z only once is not sufficient, since y could be malicious and colluding with z, or z could guess correctly. Likewise, case 3 shows that a correct node can fail an individual audit if the intermediate node is malicious. Therefore, x needs to perform several audits through different anonymizers at random times before z can be considered to be in compliance or in violation of the degree bound. To prevent a malicious node from correlating different challenges from the same auditor, auditors randomize the interval between subsequent challenges. A node is audited regularly only by the members of its neighbor and backpointer sets; since the audits are independent of each other, it is sufficient to ensure that any challenge is equally likely to come from any member of these sets. The audits can then be viewed as Bernoulli trials [12].

Next, we determine the probability that a malicious node is detected. Consider the possible adversarial strategies when a malicious node is audited: the malicious node answers a challenge with probability c, responding with a random subset as above, and does not respond with probability 1 − c. From x's perspective, each challenge then has four possible outcomes:

1) With probability f, the anonymizer is colluding with the auditee, and the malicious node passes the challenge.
2) With probability (1 − f)c/r, the malicious node's random response includes the auditor and passes the challenge.
3) With probability (1 − f)c(1 − 1/r), the malicious node's random response does not include the auditor and fails the challenge.
4) With probability (1 − f)(1 − c), the malicious node does not respond. This case is indistinguishable from the case where the challenge or the response was dropped by the network.

Consider a strategy where a node is considered malicious if it answers fewer than k out of n challenges correctly. For a malicious node to pass an audit consisting of n challenges, there must be at least k instances of case 1 or 2 and not a single instance of case 3 (i.e., all remaining cases are of type 4). Therefore, the probability that a malicious node passes the audit undetected is

  ∑_{i=k}^{n} C(n,i) [f + (1−f)c/r]^i [(1−f)(1−c)]^{n−i}.   (1)

A correct node, on the other hand, can fail an individual challenge only if the intermediate node is malicious. The probability that a correct node is falsely suspected is therefore at most

  ∑_{i=0}^{k−1} C(n,i) (1−f)^i f^{n−i}.   (2)

Given an upper bound on f, we can fix n and pick k so as to minimize the probability of falsely blaming a correct node while still effectively detecting malicious nodes. For instance, by setting n = 24 and k = 12 and assuming f ≤ 0.2, the probability of falsely blaming a correct node is less than 0.02%.

Fig. 1 and Fig. 2 show the probability that a malicious node is detected for different settings of n, k, and f. All results reflect the setting of c that minimizes the probability of detection, that is, the setting that is optimal from the attacker's perspective. Note that the larger the overload ratio, the less likely a randomly selected subset is to contain the auditor, and thus the easier it is to detect a malicious node. This guides the choice of the various parameters, which also determine the time to detect a node with excessive degree.

Finally, an auditor waits for a random delay between discovering an anonymizer node and using that anonymizer to relay a challenge. This prevents a malicious node from inferring the source of an audit challenge by correlating the discovery traffic with the challenge traffic.
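Equations (1) and (2) are straightforward to evaluate numerically. The sketch below is our code, not the paper's, and simply sums the two binomial-style tails for the parameters discussed above:

```python
from math import comb

def p_malicious_passes(n: int, k: int, f: float, r: float, c: float) -> float:
    """Eq. (1): probability a malicious auditee survives an audit of n
    challenges. A challenge passes via a colluding anonymizer (prob. f)
    or a lucky random subset (prob. (1-f)*c/r); unanswered challenges
    (prob. (1-f)*(1-c)) are tolerated as long as at least k pass."""
    p_pass = f + (1 - f) * c / r
    p_silent = (1 - f) * (1 - c)
    return sum(comb(n, i) * p_pass**i * p_silent**(n - i)
               for i in range(k, n + 1))

def p_correct_blamed(n: int, k: int, f: float) -> float:
    """Eq. (2): probability a correct node answers fewer than k of n
    challenges, which happens only when malicious relays drop them."""
    return sum(comb(n, i) * (1 - f)**i * f**(n - i) for i in range(k))

n, k, f, r = 24, 12, 0.2, 1.2
print(p_correct_blamed(n, k, f))           # small false-blame probability
# If the attacker always answers (c = 1), the sum in Eq. (1) collapses
# to the single i = n term, (f + (1-f)/r)**n.
print(p_malicious_passes(n, k, f, r, c=1.0))
```

An attacker would tune c to maximize `p_malicious_passes`; sweeping c over (0, 1] reproduces the "optimal c per data point" evaluation mentioned above.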

Fig. 1. Probability of detecting a malicious node with different numbers of challenges per audit.

Fig. 2. Probability of marking a malicious/correct node suspicious vs. the fraction of malicious nodes. The optimal setting of c was determined empirically for each data point.

Requiring a node to answer at least half of the challenges correctly, i.e., picking k = n/2, works well in practice. For example, by setting n = 24 and k = 12, malicious nodes with an overload ratio r ≥ 1.2 are detected with probability at least 95.2%.

C. Discovery of Anonymizer Nodes

A critical element of our distributed anonymous auditing mechanism is the selection of anonymizer nodes. The selection mechanism must ensure that the anonymizer node is malicious with probability no more than f, and that the anonymizer node used by an auditor reveals nothing about the identity of the auditor. We consider three techniques to select anonymizer nodes, illustrated in Fig. 3.

Random node: A node is chosen randomly from the entire population of the overlay before each audit, shown in Fig. 3(a). This can be done by drawing a random number in the id space and using secure routing (see Section II-C) to select the live overlay node with id closest to that number. The problem with this approach is that a malicious node could potentially overhear the routing request from the auditor. If the chosen node is subsequently used in an audit challenge, the attacker can infer the source of the challenge with high probability.

Node closest to H(x): When a node wishes to audit x, it selects the node with id numerically closest to the hash H(x) of x's identifier (e.g., using SHA-1) as the anonymizer, shown in Fig. 3(b). Since all auditors of a given node use the same anonymizer node and have random challenge times, their challenges are interleaved, and malicious nodes that observe the discovery routing traffic will learn nothing about the source of a particular challenge. Moreover, once the anonymizer node is found, it can be used continuously, as long as it remains the node with id closest to H(x). However, if the chosen node happens to be malicious, then every audit of node x is ineffective.

Random node among the ℓ closest to H(x): Here, auditors use a random node among the ℓ nodes with ids closest to H(x) as the anonymizer for a given challenge, shown in Fig. 3(c). Since all auditors use the same set of nodes to select anonymizers and their challenges are interleaved, malicious nodes that observe traffic will learn nothing useful about who is performing any given audit, exactly as if all anonymizers were correct nodes. Because this last technique is the most robust, we used it in our auditing implementation. In order to maintain a sufficient anonymizer set size under churn (i.e., the arrival and departure of overlay nodes), all auditors periodically refresh their list by determining the latest set of ℓ nodes closest to H(x). We set ℓ to be equal to the number of challenges per audit.

Fig. 3. Different ways to select anonymizer nodes. Dark nodes represent the anonymizer nodes. Nodes A, B, and C audit node X.

How big must an anonymizer set be? Per our assumption that nodeIds are assigned randomly and cannot be forged, an anonymizer set represents a random set of nodes in the overlay, of which only a fraction f is malicious. For overlays with a sufficiently large number of nodes, the distribution of malicious nodes in the vicinity of a point in the identifier space can be approximated by a binomial distribution. Thus, the probability that at least half of the nodes in an anonymizer set of size n are malicious can be bounded above by

  ∑_{i=⌈n/2⌉}^{n} C(n,i) f^i (1−f)^{n−i}.

For example, assuming f ≤ 0.2, only 1% of all anonymizer sets of size 16 contain at least one-half malicious nodes, and only 0.1% of sets of size 24 do.
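The anonymizer-set construction and the binomial bound above can be sketched as follows. This is our illustration; the exact minimum set sizes the paper reports may differ slightly depending on the rounding and approximations used:

```python
import hashlib
import random
from math import ceil, comb

def anonymizer_set(target_id: str, live_ids, ell: int):
    """The ell live nodes whose ids are numerically closest to
    H(target_id) on the 160-bit identifier ring. Every auditor of
    target_id derives the same set, so challenges are interleaved."""
    h = int.from_bytes(hashlib.sha1(target_id.encode()).digest(), "big")
    space = 2 ** 160
    def ring_dist(node_id: str) -> int:
        d = abs(int(node_id, 16) - h)
        return min(d, space - d)
    return sorted(live_ids, key=ring_dist)[:ell]

def p_half_malicious(n: int, f: float) -> float:
    """Probability that at least half of a random set of n nodes is
    malicious, with an overall malicious fraction f."""
    return sum(comb(n, i) * f**i * (1 - f)**(n - i)
               for i in range(ceil(n / 2), n + 1))

def min_set_size(N: int, f: float = 0.2) -> int:
    """Smallest n for which the expected number of half-malicious
    anonymizer sets, N * p_half_malicious(n, f), drops below one."""
    n = 1
    while N * p_half_malicious(n, f) >= 1:
        n += 1
    return n

live = [format(random.getrandbits(160), "040x") for _ in range(200)]
aset = anonymizer_set("some-node-id", live, ell=16)
relay = random.choice(aset)   # one randomly chosen anonymizer per challenge
```

Because the set is derived from H(x) rather than from the auditor's identity, refreshing it under churn reveals nothing about who intends to send the next challenge.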

V. EVALUATION

In this section, we set out to answer the following questions empirically:
• How serious are Eclipse attacks on structured overlays?
• How effective is the existing defense based on proximity neighbor selection (PNS) against Eclipse attacks?
• Is degree bounding a more effective defense? What is the impact of degree bounding on the performance of PNS?
• Is distributed auditing effective and efficient at bounding node degrees?

A. Experimental Setup

We use MSPastry [28], which comes with a packet-level discrete-event simulator. Unless otherwise specified, Pastry was configured with routing base b = 4 and leaf set size ℓ = 16. Pastry implements proximity neighbor selection (PNS) [6] and an optimized version of Castro et al.'s secure routing primitive, described in [36]. Two different physical network topology models were used in the simulations: the GT-ITM transit-stub topology model, consisting of 5050 routers [41], and a set of measured pair-wise latency values for up to 10,000 real Internet hosts, obtained with the King tool [19]. Unless stated differently, the GT-ITM model was used. The fraction of malicious nodes is set to f = 0.2 unless otherwise specified, and the overlay membership is static unless otherwise stated. Malicious nodes collude to maximize the number of entries referring to malicious nodes in the neighbor sets of correct nodes; correct nodes initialize their routing tables to refer to good nodes whenever possible.

To ensure that the expected number of anonymizer sets that contain at least one-half malicious nodes is less than one, in an overlay of size N we need to choose the size of the anonymizer set n to satisfy

    N · Σ_{i=⌈n/2⌉}^{n} C(n, i) · f^i · (1 − f)^{n−i} < 1.

For instance, for N = 1,000, 10,000, and 100,000, we need to choose n to be 13, 21, and 29, respectively. Alternatively, we can dynamically size the anonymizer sets to balance the probability of having an anonymizer set with at least one-half malicious nodes against the overhead of maintaining such a set; the probability drops exponentially with increasing anonymizer set size.

B. Eclipse Attacks

First, we evaluate the effectiveness of an Eclipse attack on a Pastry overlay when PNS is turned off, i.e., when nodes pick neighbors regardless of the delay in the physical network. As part of the overlay maintenance protocol, malicious nodes misroute join messages of correct nodes and supply references exclusively to other malicious nodes, in order to consume as much of the in-degree of good nodes as possible. Malicious nodes maintain an out-degree of 16 per routing table row, which is the optimal strategy from the perspective of the attacker: experiments we performed show that any attempt to use a larger out-degree leads to a quicker detection of the malicious nodes. Without PNS, an Eclipse attack is extremely effective. In the top row of the routing table, where the constraints on a neighbor's identifier are weakest, the fraction of malicious entries is over 90% for a 1000-node overlay and approaches 100% for overlays of 10,000 nodes or more.

Next, we evaluate the impact of an Eclipse attack with PNS enabled. Fig. 4 shows the fraction of malicious nodes in the routing tables of correct nodes at different overlay sizes vs. simulation time. Observe that with PNS, the fraction of malicious nodes over the full routing table drops to less than 30% within 10 hours of simulation for all system sizes, and the fraction in the top row drops from 78% to 41% for a 10,000-node overlay (Figs. 6 and 7). These results suggest that PNS, by itself, reduces the impact of an Eclipse attack and diminishes its overall effect. However, PNS benefits from the good separation of nodes in the delay space of the GT-ITM topology. To see if the result holds up when real delays measured in the Internet are used, we repeated the experiment with the King delay sets. Fig. 5 shows that the effectiveness of PNS in defending against Eclipse attacks is then significantly reduced: in the actual Internet, a large fraction of nodes lie within a small delay band, making PNS less effective as a defense mechanism.

Fig. 4. Fraction of malicious nodes in routing tables of correct nodes, at different network sizes for GT-ITM topology.

Fig. 5. Fraction of malicious nodes in routing tables of correct nodes, at different network sizes with King latencies.

Fig. 6. Fraction of malicious nodes in top row of routing table of correct nodes, at different network sizes for GT-ITM topology.

Fig. 7. Fraction of malicious nodes in top row of routing table of correct nodes, at different network sizes with King latencies.
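The anonymizer-set sizing condition above is easy to evaluate numerically. The sketch below is our own illustration (Python; the function names are ours, and it reads "one-half malicious" as a strict majority of the set, since an audit passes with at least n/2 correct responses — that tie-breaking convention is an assumption, not stated by the paper):

```python
from math import comb

def expected_bad_sets(N, f, n):
    """Expected number of the N anonymizer sets in which more than
    half of the n members are malicious, given malicious fraction f.
    (Assumption: 'one-half malicious' read as a strict majority.)"""
    tail = sum(comb(n, i) * f**i * (1 - f)**(n - i)
               for i in range(n // 2 + 1, n + 1))
    return N * tail

def min_anonymizer_set_size(N, f, n_max=64):
    """Smallest n for which the expected number of majority-malicious
    anonymizer sets falls below one."""
    for n in range(1, n_max + 1):
        if expected_bad_sets(N, f, n) < 1:
            return n
    return None
```

Under this strict-majority reading, a 2000-node overlay with f = 0.2 needs n of roughly 22 — in the same range as, though not identical to, the n = 24 used in Section V-D; the exact minimum shifts by a couple of members depending on how a 50/50 split is counted, which is why the convention is flagged as an assumption.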

We conclude that a PNS-based defense against Eclipse attacks alone will not be effective in the real Internet, requiring better mechanisms to defeat such attacks.

C. Effectiveness of Degree Bounding

To measure the effectiveness of degree bounding, we first perform an idealized experiment where the degree bounds are perfectly maintained, i.e., no anonymous auditing is performed; instead, we use an oracle to determine if a node has exceeded its degree bounds. This allows an evaluation of an "ideal" degree-bounding defense, independent of any particular implementation of bounds enforcement. Correct nodes check a node's in-degree before adding it to the routing table during a routing update, and they refuse to increase the in-degree of a node that is already over the limit. Malicious nodes attempt to maximize their in-degree, i.e., to consume a higher fraction of the out-degree of correct nodes. We varied the in-degree limit from 16 through 64 entries per routing table row.

Table I presents the average fraction of malicious routing table entries at the different degree bounds; the fraction of malicious entries for each row was very similar. The results show that in-degree bounding is effective at maintaining a low fraction of malicious routing table entries when the bound is set to the expected average degree. With an in-degree bound t times the out-degree bound, this fraction is approximately f·t/(1 − f), i.e., 0.25 for t = 1. The effectiveness of the defense decreases when the overlay is large and the in-degree bound t is loose, since malicious nodes are able to exploit the loose in-degree bounds. It is easiest for the malicious nodes to supply suitable entries for routing table rows with weak constraints on the identifier; moreover, the entries in row zero are used most often in routing a node's own messages, which explains the high fraction of entries referring to malicious nodes in the results for row zero. We conclude that in-degree bounding is highly effective against Eclipse attacks so long as a tight in-degree bound is enforced; without degree bounding, an Eclipse attack can be highly effective at intercepting overlay traffic.

TABLE I. FRACTION OF MALICIOUS NODES IN CORRECT NODES' ROUTING TABLES WITH DIFFERENT DEGREE BOUNDS PER ROW.

Since degree bounding puts extra constraints on the choice of neighbors, it can increase routing delays even in the absence of attacks. We therefore also evaluated the impact of in-degree bounding on routing delays under PNS, measuring the delay stretch, which is the ratio of overlay routing delay versus the direct IP delay. As shown in Fig. 8, we observed a delay stretch increase of approximately 25% in an overlay with 20,000 nodes, no malicious nodes, and an in-degree bound of 16 per row; the penalty decreases to about 8% for a bound of 32. The resulting increase in delay stretch is tolerable.

Fig. 8. Delay stretch versus the ratio of in-degree and out-degree bound for a network with 20,000 nodes.

D. Effectiveness of Auditing

Next, we evaluate the effectiveness of auditing, as described in Section IV, in defending against Eclipse attacks. Correct nodes enforce an in-degree bound of 16 per routing table row and per backpointer set row from the start of the simulation, while malicious nodes do not impose any limit. About once every two minutes, a node audits each of its overlay neighbors, as described in Section IV-B, with each neighbor being audited at a random instant in the 2-minute interval. We set n = 24 to ensure that the expected number of anonymizer sets with at least one-half malicious nodes is less than 1; out of the 24 challenges, the number of correct responses has to be at least 12 to pass an audit iteration. A new audit iteration is then started immediately. We assume malicious nodes employ the optimal strategy to maintain a high in-degree without getting caught: they respond to audits coming through correct anonymizer nodes with the optimal probability.

To evaluate the effectiveness of auditing, we simulate a network with 2000 nodes under both static membership and churn. We simulated churn rates of 0% (i.e., static membership), 5%, 10%, and 15% of the nodes per hour; the average lifetime of a node is then indefinite, 20, 10, and 7 hours, respectively. These churn rates are higher than those reported by Bolosky et al. [3] for a corporate environment, but the corresponding session times are still much longer than the session times reported for some file-sharing networks [2]. The target environment for our defense is thus an overlay with low to moderately high churn. Since the auditing rate is once every 2 minutes in our simulations and it takes 24 challenges to decide whether a node is suspicious, a node needs to be alive for about an hour for meaningful audit data to be collected; higher churn would require a higher auditing rate and proportionally higher overhead.

Because malicious nodes misroute join messages, the routing tables start with around 40% of the entries of correct nodes referring to malicious nodes (versus 20% of the nodes being malicious), and this fraction increases with time for the different churn settings until auditing is started. Auditing starts after 1.5 hours of simulated time; we then simulated approximately 10 hours of operation with auditing enabled.

1) In-degree distribution: Fig. 9 shows a cumulative distribution of in-degree for both correct and malicious nodes during an Eclipse attack, just before auditing starts. Before auditing has begun, malicious nodes have been able to acquire in-degrees far larger than 16. Within 2 simulated hours after auditing has begun, every node in the system, correct or malicious, has in-degree equal to or below the allowed bound of 16. This shows that our auditing scheme has successfully caught each and every case of a malicious node with excessive in-degree.

Fig. 9. In-degree distribution before auditing starts.

One concern is that a malicious node could leave the overlay just before an audit completes, to avoid being detected. However, such a node would lose all its overlay connections, which would defeat the goal of an Eclipse attack. The node could try to reestablish the connections to its previous neighbors, but the neighbors would continue their audits where they left off; caching audit state for a few days is sufficient for this purpose.

2) Detecting malicious nodes: Fig. 10 and Fig. 11 show the fraction of malicious neighbors over time in the entire routing table and in just the top row, respectively. With static membership, the fraction of malicious nodes drops to below 30% and 25% for the top row and the full routing table, respectively.

Fig. 10. Fraction of malicious nodes over the full routing table of correct nodes.

Fig. 11. Fraction of malicious nodes in the top row of the routing table of correct nodes.
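The core audit mechanic can be illustrated with a toy example. The sketch below is our own simplification (Python; the names are ours, and it collapses the paper's repeated, anonymized challenges through n = 24 anonymizers into a single direct round): a node whose in-degree exceeds the bound must either return an oversized backpointer set or omit some of its auditors, and every omitted auditor drops it, so its usable in-degree collapses to the bound.

```python
BOUND = 16  # per-row in-degree bound enforced by correct nodes

def audit_round(backpointers, claimed):
    """Each auditor (a node holding a backpointer to the audited node)
    asks for the backpointer set. It keeps its entry only if the
    returned set is within the bound and contains the auditor itself."""
    survivors = set()
    for auditor in backpointers:
        if len(claimed) <= BOUND and auditor in claimed:
            survivors.add(auditor)
    return survivors

# A malicious node has attracted 40 backpointers but may only admit 16.
backpointers = set(range(40))

# Strategy 1: acknowledge a fixed subset of the maximal allowed size.
remaining = audit_round(backpointers, set(range(BOUND)))
# The 24 omitted auditors detect the omission and drop the node, so its
# effective in-degree falls back to the bound.

# Strategy 2: claim everything. The oversized set fails every audit.
greedy = audit_round(backpointers, backpointers)
```

This is only the skeleton of the defense: the real protocol routes each challenge through an anonymizer set so the audited node cannot tell auditors from ordinary traffic, which is what prevents it from answering auditors selectively.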

Under churn, the newer correct nodes have not yet been sufficiently audited, and the fraction of malicious nodes in the top row for 15% churn remains above 30%. To explain why the fraction does not drop further, we considered the in-degree distribution of malicious nodes: the nodes with higher in-degree are relatively new nodes that have not yet been sufficiently audited relative to their lifetimes. We then doubled the auditing rate to once every minute. With this setting, the fraction of malicious nodes in the top row stabilized around 27%, an improvement of 3%. Auditing at a rate of once every minute under a 15% churn rate has the same effect as auditing once every 2 minutes at a churn of 10% per hour. This shows the fundamental tradeoff between churn rate and auditing rate: higher churn requires more auditing, while a lower auditing rate would reduce the cost but would also require a longer time to detect malicious nodes and would likewise be less effective under churn. We therefore conclude that auditing is effective in overlays with low to moderate churn.

3) Communication overhead: Fig. 12 shows the total overhead for maintaining the overlay, which includes the basic cost of maintaining the Pastry overlay with PNS, the cost of secure routing, and the cost of auditing, shown separately. In this experiment, the auditing rate is once every 2 minutes and the churn rate is set to 5% per hour. We first allow the system to run for 1.5 hours and then enable auditing; the spike in secure routing overhead at this time is the result of every node searching for the anonymizer nodes it will subsequently use. In our experiments, secure routing is used solely for the discovery of anonymizer sets. The figure shows that the absolute costs of auditing (2 msg/node/sec) and secure routing (0.2 msg/node/sec) are very low, as is the total maintenance cost (4.2 msg/node/sec). The auditing cost is proportional to the auditing rate.

Fig. 12. Total cost of maintaining the overlay.

4) False positives: One concern for our auditing scheme is false positives, i.e., correct nodes incorrectly marked as suspicious and removed from neighbor sets. With 10 hours of auditing, our scheme has only a 10^−3 false positive rate. This introduces a small background churn in the system, since correct nodes are discarded at a rate similar to the false positive rate. To ensure that false positives do not become a concern in a long-running system, we unmark 1 suspicious node every week in a 2000-node network; unmarked nodes are treated as new nodes and are freshly re-audited. The false positive rate could be reduced further by performing more audits before reaching a conclusion (i.e., increasing the value of n), at the cost of a small increase in overhead.

E. Comparison with Previous Technique

The rest of this section compares degree bounding via auditing with the previously proposed defense by Castro et al. [5], which is based on a constrained routing table (CRT). We compare the two techniques in terms of network overhead and communication delay. Recall that in our scheme the CRT is necessary to implement secure routing, which is in turn needed to securely find anonymizer sets.

1) Network overhead: The goal of this experiment is to compare the total cost of delivering a given application message to the correct destination. In Castro et al.'s technique, the maintenance cost of the overlay is the cost of maintaining two routing tables: one PNS and one CRT. In both techniques, an application message is first routed through the normal PNS-based routing table; if this fails, the message is then re-sent using secure routing over the CRT. We simulate an application that sends traffic at a constant rate, and we measure the total number of messages exchanged, including the maintenance cost and the secure routing cost for the two techniques.

Fig. 13. The cost of delivering application messages at different rates with and without degree bounding.

Fig. 14. Cumulative distribution of delays observed with and without degree bounding.
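The f·t/(1 − f) approximation used in the degree-bounding analysis of Section V-C follows from a simple slot-counting argument, sketched below (Python; the helper name is our own). Correct nodes offer (1 − f)·N·d outgoing slots per row, and malicious nodes can absorb at most f·N·t·d of them when their in-degree bound is t times the out-degree bound:

```python
def malicious_fraction_bound(f, t):
    """Approximate fraction of correct nodes' routing-table entries
    that can point to malicious nodes under ideal in-degree bounding:
    captured slots f*t*d*N over offered slots (1-f)*d*N, capped at 1.
    (A sketch of the slot-counting argument, not a simulation.)"""
    return min(1.0, f * t / (1 - f))

# With f = 0.2 and a bound equal to the out-degree (t = 1), the attacker
# is held to 25% of entries; a loose bound (t = 4) lets it dominate.
print(malicious_fraction_bound(0.2, 1))  # 0.25
print(malicious_fraction_bound(0.2, 4))  # 1.0
```

This also makes the tradeoff in Table I concrete: tightening t toward 1 squeezes the attacker toward the baseline fraction f, at the cost of the delay-stretch penalty measured in Fig. 8.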

Fig. 13 compares the total cost, in messages per node per second, of routing the application messages at different application data rates. The figure shows that the total overhead with auditing is lower, unless the application message rate is very low. Thus, unless the application rarely sends messages, the cost of auditing pays off. Also observe that the auditing technique reduces the reliance on (and thus the overhead of) the secure routing mechanism by more than half in most cases.

2) Communication delay: Another metric we used to compare the two schemes is the delay observed by the application. Fig. 14 presents the cumulative distribution of delays observed for the two techniques. The average delay with degree bounding is 3.8 seconds, while it is 5.4 seconds without degree bounding; more than 90% of messages were correctly delivered within 5.4 seconds with degree bounding, while delivering the same fraction without degree bounding took 16 seconds. The knees in the curves are due to the fact that, in our implementation, messages fall back to secure routing in discrete iterations [36].

VI. DISCUSSION

A. Adversary response strategy

To avoid detection, a malicious node may respond to an anonymous challenge with a probability of less than one; this is the strategy assumed in our experiments. An alternative strategy would be to always return a fixed subset of the maximal allowed size, i.e., to present a fixed subset of the node's true neighbor/backpointer set. Unlike the random subset strategy, the nodes included in the fixed subset will never suspect the malicious node, so such a malicious node's degree never drops below the bound. Neither strategy, however, allows malicious nodes to maintain a degree that is significantly above the average degree in the overlay; in our experiments, the fixed-subset strategy in some cases achieved a total degree that is 1–2% higher than with the random strategy.

B. Localized attacks

Instead of trying to attack all correct nodes in the overlay simultaneously, an adversary could attempt to mount a localized Eclipse attack, in which malicious nodes occupy row zero of the routing tables of only a small set of victim nodes. (Recall that row zero is the most flexible row, since it requires only a zero-digit prefix match.) Such an attack may not be detected by degree bounding, because it does not require malicious nodes to maintain a degree that is significantly above the average degree in the overlay. However, under PNS this type of attack is possible only if the colluding nodes have low network latency to the victim nodes, since to be included in that row the malicious nodes have to have sufficiently low delay to the target nodes. An attacker node can also remain in the system after attacking a subset of nodes while appearing to behave correctly to others. Effective defenses against such localized attacks in an overlay are an open research problem.

C. Limitations of auditing

Our technique uses deviations in the degree of a given node as an indication of a possible Eclipse attack, and it does not rely on any cryptographic functionality other than authenticated audit messages and certified node identifiers. With the proposed defense, each node independently discovers malicious nodes and removes them only from its own neighbor set; this makes the defense robust against auditing errors, but a detected malicious node is removed only locally. If a correct node could instead present a verifiable proof of misbehavior to other correct nodes, then it would be possible to remove the malicious node from the system as soon as it is detected. However, generating such proofs could require complex cryptographic operations or Byzantine agreement between correct nodes, and could also cause problems when the auditing system has false positives.

D. Auditing in unstructured overlays

So far, we have limited ourselves to defending against Eclipse attacks in structured overlays. The defense is not directly applicable to systems that use unstructured overlays: for auditing to work, there needs to be a mechanism to obtain the correct destination for a given key, so that anonymous challenges can be routed securely. A straightforward solution is to maintain an additional structured overlay alongside the unstructured overlay, used solely for the discovery of anonymizer sets. The cost of maintaining such a structured overlay can be made very low, as shown by Castro et al. [4].

E. Eclipse attacks on hierarchical overlays

Eclipse attacks on hierarchical overlay systems may require a different set of solutions. For example, superpeers in KaZaA [23] aggregate index information and maintain connections to a large number of ordinary nodes; by design, such nodes have a significantly higher-than-average node degree, and the system exploits this asymmetry deliberately for performance reasons. Unless such superpeers can be authenticated and implicitly trusted, they pose a security threat, since they are in a position to eclipse the entire overlay. As a related example, modern BitTorrent clients optionally use a Kademlia [25] structured overlay to maintain a "distributed tracker", which locates peers for the otherwise unstructured exchange of data [39]. Securing such heterogeneous p2p overlays is an interesting research problem.

VII. RELATED WORK

Sit and Morris [37] consider routing under attack in peer-to-peer systems. They propose iterative routing, where the source node repeatedly asks for the next hop of a message and contacts the nodes in the path successively. Castro et al. [5] propose the use of a routing failure test and redundant routing to improve the chance of successful routing, and they put strong structural constraints on the neighbor set to thwart Eclipse attacks. Castro et al. [6] and Gummadi et al. [20] show that proximity neighbor selection can provide good network locality properties in structured peer-to-peer overlays. However, the strong constraints on the neighbor sets required to defend against Eclipse attacks leave little flexibility in neighbor selection.

As noted by Chun et al. [9], the performance improvement from exploiting network proximity or node capacity comes at the price of increased vulnerability against targeted attacks.

Recently, Condie et al. [11] proposed a novel defense against Eclipse attacks based on induced churn. The idea is to periodically reset the PNS routing table to a constrained routing table, to rate limit the updates of routing tables, and to periodically change node identifiers, in order to mitigate the effect of malicious nodes infiltrating the routing tables of correct nodes. Unlike our design, this approach requires that node identifiers be changed periodically, which limits its applicability to systems that can deal with the resulting churn.

Other works achieve fault-tolerance through specially designed overlay structures. Hildrum and Kubiatowicz [21] propose the use of wide paths as an approach to fault-tolerance in peer-to-peer networks, where they add redundancy to the routing tables and use two nodes for each hop; they show that this provides better fault-tolerance per redundant overlay node than multiple paths. Saia et al. [34] and Naor and Wieder [29] also use ideas related to wide paths and recursive routing. Fiat and Saia [15] consider a butterfly network of virtual nodes, where fault-tolerance is achieved by having more than one starting point for each message.

VIII. CONCLUSIONS

This paper has shown that Eclipse attacks on overlays are a real threat: attackers can disrupt overlay communication by controlling a large fraction of the neighbors of correct nodes, even when they control only a small fraction of overlay nodes. Therefore, it is important to defend against Eclipse attacks. We have proposed a novel defense that prevents Eclipse attacks by using anonymous auditing to bound the degree of overlay nodes. This defense can be used in homogeneous structured overlays while still allowing flexibility in neighbor selection; in particular, it permits important optimizations like proximity neighbor selection. Experimental results show that the defense can prevent attacks effectively in overlays with moderate churn, and that it is more efficient than previously proposed techniques for typical systems and for all but very low application traffic.

IX. ACKNOWLEDGEMENTS

This work originated during an internship of the first author at Microsoft Research, Cambridge. We wish to thank Miguel Castro and Antony Rowstron for their ideas, advice, and support, and the anonymous reviewers for their helpful comments. This research was supported by Texas ATP (003604-0079-2001), by NSF (CNS-0509297 and ANI-0225660), and by Microsoft Research.

REFERENCES

[1] S. Banerjee, S. Lee, B. Bhattacharjee, and A. Srinivasan. Resilient multicast using overlays. In Proceedings of ACM SIGMETRICS, San Diego, CA, June 2003.
[2] R. Bhagwan, S. Savage, and G. Voelker. Understanding availability. In Proceedings of 2nd International Workshop on Peer-to-Peer Systems (IPTPS), Berkeley, CA, Feb. 2003.
[3] W. Bolosky, J. Douceur, D. Ely, and M. Theimer. Feasibility of a serverless distributed file system deployed on an existing set of desktop PCs. In Proceedings of ACM SIGMETRICS, June 2000.
[4] M. Castro, M. Costa, and A. Rowstron. Performance and dependability of structured peer-to-peer overlays. In Proceedings of International Conference on Dependable Systems and Networks (DSN 2004), Florence, Italy, June 2004.
[5] M. Castro, P. Druschel, A. Ganesh, A. Rowstron, and D. S. Wallach. Secure routing for structured peer-to-peer overlay networks. In Proceedings of USENIX Operating System Design and Implementation (OSDI), Boston, MA, Dec. 2002.
[6] M. Castro, P. Druschel, Y. C. Hu, and A. Rowstron. Proximity neighbor selection in tree-based structured peer-to-peer overlays. Technical Report MSR-TR-2003-52, Microsoft Research, June 2003.
[7] M. Castro and B. Liskov. Practical Byzantine fault tolerance. In Proceedings of USENIX Operating System Design and Implementation (OSDI), New Orleans, Louisiana, Feb. 1999.
[8] Y. Chu, A. Ganjam, T. S. E. Ng, S. G. Rao, K. Sripanidkulchai, J. Zhan, and H. Zhang. Early experience with an Internet broadcast system based on overlay multicast. In Proceedings of USENIX Annual Technical Conference, Boston, MA, June 2004.
[9] B. Chun, B. Y. Zhao, and J. D. Kubiatowicz. Impact of neighbor selection on performance and resilience of structured p2p networks. In Proceedings of 4th International Workshop on Peer-to-Peer Systems (IPTPS), Ithaca, NY, Feb. 2005.
[10] B. Cohen. Incentives build robustness in BitTorrent. In Proceedings of Workshop on Economics of Peer-to-Peer Systems, Berkeley, CA, June 2003.
[11] T. Condie, V. Kacholia, S. Sankararaman, J. M. Hellerstein, and P. Maniatis. Induced churn as shelter from routing-table poisoning. In Proceedings of Network and Distributed System Security Symposium (NDSS), San Diego, CA, Feb. 2006.
[12] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. McGraw Hill, 2nd edition, 2001.
[13] R. Dingledine, N. Mathewson, and P. Syverson. Tor: The second-generation onion router. In Proceedings of 13th USENIX Security Symposium, San Diego, CA, Aug. 2004.
[14] J. R. Douceur. The Sybil attack. In Proceedings of 1st International Workshop on Peer-to-Peer Systems (IPTPS), Cambridge, MA, Mar. 2002.
[15] A. Fiat and J. Saia. Censorship resistant peer-to-peer content addressable networks. In Proceedings of Symposium on Discrete Algorithms, San Francisco, CA, Jan. 2002.
[16] M. J. Freedman, E. Freudenthal, and D. Mazières. Democratizing content publication with Coral. In Proceedings of Networked System Design and Implementation (NSDI), San Francisco, CA, Mar. 2004.
[17] M. J. Freedman and R. Morris. Tarzan: A peer-to-peer anonymizing network layer. In Proceedings of ACM Conference on Computer and Communications Security (CCS), Washington, D.C., Nov. 2002.
[18] The Gnutella protocol specification, 2000. http://dss.clip2.com/GnutellaProtocol04.pdf.
[19] K. P. Gummadi, S. Saroiu, and S. D. Gribble. King: Estimating latency between arbitrary Internet end hosts. In Proceedings of ACM Internet Measurement Workshop, Marseille, France, Nov. 2002.
[20] K. Gummadi, R. Gummadi, S. Gribble, S. Ratnasamy, S. Shenker, and I. Stoica. The impact of DHT routing geometry on resilience and proximity. In Proceedings of ACM SIGCOMM, Karlsruhe, Germany, Aug. 2003.
[21] K. Hildrum and J. Kubiatowicz. Asymptotically efficient approaches to fault-tolerance in peer-to-peer networks. In Proceedings of 17th International Symposium on Distributed Computing, Sorrento, Italy, Oct. 2003.
[22] J. Jannotti, D. K. Gifford, K. L. Johnson, M. F. Kaashoek, and J. W. O'Toole. Overcast: Reliable multicasting with an overlay network. In Proceedings of USENIX Operating System Design and Implementation (OSDI), San Diego, CA, Oct. 2000.
[23] KaZaA. http://www.kazaa.com/.
[24] D. Malkhi and M. Reiter. Byzantine quorum systems. In Proceedings of Annual ACM Symposium on Theory of Computing (STOC), El Paso, TX, May 1997.

[25] P. Maymounkov and D. Mazières. Kademlia: A peer-to-peer information system based on the XOR metric. In Proceedings of 1st International Workshop on Peer-to-Peer Systems (IPTPS), Cambridge, MA, Mar. 2002.
[26] D. Mazières, M. Kaminsky, M. F. Kaashoek, and E. Witchel. Separating key management from file system security. In Proceedings of Symposium on Operating System Principles (SOSP), Charleston, SC, Dec. 1999.
[27] A. Mislove, G. Oberoi, A. Post, C. Reis, P. Druschel, and D. S. Wallach. AP3: Anonymization of group communication. In Proceedings of ACM SIGOPS European Workshop, Leuven, Belgium, Sept. 2004.
[28] MSPastry. http://research.microsoft.com/~antr/Pastry/.
[29] M. Naor and U. Wieder. A simple fault tolerant distributed hash table. In Proceedings of 2nd International Workshop on Peer-to-Peer Systems (IPTPS), Berkeley, CA, Feb. 2003.
[30] OverNet. http://www.overnet.com/.
[31] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable content-addressable network. In Proceedings of ACM SIGCOMM, San Diego, CA, Aug. 2001.
[32] M. Reiter and A. Rubin. Anonymous Web transactions with Crowds. Communications of the ACM, 42(2):32–48, Feb. 1999.
[33] A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems. In Proceedings of IFIP/ACM Middleware, Heidelberg, Germany, Nov. 2001.
[34] J. Saia, A. Fiat, S. Gribble, A. Karlin, and S. Saroiu. Dynamically fault-tolerant content addressable networks. In Proceedings of 1st International Workshop on Peer-to-Peer Systems (IPTPS), Cambridge, MA, Mar. 2002.
[35] A. Singh, M. Castro, P. Druschel, and A. Rowstron. Defending against Eclipse attacks in overlay networks. In Proceedings of ACM SIGOPS European Workshop, Leuven, Belgium, Sept. 2004.
[36] A. Singh. Implementation and evaluation of secure routing primitives. Technical Report TR05-459, Rice University, 2005.
[37] E. Sit and R. Morris. Security considerations for peer-to-peer distributed hash tables. In Proceedings of 1st International Workshop on Peer-to-Peer Systems (IPTPS), Cambridge, MA, Mar. 2002.
[38] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup service for Internet applications. In Proceedings of ACM SIGCOMM, San Diego, CA, Aug. 2001.
[39] Trackerless in BitTorrent. http://www.bittorrent.com/trackerless.html.
[40] L. Wang, K. Park, R. Pang, V. Pai, and L. Peterson. Reliability and security in the CoDeeN content distribution network. In Proceedings of USENIX Annual Technical Conference, Boston, MA, June 2004.
[41] E. Zegura, K. Calvert, and S. Bhattacharjee. How to model an internetwork. In Proceedings of IEEE INFOCOM, San Francisco, CA, Mar. 1996.
[42] X. Zhang, J. Liu, B. Li, and T.-S. P. Yum. DONet: A data-driven overlay network for efficient live media streaming. In Proceedings of IEEE INFOCOM, Miami, FL, Mar. 2005.
[43] B. Y. Zhao, J. Kubiatowicz, and A. D. Joseph. Tapestry: An infrastructure for fault-resilient wide-area location and routing. Technical Report UCB-CSD-01-1141, University of California at Berkeley, Apr. 2001.
[44] L. Zhou, L. Zhang, F. McSherry, N. Immorlica, M. Costa, and S. Chien. A first look at peer-to-peer worms: Threats and defenses. In Proceedings of 4th International Workshop on Peer-to-Peer Systems (IPTPS), Ithaca, NY, Feb. 2005.
[45] L. Zhuang, F. Zhou, B. Y. Zhao, and A. Rowstron. Cashmere: Resilient anonymous routing. In Proceedings of Networked System Design and Implementation (NSDI), Boston, MA, May 2005.