Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications

Ion Stoica*, Robert Morris, David Karger, M. Frans Kaashoek, Hari Balakrishnan†
MIT Laboratory for Computer Science
chord@lcs.mit.edu
http://pdos.lcs.mit.edu/chord/
Abstract
A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
1. Introduction
Peer-to-peer systems and applications are distributed systems without any centralized control or hierarchical organization, where the software running at each node is equivalent in functionality. A review of the features of recent peer-to-peer applications yields a long list: redundant storage, permanence, selection of nearby servers, anonymity, search, authentication, and hierarchical naming. Despite this rich set of features, the core operation in most peer-to-peer systems is efficient location of data items. The contribution of this paper is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures.

The Chord protocol supports just one operation: given a key, it maps the key onto a node. Depending on the application using Chord, that node might be responsible for storing a value associated with the key. Chord uses a variant of consistent hashing [11] to assign keys to Chord nodes. Consistent hashing tends to balance load, since each node receives roughly the same number of keys, and involves relatively little movement of keys when nodes join and leave the system.

* University of California, Berkeley. istoica@cs.berkeley.edu
† Authors in reverse alphabetical order.

This research was sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Space and Naval Warfare Systems Center, San Diego, under contract N66001-00-1-8933.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

SIGCOMM'01, August 27-31, 2001, San Diego, California, USA.
Copyright 2001 ACM 1-58113-411-8/01/0008 ...$5.00.

Previous work on consistent hashing assumed that nodes were aware of most other nodes in the system, making it impractical to scale to large numbers of nodes. In contrast, each Chord node needs "routing" information about only a few other nodes. Because the routing table is distributed, a node resolves the hash function by communicating with a few other nodes. In the steady state, in an N-node system, each node maintains information only about O(log N) other nodes, and resolves all lookups via O(log N) messages to other nodes. Chord maintains its routing information as nodes join and leave the system; with high probability each such event results in no more than O(log^2 N) messages.

Three features that distinguish Chord from many other peer-to-peer lookup protocols are its simplicity, provable correctness, and provable performance. Chord is simple, routing a key through a sequence of O(log N) other nodes toward the destination. A Chord node requires information about O(log N) other nodes for efficient routing, but performance degrades gracefully when that information is out of date. This is important in practice because nodes will join and leave arbitrarily, and consistency of even O(log N) state may be hard to maintain. Only one piece of information per node need be correct in order for Chord to guarantee correct (though slow) routing of queries; Chord has a simple algorithm for maintaining this information in a dynamic environment.

The rest of this paper is structured as follows. Section 2 compares Chord to related work. Section 3 presents the system model that motivates the Chord protocol. Section 4 presents the base Chord protocol and proves several of its properties, while Section 5 presents extensions to handle concurrent joins and failures. Section 6 demonstrates our claims about Chord's performance through simulation and experiments on a deployed prototype. Finally, we outline items for future work in Section 7 and summarize our contributions in Section 8.
2. Related Work
While Chord maps keys onto nodes, traditional name and location services provide a direct mapping between keys and values. A value can be an address, a document, or an arbitrary data item. Chord can easily implement this functionality by storing each key/value pair at the node to which that key maps. For this reason, and to make the comparison clearer, the rest of this section assumes a Chord-based service that maps keys onto values.

DNS provides a host name to IP address mapping [15]. Chord can provide the same service with the name representing the key and the associated IP address representing the value. Chord requires no special servers, while DNS relies on a set of special root servers. DNS names are structured to reflect administrative boundaries; Chord imposes no naming structure. DNS is specialized to the task of finding named hosts or services, while Chord can also be used to find data objects that are not tied to particular machines.

The Freenet peer-to-peer storage system [4, 5], like Chord, is decentralized and symmetric and automatically adapts when hosts leave and join. Freenet does not assign responsibility for documents to specific servers; instead, its lookups take the form of searches for cached copies. This allows Freenet to provide a degree of anonymity, but prevents it from guaranteeing retrieval of existing documents or from providing low bounds on retrieval costs. Chord does not provide anonymity, but its lookup operation runs in predictable time and always results in success or definitive failure.

The Ohaha system uses a consistent hashing-like algorithm for mapping documents to nodes, and Freenet-style query routing [18]. As a result, it shares some of the weaknesses of Freenet. Archival Intermemory uses an off-line computed tree to map logical addresses to machines that store the data [3].

The Globe system [2] has a wide-area location service to map object identifiers to the locations of moving objects. Globe arranges the Internet as a hierarchy of geographical, topological, or administrative domains, effectively constructing a static world-wide search tree, much like DNS. Information about an object is stored in a particular leaf domain, and pointer caches provide search shortcuts [22]. The Globe system handles high load on the logical root by partitioning objects among multiple physical root servers using hash-like techniques. Chord performs this hash function well enough that it can achieve scalability without also involving any hierarchy, though Chord does not exploit network locality as well as Globe.

The distributed data location protocol developed by Plaxton et al. [19], a variant of which is used in OceanStore [12], is perhaps the closest algorithm to the Chord protocol. It provides stronger guarantees than Chord: like Chord it guarantees that queries make a logarithmic number of hops and that keys are well balanced, but the Plaxton protocol also ensures, subject to assumptions about network topology, that queries never travel further in network distance than the node where the key is stored. The advantage of Chord is that it is substantially less complicated and handles concurrent node joins and failures well. The Chord protocol is also similar to Pastry, the location algorithm used in PAST [8]. However, Pastry is a prefix-based routing protocol, and differs in other details from Chord.

CAN uses a d-dimensional Cartesian coordinate space (for some fixed d) to implement a distributed hash table that maps keys onto values [20]. Each node maintains O(d) state, and the lookup cost is O(d N^(1/d)). Thus, in contrast to Chord, the state maintained by a CAN node does not depend on the network size N, but the lookup cost increases faster than log N. If d = log N, CAN lookup times and storage needs match Chord's. However, CAN is not designed to vary d as N (and thus log N) varies, so this match will only occur for the "right" N corresponding to the fixed d. CAN requires an additional maintenance protocol to periodically remap the identifier space onto nodes. Chord also has the advantage that its correctness is robust in the face of partially incorrect routing information.

Chord's routing procedure may be thought of as a one-dimensional analogue of the Grid location system [14]. Grid relies on real-world geographic location information to route its queries; Chord maps its nodes to an artificial one-dimensional space within which routing is carried out by an algorithm similar to Grid's.

Chord can be used as a lookup service to implement a variety of systems, as discussed in Section 3. In particular, it can help avoid single points of failure or control that systems like Napster possess [17], and the lack of scalability that systems like Gnutella display because of their widespread use of broadcasts [10].
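To make the CAN comparison above concrete, the following back-of-the-envelope calculation (ours, not the paper's) contrasts the two asymptotic lookup costs with constant factors ignored: roughly d * N^(1/d) hops for CAN with a fixed dimension d, versus roughly log2(N) hops for Chord.

import math

# Illustrative comparison (not from the paper) of the asymptotic
# lookup costs discussed above, constant factors ignored.
def can_hops(n: int, d: int) -> float:
    return d * n ** (1.0 / d)

def chord_hops(n: int) -> float:
    return math.log2(n)

for n in (10**4, 10**6, 10**8):
    # d = 4 is chosen only for illustration of a fixed dimension.
    print(f"N={n:>9}: CAN(d=4) ~{can_hops(n, 4):6.0f} hops, "
          f"Chord ~{chord_hops(n):5.1f} hops")

# N=    10000: CAN(d=4) ~    40 hops, Chord ~ 13.3 hops
# N=  1000000: CAN(d=4) ~   126 hops, Chord ~ 19.9 hops
# N=100000000: CAN(d=4) ~   400 hops, Chord ~ 26.6 hops

With d fixed, CAN's hop count grows polynomially in N while Chord's grows logarithmically; the two match only at the one network size where d happens to equal log N, which is the point made above.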
3. System Model
Chord simplifies the design of peer-to-peer systems and applications based on it by addressing these difficult problems:

- Load balance: Chord acts as a distributed hash function, spreading keys evenly over the nodes; this provides a degree of natural load balance.

- Decentralization: Chord is fully distributed: no node is more important than any other. This improves robustness and makes Chord appropriate for loosely-organized peer-to-peer applications.

- Scalability: The cost of a Chord lookup grows as the log of the number of nodes, so even very large systems are feasible. No parameter tuning is required to achieve this scaling.

- Availability: Chord automatically adjusts its internal tables to reflect newly joined nodes as well as node failures, ensuring that, barring major failures in the underlying network, the node responsible for a key can always be found. This is true even if the system is in a continuous state of change.

- Flexible naming: Chord places no constraints on the structure of the keys it looks up: the Chord key-space is flat. This gives applications a large amount of flexibility in how they map their own names to Chord keys.

The Chord software takes the form of a library to be linked with the client and server applications that use it. The application interacts with Chord in two main ways. First, Chord provides a lookup(key) algorithm that yields the IP address of the node responsible for the key. Second, the Chord software on each node notifies the application of changes in the set of keys that the node is responsible for. This allows the application software to, for example, move corresponding values to their new homes when a new node joins (a sketch of this interface appears at the end of this section).

The application using Chord is responsible for providing any desired authentication, caching, replication, and user-friendly naming of data. Chord's flat key space eases the implementation of these features. For example, an application could authenticate data by storing it under a Chord key derived from a cryptographic hash of the data. Similarly, an application could replicate data by storing it under two distinct Chord keys derived from the data's application-level identifier.

The following are examples of applications for which Chord would provide a good foundation:
Cooperative Mirroring, as outlined in a recent proposal [6]. Imagine a set of software developers, each of whom wishes to publish a distribution. Demand for each distribution might vary dramatically, from very popular just after a new release to relatively unpopular between releases. An efficient approach for this would be for the developers to cooperatively mirror each others' distributions. Ideally, the mirroring system would balance the load across all servers, replicate and cache the data, and ensure authenticity. Such a system should be fully decentralized in the interests of reliability, and because there is no natural central administration.

Time-Shared Storage for nodes with intermittent connectivity. If a person wishes some data to be always available, but their machine is only occasionally available, they can offer to store others' data while they are up, in return for having their data stored elsewhere when they are down. The data's name can serve as a key to identify the (live) Chord node responsible for storing the data item at any given time. Many of the same issues arise as in the Cooperative Mirroring application, though the focus here is on availability rather than load balance.

Distributed Indexes to support Gnutella- or Napster-like keyword search. A key in this application could be derived from the desired keywords, while values could be lists of machines offering documents with those keywords.

Large-Scale Combinatorial Search, such as code breaking. In this case keys are candidate solutions to the problem (such as cryptographic keys); Chord maps these keys to the machines responsible for testing them as solutions.

Figure 1: Structure of an example Chord-based distributed storage system.

Figure 1 shows a possible three-layered software structure for a cooperative mirror system. The highest layer would provide a file-like interface to users, including user-friendly naming and authentication. This "file system" layer might implement named directories and files, mapping operations on them to lower-level block operations. The next layer, a "block storage" layer, would implement the block operations. It would take care of storage, caching, and replication of blocks. The block storage layer would use Chord to identify the node responsible for storing a block, and then talk to the block storage server on that node to read or write the block.
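As a concrete picture of the two-way interaction between Chord and its applications described above, the following sketch (ours; all names and signatures are hypothetical, not the paper's software) shows a lookup call, a key-change notification hook, and application-level key derivation via a cryptographic hash, as suggested for authentication and replication:

import hashlib
from typing import Callable

def chord_key(name: bytes, m: int = 160) -> int:
    # Derive an m-bit Chord key from an application-level name by
    # hashing it, in the spirit of the authentication/replication
    # examples above.
    return int.from_bytes(hashlib.sha1(name).digest(), "big") % (2 ** m)

class ChordNode:
    # Hypothetical library interface; the real implementation's
    # names may differ.
    def lookup(self, key: int) -> str:
        # Return the IP address of the node responsible for `key`.
        raise NotImplementedError

    def on_keys_changed(self, callback: Callable[[int, int], None]) -> None:
        # Register a callback invoked with the (start, end) of a key
        # interval this node gained or lost responsibility for, so the
        # application can move the corresponding values to new homes.
        raise NotImplementedError

# Replication example: store one document under two distinct keys.
k1 = chord_key(b"report.pdf:replica-1")
k2 = chord_key(b"report.pdf:replica-2")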
4. The Base Chord Protocol
The Chord protocol specifies how to find the locations of keys, how new nodes join the system, and how to recover from the failure (or planned departure) of existing nodes. This section describes a simplified version of the protocol that does not handle concurrent joins or failures. Section 5 describes enhancements to the base protocol to handle concurrent joins and failures.
4.1 Overview
At its heart, Chord provides fast distributed computation of a hash function mapping keys to nodes responsible for them. It uses consistent hashing [11, 13], which has several good properties. With high probability the hash function balances load (all nodes receive roughly the same number of keys). Also with high probability, when an N-th node joins (or leaves) the network, only an O(1/N) fraction of the keys are moved to a different location; this is clearly the minimum necessary to maintain a balanced load.
Figure 2: An identifier circle consisting of the three nodes 0, 1, and 3. In this example, key 1 is located at node 1, key 2 at node 3, and key 6 at node 0.
Chord improves the scalability of consistent hashing by avoiding the requirement that every node know about every other node. A Chord node needs only a small amount of "routing" information about other nodes. Because this information is distributed, a node resolves the hash function by communicating with a few other nodes. In an N-node network, each node maintains information only about O(log N) other nodes, and a lookup requires O(log N) messages. Chord must update the routing information when a node joins or leaves the network; a join or leave requires O(log^2 N) messages.
4.2 Consistent Hashing
The consistent hash function assigns each node and key an m-bit identifier using a base hash function such as SHA-1 [9]. A node's identifier is chosen by hashing the node's IP address, while a key identifier is produced by hashing the key. We will use the term "key" to refer to both the original key and its image under the hash function, as the meaning will be clear from context. Similarly, the term "node" will refer to both the node and its identifier under the hash function. The identifier length m must be large enough to make the probability of two nodes or keys hashing to the same identifier negligible.

Consistent hashing assigns keys to nodes as follows. Identifiers are ordered in an identifier circle modulo 2^m. Key k is assigned to the first node whose identifier is equal to or follows (the identifier of) k in the identifier space. This node is called the successor node of key k, denoted by successor(k). If identifiers are represented as a circle of numbers from 0 to 2^m - 1, then successor(k) is the first node clockwise from k.

Figure 2 shows an identifier circle with m = 3. The circle has three nodes: 0, 1, and 3. The successor of identifier 1 is node 1, so key 1 would be located at node 1. Similarly, key 2 would be located at node 3, and key 6 at node 0.

Consistent hashing is designed to let nodes enter and leave the network with minimal disruption. To maintain the consistent hashing mapping when a node n joins the network, certain keys previously assigned to n's successor now become assigned to n. When node n leaves the network, all of its assigned keys are reassigned to n's successor. No other changes in assignment of keys to nodes need occur. In the example above, if a node were to join with identifier 7, it would capture the key with identifier 6 from the node with identifier 0.
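The successor rule above is simple enough to state in a few lines of code. This sketch (ours, not the paper's pseudocode) reproduces the Figure 2 example with m = 3 and nodes 0, 1, and 3, including the effect of a node joining with identifier 7:

def successor(key: int, nodes: list[int], m: int) -> int:
    # First node whose identifier is equal to or follows `key`,
    # walking clockwise around the identifier circle modulo 2**m.
    space = 2 ** m
    return min(nodes, key=lambda n: (n - key) % space)

nodes = [0, 1, 3]                       # the three nodes of Figure 2
for k in (1, 2, 6):
    print(f"successor({k}) = {successor(k, nodes, m=3)}")
# successor(1) = 1, successor(2) = 3, successor(6) = 0

nodes.append(7)                         # a node joins with identifier 7
print(successor(6, nodes, m=3))         # 7: node 7 captures key 6 from node 0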
The following results are proven in the papers that introduced consistent hashing [11, 13]:

THEOREM 1. For any set of N nodes and K keys, with high probability:

1. Each node is responsible for at most (1 + ε)K/N keys.

2. When an (N + 1)st node joins or leaves the network, responsibility for O(K/N) keys changes hands (and only to or from the joining or leaving node).
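The load-balance claim in Theorem 1 is easy to check empirically. The following quick simulation (ours, using SHA-1 identifiers as in Section 4.2, with a 32-bit identifier space chosen only for illustration) hashes K = 10,000 keys onto N = 100 nodes and inspects the spread; the maximum load comes out as a small multiple of K/N, consistent with the (1 + ε) factor:

import hashlib

m = 32
space = 2 ** m

def ident(s: str) -> int:
    # m-bit identifier from SHA-1, as in Section 4.2.
    return int.from_bytes(hashlib.sha1(s.encode()).digest(), "big") % space

def successor(key: int, nodes: list[int]) -> int:
    # First node clockwise from `key` on the identifier circle.
    return min(nodes, key=lambda n: (n - key) % space)

nodes = [ident(f"node-{i}") for i in range(100)]        # N = 100
load = {n: 0 for n in nodes}
for i in range(10_000):                                 # K = 10,000
    load[successor(ident(f"key-{i}"), nodes)] += 1

print("mean load:", sum(load.values()) / len(load))     # K/N = 100.0
print("max load: ", max(load.values()))                 # a few times K/N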
