
Chord: A Scalable Peer-to-Peer Lookup Protocol for Internet Applications

Stoica et al.

Presented by Tam Chantem, March 30, 2007

Outline
- Problem
- Approach
- Results
- Conclusion


Peer-to-Peer Applications
- Increasingly popular
- Decentralized
- Information is spread everywhere
  - How do we find data?
  - Especially as nodes come and go...

Some Solutions
- Exhaustive search
- Centralized directory servers
  → Not scalable
- Partial search
- Caching
  → False negatives, inconsistency

Chord
- Distributed hash table spread across nodes
- Given a key, map the key to the node storing the data
- Flat naming for keys and nodes


Using Chord
1. Application A calls Lookup(key) on its local Chord node (node i)
2. Chord returns the IP of the responsible node, e.g., IP = 192.164.2.3
3. The application contacts 192.164.2.3 directly

Hashing in Chord
- Consistent hash function produces m-bit IDs
  - Hash the key → key ID
  - Hash the node's IP address → node ID
- Hashing places key IDs and node IDs in the same space, so each key ID can be mapped to a node ID
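A minimal sketch of the two hash steps, assuming SHA-1 (the hash used in the Chord paper) and m = 6, a value chosen here only to match the 6-bit example ring (N8, N14, ...) on the later slides:

```python
import hashlib

M = 6  # assumed: a 2^6 = 64-ID ring, matching the slides' N8..N56 examples

def chord_id(value: str, m: int = M) -> int:
    """Hash a key or a node's IP address into an m-bit Chord identifier."""
    digest = hashlib.sha1(value.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

# Keys and nodes land in the same identifier space [0, 2^m):
key_id = chord_id("song.mp3")      # hash key -> key ID
node_id = chord_id("192.168.0.1")  # hash node's IP -> node ID
assert 0 <= key_id < 64 and 0 <= node_id < 64
```

With a good hash, both kinds of IDs are spread uniformly over the ring, which is what makes the load balancing claimed later probabilistic rather than engineered.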


Chord Ring
- Organize nodes in a circle, ordered by ID
- Example ring (m = 6): N8, N14, N32, N38, N42, N56

Key Mapping
- Key k → successor(k): the first node n whose ID ≥ k, wrapping around the ring
- Balances load with high probability
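The successor rule can be sketched as below; the node IDs are the example ring from these slides, and a plain sorted list stands in for the distributed ring:

```python
def successor(key_id: int, node_ids: list[int]) -> int:
    """First node whose ID >= key_id, wrapping around the ring."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= key_id:
            return n
    return ring[0]  # wrap past the highest ID back to the lowest

nodes = [8, 14, 32, 38, 42, 56]  # example ring from the slides
assert successor(54, nodes) == 56  # K54 -> N56
assert successor(10, nodes) == 14  # K10 -> N14
assert successor(58, nodes) == 8   # wraps around to N8
```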


Key Mapping Example
- On the example ring: K10 → N14; K24, K30 → N32; K38 → N38; K54 → N56

Key Location
- Linear time if each node tracks only its immediate successor
- Tracking more nodes yields logarithmic time
  - m finger-table entries when IDs are m bits long
- This is the role of the finger table


Finger Table
- Goal: halve the distance from the querying node to the target on each hop
- The i-th entry of node n's table is the first node that succeeds n by at least 2^(i-1) on the identifier circle:
  finger[i] = successor((n + 2^(i-1)) mod 2^m)
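A sketch of this formula on the 6-bit example ring; the global node list is an assumption made purely for illustration (a real node would discover each entry via lookups):

```python
RING = 64  # 2^m identifiers with m = 6, as in the example ring

def successor(i: int, nodes: list[int]) -> int:
    """First node clockwise from identifier i."""
    return next((n for n in sorted(nodes) if n >= i), min(nodes))

def finger_table(n: int, nodes: list[int], m: int = 6) -> list[int]:
    """finger[i] = successor((n + 2^(i-1)) mod 2^m) for i = 1..m."""
    return [successor((n + 2 ** (i - 1)) % RING, nodes)
            for i in range(1, m + 1)]

nodes = [8, 14, 32, 38, 42, 56]
print(finger_table(8, nodes))  # [14, 14, 14, 32, 32, 42]
```

The printed table matches N8's finger table shown on the "Locating a Key" slide: offsets 1, 2, 4 all fall before N14, offsets 8 and 16 before N32, and offset 32 before N42.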


Constructing a Finger Table
- We don't know in advance whether a node with a given ID exists
- So to fill the i-th entry:
  - Compute: key = (n + 2^(i-1)) mod 2^m
  - Do lookup(key)
  - Fill the entry with the node that owns key, i.e., successor(key)
- Each node also keeps track of its predecessor

Locating a Key
- Forward the query to the largest finger that precedes the key
- Finger table of N8:
  N8 + 1  → N14
  N8 + 2  → N14
  N8 + 4  → N14
  N8 + 8  → N32
  N8 + 16 → N32
  N8 + 32 → N42
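The routing rule can be sketched as a toy simulation; unlike a real deployment, it computes finger tables from a global node list, an assumption made only so the example is self-contained:

```python
SIZE = 64                        # 2^6-ID ring
NODES = [8, 14, 32, 38, 42, 56]  # example ring from the slides

def succ(i: int) -> int:
    """First node clockwise from identifier i."""
    return next((n for n in sorted(NODES) if n >= i % SIZE), min(NODES))

def fingers(n: int, m: int = 6) -> list[int]:
    return [succ((n + 2 ** k) % SIZE) for k in range(m)]  # k = i - 1

def lookup(n: int, key: int) -> list[int]:
    """Greedy Chord routing: hop to the largest finger preceding the key."""
    path = [n]
    while (key - n) % SIZE > (succ(n + 1) - n) % SIZE:
        # key lies beyond n's immediate successor, so jump closer:
        preceding = [f for f in fingers(n)
                     if 0 < (f - n) % SIZE < (key - n) % SIZE]
        n = max(preceding, key=lambda f: (f - n) % SIZE)
        path.append(n)
    path.append(succ(n + 1))  # n's successor holds the key
    return path

print(lookup(8, 54))  # [8, 42, 56]: N8 -> N42 -> N56
```

Each hop at least halves the remaining clockwise distance to the key, which is where the logarithmic path length on the later performance slide comes from.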

Looking up K54 by N8
- N8's largest finger preceding K54 is N42 (the N8 + 32 entry), so N8 forwards the query to N42
- N42's successor, N56, covers K54, so the lookup completes there
- Hop sequence: N8 → N42 → N56

Accounting for Volatility
- Each node relies on its successor(s) for correctness
- How do we keep successor pointers correct as nodes leave and join the network?

Key Remapping
- When a node leaves, its keys are remapped to its successor
- Example on the ring: if N38 leaves, K38 moves to N42

Joining
- The new node (N26) asks an existing node to look up its successor
- Using the returned successor info, N26 inserts itself into the ring between N14 and N32
- N26 takes over the keys it now succeeds: K24 moves from N32 to N26
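The key handoff at join time can be illustrated by comparing key ownership before and after N26 appears; the `owner` helper and the explicit key list are assumptions for this sketch:

```python
SIZE = 64  # 2^6-ID ring

def owner(key: int, nodes: list[int]) -> int:
    """successor(key): the node responsible for key."""
    return next((n for n in sorted(nodes) if n >= key % SIZE), min(nodes))

keys = [10, 24, 30, 38, 54]
before = {k: owner(k, [8, 14, 32, 38, 42, 56]) for k in keys}
after = {k: owner(k, [8, 14, 26, 32, 38, 42, 56]) for k in keys}  # N26 joined

moved = {k for k in keys if before[k] != after[k]}
print(moved)  # {24}: only K24 changes hands, from N32 to N26
```

Only keys in the interval (N14, N26] move, which is why a join disturbs O(1/N) of the keys rather than triggering a global reshuffle.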

Periodic Stabilizing
- Each node periodically asks its successor: "who is your predecessor?"
- If the reply names a node between the two (e.g., the newly joined N26 between N14 and N32), the asking node adopts it as its new successor
- The node then notifies its (possibly new) successor, which updates its predecessor pointer
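A minimal sketch of the stabilize/notify exchange, assuming in-process objects instead of RPCs and ignoring node failures:

```python
class Node:
    def __init__(self, nid: int):
        self.id = nid
        self.successor = self
        self.predecessor = None

def between(x: int, a: int, b: int, size: int = 64) -> bool:
    """True if x lies in the open ring interval (a, b)."""
    return a != b and 0 < (x - a) % size < (b - a) % size

def stabilize(n: Node) -> None:
    """n asks its successor for its predecessor and repairs its pointer."""
    x = n.successor.predecessor
    if x is not None and between(x.id, n.id, n.successor.id):
        n.successor = x
    notify(n.successor, n)

def notify(s: Node, n: Node) -> None:
    """n tells s: 'I might be your predecessor.'"""
    if s.predecessor is None or between(n.id, s.predecessor.id, s.id):
        s.predecessor = n

# N26 has joined between N14 and N32, but N14 still points at N32:
n14, n26, n32 = Node(14), Node(26), Node(32)
n14.successor, n26.successor, n32.predecessor = n32, n32, n14
stabilize(n26)  # N32 learns that N26 is its predecessor
stabilize(n14)  # N14 hears about N26 and adopts it as its successor
assert n14.successor is n26 and n32.predecessor is n26
```

After a couple of stabilization rounds the ring pointers converge, without any join-time coordination beyond the initial successor lookup.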

Hard Cases
- Data may temporarily be unreachable while pointers are being updated
- In practice, churn may outpace stabilization

Performance
- Good load balancing
- Logarithmic lookup path length
- ~0.15% of lookups fail
  - At a join/leave rate of 0.40 per second
  - Failures occur when a query reaches an incorrect successor

Chord
- Locates distributed data by key
- Features:
  - Load balancing
  - High availability
  - Scalability

Discussion
- Strengths and weaknesses of Chord?
- How can we improve Chord?
- Could Chord replace DNS?
