
SESSION 11

DISTRIBUTED COMPUTING

18CS3109

Session Outcome: Student will be able to describe the Leader Election Problem.

Time (min) | Topic                                   | BTL | Teaching–Learning Method          | Active Learning Methods
5          | Recap / poll question                   |     | Lecturing / Quiz                  |
15         | Introduction to Leader Election Problem | 1   | Lecturing                         |
5          | Queries discussion                      |     | Discussion                        |
20         | Understand the Leader Election Problem  | 2   | Lecturing / Discussion on Problem | Problem Assignment
5          | Quiz                                    |     | Quiz                              |
1. LEADER ELECTION IN RINGS
Introduction:

The topology of the message-passing system is a ring. Rings are a convenient
structure for message-passing systems and correspond to physical
communication systems, for example, token rings.

A group of processors must choose one among them to be the leader – the Leader
Election problem.
2. ELECTION ALGORITHMS
1. Bully Algorithm
2. Ring Algorithm
3. MUTUAL EXCLUSION IN SHARED
MEMORY
1. Formal Model for Shared Memory Systems
i) Systems
ii) Complexity Measures
iii) Pseudo code Conventions
2. The Mutual Exclusion Problem
3. Mutual Exclusion using Powerful Primitives
i) Binary Test & Set Registers
ii) Read-Modify-Write Registers
4. CLASSICAL ALGORITHMS

1. Token Based Algorithm


i) Central Solution
ii) Ring Based Mutual Exclusion

2. Non-Token Based Algorithm


i) Ricart-Agrawala’s Algorithm
ii) Maekawa’s Algorithm
LEADER ELECTION IN RINGS
The Leader Election Problem
➢The leader election problem has several variants, and the most general
one is defined below.

➢Informally, the problem is for each processor eventually to decide
that either it is the leader or it is not the leader, subject to the
constraint that exactly one processor decides that it is the leader.

➢For example, when a deadlock is created because processors wait
in a cycle for each other, the deadlock can be broken by
electing one of the processors as the leader and removing it from the
cycle.
LEADER ELECTION IN RINGS
In terms of the formal model, an algorithm is said to solve the
leader election problem if it satisfies the following conditions:
➢Each processor has a set of elected (won) and not-elected (lost)
states.
➢Once an elected state is entered, the processor is always in an
elected state (and similarly for not-elected).
➢In every admissible execution:
❖ Exactly one processor (the leader) enters an elected state, and
❖ All the remaining processors enter a not-elected state.
LEADER ELECTION IN RINGS
Uses of Leader Election
A leader can be used to coordinate activities of the system:
➢Find a spanning tree using the leader as the root
➢Reconstruct a lost token in a token-ring network
LEADER ELECTION IN RINGS
Ring Networks
➢In an oriented ring, processors have a consistent notion of left
and right.

➢For example, if messages are always forwarded on channel 1,
they will cycle clockwise around the ring.
LEADER ELECTION IN RINGS
Why study Rings?
➢Simple Starting point, easy to analyze
➢Abstraction of a token ring
➢Lower bounds and impossibility results for ring topology also
apply to arbitrary topologies.
LEADER ELECTION IN RINGS
i) An O(n²) Algorithm
➢Each processor sends a message containing its identifier to its left neighbour.
➢When a processor receives an identifier from its right neighbour:
❖ If received identifier > own identifier
• Forwards the message to the left (a smaller identifier is swallowed and not forwarded).
❖ If received identifier = own identifier
• Declares itself as leader.
• Sends a termination message to the left and terminates as
a leader.
• A processor that receives a termination message forwards
it to the left and terminates as a non-leader.
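As a concrete illustration, here is a minimal simulation of this O(n²) algorithm; the function name, the position-indexed id list and the message counter are assumptions made for exposition, not part of the original slides.

# A minimal sketch of the O(n^2) ring election; "left" is modelled as the
# next higher ring position, and ids are assumed to be distinct.
def ring_election_on2(ids):
    """ids[i] is the identifier of the processor at ring position i."""
    n = len(ids)
    messages = 0
    leader = None
    for i in range(n):                      # processor i sends its own id to the left
        j = i
        while True:
            j = (j + 1) % n                 # the message travels one hop to the left
            messages += 1
            if ids[j] > ids[i]:             # a larger id swallows the message
                break
            if j == i:                      # its own id came back: i is the leader
                leader = ids[i]
                break
    messages += n                           # n termination messages around the ring
    return leader, messages

# Arrangement in which the message of the processor with identifier i is
# forwarded exactly i+1 times (the worst case discussed in the analysis):
print(ring_election_on2([4, 3, 2, 1, 0]))   # (4, 20): 1+2+3+4+5 election msgs + 5 termination msgs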
LEADER ELECTION IN RINGS
Correctness:
➢Elects the processor with the largest id.
➢The message containing the largest id passes through every processor.

Time: O(n)

Message Complexity: Depends on how the ids are arranged.
➢Largest id travels all around the ring (n messages)
➢2nd largest id travels until reaching the largest
➢3rd largest id travels until reaching the largest or the second largest, etc.
SESSION 12,13
DISTRIBUTED COMPUTING

18CS3109

Session Outcome: Student will be able to describe the Leader Election Problem.

Time (min) | Topic                                                                            | BTL | Teaching–Learning Method                     | Active Learning Methods
5          | Recap / poll question                                                            | 1   | Quiz                                         |
20         | Introduction to anonymous rings, an O(n²) algorithm and an O(n log n) algorithm | 1   | Lecturing                                    |
5          | Queries clarification                                                            | 1   | Lecturing                                    | Brainstorming questions
20         | Bully algorithm with example                                                     |     | Lecturing                                    |
40         | Problem solving using tools on the Bully algorithm                               | 3   | Lecturing / Discussion on Analytical Problem | Analytical Problem Assignment using tool
10         | Queries                                                                          |     | Discussion                                   |
LEADER ELECTION IN RINGS
Analysis of O(n²) Algorithm
➢Clearly, the algorithm never sends more than O(n²) messages in any
admissible execution.

➢Moreover, there is an admissible execution in which the
algorithm sends Θ(n²) messages.

➢Consider the ring where the identifiers of the processors are
0, …, n−1 and they are ordered as in the figure.
LEADER ELECTION IN RINGS
Analysis of O(n²) Algorithm
➢In this configuration, the message of the processor with identifier i
is sent exactly i+1 times.

➢Thus the total number of messages, including the n
termination messages, is
n + Σ_{i=0}^{n−1} (i + 1) = Θ(n²)
LEADER ELECTION IN RINGS
Analysis of O(n²) Algorithm

➢Total number of messages is n + (n−1) + (n−2) + … + 1 = Θ(n²)
LEADER ELECTION IN RINGS
ii) An O(n log n) Algorithm / HS Algorithm
➢Hirschberg and Sinclair (1980)
➢Each processor tries to probe successively larger neighbourhoods
in both directions.
➢If a probe reaches a node with a larger id, the probe stops.
➢If a probe reaches the end of its neighbourhood, then a reply is sent
back to the initiator.
➢If the initiator gets back replies from both directions, then it goes on
to the next phase.
➢If a processor receives a probe with its own id, it elects itself.
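The probe/reply structure can be sketched as a small simulation; the function below, its position-indexed id list and its message counter are illustrative assumptions rather than the original HS pseudocode, and candidates are processed one phase at a time.

# A compact sketch of the HS algorithm, assuming distinct ids indexed by ring
# position; the message count approximates one admissible execution.
def hs_election(ids):
    n = len(ids)
    messages = 0
    candidates = set(range(n))               # processors still initiating probes
    phase = 0
    while True:
        dist = 2 ** phase                    # probe distance in this phase
        survivors = set()
        for p in candidates:
            survived = True
            for direction in (+1, -1):       # probe both neighbourhoods
                q, hops = p, 0
                while hops < dist:
                    q = (q + direction) % n
                    hops += 1
                    messages += 1            # probe forwarded one hop
                    if q == p:               # probe with its own id came back
                        return ids[p], messages
                    if ids[q] > ids[p]:      # probe reaches a larger id: it stops
                        survived = False
                        break
                else:
                    messages += hops         # reply relayed back to the initiator
            if survived:                     # replies received from both directions
                survivors.add(p)
        candidates = survivors               # temporary leaders go to the next phase
        phase += 1

print(hs_election([5, 1, 8, 3, 7, 2, 6, 4]))  # elects 8 with O(n log n) messages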
LEADER ELECTION IN RINGS
Phase 0: Send (id) to left and right
LEADER ELECTION IN RINGS
If : received id > current id
Then: send a reply(OK)
LEADER ELECTION IN RINGS
If: a node receives both replies
Then: it becomes a temporary leader and proceeds to the next phase
LEADER ELECTION IN RINGS
Phase 1: send(id) to left and right
LEADER ELECTION IN RINGS
If: received id > current id
Then: forward(id)
LEADER ELECTION IN RINGS
If: received id > current id
Then: send a reply(id)
LEADER ELECTION IN RINGS
If: a node receives a reply with another id
Then: forward it
If: a node receives both replies
Then: it becomes a temporary leader
LEADER ELECTION IN RINGS
Phase 2: send id
LEADER ELECTION IN RINGS
If: received id > current id
Then: send a reply
LEADER ELECTION IN RINGS
If: a node receives both replies
Then: it becomes temporary leader
LEADER ELECTION IN RINGS
Phase 3: The node with id 8 will receive its own probe
message, and then becomes the leader!
LEADER ELECTION IN RINGS
In general: in each phase k, a processor that is still a candidate sends probes a distance of 2^k in both directions.
LEADER ELECTION IN RINGS
➢Correctness: Similar to the LCR algorithm.

➢Message Complexity:
❖ Each message belongs to a particular phase and is
initiated by a particular processor.
❖ Probe distance in phase i is 2^i.
❖ Number of messages initiated by a processor in phase i is
at most 4·2^i (probes and replies in both directions).
LEADER ELECTION IN RINGS
How many processors initiate probes in phase k?
➢For k = 0, every processor does
➢For k > 0, every processor that is a "winner" in phase k−1
does
• "winner" means it has the largest id in its 2^(k−1)-neighbourhood
LEADER ELECTION IN RINGS
➢Maximum number of phase k−1 winners occurs when
they are packed as densely as possible:

➢Total number of phase k−1 winners is at most
n/(2^(k−1) + 1)
LEADER ELECTION IN RINGS
How many phases are there?
➢At each phase the number of (phase) winners is cut
approximately in half
❖ from n/(2^(k−1) + 1) to n/(2^k + 1)

➢So after approximately log₂ n phases, only one winner is left.

❖ more precisely, the maximum phase is log(n−1) + 1
LEADER ELECTION IN RINGS
➢Total number of messages is the sum, over all phases, of the
number of winners at that phase times the number of messages
originated by each such winner:

≤ 4n (phase 0 msgs) + n (termination msgs) + Σ_k 4·2^k · n/(2^(k−1) + 1) (msgs for phases 1 to log(n−1)+1)
< 8n(log n + 2) + 5n
= O(n log n)
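As a quick sanity check of this bound, the short script below tallies the per-phase terms for a few concrete ring sizes; the function name and the use of ⌈log₂(n−1)⌉ + 1 as the last phase are illustrative assumptions.

import math

# Tallies the per-phase bound above: at most n/(2^(k-1)+1) winners in phase k,
# each originating at most 4*2^k messages, plus 4n phase-0 messages and n
# termination messages.
def hs_message_bound(n):
    total = 4 * n + n
    last_phase = math.ceil(math.log2(n - 1)) + 1
    for k in range(1, last_phase + 1):
        total += 4 * (2 ** k) * (n // (2 ** (k - 1) + 1))
    return total

for n in (8, 64, 1024):
    print(n, hs_message_bound(n), "<", round(8 * n * (math.log2(n) + 2) + 5 * n))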
ELECTION ALGORITHMS
Bully Algorithm
➢The bully algorithm is a method for dynamically electing a
coordinator or leader from a group of processes.

➢The process with the highest process ID number from
amongst the non-failed processes is selected as the
coordinator.
ELECTION ALGORITHMS
Assumptions
The algorithm assumes that
➢The system is synchronous
➢Processes may fail at any time, including during execution of the
algorithm.
➢There is a failure detector which detects failed processes.
➢Message delivery between processes is reliable.
➢Each process knows its own process id and address, and that
of every other process.
ELECTION ALGORITHMS
Message Types:
➢Election Message: Send to announce election.
➢OK (Answer/Alive) Message: Responds to the election
message.
➢Coordinator (Victory Message): Sent by winner of the
election to announce victory
ELECTION ALGORITHMS
Algorithm:
When a process P notices that the coordinator is no longer
responding to requests, it initiates an election.
➢P sends an ELECTION message to all processes with higher
process IDs than itself.
➢If P receives no Answer after sending an Election message,
then it broadcasts a Coordinator (Victory) message to all other
processes and becomes the Coordinator.
➢If a process with a higher ID answers, it takes over.
ELECTION ALGORITHMS
When a process gets an Election message from a process with a lower
process ID:
➢The receiving process sends an OK message back to the
sender to indicate that it is alive and will take over.

➢Eventually, all processes give up apart from one, and that one
will be the new coordinator.

➢The new coordinator announces its victory by sending a
COORDINATOR message informing everyone that it is the new coordinator.
ELECTION ALGORITHMS
If a process that was previously down comes back:
➢It holds an election.

➢If it happens to be the highest-ID process currently running, it
will win the election and take over the coordinator’s job.

➢The biggest process always wins, hence the name Bully
algorithm.
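A small sketch of this election among non-failed processes follows; the Process class, the alive flag and the direct method calls standing in for messages are illustrative assumptions, and for brevity only the highest OK-responder carries the election forward.

# A minimal sketch of the Bully election; messages are modelled as direct
# method calls, and only the highest responder continues the election.
class Process:
    def __init__(self, pid, all_processes):
        self.pid = pid
        self.all = all_processes          # pid -> Process, known to every process
        self.alive = True
        self.coordinator = None

    def start_election(self):
        higher = [p for p in self.all.values() if p.pid > self.pid and p.alive]
        answered = [p for p in higher if p.on_election(self.pid)]
        if not answered:
            # no higher-ID process answered: broadcast the Coordinator message
            for p in self.all.values():
                if p.alive:
                    p.on_coordinator(self.pid)
        else:
            # a higher-ID process answered OK and takes over the election
            max(answered, key=lambda p: p.pid).start_election()

    def on_election(self, sender_pid):
        return self.alive                 # OK/Answer: "I am alive and will take over"

    def on_coordinator(self, pid):
        self.coordinator = pid            # record the announced coordinator

procs = {}
for i in range(5):
    procs[i] = Process(i, procs)
procs[4].alive = False                    # the old coordinator (highest ID) has failed
procs[1].start_election()
print(procs[2].coordinator)               # expected: 3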
ELECTION ALGORITHMS
Complexity:
➢Worst Case: Initiator is the node with the lowest ID, O(n²) messages
➢Best Case: Initiator is the node with the highest ID, n−1 messages
ELECTION ALGORITHMS
Ring Algorithm:
➢Processes are arranged in a logical ring, each process knows
the structure of the ring.
➢A process initiates an election if it just recovered from failure
or it notices that the coordinator has failed.
➢Initiator sends Election message to closest downstream node
that is alive.
❖ Election message is forwarded around the ring.
❖ Each process adds its own ID to the Election message
ELECTION ALGORITHMS
➢When Election message comes back, initiator picks node with
highest ID and sends a Coordinator message specifying the
winner of the election.
➢Coordinator message is removed when it has circulated once.

Complexity:
➢2n messages always.
SESSION 14

DISTRIBUTED COMPUTING

18CS3109

Session Number: 14, Type B
Session Outcome: Student will be able to summarize Mutual Exclusion

Time (min) | Topic                                             | BTL | Teaching–Learning Method | Active Learning Methods
05         | Poll / Recap                                      |     | Discussion / Quiz        |
15         | Introduction to Mutual Exclusion in Shared Memory | 1   | Lecturing                |
05         | Clarifying doubts through public chat             |     | Discussion               |
20         | Formal model for Shared Memory Systems            | 2   | Lecturing                |
5          | Summary of the session / Quiz                     | 1   | Discussion               |
MUTUAL EXCLUSION IN SHARED
MEMORY
1. Formal Model for Shared Memory Systems
i) Systems
ii) Complexity Measures
iii) Pseudo code Conventions
2. The Mutual Exclusion Problem
3. Mutual Exclusion using Powerful Primitives
i) Binary Test & Set Registers
ii) Read-Modify-Write Registers
MUTUAL EXCLUSION IN SHARED
MEMORY
➢In a shared memory system, processors communicate via a
common memory area that contains a set of shared variables.

➢Several types of variables can be employed.

➢The type of a shared variable specifies the operations that can be
performed on it and the values returned by the operations.

➢The most common type is a read/write register, in which the
operations are the familiar reads and writes such that each read
returns the value of the latest preceding write.
MUTUAL EXCLUSION IN SHARED
MEMORY
➢Other types of shared variables support more powerful
operations, such as
❖ read-modify-write
❖ test & set or compare & swap

➢Registers can be further characterized according to their access


patterns, that is, how many processors can access a specific
variable with a specific operation.
MUTUAL EXCLUSION IN SHARED
MEMORY
1. Formal Model for Shared Memory Systems
➢As in the case of message-passing systems, processors are
modeled as state machines and executions are modeled as
alternating sequences of configurations and events.

➢The difference is in the nature of configurations and events.

➢Here the new features of the model are described.

MUTUAL EXCLUSION IN SHARED
MEMORY
i) Systems
➢It is assumed that the system contains
n processors: p_0, …, p_{n−1}
m registers: R_0, …, R_{m−1}

➢As in the message-passing case, each processor is modeled as
a state machine, but there are no special inbuf or outbuf state
components.
MUTUAL EXCLUSION IN SHARED
MEMORY
➢Each register has a type, which specifies:
1. The values that can be taken on by the register
2. The operations that can be performed on the register
3. The value to be returned by each operation (if any)
4. The new value of the register resulting from each operation

➢An initial value can be specified for each register


MUTUAL EXCLUSION IN SHARED
MEMORY
➢For instance, an integer-valued read/write register R can take on
all integer values and has operations
❖ read(R,v)
❖ write(R,v)

➢The read(R,v) operation returns the value v, leaving R unchanged.

➢The write(R,v) operation takes an integer input parameter v,
returns no value, and changes R’s value to v.
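A toy rendering of this register type is shown below; the class and method names are illustrative assumptions, and for simplicity read() returns the value directly rather than filling an output parameter v.

# A toy model of an integer read/write register: read() returns the current
# value and leaves R unchanged; write(v) returns nothing and sets R to v.
class ReadWriteRegister:
    def __init__(self, initial=0):
        self.value = initial      # the register's initial value

    def read(self):
        return self.value         # read(R, v): returns v, R unchanged

    def write(self, v):
        self.value = v            # write(R, v): no return value, R becomes v

R = ReadWriteRegister(5)
print(R.read())   # 5
R.write(7)
print(R.read())   # 7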
MUTUAL EXCLUSION IN SHARED
MEMORY
➢A configuration in the shared memory model is a vector
C = (q_0, …, q_{n−1}, r_0, …, r_{m−1})
where
q_i is a state of p_i
r_j is a value of register R_j
mem(C) − the state of the memory in C, namely (r_0, …, r_{m−1})

➢In an initial configuration, all processors are in their initial
states and all registers contain initial values.
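As a small illustration of this vector view, the sketch below models a configuration as two lists; the dataclass and its field names are assumptions made purely for exposition.

from dataclasses import dataclass

# A minimal rendering of C = (q_0,...,q_{n-1}, r_0,...,r_{m-1}).
@dataclass
class Configuration:
    states: list      # q_i: local state of processor p_i
    registers: list   # r_j: value of register R_j

def mem(C: Configuration):
    """mem(C): the state of the memory in C, namely (r_0, ..., r_{m-1})."""
    return tuple(C.registers)

C0 = Configuration(states=["init"] * 3, registers=[0, 0])  # an initial configuration
print(mem(C0))   # (0, 0)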
MUTUAL EXCLUSION IN SHARED
MEMORY
➢The events in a shared memory system are computation steps by
the processors and are denoted by the index of the processor.
➢At each computation step by processor 𝑝𝑖 , the following
happen atomically:
1. 𝑝𝑖 chooses a shared variable to access with a specific operation, based on
𝑝𝑖 ’s current state.
2. The specified operation is performed on the shared variable.
3. 𝑝𝑖′ s state changes according to 𝑝𝑖 ’s transition function, based on 𝑝𝑖 ’s
current state and the value returned by the shared memory operation
performed.
MUTUAL EXCLUSION IN SHARED
MEMORY
➢An execution segment of the algorithm is defined to be a (finite or
infinite) sequence of the following form:
C_0, φ_1, C_1, φ_2, C_2, φ_3, …
where each C_k is a configuration and each φ_k is an event.
MUTUAL EXCLUSION IN SHARED
MEMORY
ii) Complexity Measures
➢Obviously, in shared memory systems there are no messages to
measure.

➢Instead, we focus on the space complexity, the amount of
shared memory needed to solve problems.

➢This amount is measured in two ways: the number of distinct
shared variables required, and the amount of shared space
(number of bits or, equivalently, how many distinct values)
required.
MUTUAL EXCLUSION IN SHARED
MEMORY
iii) Pseudo code Conventions
➢The pseudo code will involve accesses both to local variables,
which are part of the processor’s state, and to shared variables.

➢The names of shared variables are capitalized (Want), whereas
the names of local variables are in lower case (last).
MUTUAL EXCLUSION IN SHARED
MEMORY
2. The Mutual Exclusion Problem
➢The mutual exclusion problem concerns a group of processors:
❖ Processors occasionally need access to some resource
❖ Resource cannot be used simultaneously by more than a
single processor, for example, some output device.
MUTUAL EXCLUSION IN SHARED
MEMORY
➢Each processor may need to execute a code segment called a
critical section, such that,
❖ At any time, at most one is in the critical section (mutual
exclusion), and
❖ If one or more processors try to enter the critical section, then
one of them eventually succeeds, as long as no processor stays
in the critical section forever (no deadlock).
MUTUAL EXCLUSION IN SHARED
MEMORY
Mutual Exclusion
➢When a processor is accessing a shared variable, the processor is
said to be in a CS (critical section).

➢No two processors can be in the CS at the same time. This is
called mutual exclusion.
MUTUAL EXCLUSION IN SHARED
MEMORY
Assume the program of a processor is partitioned into the
following lines:
❖Entry(trying): The code executed in preparation for entering
the critical section.

❖Critical: The code to be protected from concurrent execution

❖Exit: The code executed on leaving the critical section

❖Remainder: The rest of the code.


MUTUAL EXCLUSION IN SHARED
MEMORY

[Figure: a process cycles through the Remainder, Entry, Critical and Exit sections]
MUTUAL EXCLUSION IN SHARED
MEMORY
➢Each processor cycles through these sections in the order:
❖ Remainder
❖ Entry
❖ Critical
❖ Exit
➢If a processor wants to enter the critical section it first executes
the entry section
➢After that, the processor enters the critical section
➢Then the processor releases the critical section by executing the
exit section and returning to the remainder section.
MUTUAL EXCLUSION IN SHARED
MEMORY
➢A mutual exclusion algorithm consists of code for entry and
exit sections and should work no matter what goes in the
critical and remainder sections.

➢In particular, a processor may transition from the remainder


section to the entry section any number of times, either finite or
infinite.

➢We assume that the variables, both shared and local, accessed in
the entry and exit sections are not accessed in the critical and
remainder section.
MUTUAL EXCLUSION IN SHARED
MEMORY
An algorithm for a shared memory system solves the mutual
exclusion problem with no deadlock (or no lockout) if the following
hold:

Mutual Exclusion: In every configuration of every execution, at most
one processor is in the critical section.

No deadlock: In every admissible execution, if some processor is in the
entry section in a configuration, then there is a later configuration in
which some processor is in the critical section.

No lockout: In every admissible execution, if some processor is in the
entry section in a configuration, then there is a later configuration in
which that same processor is in the critical section.
MUTUAL EXCLUSION IN SHARED
MEMORY
3. Mutual Exclusion using Powerful Primitives
i) Binary Test & Set Registers
❖Test and Set Shared Variable
A test-and-set variable V holds two values, 0 or 1, and
supports two (atomic) operations:
• test&set (V):
temp:= V
V := 1
return temp
• reset(V):
V := 0
MUTUAL EXCLUSION IN SHARED
MEMORY
❖Mutual Exclusion using a test&set register
Initially V equals 0

<Entry>:
1: wait until test&set(V) = 0
<Critical Section>
<Exit>:
2: reset(V)
<Remainder>
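A runnable sketch of this lock is given below; emulating the atomic test&set with a Python threading.Lock and the worker/counter harness are assumptions made purely for illustration.

import threading

# A test&set register whose operations are made atomic with an internal lock.
class TestAndSetRegister:
    def __init__(self):
        self._guard = threading.Lock()
        self.v = 0

    def test_and_set(self):
        with self._guard:
            temp, self.v = self.v, 1     # temp := V; V := 1; return temp
            return temp

    def reset(self):
        with self._guard:
            self.v = 0                   # V := 0

V = TestAndSetRegister()
counter = 0

def worker():
    global counter
    for _ in range(10000):
        while V.test_and_set() != 0:     # <Entry>: wait until test&set(V) = 0
            pass                         # busy-wait (spin)
        counter += 1                     # <Critical Section>
        V.reset()                        # <Exit>: reset(V)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                           # expected: 40000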
MUTUAL EXCLUSION IN SHARED
MEMORY
No Deadlock:
➢Claim: V = 0 if no processor is in the CS.
➢The proof is by induction on the events in the execution, and relies on the fact that
mutual exclusion holds.
➢Suppose there is a time after which a processor p is in its entry section
but no processor ever enters the CS. Then eventually no processor is in the
CS, so V = 0 from then on; the next test&set by p returns 0 and p enters
the CS, a contradiction. Hence there is no deadlock.
No Lockout:
➢One processor could always grab V (i.e., win the test&set competition)
and starve the others.
➢No lockout does not hold.
➢Thus bounded waiting does not hold either.
MUTUAL EXCLUSION IN SHARED
MEMORY
ii) Read-Modify-Write Registers
➢The state of this kind of variable can be anything and of any
size.

➢Variable V supports the (atomic) operation


rmw(V,f ), where f is any function
temp := V
V := f(V)
return temp

➢This variable type is so strong there is no point in having


multiple variables (from a theoretical perspective).
MUTUAL EXCLUSION IN SHARED
MEMORY
➢Conceptually, the list of waiting processors is stored in a shared
circular queue of length n.

➢Each waiting processor remembers in its local state its location


in the queue (instead of keeping this info in the shared
variable)

➢Shared RMW variable V keeps track of active part of the queue


with first and last pointers, which are indices into the queue
(between 0 and n-1)
❖ so V has two components, first and last
MUTUAL EXCLUSION IN SHARED
MEMORY
Conceptual Data Structure: [Figure: a circular queue of n slots, with first and last pointers into the queue]
MUTUAL EXCLUSION IN SHARED
MEMORY
Code for entry section:
// increment last to enqueue self
position := rmw(V, (V.first, V.last+1))
// wait until first equals this value
repeat
    queue := rmw(V, V)
until (queue.first = position.last)

Code for exit section:
// increment first to dequeue self
rmw(V, (V.first+1, V.last))
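A runnable sketch of this queue lock is shown below; the RMWVariable class emulates the atomic rmw with a threading.Lock, and (as an assumption matching the circular queue of length n) first and last are incremented modulo n.

import threading

# A read-modify-write variable: rmw(f) atomically replaces the state by f(state)
# and returns the old state.
class RMWVariable:
    def __init__(self, first=0, last=0):
        self._guard = threading.Lock()
        self.state = (first, last)           # V = (first, last)

    def rmw(self, f):
        with self._guard:                     # temp := V; V := f(V); return temp
            temp = self.state
            self.state = f(temp)
            return temp

n = 4                                         # number of processors
V = RMWVariable()

def enter():
    # increment last to enqueue self
    position = V.rmw(lambda v: (v[0], (v[1] + 1) % n))
    # wait until first equals this value
    while True:
        queue = V.rmw(lambda v: v)
        if queue[0] == position[1]:
            return

def exit_cs():
    # increment first to dequeue self
    V.rmw(lambda v: ((v[0] + 1) % n, v[1]))

count = 0
def worker():
    global count
    for _ in range(200):
        enter()
        count += 1                            # critical section
        exit_cs()

threads = [threading.Thread(target=worker) for _ in range(n)]
for t in threads: t.start()
for t in threads: t.join()
print(count)                                  # expected: 800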
MUTUAL EXCLUSION IN SHARED
MEMORY
Correctness Sketch
➢Mutual Exclusion:
❖ Only the processor at the head of the queue (V.first) can
enter the CS, and only one processor is at the head at any
time.

➢n-Bounded Waiting:
❖ FIFO order of enqueue, and fact that no processor stays in CS
forever, give this result.
MUTUAL EXCLUSION IN SHARED
MEMORY
Space Complexity
➢The shared RMW variable V has two components in its state,
first and last.

➢Both are integers that take on values from 0 to n−1, i.e. n different
values.

➢The total number of different states of V is thus n².

➢Thus the required size of V in bits is 2·log₂ n.
MUTUAL EXCLUSION IN SHARED
MEMORY
Spinning
➢A drawback of the RMW queue algorithm is that processors in the entry
section repeatedly access the same shared variable; this is called spinning.

➢Having multiple processors spinning on the same shared variable can be
very time-inefficient in certain multiprocessor architectures.

➢The queue algorithm can be altered so that each waiting processor spins on a
different shared variable.
SESSION 15

DISTRIBUTED COMPUTING

18CS3109

Session Number: 15, Type A
Session Outcome: Student will be able to understand the concept of token based algorithms

Time (min) | Topic                                 | BTL | Teaching–Learning Method | Active Learning Methods
05         | Poll / Recap                          |     | Lecturing / Quiz         |
15         | Introduction to Token Based Algorithm | 1   | Lecturing                |
05         | Queries explanation                   |     | Discussion               |
15         | Explanation of Central Solution       | 2   | Lecturing                | 5-minute paper
5          | Quiz / Summary of the session         | 1   | Lecturing                |
CLASSICAL ALGORITHMS
1. Token Based Algorithm
i) Central Solution
➢ Elect a central master (or leader)
• Use one of our election algorithms!
➢ Master keeps
• A queue of waiting requests from processes who wish to access the CS
• A special token which allows its holder to access CS
➢ Actions of any process in group:
• enter()
• Send a request to master
• Wait for token from master
• exit()
• Send back token to master
CLASSICAL ALGORITHMS
➢Master Actions:
• On receiving a request from process Pi
if (master has token)
Send token to Pi
else
Add Pi to queue
• On receiving a token from process Pi
if (queue is not empty)
Dequeue head of queue (say Pj), send that process the token
else
Retain token
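A compact sketch of this central solution follows; the Master and Process classes and the direct method calls standing in for request/token messages are illustrative assumptions.

from collections import deque

# The master holds the token and a queue of waiting requests.
class Master:
    def __init__(self):
        self.has_token = True
        self.queue = deque()

    def on_request(self, process):
        if self.has_token:
            self.has_token = False
            process.on_token()                # send token to the requester
        else:
            self.queue.append(process)        # add requester to the queue

    def on_token_return(self):
        if self.queue:
            self.queue.popleft().on_token()   # dequeue head, send it the token
        else:
            self.has_token = True             # retain the token

class Process:
    def __init__(self, name, master):
        self.name, self.master = name, master
        self.in_cs = False

    def enter(self):
        self.master.on_request(self)          # send request, then wait for the token

    def on_token(self):
        self.in_cs = True                     # token received: access the CS
        print(self.name, "in critical section")

    def exit(self):
        self.in_cs = False
        self.master.on_token_return()         # send the token back to the master

master = Master()
p1, p2 = Process("P1", master), Process("P2", master)
p1.enter(); p2.enter()   # P2 is queued while P1 holds the token
p1.exit()                # token returns and is handed to P2
p2.exit()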
CLASSICAL ALGORITHMS
➢Safety – at most one process in CS
• Exactly one token

➢Liveness – every request for CS granted eventually


• With N processes in system, queue has at most N processes
• If each process exits CS eventually and no failures, liveness guaranteed

➢FIFO Ordering is guaranteed, in order of requests received at master


CLASSICAL ALGORITHMS
Analyzing Performance:
➢Efficient mutual exclusion algorithms use fewer messages, and make
processes wait for shorter durations to access resources. Three metrics:
➢Bandwidth: Total number of messages sent in each enter and exit
operation.
➢Client delay: Delay incurred by a process at each enter and exit operation
(when no other process is in the CS, or waiting). (We will mostly focus on the
enter operation.)
➢Synchronization delay: Time interval between one process exiting the
critical section and the next process entering it (when there is only one
process waiting)
CLASSICAL ALGORITHMS
➢Bandwidth: the total number of messages sent in each enter and exit
operation.
• 2 messages for enter
• 1 message for exit
➢Client delay: the delay incurred by a process at each enter and exit
operation (when no other process is in, or waiting)
• 2 message latencies (request + grant)
➢Synchronization delay: the time interval between one process exiting
the critical section and the next process entering it (when there is only
one process waiting)
• 2 message latencies (release + grant)
CLASSICAL ALGORITHMS
➢The master is the performance bottleneck and SPoF (single point of
failure)
SESSION 16

DISTRIBUTED COMPUTING

18CS3109

Session Number: 16, Type B
Session Outcome: Student will be able to summarize Ring based Mutual Exclusion

Time (min) | Topic                                | BTL | Teaching–Learning Method | Active Learning Methods
05         | Poll / Recap                         |     | Lecturing / Quiz         |
15         | Introduction to Ring based Algorithm | 1   | Lecturing                |
05         | Queries explanation                  |     | Discussion               |
20         | Ring Based Mutual Exclusion          | 3   | Lecturing                | Analytical Problem as Assignment
5          | Summary of the session               | 1   | Lecturing                |
CLASSICAL ALGORITHMS
ii) Ring Based Mutual Exclusion

[Figures: a ring of nodes N3, N6, N5, N80, N32, N12 around which a single token circulates; the node currently holding the token can access the CS, then passes it on ("Here's the token!") to its successor and can no longer access the CS]
CLASSICAL ALGORITHMS
➢N Processes organized in a virtual ring
➢Each process can send message to its successor in ring
➢Exactly 1 token
➢enter()
• Wait until you get token
➢exit() // already have token
• Pass on token to ring successor
➢If receive token, and not currently in enter(), just pass on
token to ring successor
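The following is a tiny round-based sketch of this scheme; the wants_cs set, the step loop and the bound on steps are assumptions for illustration (a real implementation would pass the token between message-passing processes).

# A round-based sketch of token-ring mutual exclusion: the token visits each
# process in ring order; a process enters the CS only while holding the token.
def token_ring(n, wants_cs, start_holder=0, max_steps=20):
    """wants_cs: set of process indices that currently want the CS."""
    token_at = start_holder
    for step in range(max_steps):
        if token_at in wants_cs:
            print(f"step {step}: process {token_at} enters CS")  # enter(): waited for the token
            wants_cs.discard(token_at)                           # ...then exits the CS
        token_at = (token_at + 1) % n     # exit() / not in enter(): pass token to successor
        if not wants_cs:
            break

token_ring(n=6, wants_cs={2, 5})
# expected: process 2 enters at step 2, process 5 at step 5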
CLASSICAL ALGORITHMS
➢Safety
• Exactly one token
➢Liveness
• Token eventually loops around ring and reaches requesting process (no
failures)
➢Bandwidth
• Per enter(), 1 message by requesting process but up to N messages
throughout system
• 1 message sent per exit()
CLASSICAL ALGORITHMS
➢Client delay: 0 to N message transmissions after entering enter()
• Best case: already have token
• Worst case: just sent token to neighbor

➢Synchronization delay between one process’ exit() from the CS and the next
process’ enter():
• Between 1 and (N-1) message transmissions.
• Best case: process in enter() is successor of process in exit()
• Worst case: process in enter() is predecessor of process in exit()
