
Asad Waqar Malik
asad.malik@seecs.edu.pk
SEECS - A 203, Ext - 2126

Synchronization (Mutual Exclusion)

Chapter 6 (Tanenbaum)

3
Mutual Exclusion

❖ Concurrency and collaboration are fundamental to distributed systems
❖ Processes may need simultaneous access to the same resources
❖ Concurrent access must be prevented from corrupting a resource or making it inconsistent
❖ Distributed mutual exclusion algorithms can be classified into two categories
❖ Token-based approaches
❖ Avoid starvation
❖ Avoid deadlocks
❖ A rather serious problem: loss of the token (e.g., the process holding it crashed)
❖ Permission-based approaches

4
Why Mutual Exclusion In The Cloud

❖ Bank’s servers in the cloud: two of your customers make simultaneous deposits of $10,000 into your bank account
❖ Both concurrently read the initial balance of $1,000 from the bank’s cloud server
❖ Both add $10,000 to this amount
❖ Both write the final amount to the server
❖ What is wrong?

5
Why Mutual Exclusion
❖ Bank’s servers in the cloud: two of your customers make simultaneous deposits of $10,000 into your bank account
❖ Both concurrently read the initial balance of $1,000 from the bank’s cloud server
❖ Both add $10,000 to this amount
❖ Both write the final amount to the server
❖ You lost $10,000!
❖ Need mutually exclusive access to your account entry at the server
❖ Or mutually exclusive access to executing the code that modifies the account entry
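The lost update above can be reproduced deterministically. This is a minimal single-threaded sketch: the two "processes" are modeled as generators so the bad interleaving (both read before either writes) can be forced explicitly; the names `balance` and `deposit_steps` are illustrative, not from any real banking API.

```python
balance = 1000  # shared account entry on the server

def deposit_steps(amount):
    """Yield between the read and the write, exposing the race window."""
    global balance
    read = balance           # step 1: read current balance
    yield                    # ...the other process may run here...
    balance = read + amount  # step 2: write back read + deposit

# Force the interleaving: both deposits read before either writes.
d1 = deposit_steps(10_000)
d2 = deposit_steps(10_000)
next(d1)            # process I reads 1000
next(d2)            # process II also reads 1000
for d in (d1, d2):
    try:
        next(d)     # both write back 1000 + 10000
    except StopIteration:
        pass

print(balance)      # 11000, not 21000: one $10,000 deposit is lost
```

Whichever process writes last overwrites the other's deposit, which is exactly why the read-modify-write must be made mutually exclusive.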
6
Problem Statement for Mutual Exclusion

❖ Critical Section: a piece of code (at all processes) for which we need to ensure there is at most one process executing it at any point of time
❖ Each process can call three functions
❖ enter - to enter the critical section
❖ AccessResource - to run the critical section code
❖ exit - to exit the critical section

7
Bank Example
Same piece of code

❖ Process - I ❖ Process - II
❖ enter(S); ❖ enter(S);
❖ accessResource ❖ accessResource
❖ obtain bank amount ❖ obtain bank amount
❖ add in deposit ❖ add in deposit
❖ update bank amount ❖ update bank amount
❖ accessResource - end ❖ accessResource - end
❖ exit(S) ❖ exit(S)

8
Processes Sharing An OS: Semaphores
❖ Semaphore == an integer that can only be accessed via two special functions
❖ Semaphore S: 1
1. wait(S)
   while(1)      // execution of loop body is atomic
       if(S > 0)
           S--;
           break;
2. Signal(S)
   S++
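The wait/Signal semantics above can be sketched as follows. Real Python code would use `threading.Semaphore` directly; here the atomicity the slide requires for the loop body is supplied by an internal condition variable (the class name shadowing the built-in is deliberate, for illustration only).

```python
import threading

class Semaphore:
    """A sketch of the slide's integer-plus-two-functions semaphore."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):             # the slide's wait(S)
        with self._cond:
            while self._value == 0:   # spin (here: block) until S > 0
                self._cond.wait()
            self._value -= 1          # S--

    def signal(self):           # the slide's Signal(S)
        with self._cond:
            self._value += 1          # S++
            self._cond.notify()       # wake one waiter, if any

S = Semaphore(1)
S.wait()     # enter: S goes 1 -> 0; a second wait() would block
S.signal()   # exit:  S goes 0 -> 1
```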

9
Bank Example using Semaphore
Same piece of code
❖ Semaphore S : 1 ❖ Semaphore S : 1
❖ Process - I ❖ Process - II
Wait (S); Wait (S);
❖ accessResource ❖ accessResource
❖ obtain bank amount ❖ obtain bank amount
❖ add in deposit ❖ add in deposit
❖ update bank amount ❖ update bank amount
❖ accessResource - end ❖ accessResource - end
Signal (S) Signal (S)
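Run with real threads, the protected deposit above always yields the right balance; this sketch uses Python's built-in `threading.Semaphore` in the role of S (the `deposit` helper and thread setup are illustrative).

```python
import threading

balance = 1000
S = threading.Semaphore(1)   # Semaphore S : 1

def deposit(amount):
    global balance
    S.acquire()                  # Wait(S)
    current = balance            # obtain bank amount
    balance = current + amount   # add deposit, update bank amount
    S.release()                  # Signal(S)

# Process I and Process II run the same piece of code.
threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 21000: both deposits survive, in every schedule
```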
11
Approaches to solve distributed mutual
exclusion
❖ Distributed systems
❖ Processes communicate by exchanging messages
❖ Need to guarantee 3 properties
❖ Safety - at most one process executes in the CS at any time
❖ Liveness - every request for the CS is granted eventually
❖ Ordering - requests are granted in the order they were made

12
Mutual Exclusion

In the permission-based approach, there are different options

❖ Centralized Algorithm
❖ Decentralized Algorithm
❖ Distributed Algorithm

13
Design Your Mutual Exclusion - Centralized Algorithm

14
Distributed Mutual Exclusion - System
Model

System Model
❖ Each pair of processes is connected by reliable channels
❖ Messages are eventually delivered to the recipient, and in FIFO order
❖ Processes do not fail

15
Central Solution
❖ Elect a central master or leader
Master keeps
❖ A queue of waiting requests from processes who wish to access the CS
❖ A special token which allows its holder to access CS
Actions of any process in group
❖ enter( )
❖ send a request to master
❖ wait for token from master
❖ exit( )
❖ send back token to master
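The master's behavior can be sketched as a single class; message passing is abstracted away, and `enter`/`exit` return who holds the token next (the class and method names are illustrative).

```python
from collections import deque

class Master:
    """Central coordinator: one token plus a FIFO queue of waiters."""
    def __init__(self):
        self.queue = deque()
        self.token_free = True

    def enter(self, pid):
        """A process requests the CS: token now, or queued (None)."""
        if self.token_free:
            self.token_free = False
            return pid               # token granted immediately
        self.queue.append(pid)       # wait in FIFO order
        return None

    def exit(self, pid):
        """The holder returns the token; hand it to the next waiter."""
        if self.queue:
            return self.queue.popleft()  # token goes straight to next
        self.token_free = True
        return None

m = Master()
print(m.enter("P1"))  # P1: token granted
print(m.enter("P2"))  # None: P2 queued behind P1
print(m.exit("P1"))   # P2: token handed to the next waiter
```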

16
Central Solution

17
Analysis of Central Algorithm
❖ Safety - at most one process in CS
❖ Exactly one token
❖ Liveness - every request for CS granted eventually
❖ With N processes in system, queue has at most N
processes
❖ If each process exits CS eventually and no failure,
liveness guaranteed
❖ Ordering - requests are granted in the FIFO order in which they are received at the master (which may differ from the order in which they were made)
18
Analysis of Central Algorithm
❖ Bandwidth: the total number of messages sent in each
enter and exit operation
❖ 2 messages for entry
❖ 1 message for exit
❖ Synchronization delay: the time interval between one
process exiting the critical section and the next process
entering it (when there is only one process waiting)
❖ 2 messages

19
Central Solution

❖ Master - single point of failure, and a performance bottleneck

❖ Processes cannot detect a dead coordinator: silence looks the same as waiting for the token

20
Token Ring Algorithm

21
Token Ring Algorithm

❖ A completely different approach

❖ Logical ring: each process is assigned a position in the ring
❖ The ordering does not matter
❖ All that matters is that each process knows who is next in line after itself
❖ On acquiring the token, a process may use the shared resource; otherwise it forwards the token
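One lap of the circulating token can be sketched as a loop; `wants` is the set of processes currently requesting the CS, and the function returns the order in which they enter (names and the single-lap simplification are illustrative).

```python
def circulate(n, wants, holder=0):
    """One full loop of the token around a ring of n processes.

    A process uses the CS only if it wants it when the token arrives;
    otherwise it forwards the token to its successor immediately.
    Returns the processes that entered the CS, in ring order.
    """
    entered = []
    for _ in range(n):
        if holder in wants:
            entered.append(holder)   # use the resource, then release
            wants.discard(holder)
        holder = (holder + 1) % n    # forward token to next in line
    return entered

# Processes 1 and 3 want the CS; the token starts at process 0.
print(circulate(5, {1, 3}))  # [1, 3]: granted in ring order
```

Note that even with no requests pending, the token still travels all n hops, which is the idle-traffic cost of this scheme.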

22
Token Ring

❖ Only one process has the token at a time

❖ Starvation cannot occur
❖ If the token is lost, it must be regenerated, but detecting the loss is difficult

23
Analysis of Ring Based Mutual Exclusion
❖ Safety: exactly one token

❖ Liveness: the token eventually loops around the ring and reaches the requesting process

❖ Bandwidth per entry: 1 message by the requesting process, but up to N messages throughout the system
❖ 1 message sent per exit()

24
Analysis of Ring Based Mutual Exclusion

❖ Synchronization delay between one process's exit() from the CS and the next process's enter()
❖ Best case: the process in enter() is the successor of the process in exit() (1 message)
❖ Worst case: the process in enter() is the predecessor of the process in exit() (N-1 messages)

25
Distributed Algorithm
Think about a distributed algorithm where every node takes part in the decision

26
Distributed Algorithm
Can we use our total-ordering multicast algorithm here?

27
Distributed Algorithm -
Ricart Agrawala’s Algorithm

❖ To access a shared resource, the process builds a message containing the name of the resource, its process number, and its current logical time
❖ It then sends the message (reliably) to all processes
❖ Three different cases for the replies:
I) If the receiver already has access to the resource, it simply does not reply. Instead, it queues the request
II) If the receiver is not accessing the resource and does not want to access it, it sends back an OK message to the sender
III) If the receiver wants to access the resource as well but has not yet done so, it compares the timestamp of the incoming message with the one contained in the message it has sent everyone. The lowest one wins: the receiver sends an OK if the incoming timestamp is lower
28
Ricart Agrawala’s Algorithm

❖ Messages include <Ti, Pi, R>

❖ To enter the critical section at process Pi
❖ Set state to Wanted
❖ Multicast "Request" <Ti, Pi, R> to all processes
❖ Wait until all other processes send back "Reply"
❖ Change state to Held and enter the CS

29
Ricart Agrawala’s Algorithm
❖ On receipt of a Request <Tj, Pj> at Pi (i != j)
❖ If state == Held
❖ Add request to queue
❖ If state == Wanted && (Ti, i) < (Tj, j)
❖ Add request to queue
❖ Else
❖ Send Reply
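The receipt rule above fits in a few lines: requests are (timestamp, pid) pairs, and Python's tuple comparison gives exactly the lexicographic (T, P) total order the algorithm relies on (the function name and string states are illustrative).

```python
def on_request(state, my_request, incoming):
    """Ricart-Agrawala receipt rule at Pi for an incoming (Tj, Pj).

    state: "Held", "Wanted", or "Released"
    my_request: Pi's own pending (Ti, i), or None if not Wanted
    Returns "queue" (defer the reply) or "reply" (send OK now).
    """
    if state == "Held":
        return "queue"
    if state == "Wanted" and my_request < incoming:
        return "queue"   # my request is earlier in (T, P) order: defer
    return "reply"       # Released, or the incoming request wins

print(on_request("Held", None, (4, 2)))        # queue
print(on_request("Wanted", (3, 1), (4, 2)))    # queue (mine is earlier)
print(on_request("Wanted", (5, 1), (4, 2)))    # reply (theirs is earlier)
print(on_request("Released", None, (4, 2)))    # reply
```

Because every process applies the same total order, two concurrent requesters can never both collect all replies, which is what gives safety.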

30
Ricart Agrawala’s Algorithm
Case-I

32
Ricart Agrawala’s Algorithm
Case-II

38
Distributed Algo.

❖ The number of messages required per entry is 2(n-1): more traffic

❖ The single point of failure becomes n points of failure: if any process crashes, it will fail to respond to requests
❖ The silence will be interpreted as denial of permission
❖ Every process is a performance bottleneck, as in the centralized algorithm
❖ Slower

39
Decentralized Algorithm
(Home study)

40
Decentralized Algorithm

❖ The resource is assumed to be replicated n times

❖ Each replica has its own coordinator
❖ A process that wants to access the resource needs to get a majority of the votes: m > n/2, where m is the number of votes obtained
❖ If a process gets fewer than m votes, it backs off for a randomly chosen time
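The majority-voting rule can be sketched directly; `votes_for` records which process each coordinator voted for in one round (or None), and since m > n/2, at most one process can reach a majority (names and data are illustrative).

```python
def run_vote(n, votes_for):
    """Decentralized voting round over n replica coordinators.

    votes_for: list of length n with coordinator i's choice (or None).
    Returns the process holding a majority m > n/2, or None.
    """
    m = n // 2 + 1                        # smallest majority of n votes
    counts = {}
    for choice in votes_for:
        if choice is not None:
            counts[choice] = counts.get(choice, 0) + 1
    winners = [p for p, c in counts.items() if c >= m]
    return winners[0] if winners else None  # at most one majority exists

# 5 coordinators: P1 collects 3 votes (majority), P2 only 2.
print(run_vote(5, ["P1", "P1", "P1", "P2", "P2"]))  # P1
# Split vote: no one reaches m = 3, so everyone backs off and retries.
print(run_vote(5, ["P1", "P1", "P2", "P2", None]))  # None
```

The second call shows the starvation risk mentioned below: under contention, rounds can keep ending with no winner.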

41
Issues with Decentralized Algo.

❖ Coordinators do not maintain any history: after a reset or failure they may vote for a different process

❖ Violating correctness requires several coordinators to reset at around the same time
❖ If many nodes want the same resource, no one may get m votes
❖ Starvation

42
Election Algorithms

43
Election Algorithms

❖ Many algorithms require one process to act as coordinator, initiator, or otherwise perform a special role
❖ There are different algorithms for coordinator selection
❖ The Bully Algorithm
❖ A Ring Algorithm

44
The Bully Algorithm

❖ A process notices that the coordinator is no longer responding to requests
❖ It initiates an election as follows
I. P sends an ELECTION message to all processes with higher numbers
II. If no one responds, P wins the election and becomes coordinator
III. If one of the higher-ups answers, it takes over; P's job is done

The biggest guy in town always wins - hence, the bully algorithm
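The three steps above can be sketched as a recursion: `alive` is the set of processes that answer messages, and a live higher-up taking over is just the election restarting from that process (the failure model and names are illustrative).

```python
def bully(initiator, processes, alive):
    """Bully election started by `initiator`; returns the coordinator.

    The initiator messages all higher-numbered processes; if none are
    alive to answer, it wins. Otherwise the highest responder takes
    over and runs its own election.
    """
    higher = [p for p in processes if p > initiator and p in alive]
    if not higher:
        return initiator                 # no one responds: initiator wins
    return bully(max(higher), processes, alive)  # a higher-up takes over

procs = [1, 2, 3, 4, 5, 6, 7]
# 7 (the old coordinator) has crashed; 4 notices and starts an election.
print(bully(4, procs, alive={1, 2, 3, 5, 6}))  # 6: the biggest guy wins
```

The result is always the highest-numbered live process, regardless of who starts the election.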
45
The Bully Algo.

46
The Bully Algo.

47
A Token Ring Algorithm

48
A Ring Algo.

❖ The processes are logically arranged in a ring

❖ When any process notices that the coordinator is not responding, it builds an ELECTION message containing its own process number and sends it to its successor
❖ At each step, the sender adds its own number to the list in the message, thus making itself a candidate
❖ Eventually, the message reaches the process that started it all. That process looks through the message and decides which process has the highest number; that one becomes the coordinator
❖ Then the message type is changed to COORDINATOR, and the message circulates once again so everyone knows the new coordinator and the new ring configuration
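The circulation can be sketched as one pass around the ring: each process appends its number, and when the message returns to the initiator, the maximum is the coordinator (the ring contents and function name are illustrative; the COORDINATOR announcement round is folded into the return value).

```python
def ring_election(ring, initiator):
    """Ring election: returns the coordinator chosen by `initiator`.

    ring: process numbers in ring order (the dead coordinator is
    assumed to have been removed already).
    """
    n = len(ring)
    i = ring.index(initiator)
    candidates = []
    while True:
        candidates.append(ring[i])   # each sender adds its own number
        i = (i + 1) % n              # pass the message to the successor
        if ring[i] == initiator:     # message returned to the initiator
            break
    return max(candidates)           # highest number becomes coordinator

# Ring positions after removing the crashed coordinator 7.
print(ring_election([5, 1, 6, 3, 2, 4], initiator=3))  # 6
```

Two simultaneous initiators cost extra messages but do no harm: both circulating messages collect the same membership and pick the same coordinator.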

49
A Ring Algo.

50
Chang-Roberts algorithm

Algorithm
❖ Every process sends an election message with its id to the left if it has not seen a message with a higher id
❖ Forward any message with an id greater than your own id to the left
❖ If a process receives its own election message, it is the leader
❖ It is uniform: the number of processes does not need to be known to the algorithm
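The rules above can be simulated round by round on a unidirectional ring. This is a sketch: "left" is taken as the next-higher index (an orientation assumption), and all nodes start an election simultaneously.

```python
def chang_roberts(ring):
    """Chang-Roberts on a unidirectional ring; returns the leader id."""
    n = len(ring)
    # msgs[i]: ids currently held at node i, waiting to be processed
    msgs = [{ring[i]} for i in range(n)]   # every node starts an election
    leader = None
    while leader is None:
        nxt = [set() for _ in range(n)]
        for i in range(n):
            left = (i + 1) % n             # the "left" neighbor
            for mid in msgs[i]:
                if mid == ring[left]:
                    leader = mid           # own message came back: leader
                elif mid > ring[left]:
                    nxt[left].add(mid)     # forward larger ids only
        msgs = nxt                         # ids <= the node's own die here
    return leader

print(chang_roberts([1, 5, 2, 6, 3, 7, 4, 8]))  # 8
```

Only the largest id survives every hop, so exactly one message makes it all the way around, and its originator elects itself.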
Chang-Roberts algorithm
(walkthrough on a ring of 8 nodes with ids 1-8; the ring figures are omitted)
❖ Each node sends a message with its id to its left neighbor
❖ Slides 52-56 - If: received message id > current node id, Then: forward the message (the probe carrying id 8 keeps advancing; smaller ids stop being forwarded)
❖ Slides 57-58 - If: a node receives its own message, Then: it elects itself the leader; the node with id 8 becomes the leader
Hirschberg-Sinclair algorithm

❖ Assume the ring is bidirectional
❖ Carry out elections on increasingly larger sets
❖ Pi is a leader in phase r = 0, 1, 2, ... iff it has the largest id of all nodes that are at a distance 2^r or less from it; to establish that, it sends probing messages on both sides
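The phase rule can be checked directly without simulating the probe messages: node i survives phase r iff its id is the maximum within distance 2^r on either side. A small sketch that computes the survivors of each phase (the 8-node ring is illustrative):

```python
def phase_survivors(ring, r):
    """Ids that are phase-r leaders: largest within distance 2**r."""
    n = len(ring)
    d = 2 ** r
    survivors = []
    for i, my_id in enumerate(ring):
        # All ids within d hops on either side (including the node itself).
        neighborhood = [ring[(i + k) % n] for k in range(-d, d + 1)]
        if my_id == max(neighborhood):
            survivors.append(my_id)
    return survivors

ring = [1, 5, 2, 6, 3, 7, 4, 8]
for r in range(4):
    print(r, phase_survivors(ring, r))
# Phase 0 survivors: [5, 6, 7, 8]; from phase 1 on, only 8 survives.
```

At most half the phase-r survivors can survive phase r+1, which is what bounds the total message count at O(n log n).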
Hirschberg-Sinclair algorithm
(walkthrough slides 61-71 on the same ring of 8 nodes; the ring figures are omitted)
❖ Phase 0: send (id, step counter) to the 1-neighborhood
❖ If: received id > current id, Then: send a reply (OK)
❖ If: a node receives both replies, Then: it becomes a temporary leader and proceeds to the next phase
❖ Phase 1: send (id, 1, 1) to the left and right adjacent nodes in the 2-neighborhood
❖ If: received id > current id, Then: forward (id, 1, 2)
❖ At the second step, since step counter = 2, the message is on the boundary of the 2-neighborhood; If: received id > current id, Then: send a reply (id)
❖ If: a node receives a reply with another id, Then: forward it; If: a node receives both replies, Then: it becomes a temporary leader
❖ Phase 2: send id to the 2^2-neighborhood; again, nodes reply to larger ids, and a node that receives both replies becomes a temporary leader
❖ Phase 3: send id to the 8-neighborhood => the node with id 8 will receive its own probe message, and then becomes leader!
Election In Wireless Environments
❖ Wireless networks are unreliable, and processes may move
❖ The network topology is constantly changing
❖ Consider a static ad hoc network
Algorithm
1. Any node starts by sending out an ELECTION message to its neighbors
2. When a node receives an ELECTION message for the first time, it forwards it to its neighbors and designates the sender as its parent
3. It then waits for responses from its neighbors
❖ Responses may carry resource information
4. When a node receives an ELECTION message for the second time, it just OKs it
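The two phases above (flood to build a spanning tree, then report the best node back to the source) can be sketched as follows; the graph, the `capacity` values, and all names are made-up illustration data.

```python
def elect(graph, capacity, source):
    """Tree-based wireless election; returns the best node overall.

    Build-tree phase: flood ELECTION messages outward from `source`;
    the first message a node receives determines its parent.
    Reporting phase: each node reports the best node in its subtree
    (highest capacity) up to its parent, ending at the source.
    """
    parent = {source: None}
    order = [source]            # nodes in the order they were reached
    frontier = [source]
    while frontier:             # build-tree phase (flooding)
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in parent:      # first ELECTION message wins
                    parent[v] = u
                    order.append(v)
                    nxt.append(v)
        frontier = nxt
    # Reporting phase: children report upward, leaves first.
    best = {u: u for u in order}
    for u in reversed(order[1:]):
        p = parent[u]
        if capacity[best[u]] > capacity[best[p]]:
            best[p] = best[u]
    return best[source]

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
capacity = {"a": 2, "b": 1, "c": 5, "d": 4}
print(elect(graph, capacity, "a"))  # c: the highest-capacity node
```

Whichever node starts the election, the same best node is reported to the source, so concurrent elections converge on one answer.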
Elections in Wireless Environment
(figure panels (a)-(f); (e) shows the build-tree phase, (f) the reporting of the best node to the source)
