
UNIT III DISTRIBUTED MUTEX & DEADLOCK

Distributed mutual exclusion algorithms: Introduction – Preliminaries – Lamport's algorithm – Ricart-Agrawala algorithm – Maekawa's algorithm – Suzuki–Kasami's broadcast algorithm. Deadlock detection
in distributed systems: Introduction – System model – Preliminaries – Models of deadlocks – Knapp's
classification – Algorithms for the single resource model, the AND model and the OR model

Distributed Mutual Exclusion Algorithms

Definition:-
3.1 Mutual exclusion:
 Mutual exclusion ensures that processes running concurrently access shared resources or data in a mutually exclusive way.
 Only one process is allowed to execute the critical section (CS) at any given time.
 A distributed system has no common physical clock, and message delays are unpredictable.
 Mutual exclusion cannot be implemented in a distributed system via shared variables
(semaphores) or a local kernel.
 Distributed mutual exclusion can only be implemented by message passing.
Distributed Mutual Exclusion Algorithms:

 The lack of shared memory in a distributed system makes it difficult to maintain shared
variables for mutual exclusion.
 Distributed mutual exclusion algorithms must deal with unexpected, unpredictable message
delays and incomplete knowledge of the system state.
 There are three basic approaches to achieving distributed mutual exclusion:
1. Token-based approach
2. Non-token-based approach
3. Quorum-based approach

 Token-based approach:
1. A unique token is shared among the sites.
2. A site is permitted to enter the critical section (CS) only if it holds the token.
3. Because the token is unique, mutual exclusion is guaranteed.
 Non-token-based approach:
1. Two or more rounds of messages are exchanged among the sites to decide
which site will enter the critical section (CS) next.
 Quorum-based approach:
1. Each site requests permission to execute the CS from a subset of sites (referred to as a quorum).
2. Any two quorums contain at least one common site.
3. This common site is responsible for ensuring that only one request executes the Critical Section
(CS) at any time.
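The intersection property of quorums can be checked directly. The sketch below is illustrative, assuming a simple majority scheme for a 7-site system (any 4 of 7 sites form a quorum); the variable names are not from the text.

```python
from itertools import combinations

# Hypothetical 7-site system; a quorum is any majority (4 of 7 sites).
sites = {1, 2, 3, 4, 5, 6, 7}
quorums = [set(q) for q in combinations(sites, 4)]

# Verify the defining property: every pair of quorums intersects,
# so some common site can arbitrate between any two CS requests.
assert all(a & b for a, b in combinations(quorums, 2))
print("every pair of quorums shares at least one site")
```

Because any two majorities of the same set must overlap, the common site in the intersection sees both requests and grants only one at a time.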
System Model
1. The system consists of N sites, S1, S2, ..., SN. At any instant, a site may
have several pending requests for the critical section (CS).
2. We assume that a single process runs at each site. The process at site Si is denoted pi.
3. A site can be in one of three states: requesting the CS, executing the CS, or neither
requesting nor executing the CS (i.e., idle).
4. In the "requesting the CS" state, the site is blocked and cannot make any
additional requests for the CS.
5. In the "idle" state, the site is executing outside the CS.
6. In token-based algorithms, a site may also be in an "idle token" state, which occurs when a site
holding the token is executing outside the CS.
7. A site queues up its pending CS requests and serves them one at a time.

 Requirements of Mutual Exclusion Algorithms

1. Safety property: At any instant, only one process can execute the critical section.
2. Liveness property: This property states the absence of deadlock and starvation. Two or more sites
should not endlessly wait for messages that will never arrive.
3. Fairness: Each process gets a fair chance to execute the CS. Fairness generally means that CS
execution requests are executed in the order of their arrival in the system (time is determined by a
logical clock).
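The arrival order used for fairness is a total order over (timestamp, site id) pairs, with ties broken by site id. A minimal illustration, with hypothetical pending requests:

```python
# Hypothetical pending requests as (logical timestamp, site id) pairs.
# Python's tuple comparison gives exactly the total order used for
# fairness: smaller timestamp first, ties broken by smaller site id.
requests = [(2, 1), (1, 3), (2, 2)]
print(sorted(requests))  # [(1, 3), (2, 1), (2, 2)]
```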

Performance Metrics
The primary objective of mutual exclusion is to guarantee that only one request accesses the critical
section at a time. Performance is generally measured by the following metrics:

1. Message complexity: The number of messages required per CS execution by a
site.
2. Synchronization delay: The time between one site leaving the CS and the next site
entering the CS.
3. Response time:
The time interval between a request sending out its request messages and its CS execution
completing.

4. System throughput:
The rate at which the system executes requests for the CS: system throughput = 1 / (SD + E),
where SD is the synchronization delay and E is the average critical section execution time.
5. Low and high load performance:
 Under low load conditions, there is seldom more than one request for the critical
section present in the system simultaneously.
 Under heavy load conditions, there is always a pending request for the critical section at a
site.
6. Fault tolerance:
The ability of the system to recover from faults.
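The throughput formula can be checked with concrete numbers; the values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical timings, both in seconds.
SD = 0.02   # synchronization delay: gap between one site leaving and the next entering
E = 0.08    # average time a site spends inside the critical section

throughput = 1 / (SD + E)   # CS executions per second
print(throughput)  # 10.0
```

Shrinking either the synchronization delay or the CS execution time raises the throughput, which is why synchronization delay is a key metric when comparing algorithms.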
3.2 LAMPORT'S ALGORITHM
 It is a non-token-based algorithm.

• Every site Si keeps a queue called the request queue, request_queuei, which
contains mutual exclusion requests ordered by their timestamps. Its request set
Ri = {S1, S2, ..., SN} contains the sites to which Si sends its requests.
• Requests for the CS are executed in increasing order of timestamps, and time
is determined by logical clocks.
• The algorithm requires messages to be delivered in FIFO (first-in, first-out) order between
every pair of sites.
 Requesting the critical section:
1. When a site Si wants to enter the CS, it broadcasts a
REQUEST(tsi, i) message to all other sites and places the request on
request_queuei. ((tsi, i) denotes the timestamp of the request.)
2. When a site Sj receives the REQUEST(tsi, i) message from site Si, it places Si's
request on request_queuej and returns a timestamped REPLY message to Si.
 Executing the critical section:
Site Si enters the CS when the following two conditions hold:
1. Si has received a message with timestamp larger than (tsi, i) from all other
sites.
2. Si's request is at the top of request_queuei.
 Releasing the critical section:
1. Site Si, upon exiting the CS, removes its request from the top of its
request queue and broadcasts a timestamped RELEASE message to all
other sites.
2. When a site Sj receives a RELEASE message from site Si, it removes Si's
request from its request queue.
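The three phases above can be sketched as a single-threaded Python simulation in which all messages are delivered instantly and in FIFO order. All class and method names here are illustrative assumptions, not from the text; a real implementation would exchange messages over a network.

```python
import heapq

class Site:
    def __init__(self, sid):
        self.sid = sid
        self.all = []       # shared list of all sites, filled in below
        self.clock = 0      # Lamport logical clock
        self.queue = []     # request_queue of (timestamp, site_id), a min-heap
        self.last_ts = {}   # highest timestamp received from each other site

    def tick(self, ts=0):
        self.clock = max(self.clock, ts) + 1

    def request_cs(self):
        # Broadcast REQUEST(ts, sid) and enqueue it locally.
        self.tick()
        req = (self.clock, self.sid)
        heapq.heappush(self.queue, req)
        for s in self.all:
            if s is not self:
                s.on_request(req)
        return req

    def on_request(self, req):
        # Enqueue the remote request and send back a timestamped REPLY.
        self.tick(req[0])
        heapq.heappush(self.queue, req)
        self.tick()
        self.all[req[1]].on_reply(self.clock, self.sid)

    def on_reply(self, ts, sender):
        self.tick(ts)
        self.last_ts[sender] = max(self.last_ts.get(sender, 0), ts)

    def can_enter(self, req):
        # Condition 1: a message with larger timestamp from every other site.
        # Condition 2: own request at the head of the local request queue.
        others_ok = all(self.last_ts.get(s.sid, 0) > req[0]
                        for s in self.all if s is not self)
        return others_ok and self.queue[0] == req

    def release_cs(self, req):
        # Remove own request and broadcast a timestamped RELEASE.
        self.queue.remove(req)
        heapq.heapify(self.queue)
        self.tick()
        for s in self.all:
            if s is not self:
                s.on_release(self.clock, req)

    def on_release(self, ts, req):
        self.tick(ts)
        self.queue.remove(req)
        heapq.heapify(self.queue)

sites = [Site(i) for i in range(3)]
for s in sites:
    s.all = sites

r3 = sites[2].request_cs()   # S3 requests first (smaller timestamp)
r1 = sites[0].request_cs()   # S1 requests later (larger timestamp)

assert sites[2].can_enter(r3) and not sites[0].can_enter(r1)
sites[2].release_cs(r3)      # after the RELEASE, S1's request is at the head
assert sites[0].can_enter(r1)
```

The final assertions mirror the worked example that follows: the earlier request (S3) enters first, and only after its RELEASE does the next request in timestamp order (S1) become eligible.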

Consider the following example:

Step 1 :-
The system consists of three sites S1, S2, S3.

Step 2 :-
Site S3 initiates a critical section request first by placing the timestamped entry (Ts: 1, 3) in its
request queue and broadcasting the REQUEST message to the other sites, informing them that it
placed its request at timestamp 1.

Step 3 :-
Some time later, site S1 places a request for the critical section in its request queue at timestamp 2
and broadcasts the REQUEST message to all other sites.
Step 4 :-
In Lamport's algorithm, requests for the CS are executed in increasing order of timestamps, with time
determined by logical clocks. In this example, site S3's critical section executes first because it placed
its request at timestamp 1. The other sites send REPLY messages, so S3 enters the critical section
while the remaining sites wait until it completes.

Step 5 :-
After completing its critical section, site S3 broadcasts a RELEASE message to all other sites. The
sites then compare the timestamps in their request queues; the next smallest timestamp was created
by site S1, so S1 enters the critical section next.
 Performance of Lamport's Algorithm

1. For each CS execution, Lamport's algorithm requires (N − 1) REQUEST messages, (N − 1)
REPLY messages, and (N − 1) RELEASE messages.
2. Thus, Lamport's algorithm requires 3(N − 1) messages per CS invocation.
3. The synchronization delay of the algorithm is T, the average message delay.

 Optimization of Lamport's Algorithm

1. Lamport's algorithm can be optimized from 3(N − 1) to 2(N − 1) messages per CS
invocation by suppressing REPLY messages in certain situations.

For example, the REPLY from site S2 to site S1 can be omitted when S2 has already sent its own
REQUEST with a timestamp larger than S1's: since channels are FIFO, that REQUEST reaches S1
after S1's own request and serves the same purpose as a REPLY.
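The message counts can be checked for a concrete system size; N = 5 is an arbitrary illustrative choice.

```python
# Messages per CS execution in Lamport's algorithm for N sites.
N = 5
unoptimized = 3 * (N - 1)   # (N-1) REQUEST + (N-1) REPLY + (N-1) RELEASE
optimized = 2 * (N - 1)     # REPLY messages suppressed where possible
print(unoptimized, optimized)  # 12 8
```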

3.3 Ricart-Agrawala Algorithm

1. The Ricart-Agrawala algorithm assumes the communication channels are
FIFO. The algorithm uses two types of messages: REQUEST and REPLY.
2. A process sends a REQUEST message to all other processes to request
their permission to enter the critical section.
3. A process sends a REPLY message to a process to grant it permission to
enter the CS, based on the timestamp priority of the requests.
4. Processes use Lamport-style logical clocks to assign timestamps to
critical section requests, and the timestamps are used to decide the priority of
requests.
5. Each process pi maintains a request-deferred array, RDi, whose size
is the same as the number of processes in the system.
 Description of the Algorithm
Requesting the critical section:

1. When a site Si wants to enter the CS, it broadcasts a
timestamped REQUEST message to all other sites.
2. When site Sj receives a REQUEST message from site Si, it
sends a REPLY message to site Si if site Sj is neither requesting
nor executing the CS,
3. or if site Sj is requesting and Si's request timestamp is
smaller than site Sj's own request timestamp.
4. Otherwise, the reply is deferred and Sj sets RDj[i] = 1.
 Executing the critical section:
1. Site Si enters the CS after it has received a REPLY message from every site it sent a REQUEST
message to.
 Releasing the critical section:
1. When site Si exits the CS, it sends a REPLY message to every site Sj with
RDi[j] = 1 (the deferred requests) and resets those entries.

Consider the following example:
Step 1 :- The system consists of three sites S1, S2, S3.

Step 2 :- Site S3 initiates a critical section request first by placing the timestamped entry (Ts: 1, 3)
in its request queue and broadcasting the REQUEST message to the other processes, informing them
that it placed its request at timestamp 1.

Step 3 :-
Some time later, site S1 places a request for the CS in its request queue at
timestamp 2 and broadcasts its REQUEST message to all other sites.
Step 4 :-
Site S3 enters the CS after it has received a REPLY message from all
the sites in its request set.

Step 5 :-
After completing its critical section, site S3 sends a REPLY
message to site S1; now S1, holding the next highest-priority request, starts
executing its critical section.
Step 6 :-
When a site sends REPLY messages to all the deferred requests,
the site with the next highest priority receives its last needed REPLY
message and enters the CS.
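The request/defer/release cycle above can be sketched as a single-threaded Python simulation with instant message delivery. The class and method names are illustrative assumptions, not from the text.

```python
class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = 0            # Lamport logical clock
        self.req_ts = None        # (timestamp, pid) of own pending request
        self.replies = set()      # pids that have replied to our request
        self.deferred = [0] * n   # RD array: deferred replies, one slot per process
        self.in_cs = False
        self.peers = []           # all other processes, filled in below

    def request_cs(self):
        # Broadcast a timestamped REQUEST to all other processes.
        self.clock += 1
        self.req_ts = (self.clock, self.pid)
        self.replies = set()
        for p in self.peers:
            p.on_request(self.req_ts, self)

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts[0]) + 1
        # Defer the REPLY if we are in the CS, or if our own pending
        # request has a smaller (higher-priority) timestamp.
        if self.in_cs or (self.req_ts is not None and self.req_ts < ts):
            self.deferred[sender.pid] = 1      # RD_j[i] = 1
        else:
            sender.on_reply(self.pid)

    def on_reply(self, sender_pid):
        self.replies.add(sender_pid)
        if len(self.replies) == len(self.peers):
            self.in_cs = True                  # all REPLY messages received

    def release_cs(self):
        # Leave the CS and send the deferred REPLY messages.
        self.in_cs = False
        self.req_ts = None
        for p in self.peers:
            if self.deferred[p.pid]:
                self.deferred[p.pid] = 0
                p.on_reply(self.pid)

procs = [Process(i, 3) for i in range(3)]
for p in procs:
    p.peers = [q for q in procs if q is not p]

procs[2].request_cs()    # S3 requests first: gets both REPLYs, enters the CS
procs[0].request_cs()    # S1 requests later: S3 defers its REPLY
assert procs[2].in_cs and not procs[0].in_cs
procs[2].release_cs()    # the deferred REPLY is now sent, so S1 enters
assert procs[0].in_cs
```

Note the contrast with Lamport's algorithm: there is no RELEASE message, so only 2(N − 1) messages are needed per CS execution; the deferred REPLY doubles as the release notification.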
