3.1 Mutual exclusion:
Definition: Mutual exclusion ensures that processes running concurrently access shared resources or data in a mutually exclusive way.
Only one process is allowed to execute the critical section (CS) at any given time.
In a distributed system there is no common physical clock, and message delays are unpredictable.
Mutual exclusion cannot be implemented in a distributed system via shared variables
(semaphores) or a local kernel.
Distributed mutual exclusion can therefore only be implemented by message passing.
Distributed Mutual Exclusion Algorithms:
The lack of shared memory in a distributed system makes it difficult to maintain shared
variables for mutual exclusion.
Distributed mutual exclusion algorithms must cope with unpredictable message
delays and incomplete knowledge of the system state.
There are three basic approaches to achieving distributed mutual exclusion:
1. Token-based approach
2. Non-token-based approach
3. Quorum-based approach
Every mutual exclusion algorithm must satisfy the following properties:
1. Safety property: At any instant, only one process can execute the critical section.
2. Liveness property: There is no deadlock and no starvation; two or more sites should never wait
endlessly for messages that will never arrive.
3. Fairness: Each process gets a fair chance to execute the CS. Fairness generally means that CS
execution requests are executed in the order of their arrival in the system, where time is determined
by a logical clock.
Performance Metrics
The primary objective of a mutual exclusion algorithm is to guarantee that only one request accesses the
critical section at a time. Performance is generally measured by the following metrics:
1. Message complexity: The number of messages required per critical section (CS) execution by a
site.
2. Synchronization delay: The time between one site leaving the CS and the next site entering
the CS.
3. Response time:
The time interval a request waits for its CS execution to be over after its request messages have
been sent out
4. System throughput:
The rate at which the system executes requests for the CS:
system throughput = 1 / (SD + E)
where SD is the synchronization delay and E is the average critical section execution time.
(A small numeric sketch follows this list.)
5. Low and High Load Performance:
Under low load conditions, there is seldom more than one request for the critical
section present in the system simultaneously.
Under heavy load conditions, there is always a pending request for the critical section at a
site.
6. Fault tolerance:
The ability of the system to recover from faults.
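A minimal numeric sketch of the throughput formula above, in Python; the values chosen for SD and E are illustrative, not taken from the text:

def system_throughput(sd, e):
    """System throughput = 1 / (SD + E), in CS executions per unit time."""
    return 1.0 / (sd + e)

# e.g. a synchronization delay of 2 ms and an average CS execution time of 8 ms
print(system_throughput(0.002, 0.008))   # -> 100.0 CS executions per second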
3.2. LAMPORT’S ALGORITHM
It is a non-token-based algorithm.
• In a distributed system of n sites, every site Si maintains a request set Ri = {S1, S2, ..., Sn},
containing all sites, and a queue called request_queuei, which contains mutual exclusion
requests ordered by their timestamps.
• Requests for the critical section (CS) are executed in increasing order of timestamps, and time
is determined by logical clocks.
• The algorithm requires messages to be delivered in FIFO (first-in, first-out) order between
every pair of sites.
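The ordering of the request queue can be sketched in Python as follows; this is an illustrative fragment, assuming each request is stored as a (timestamp, site id) pair so that ties on the logical timestamp are broken by site id:

import heapq

request_queue = []                      # request_queue_i at some site Si

# Three requests arrive out of order; S3's request carries the smallest timestamp.
for ts, site in [(2, 1), (1, 3), (3, 2)]:
    heapq.heappush(request_queue, (ts, site))

# Requests are served in increasing (timestamp, site id) order.
print([heapq.heappop(request_queue) for _ in range(3)])
# -> [(1, 3), (2, 1), (3, 2)]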
Requesting the critical section:
1. When a site Si wants to enter the critical section (CS), it broadcasts a
REQUEST(tsi, i) message to all other sites and places the request on
request_queuei. ((tsi, i) denotes the timestamp of the request.)
2. When a site Sj receives the REQUEST(tsi, i) message from site Si, it places site Si's
request on request_queuej and returns a timestamped REPLY message to Si.
Executing the critical section:
Site Si enters the critical section (CS) when the following two conditions hold:
1. Si has received a message with a timestamp larger than (tsi, i) from all other
sites.
2. Si's request is at the top of request_queuei.
Releasing the critical section:
1. Site Si , upon exiting the CS, removes its request from the top of its
request queue and broadcasts a timestamped RELEASE message to all
other sites.
2. When a site Sj receives a RELEASE message from site Si , it removes Si ’s
request from its request queue.
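The three phases above can be put together in a small single-process sketch. This is not a full distributed implementation: message passing is abstracted behind an assumed send(destination, message) callback, FIFO delivery is taken for granted, condition 1 for entering the CS is approximated by a REPLY from every other site, and names such as LamportSite are illustrative only.

import heapq

REQUEST, REPLY, RELEASE = "REQUEST", "REPLY", "RELEASE"

class LamportSite:
    def __init__(self, site_id, all_sites):
        self.id = site_id
        self.clock = 0                                 # Lamport logical clock
        self.others = [s for s in all_sites if s != site_id]
        self.queue = []                                # request_queue_i as a heap of (ts, site)
        self.replies = set()                           # sites heard from since our last request
        self.pending = None                            # our own outstanding request (ts, id)

    def tick(self, received_ts=0):
        # Logical clock rule: advance past any timestamp seen so far.
        self.clock = max(self.clock, received_ts) + 1
        return self.clock

    def request_cs(self, send):
        # Requesting: timestamp the request, enqueue it locally,
        # and broadcast REQUEST(ts_i, i) to all other sites.
        ts = self.tick()
        self.pending = (ts, self.id)
        heapq.heappush(self.queue, self.pending)
        self.replies.clear()
        for s in self.others:
            send(s, (REQUEST, ts, self.id))

    def on_message(self, kind, ts, sender, send):
        self.tick(ts)
        if kind == REQUEST:
            # Place the sender's request on the local queue and return a timestamped REPLY.
            heapq.heappush(self.queue, (ts, sender))
            send(sender, (REPLY, self.tick(), self.id))
        elif kind == REPLY:
            self.replies.add(sender)
        elif kind == RELEASE:
            # Remove the sender's request from the local queue.
            self.queue = [r for r in self.queue if r[1] != sender]
            heapq.heapify(self.queue)

    def can_enter_cs(self):
        # Executing: enter only when (1) every other site has answered (approximated
        # here by a REPLY from each of them) and (2) our request heads the local queue.
        return (self.pending is not None
                and self.replies >= set(self.others)
                and self.queue and self.queue[0] == self.pending)

    def release_cs(self, send):
        # Releasing: drop our own request and broadcast a timestamped RELEASE.
        self.queue = [r for r in self.queue if r[1] != self.id]
        heapq.heapify(self.queue)
        self.pending = None
        for s in self.others:
            send(s, (RELEASE, self.tick(), self.id))

A driver that delivers messages over FIFO channels can call request_cs, on_message, can_enter_cs, and release_cs in turn to reproduce the three-site run described in the steps below.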
Step 1:
Consider an example consisting of three sites S1, S2, and S3.
Step 2:
Site S3 initiates a critical section request first by placing the entry (ts: 1, S3) in its request queue
and broadcasting a REQUEST message to the neighbouring sites, informing them that it placed its
request at timestamp 1.
Step 3:
Some time later, site S1 places its request for the critical section in its request queue at
timestamp 2 and broadcasts a REQUEST message to all the other sites.
Step 4:
In Lamport's algorithm, requests for the critical section (CS) are executed in increasing order of
timestamps, with time determined by logical clocks. In this example, site S3's critical section executes
first because its request was placed at timestamp 1. The other sites send timestamped REPLY messages,
S3 enters the critical section, and the remaining sites wait until it completes.
Step 5:
After completing its critical section, site S3 broadcasts a RELEASE message to all the other
sites. The remaining timestamps in the request queue are then compared; the next smallest
timestamp was created by site S1, so S1 enters its critical section execution next.
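The outcome of Steps 1 to 5 can be checked with a couple of lines of Python; the pairs below simply mirror the (timestamp, site) entries from the example:

requests = [(2, "S1"), (1, "S3")]                # requests from Steps 2 and 3

for ts, site in sorted(requests):                # increasing timestamp order
    print(f"{site} executes the CS (request timestamp {ts})")
# S3 executes the CS (request timestamp 1)
# S1 executes the CS (request timestamp 2)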
Performance of Lamport's Algorithm
The number of messages per CS execution can be reduced by omitting REPLY messages that are not
needed; for example, the REPLY message from site S2 to site S1 can be dropped, because messages
between every pair of sites are delivered in FIFO order and already convey the required timestamp
information. The optimized run below illustrates this (a short message-count sketch follows Step 6).
Step 2:
Site S3 initiates the critical section request first by placing the entry (ts: 1, S3) in its request queue
and broadcasting a REQUEST message to the neighbouring sites, informing them that it placed its
request at timestamp 1.
Step 3:
Some time later, site S1 places a request for the CS in its request queue at
timestamp 2 and broadcasts its REQUEST message to all the other sites.
Step 4:
Site S3 enters the CS after it has received REPLY messages from all
the sites in its request set.
Step 5:
After site S3 completes its critical section, it sends a REPLY message to site S1.
Site S1, holding the request with the next highest priority, then starts executing its
critical section.
Step 6:
When a site sends out REPLY messages to all the deferred requests,
the site with the next highest priority receives the last needed REPLY
message and enters the CS.
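As a rough sketch of the message counts involved (a standard result for Lamport's algorithm, with N the number of sites): the basic algorithm needs (N-1) REQUEST, (N-1) REPLY, and (N-1) RELEASE messages per CS execution, and suppressing unnecessary REPLY messages as above reduces the count toward 2(N-1):

def lamport_messages(n):
    # (N-1) REQUEST + (N-1) REPLY + (N-1) RELEASE per CS execution
    return 3 * (n - 1)

def optimized_lower_bound(n):
    # best case when every REPLY can be suppressed
    return 2 * (n - 1)

print(lamport_messages(3), optimized_lower_bound(3))   # -> 6 4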