
Introduction to Deadlock

System Model:
A system model or structure consists of a fixed number of resources to be distributed among a number of competing processes.

Resources: CPU, memory, files, devices, semaphores, etc.


1. Resources are divided into several types.
2. Each type has one or more identical instances.

Processes:
1. Request the resource. (If the resource cannot be granted immediately, the process waits until it can acquire it.)
2. Use the resource.
3. Release the resource.

Every process needs some resources to complete its execution, and resources are granted in the following sequence:

1. The process requests some resource.
2. The OS grants the resource if it is available; otherwise the process waits.
3. The process uses the resource and releases it on completion.

A deadlock is a situation where each process waits for a resource that is assigned to some other process. In this situation, none of the processes gets executed, since the resource it needs is held by another process which is itself waiting for yet another resource to be released.

A deadlock happens in an operating system when two or more processes need, in order to complete their execution, some resource that is held by another of these processes.



In the above diagram, process 1 holds resource 1 and needs to acquire resource 2. Similarly, process 2 holds resource 2 and needs to acquire resource 1. Process 1 and process 2 are in deadlock, as each of them needs the other's resource to complete its execution but neither is willing to relinquish its own resource.

Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2, so P1 halts its execution since it cannot complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops because it cannot continue without R3. P3 then demands R1, which is being used by P1, so P3 also stops its execution.

In this scenario a cycle is formed among the three processes. None of the processes makes progress; they are all waiting. The computer becomes unresponsive since all of these processes are blocked.

Difference between Starvation and Deadlock


1. Deadlock is a situation where no process proceeds: every process in the set is blocked. Starvation is a situation where low-priority processes are blocked while high-priority processes proceed.
2. Deadlock is an infinite wait. Starvation is a long wait, but not an infinite one.
3. Every deadlock is also a starvation, but every starvation need not be a deadlock.
4. In deadlock, the requested resource is held by another blocked process. In starvation, the requested resource is continuously used by higher-priority processes.
5. Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously. Starvation occurs due to uncontrolled priority and resource management.

Necessary conditions for Deadlocks

1. Mutual Exclusion

A resource can be shared only in a mutually exclusive manner; that is, two processes cannot use the same resource at the same time.

2. Hold and Wait

A process waits for some resources while holding another resource at the same time.

3. No preemption

A resource, once allocated to a process, cannot be taken away from it forcibly; the process releases it only voluntarily, after it has finished using it.

4. Circular Wait

All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process

Coffman Conditions
A deadlock occurs if the four Coffman conditions hold true. But these conditions are not mutually
exclusive.

The Coffman conditions are given as follows −

• Mutual Exclusion
There should be a resource that can only be held by one process at a time. In the
diagram below, there is a single instance of Resource 1 and it is held by Process 1 only.

• Hold and Wait


A process can hold multiple resources and still request more resources from other
processes which are holding them. In the diagram given below, Process 2 holds
Resource 2 and Resource 3 and is requesting the Resource 1 which is held by Process
1.

• No Preemption
A resource cannot be preempted from a process by force. A process can only release a
resource voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from
Process 1. It will only be released when Process 1 relinquishes it voluntarily after its
execution is complete.

• Circular Wait
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource
held by the first process. This forms a circular chain. For example: Process 1 is allocated
Resource2 and it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1
and it is requesting Resource 2. This forms a circular wait loop.


Deadlock Detection
A deadlock can be detected by a resource scheduler as it keeps track of all the resources that
are allocated to different processes. After a deadlock is detected, it can be resolved using the
following methods −

• All the processes that are involved in the deadlock are terminated. This is not a good
approach as all the progress made by the processes is destroyed.
• Resources can be preempted from some processes and given to others till the deadlock is
resolved.
Deadlock Prevention
It is very important to prevent a deadlock before it can occur. The system therefore checks every resource request before granting it, to make sure that it cannot lead to deadlock. If there is even a slight chance that granting a request may lead to deadlock in the future, it is never allowed to proceed.

Deadlock Avoidance
It is better to avoid a deadlock than to take measures after the deadlock has occurred. The wait-for graph can be used for deadlock avoidance, but this is practical only for smaller systems, since it can get quite complex when the number of processes and resources is large.
Strategies for handling Deadlock

Following are the four strategies for handling deadlocks:


1. Deadlock Ignorance
2. Deadlock prevention
3. Deadlock avoidance
4. Deadlock detection and recovery

1. Deadlock Ignorance
Deadlock Ignorance is the most widely used approach among all the mechanisms. It is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs; it simply ignores deadlock. This approach is best suited to a single end-user system where the user uses the machine only for browsing and other routine work.
There is always a trade-off between correctness and performance. Operating systems like Windows and Linux mainly focus on performance. The performance of the system decreases if it runs a deadlock-handling mechanism all the time; if deadlock happens, say, 1 time out of 100, then it is unnecessary to pay that cost continuously.
In these systems, the user simply restarts the computer in the case of a deadlock. Windows and Linux mainly use this approach.

2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait hold simultaneously. If it is possible to violate one of these four conditions at all times, then deadlock can never occur in the system.
The idea behind the approach is simple: defeat one of the four conditions. There can, however, be a big argument about its practical implementation in the system. We will discuss it later in detail.

3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. The process continues only as long as the system remains in a safe state; once an allocation would move the system into an unsafe state, the OS has to back off from that step.
In simple words, the OS reviews each allocation so that the allocation does not cause a deadlock in the system.
We will discuss deadlock avoidance later in detail.

4. Deadlock detection and recovery


This approach lets the processes fall into deadlock and then periodically checks whether a deadlock has occurred in the system. If it has, the OS applies some recovery method to the system to get rid of the deadlock.

Deadlock Prevention

If we picture deadlock as a table standing on four legs, then the four legs are the four conditions which, when they occur simultaneously, cause the deadlock. If we break one of the legs, the table falls. The same happens with deadlock: if we can violate one of the four necessary conditions and not let them occur together, then we can prevent the deadlock.
Let's see how we can prevent each of the conditions.

1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource can never be used by more than one process simultaneously. That is fair enough, but it is also the main reason behind deadlock: if a resource could be used by more than one process at the same time, no process would ever have to wait for it.
However, if we can stop resources from behaving in a mutually exclusive manner, then deadlock can be prevented.
Spooling
For a device like a printer, spooling can work. There is a memory area associated with the printer which stores the jobs submitted by each process. Later, the printer collects all the jobs and prints each of them in FCFS order. With this mechanism, a process does not have to wait for the printer; it can continue whatever it was doing and collect the output later, when it is produced.
Although spooling can be an effective way to get around mutual exclusion, it suffers from two problems:
1. It cannot be applied to every resource.
2. After some time, a race may arise between the processes for space in the spool.

We cannot force a resource to be used by more than one process at the same time, since that would not be correct and could cause serious problems. Therefore, we cannot practically violate mutual exclusion for a process.

2. Hold and Wait

The hold and wait condition holds when a process holds one resource while waiting for some other resource to complete its task. Deadlock occurs because several processes may each be holding one resource and waiting for another in a cyclic order.
We therefore need a mechanism by which a process either doesn't hold any resource or doesn't wait. That means a process must be assigned all the necessary resources before its execution starts, and must not wait for any resource once its execution has begun.
!(Hold and wait) = !hold or !wait (the negation of hold-and-wait is: either you don't hold or you don't wait)
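As an illustration, the following is a minimal C sketch of this all-or-nothing idea, assuming two hypothetical resources modelled as pthread mutexes r1 and r2 (the names and the trylock-based back-off are only illustrative, not any standard API for this):

#include <stdio.h>
#include <pthread.h>
#include <sched.h>

/* Two hypothetical resources modelled as mutexes. */
pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both resources or none: never hold one while waiting for the other. */
void acquire_all(void)
{
    for (;;) {
        pthread_mutex_lock(&r1);
        if (pthread_mutex_trylock(&r2) == 0)
            return;                 /* got both, safe to proceed */
        pthread_mutex_unlock(&r1);  /* r2 is busy: give r1 back  */
        sched_yield();              /* let other processes run   */
    }
}

void release_all(void)
{
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}

int main(void)
{
    acquire_all();
    printf("holding both resources, doing work\n");
    release_all();
    return 0;
}

Because the process releases r1 whenever it fails to get r2, it never holds one resource while waiting for another, so hold-and-wait cannot arise between two such processes.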
This could be implemented if a process declared all of its resources up front. In practice, however, it can rarely be done, because a process usually cannot determine all the resources it will need before it starts.
A process is a set of instructions executed by the CPU, and each instruction may demand multiple resources at multiple times; the need cannot be fixed by the OS in advance. The problems with this approach are:
1. It is practically not possible.
2. The possibility of starvation increases, because some process may hold a resource for a very long time.
3. No Preemption
Deadlock arises partly because a resource cannot be taken back once it has been handed out. If we could take a resource away from a process that is causing deadlock, we could prevent the deadlock. This is not a good approach in general, because if we take away a resource that a process is actively using, all the work it has done so far may become inconsistent.
Consider a printer being used by some process. If we take the printer away from that process and assign it to another, all the data printed so far can become inconsistent and useless, and the process cannot resume printing from where it left off, which causes performance inefficiency.

4. Circular Wait
To violate circular wait, we can assign a priority number (an ordering) to each resource. A process cannot request a resource with a lower number than one it already holds. This ensures that no cycle of waiting processes can ever be formed.

Among all the methods, violating Circular wait is the only approach that can be implemented
practically.
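A minimal sketch of this resource-ordering idea in C with pthreads is shown below; the two mutexes resource1 and resource2 are hypothetical resources, and the only rule is that every thread locks them in the same global order:

#include <stdio.h>
#include <pthread.h>

/* Hypothetical resources, numbered 1 and 2; every thread that needs both
 * must lock them in increasing order of their number. */
pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    long id = (long)arg;
    /* Same global order in every thread: resource1 before resource2. */
    pthread_mutex_lock(&resource1);
    pthread_mutex_lock(&resource2);
    printf("thread %ld holds both resources\n", id);
    pthread_mutex_unlock(&resource2);
    pthread_mutex_unlock(&resource1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

Because both threads respect the same order, the situation "thread 1 holds resource1 and waits for resource2 while thread 2 holds resource2 and waits for resource1" can never occur.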

Deadlock avoidance
In deadlock avoidance, the request for any resource will be granted if the resulting state of the
system doesn't cause deadlock in the system. The state of the system will continuously be
checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to complete its execution.
The simplest and most useful approach states that the process should declare the maximum
number of resources of each type it may ever need. The Deadlock avoidance algorithm
examines the resource allocations so that there can never be a circular wait condition.
Safe and Unsafe States
The resource allocation state of a system can be defined by the instances of available and
allocated resources, and the maximum instance of the resources demanded by the processes.

A state of a system recorded at some random time is shown below.
Resources Assigned
Process Type 1 Type 2 Type 3 Type 4

A 3 0 2 2

B 0 0 1 1
C 1 1 1 0

D 2 1 4 0

Resources still needed


Process Type 1 Type 2 Type 3 Type 4

A 1 1 0 0

B 0 1 1 2

C 1 2 1 0

D 2 1 1 2

1. E = (7 6 8 4)
2. P = (6 2 8 3)
3. A = (1 4 0 1)

The two tables and the vectors E, P and A describe the resource allocation state of a system with 4 processes and 4 types of resources. Table 1 shows the instances of each resource assigned to each process, and Table 2 shows the instances of each resource that each process still needs.

Vector E is the representation of total instances of each resource in the system. Vector P
represents the instances of resources that have been assigned to processes. Vector A
represents the number of resources that are not in use.
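As a quick check, the following few lines of C recompute P as the column sums of the allocation table above and A as E - P; the array names are only illustrative:

#include <stdio.h>

int main(void)
{
    /* Allocation table above: 4 processes x 4 resource types. */
    int alloc[4][4] = { {3,0,2,2}, {0,0,1,1}, {1,1,1,0}, {2,1,4,0} };
    int E[4] = {7, 6, 8, 4};          /* total instances of each type */
    int P[4] = {0, 0, 0, 0}, A[4];

    for (int j = 0; j < 4; j++) {
        for (int i = 0; i < 4; i++)
            P[j] += alloc[i][j];      /* instances currently assigned */
        A[j] = E[j] - P[j];           /* instances still unused       */
    }
    printf("P = (%d %d %d %d)\n", P[0], P[1], P[2], P[3]);
    printf("A = (%d %d %d %d)\n", A[0], A[1], A[2], A[3]);
    return 0;
}

Running this prints P = (6 2 8 3) and A = (1 4 0 1), matching the vectors above.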
A state of the system is called safe if the system can allocate all the resources requested by all
the processes without entering into deadlock.
If the system cannot fulfill the request of all processes then the state of the system is
called unsafe.
The key to the deadlock avoidance approach is that when a request is made for resources, it must be approved only if the resulting state is also a safe state.
Resource Allocation Graph
The resource allocation graph is the pictorial representation of the state of a system. As its name
suggests, the resource allocation graph is the complete information about all the processes
which are holding some resources or waiting for some resources.
It also contains the information about all the instances of all the resources whether they are
available or being used by the processes.
In Resource allocation graph, the process is represented by a Circle while the Resource is
represented by a rectangle. Let's see the types of vertices and edges in detail.

Vertices are mainly of two types, Resource and process. Each of them will be represented by a
different shape. Circle represents process while rectangle represents resource. A resource can
have more than one instance. Each instance will be represented by a dot inside the rectangle.

Edges in RAG are also of two types, one represents assignment and other represents the wait
of a process for a resource. The above image shows each of them.
A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the resource and the head is attached to the process.
A process is shown as waiting for a resource if the tail of an arrow is attached to the process
while the head is pointing towards the resource.

Example
Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, each with a single instance.
According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3 is waiting for both R1 and R2.
The graph is deadlock-free since no cycle is formed in it.

Deadlock Detection using RAG


If a cycle is formed in a resource allocation graph in which every resource has a single instance, then the system is deadlocked.
In the case of a resource allocation graph with multi-instance resource types, a cycle is a necessary condition for deadlock but not a sufficient one.
The following example contains three processes P1, P2, P3 and three resources R1, R2, R3, each with a single instance.

If we analyze the graph, we find that a cycle is formed; since every resource has a single instance, the system satisfies all four conditions of deadlock and is therefore deadlocked.

Allocation Matrix
An allocation matrix can be formed from the resource allocation graph of a system. In the allocation matrix, an entry is made for each resource that is assigned. For example, in the following matrix an entry is made in the row of P1 and the column of R3, since R3 is assigned to P1.
Process R1 R2 R3

P1 0 0 1

P2 1 0 0

P3 0 1 0

Request Matrix
In the request matrix, an entry is made for each resource that is requested. In the following example, P1 needs R1, therefore an entry is made in the row of P1 and the column of R1.
Process R1 R2 R3

P1 1 0 0

P2 0 1 0

P3 0 0 1

Avail = (0, 0, 0)
No resource is currently available in the system, and no process is about to release one. Each process needs at least one more resource to complete, so every process keeps holding what it has.
We cannot fulfil the demand of even one process with the available resources, therefore the system is deadlocked, as determined earlier when we detected a cycle in the graph.
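A small C sketch of this detection step, using the allocation matrix, request matrix and Avail vector above, is given below (the variable names are only illustrative). It repeatedly looks for a process whose outstanding request can be satisfied, pretends that process finishes and returns its allocation, and finally reports every process that could never finish:

#include <stdio.h>

#define NP 3  /* processes P1..P3 */
#define NR 3  /* resources R1..R3 */

int main(void)
{
    /* Allocation and Request matrices from the example above. */
    int alloc[NP][NR]   = { {0,0,1}, {1,0,0}, {0,1,0} };
    int request[NP][NR] = { {1,0,0}, {0,1,0}, {0,0,1} };
    int avail[NR] = {0, 0, 0};
    int finished[NP] = {0};

    /* Repeatedly look for a process whose outstanding request can be met;
     * pretend it runs to completion and returns its allocation. */
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < NP; i++) {
            if (finished[i]) continue;
            int ok = 1;
            for (int j = 0; j < NR; j++)
                if (request[i][j] > avail[j]) ok = 0;
            if (ok) {
                for (int j = 0; j < NR; j++)
                    avail[j] += alloc[i][j];
                finished[i] = 1;
                progress = 1;
            }
        }
    }
    for (int i = 0; i < NP; i++)
        if (!finished[i])
            printf("P%d is deadlocked\n", i + 1);
    return 0;
}

With Avail = (0, 0, 0) no process can ever proceed, so the program reports P1, P2 and P3 as deadlocked, in agreement with the cycle found in the graph.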

Deadlock Detection and Recovery
In this approach, the OS does not apply any mechanism to avoid or prevent deadlocks; it accepts that deadlock may occur. To get rid of deadlocks, the OS periodically checks the system for deadlock. If it finds one, it recovers the system using some recovery technique.
The main task of the OS is detecting the deadlocks. The OS can detect the deadlocks with the
help of Resource allocation graph.

In single instanced resource types, if a cycle is being formed in the system then there will
definitely be a deadlock. On the other hand, in multiple instanced resource type graph,
detecting a cycle is not just enough. We have to apply the safety algorithm on the system by
converting the resource allocation graph into the allocation matrix and request matrix.

In order to recover the system from deadlock, the OS acts either on resources or on processes.

For Resources

Preempt the resource

We can snatch one of the resources from the owner of the resource (process) and give it to the
other process with the expectation that it will complete the execution and will release this
resource sooner. Well, choosing a resource which will be snatched is going to be a bit difficult.

Rollback to a safe state

System passes through various states to get into the deadlock state. The operating system can
rollback the system to the previous safe state. For this purpose, OS needs to implement check
pointing at every state.

The moment, we get into deadlock, we will rollback all the allocations to get into the previous
safe state.
For Process

Kill a process

Killing a process can solve our problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process which has done the least amount of work so far.

Kill all processes

This is not an advisable approach, but it can be used if the problem becomes very serious. Killing all the processes leads to inefficiency in the system, because all of them will have to execute again from the beginning.


Banker's Algorithm in Operating System

The Banker's Algorithm is used to avoid deadlock and to allocate resources safely to each process in the computer system. Before any allocation is made, it examines whether the resulting state would still be safe, and only then allows the allocation to proceed. In this way it helps the operating system share the resources among all the processes without running into deadlock.

The algorithm is named after the way a banker decides whether a loan can safely be sanctioned: the bank never hands out cash in a way that could leave it unable to satisfy the needs of all of its customers. In this section, we will learn the Banker's Algorithm in detail and solve problems based on it.

To understand the Banker's Algorithm, first consider a real-world analogy.

Suppose a bank has 'n' account holders and the total money in the bank is 'T'. When an account holder applies for a loan, the bank first subtracts the loan amount from the cash it has and checks that the remaining cash is still enough to meet the needs of its other customers; only then does it approve the loan. This way, even if other customers apply for loans or withdraw money later, the bank can keep operating without any disruption to the functioning of the banking system.

The operating system works similarly. When a new process enters the system, it must declare the maximum number of instances of each resource it may ever request. Based on this information, the operating system decides in which sequence processes should be executed, or made to wait, so that no deadlock occurs. The Banker's Algorithm is therefore known as a deadlock-avoidance algorithm.



Advantages
Following are the essential characteristics of the Banker's Algorithm:
1. It works with multiple resource types and meets the resource requirements of each process.
2. Each process provides information to the operating system about its upcoming resource requests, the number of resources, and how long the resources will be held.
3. It helps the operating system manage and control the resource requests of every process in the computer system.
4. The algorithm keeps a Max attribute that records the maximum number of resources each process can hold in the system.
Disadvantages
1. It requires a fixed number of processes; no additional process can be started while the algorithm is executing.
2. It does not allow a process to change its maximum need while it is running.
3. Each process has to know and state its maximum resource requirement in advance.
4. Every request is granted within a finite time, but this time limit may be long (in the classical formulation, as long as one year).

When working with the Banker's Algorithm, the system needs to know three things:
1. How much of each resource each process could possibly request. This is denoted by the [MAX] request.
2. How much of each resource each process is currently holding. This is denoted by the [ALLOCATED] resources.
3. How much of each resource the system currently has available. This is denoted by the [AVAILABLE] resources.



Following are the important data structures used in the Banker's Algorithm. Suppose n is the number of processes and m is the number of resource types in the computer system.
1. Available: an array of length m that records how many instances of each resource type are free. Available[j] = K means that K instances of resource type R[j] are available in the system.
2. Max: an n x m matrix that records the maximum demand of each process. Max[i][j] = K means that process P[i] may request at most K instances of resource type R[j].
3. Allocation: an n x m matrix that records the resources currently allocated to each process. Allocation[i][j] = K means that process P[i] is currently allocated K instances of resource type R[j].
4. Need: an n x m matrix that records the remaining resource need of each process. Need[i][j] = K means that process P[i] may still require K more instances of resource type R[j] to complete its work. Need[i][j] = Max[i][j] - Allocation[i][j].
5. Finish: a vector of length n containing a Boolean value (true/false) for each process, indicating whether the process can be allocated its remaining requested resources and, after finishing its task, release them all.



The Banker's Algorithm is the combination of the safety algorithm and the resource-request algorithm, which together control the processes and avoid deadlock in the system.

Safety Algorithm
The safety algorithm checks whether or not the system is in a safe state, i.e., whether a safe sequence exists:
1. Let Work and Finish be vectors of length m and n respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, 2, ..., n - 1.

2. Find an index i such that both:
Need[i] <= Work
Finish[i] == false

If no such i exists, go to step 4.

3. Work = Work + Allocation[i] // process i finishes and releases its resources
Finish[i] = true
Go back to step 2 to look for the next such process.

4. If Finish[i] == true for all i, the system is in a safe state.



Resource Request Algorithm
The resource-request algorithm checks how the system will behave when a process makes a request for resources.
Let Request[i] be the request array for process P[i]. Request[i][j] = K means that process P[i] wants K instances of resource type R[j].

1. If the number of requested instances of each type is at most the process's remaining need, go to step 2; otherwise raise an error, because process P[i] has exceeded its maximum claim:
If Request(i) <= Need(i)
Go to step 2;

2. If the number of requested instances of each type is at most what is currently available, go to step 3; otherwise process P[i] must wait, since the resources are not available:
If Request(i) <= Available
Go to step 3;
Else P[i] must wait.

3. Pretend the requested resources are allocated to the process by changing the state:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)

If the resulting resource-allocation state is safe, the resources are actually allocated to P[i]. If the new state is unsafe, P[i] must wait for Request(i) and the old resource-allocation state is restored.
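To see how the safety algorithm and the resource-request algorithm fit together, here is a compact, self-contained C sketch. It uses the allocation, max and available values of the five-process example that follows; the function names is_safe and try_request, and the array names, are only illustrative:

#include <stdio.h>

#define NP 5   /* processes P1..P5       */
#define NR 3   /* resource types A, B, C */

int alloc[NP][NR] = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
int maxm[NP][NR]  = { {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3} };
int avail[NR]     = { 3, 3, 2 };
int need[NP][NR];

/* Safety algorithm: fill seq with a safe order, return 1 if the state is safe. */
int is_safe(int seq[NP])
{
    int work[NR], finish[NP] = {0}, count = 0;
    for (int j = 0; j < NR; j++) work[j] = avail[j];

    while (count < NP) {
        int found = 0;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) ok = 0;
            if (ok) {                          /* P[i] can run to completion */
                for (int j = 0; j < NR; j++)
                    work[j] += alloc[i][j];    /* ...and release everything  */
                finish[i] = 1;
                seq[count++] = i;
                found = 1;
            }
        }
        if (!found) return 0;                  /* no process can proceed     */
    }
    return 1;
}

/* Resource-request algorithm: process p asks for req[]. */
int try_request(int p, int req[NR])
{
    int seq[NP];
    for (int j = 0; j < NR; j++)
        if (req[j] > need[p][j] || req[j] > avail[j])
            return 0;                          /* exceeds claim or must wait */
    for (int j = 0; j < NR; j++) {             /* pretend to grant it        */
        avail[j] -= req[j];
        alloc[p][j] += req[j];
        need[p][j] -= req[j];
    }
    if (is_safe(seq)) return 1;                /* safe: keep the allocation  */
    for (int j = 0; j < NR; j++) {             /* unsafe: roll it back       */
        avail[j] += req[j];
        alloc[p][j] -= req[j];
        need[p][j] += req[j];
    }
    return 0;
}

int main(void)
{
    int seq[NP], req[NR] = {1, 0, 2};

    for (int i = 0; i < NP; i++)
        for (int j = 0; j < NR; j++)
            need[i][j] = maxm[i][j] - alloc[i][j];

    if (is_safe(seq)) {
        printf("Safe sequence:");
        for (int i = 0; i < NP; i++) printf(" P%d", seq[i] + 1);
        printf("\n");
    }
    printf("Request (1,0,2) by P1: %s\n",
           try_request(0, req) ? "granted" : "must wait");
    return 0;
}

For the data used here the program prints the safe sequence P2 P4 P5 P1 P3 and reports that the request (1, 0, 2) by P1 would leave the state unsafe, so P1 must wait.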



Example: Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances and C has 7 instances.

Process    Allocation (A B C)    Max (A B C)
P1         0 1 0                 7 5 3
P2         2 0 0                 3 2 2
P3         3 0 2                 9 0 2
P4         2 1 1                 2 2 2
P5         0 0 2                 4 3 3

Available (A B C) = 3 3 2

Answer the following questions using the Banker's Algorithm:

1. What is the content of the Need matrix?
2. Determine whether the system is safe or not.
3. If process P1 makes a resource request of (1, 0, 2), can the system accept this request immediately?
Ans. 1: The content of the Need matrix is computed as follows:
Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = (7, 4, 3)
Need for P2: (3, 2, 2) - (2, 0, 0) = (1, 2, 2)
Need for P3: (9, 0, 2) - (3, 0, 2) = (6, 0, 0)
Need for P4: (2, 2, 2) - (2, 1, 1) = (0, 1, 1)
Need for P5: (4, 3, 3) - (0, 0, 2) = (4, 3, 1)

Process    Need (A B C)
P1         7 4 3
P2         1 2 2
P3         6 0 0
P4         0 1 1
P5         4 3 1

Hence, we have obtained the Need matrix.


Ans. 2: Apply the Banker's Algorithm:
Available Resources of A, B and C are 3, 3, and 2.
Now we check whether the remaining need of each process can be met from the available resources.
Step 1: For Process P1:
Need <= Available
7, 4, 3 <= 3, 3, 2 condition is false.
So, we examine another process, P2.
Step 2: For Process P2:
Need <= Available
1, 2, 2 <= 3, 3, 2 condition true
New available = available + Allocation
(3, 3, 2) + (2, 0, 0) => 5, 3, 2
Similarly, we examine another process P3.
Step 3: For Process P3:
P3 Need <= Available
6, 0, 0 < = 5, 3, 2 condition is false.
Similarly, we examine another process, P4.
Step 4: For Process P4:
P4 Need <= Available
0, 1, 1 <= 5, 3, 2 condition is true
New Available resource = Available + Allocation
5, 3, 2 + 2, 1, 1 => 7, 4, 3
Similarly, we examine another process P5.
Step 5: For Process P5:
P5 Need <= Available
4, 3, 1 <= 7, 4, 3 condition is true
New available resource = Available + Allocation
7, 4, 3 + 0, 0, 2 => 7, 4, 5



Now, we again examine the remaining processes P1 and P3.
Step 6: For Process P1:
P1 Need <= Available
7, 4, 3 <= 7, 4, 5 condition is true
New Available Resource = Available + Allocation
7, 4, 5 + 0, 1, 0 => 7, 5, 5
So, we examine the remaining process P3.
Step 7: For Process P3:
P3 Need <= Available
6, 0, 0 <= 7, 5, 5 condition is true
New Available Resource = Available + Allocation
7, 5, 5 + 3, 0, 2 => 10, 5, 7
Hence, by executing the Banker's (safety) algorithm we find that the system is in a safe state, with safe sequence P2, P4, P5, P1, P3.
Ans. 3: To decide on the request (1, 0, 2) by P1, checking Request <= Available alone is not enough; the full resource-request algorithm must be applied. Request <= Need holds, since (1, 0, 2) <= (7, 4, 3), and Request <= Available holds, since (1, 0, 2) <= (3, 3, 2). Pretending to grant the request gives Available = (2, 3, 0), Allocation for P1 = (1, 1, 2) and Need for P1 = (6, 3, 1). Running the safety algorithm on this new state, no process has a remaining need that fits within (2, 3, 0), so the state is unsafe. Therefore the system cannot accept the request immediately; P1 must wait and the old allocation state is restored.
Process Synchronization: Critical Section Problem in OS

What is Process Synchronization?


Process synchronization is the task of coordinating the execution of processes in such a way that no two processes access the same shared data and resources at the same time.
It is especially needed in a multi-process system, when multiple processes run together and more than one process tries to gain access to the same shared resource or data at the same time.
This can lead to inconsistency of the shared data: a change made by one process is not necessarily reflected when other processes access the same shared data. To avoid this kind of data inconsistency, the processes need to be synchronized with each other.

How Process Synchronization Works?
For example, suppose process A is changing data in a memory location while another process B is trying to read data from the same memory location. There is a high probability that the data read by the second process will be erroneous.
Sections of a Program
Here are the four essential sections of a program with respect to the critical section:
• Entry Section: the part of the process which decides whether a particular process may enter.
• Critical Section: this part allows one process at a time to enter and modify the shared variable.
• Exit Section: the exit section allows the other processes that are waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution is removed through this section.
• Remainder Section: all other parts of the code, which are not in the critical, entry or exit sections, are known as the remainder section.



On the basis of synchronization, processes are categorized as one of the following two types:
• Independent Process: execution of one process does not affect the execution of other processes.
• Cooperative Process: execution of one process affects the execution of other processes.
The process synchronization problem arises with cooperative processes, because resources are shared among them.

Race Condition
When more than one process executes the same code, or accesses the same memory or shared variable, the resulting value of the shared variable may be wrong: each process is effectively racing to have its result be the one that sticks. This situation is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section. It happens when the result of multiple threads executing in the critical section differs according to the order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.

Critical Section Problem


Critical section is a code segment that can be accessed by only one process at a time. Critical
section contains shared variables which need to be synchronized to maintain consistency of
data variables.


In the entry section, the process requests for entry in the Critical Section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: if a process is executing in its critical section, then no other process is allowed to execute in the critical section.
• Progress: if no process is executing in the critical section and other processes are waiting outside it, then only those processes that are not executing in their remainder section can participate in deciding which process will enter the critical section next, and this selection cannot be postponed indefinitely.
• Bounded Waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical section
problem. In Peterson’s solution, we have two shared variables:
• boolean flag[i]: initialized to FALSE; initially no process is interested in entering the critical section.
• int turn: the process whose turn it is to enter the critical section.
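A minimal sketch of Peterson's solution for two threads is shown below. C11 atomics (_Atomic) are used here so that the compiler and CPU do not reorder the flag and turn accesses; the worker threads with IDs 0 and 1 and the shared counter are only illustrative:

#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

/* Shared variables of Peterson's solution for two processes 0 and 1. */
_Atomic int flag[2] = {0, 0};   /* flag[i] = 1: process i wants to enter */
_Atomic int turn = 0;           /* whose turn it is to wait              */

int counter = 0;                /* shared data protected by the solution */

void enter_critical(int i)
{
    int other = 1 - i;
    flag[i] = 1;                /* announce interest                */
    turn = other;               /* politely let the other go first  */
    while (flag[other] && turn == other)
        ;                       /* busy wait                        */
}

void exit_critical(int i)
{
    flag[i] = 0;                /* no longer interested             */
}

void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        enter_critical(i);
        counter++;              /* critical section                 */
        exit_critical(i);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}

Each thread announces its interest, gives the turn to the other, and busy-waits only while the other thread is interested and it is the other thread's turn, which is exactly the behaviour described by the two shared variables above.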



Peterson’s Solution preserves all three conditions :
• Mutual Exclusion is assured as only one process can access the critical section at any
time.
• Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
• Bounded Waiting is preserved as every process gets a fair chance.

Disadvantages of Peterson’s Solution


• It involves Busy waiting
• It is limited to 2 processes.



TestAndSet
TestAndSet is a hardware-supported solution to the synchronization problem. In TestAndSet, we have a shared lock variable which can take one of two values:
0 – unlocked
1 – locked
Before entering the critical section, a process checks the lock. If it is locked, the process keeps waiting until the lock becomes free; if it is not locked, the process takes the lock (sets it to 1) and executes its critical section.

In TestAndSet, mutual exclusion and progress are preserved, but bounded waiting is not.
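A sketch of a TestAndSet-style spin lock in C is shown below; it uses the C11 atomic_flag type, whose atomic_flag_test_and_set operation atomically sets the flag and returns its previous value, which is the hardware behaviour the text describes. The worker threads and shared counter are only illustrative:

#include <stdio.h>
#include <pthread.h>
#include <stdatomic.h>

/* Shared lock variable: clear = unlocked, set = locked. */
atomic_flag lock = ATOMIC_FLAG_INIT;
int counter = 0;

void acquire(void)
{
    /* Atomically set the flag and get its old value: keep spinning
     * while the old value says "already locked". */
    while (atomic_flag_test_and_set(&lock))
        ;  /* busy wait */
}

void release(void)
{
    atomic_flag_clear(&lock);   /* set the lock back to unlocked */
}

void *worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        acquire();
        counter++;              /* critical section */
        release();
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}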

Semaphores

A semaphore is a signaling mechanism and a thread that is waiting on a semaphore


can be signaled by another thread. This is different than a mutex as the mutex can be
signaled only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process
synchronization. A Semaphore is an integer variable, which can be accessed only
through two operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting
Semaphores
• Binary Semaphores: they can take only the values 0 or 1. They are also known as
mutex locks, since they can provide mutual exclusion. All processes share the same
mutex semaphore, which is initialized to 1. A process performs wait(), which changes
the semaphore from 1 to 0, and then enters its critical section. When it completes its
critical section, it performs signal(), resetting the semaphore to 1 so that some other
process can enter its critical section.
• Counting Semaphores: they can take any non-negative value and are not restricted
to 0 and 1. They can be used to control access to a resource that has a limit on the
number of simultaneous accesses. The semaphore is initialized to the number of
instances of the resource. Whenever a process wants to use the resource, it checks
that the number of remaining instances is more than zero, i.e., that an instance is
available, and then enters its critical section, decreasing the value of the counting
semaphore by 1. When the process is done using the instance of the resource, it
leaves the critical section, adding 1 to the number of available instances of the
resource.
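As a small illustration of a counting semaphore, the sketch below guards a hypothetical resource with 3 identical instances using a POSIX semaphore initialised to 3, so at most three of the five threads can be inside the "using" section at once (the thread function and the sleep are only illustrative):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define INSTANCES 3
sem_t resource;                        /* counting semaphore             */

void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&resource);               /* take one instance (count - 1)  */
    printf("thread %ld is using an instance\n", id);
    sleep(1);                          /* pretend to use the resource    */
    printf("thread %ld is done\n", id);
    sem_post(&resource);               /* return the instance (count + 1) */
    return NULL;
}

int main(void)
{
    pthread_t t[5];
    sem_init(&resource, 0, INSTANCES); /* initial value 3                */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resource);
    return 0;
}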



Classical problems of Synchronization with Semaphore Solution
We will look at a number of classical problems of synchronization as examples of a large class of concurrency-control problems. In our solutions to the problems, we use semaphores for synchronization, since that is the traditional way to present such solutions. However, actual implementations of these solutions could use mutex locks in place of binary semaphores.
These problems are used for testing nearly every newly proposed synchronization scheme. The following problems of synchronization are considered classical problems:
1. Bounded-buffer (or Producer-Consumer) Problem
2. Dining-Philosophers Problem
3. Readers and Writers Problem
4. Sleeping Barber Problem
These are summarized below; each is treated in detail later.

1. Bounded-buffer (or Producer-Consumer) Problem:
The bounded-buffer problem is also called the producer-consumer problem. The solution is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffer slots respectively. Producers produce items and consumers consume them, but both share the same fixed-size buffer.

2. Dining-Philosophers Problem:
The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him, and each chopstick may be picked up by only one of its two adjacent philosophers at a time. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.



3. Readers and Writers Problem:
Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. In OS terms, this situation is called the readers-writers problem. Problem parameters:
∙ One set of data is shared among a number of processes.
∙ Once a writer is ready, it performs its write. Only one writer may write at a time.
∙ If a process is writing, no other process can read it.
∙ If at least one reader is reading, no other process can write.
∙ Readers may not write; they only read.

4. Sleeping Barber Problem:
Consider a barber shop with one barber, one barber chair and N chairs to wait in. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. While the barber is cutting hair, new customers either take an empty waiting chair or leave if there is no vacancy.




Producer-Consumer solution
In computing, the producer-consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem.
The problem describes two processes, the producer and the consumer, which share a common fixed-size buffer used as a queue.
The producer's job is to generate data, put it into the buffer, and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Problem: given the common fixed-size buffer, the task is to make sure that the producer can't add data to the buffer when it is full and the consumer can't remove data from an empty buffer.
Solution: The producer is to either go to sleep or discard data if the buffer is full. The next time
the consumer removes an item from the buffer, it notifies the producer, who starts to fill the
buffer again. In the same manner, the consumer can go to sleep if it finds the buffer to be
empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer.

// C program for the above approach

#include <stdio.h>
#include <stdlib.h>

// Initialize a mutex to 1
int mutex = 1;
// Number of full slots as 0
int full = 0;

// Number of empty slots as size
// of buffer
int empty = 10, x = 0;

// Function to produce an item and
// add it to the buffer
void producer()
{
// Decrease mutex value by 1
--mutex;

// Increase the number of full
// slots by 1
++full;

// Decrease the number of empty
// slots by 1
--empty;

// Item produced
x++;
printf("\nProducer produces"
" item %d",
x);

// Increase mutex value by 1
++mutex;
}

// Function to consume an item and
// remove it from the buffer
void consumer()
{
// Decrease mutex value by 1
--mutex;

// Decrease the number of full


// slots by 1
--full;

// Increase the number of empty
// slots by 1
++empty;
printf("\nConsumer consumes "
"item %d",
x);
x--;

// Increase mutex value by 1


++mutex;
}

// Driver Code
int main()
{
int n, i;
printf("\n1. Press 1 for Producer"
"\n2. Press 2 for Consumer"
"\n3. Press 3 for Exit");

// Using '#pragma omp parallel for'
// can give wrong value due to
// synchronisation issues.

// 'critical' specifies that code is
// executed by only one thread at a
// time i.e., only one thread enters
// the critical section at a given time
#pragma omp critical

for (i = 1; i > 0; i++) {

printf("\nEnter your choice:");


scanf("%d", &n);

Operating Systems - Unit 3 - Lecture 28 – Classical Problems of Synchronization GNIT, Hyderabad.


// Switch Cases
switch (n) {
case 1:

// If mutex is 1 and empty


// is non-zero, then it is
// possible to produce
if ((mutex == 1)
&& (empty != 0)) {
producer();
}

// Otherwise, print buffer


// is full
else {
printf("Buffer is full!");
}
break;

case 2:
// If mutex is 1 and full
// is non-zero, then it is
// possible to consume
if ((mutex == 1)
&& (full != 0)) {
consumer();
}

// Otherwise, print Buffer


// is empty
else {
printf("Buffer is empty!");
}



break;

// Exit Condition
case 3:
exit(0);
break;
}
}
}

The Dining Philosophers Problem – K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat only if he can pick up the two chopsticks adjacent to him, and each chopstick may be held by only one of its two adjacent philosophers at a time.



Semaphore Solution to Dining Philosopher –
Each philosopher is represented by the following pseudocode:

process P[i]
while true do
{ THINK;
PICKUP(CHOPSTICK[i], CHOPSTICK[i+1 mod 5]);
EAT;
PUTDOWN(CHOPSTICK[i], CHOPSTICK[i+1 mod 5])
}
There are three states of the philosopher: THINKING, HUNGRY, and EATING. Here there are
two semaphores: Mutex and a semaphore array for the philosophers. Mutex is used such that
no two philosophers may access the pickup or putdown at the same time. The array is used to
control the behavior of each philosopher. But, semaphores can result in deadlock due to
programming errors.
Code –
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
#define THINKING 2
#define HUNGRY 1
#define EATING 0
#define LEFT (phnum + 4) % N
#define RIGHT (phnum + 1) % N

int state[N];
int phil[N] = { 0, 1, 2, 3, 4 };

sem_t mutex;
sem_t S[N];

void test(int phnum)


{
if (state[phnum] == HUNGRY
&& state[LEFT] != EATING
&& state[RIGHT] != EATING) {
// state that eating
state[phnum] = EATING;

sleep(2);



printf("Philosopher %d takes fork %d and %d\n",
phnum + 1, LEFT + 1, phnum + 1);

printf("Philosopher %d is Eating\n", phnum + 1);

// sem_post(&S[phnum]) has no effect


// during takefork
// used to wake up hungry philosophers
// during putfork
sem_post(&S[phnum]);
}
}

// take up chopsticks
void take_fork(int phnum)
{

sem_wait(&mutex);

// state that hungry


state[phnum] = HUNGRY;

printf("Philosopher %d is Hungry\n", phnum + 1);

// eat if neighbours are not eating


test(phnum);

sem_post(&mutex);

// if unable to eat wait to be signalled


sem_wait(&S[phnum]);

sleep(1);
}

// put down chopsticks


void put_fork(int phnum)
{

sem_wait(&mutex);

// state that thinking


state[phnum] = THINKING;

printf("Philosopher %d putting fork %d and %d down\n",


phnum + 1, LEFT + 1, phnum + 1);
printf("Philosopher %d is thinking\n", phnum + 1);

test(LEFT);
test(RIGHT);



sem_post(&mutex);
}

void* philospher(void* num)


{

while (1) {

int* i = num;

sleep(1);

take_fork(*i);
sleep(0);

put_fork(*i);
}
}

int main()
{

int i;
pthread_t thread_id[N];

// initialize the semaphores


sem_init(&mutex, 0, 1);

for (i = 0; i < N; i++)

sem_init(&S[i], 0, 0);

for (i = 0; i < N; i++) {

// create philosopher processes


pthread_create(&thread_id[i], NULL,
philospher, &phil[i]);

printf("Philosopher %d is thinking\n", i + 1);


}

for (i = 0; i < N; i++)

pthread_join(thread_id[i], NULL);
}

Note – The above program compiles only with C compilers that provide the semaphore and pthread libraries (link with -lpthread).



Readers-Writers Problem

Consider a situation where we have a file shared between many people.


∙ If one of the people tries editing the file, no other person should be reading or writing at the
same time, otherwise changes will not be visible to him/her.
∙ However if some person is reading the file, then others may read it at the same time.

In OS terms, this situation is called the readers-writers problem.


Problem parameters:
∙ One set of data is shared among a number of processes
∙ Once a writer is ready, it performs its write. Only one writer may write at a time.
∙ If a process is writing, no other process can read it.
∙ If at least one reader is reading, no other process can write
∙ Readers may not write and only read

#include<semaphore.h>
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<pthread.h>
sem_t x,y;
pthread_t tid;
pthread_t writerthreads[100],readerthreads[100];
int readercount = 0;

void *reader(void* param)


{
sem_wait(&x);
readercount++;
if(readercount==1)
sem_wait(&y);
sem_post(&x);
printf("%d reader is inside\n",readercount);
usleep(3);
sem_wait(&x);
readercount--;
if(readercount==0)
{
sem_post(&y);
}
sem_post(&x);
printf("%d Reader is leaving\n",readercount+1);
return NULL;
}

void *writer(void* param)


{
printf("Writer is trying to enter\n");



sem_wait(&y);
printf("Writer has entered\n");
sem_post(&y);
printf("Writer is leaving\n");
return NULL;
}

int main()
{
int n2,i;
printf("Enter the number of readers:");
scanf("%d",&n2);
printf("\n");
int n1[n2];
sem_init(&x,0,1);
sem_init(&y,0,1);
for(i=0;i<n2;i++)
{
pthread_create(&writerthreads[i],NULL,reader,NULL);
pthread_create(&readerthreads[i],NULL,writer,NULL); }
for(i=0;i<n2;i++)
{
pthread_join(writerthreads[i],NULL);
pthread_join(readerthreads[i],NULL);
}
return 0;
}



Sleeping Barber problem in Process Synchronization

Problem: the analogy is based on a hypothetical barber shop with one barber, one barber chair, and n chairs for waiting customers to sit on.
∙ If there is no customer, then the barber sleeps in his own chair.
∙ When a customer arrives, he has to wake up the barber.
∙ If there are many customers and the barber is cutting a customer’s hair, then the remaining
customers either wait if there are empty chairs in the waiting room or they leave if no chairs
are empty.
Solution: the solution to this problem uses three semaphores. The first, customers, counts the number of customers waiting in the waiting room (the customer in the barber chair is not included, because he is not waiting). The second, barber (0 or 1), tells whether the barber is idle or working. The third, a mutex, provides the mutual exclusion required when the shared state is updated. The solution also keeps a count of the customers waiting in the waiting room; if this count equals the number of chairs, an arriving customer leaves the barbershop.
When the barber shows up in the morning, he executes the barber procedure and blocks on the semaphore customers, because it is initially 0; the barber then sleeps until the first customer arrives.
When a customer arrives, he executes the customer procedure, acquiring the mutex before entering the critical region; if another customer enters right after, the second one cannot do anything until the first has released the mutex. The customer then checks whether the number of waiting customers is less than the number of chairs; if so, he sits down, otherwise he releases the mutex and leaves.



If a chair is available, the customer sits down in the waiting room, increments the waiting count, and signals the customers semaphore, which wakes up the barber if he is sleeping.
At this point both the customer and the barber are awake, and the barber gives that person a haircut. When the haircut is over, the customer exits the procedure, and if there are no customers left in the waiting room, the barber goes back to sleep.
#define _REENTRANT

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

#include <pthread.h>
#include <semaphore.h>

// The maximum number of customer threads.


#define MAX_CUSTOMERS 25

// Function prototypes...
void *customer(void *num);
void *barber(void *);

void randwait(int secs);

// Define the semaphores.

// waitingRoom Limits the # of customers allowed



// to enter the waiting room at one time.
sem_t waitingRoom;

// barberChair ensures mutually exclusive access to


// the barber chair.
sem_t barberChair;

// barberPillow is used to allow the barber to sleep


// until a customer arrives.
sem_t barberPillow;

// seatBelt is used to make the customer to wait until


// the barber is done cutting his/her hair.
sem_t seatBelt;

// Flag to stop the barber thread when all customers


// have been serviced.
int allDone = 0;

int main(int argc, char *argv[]) {


pthread_t btid;
pthread_t tid[MAX_CUSTOMERS];
long RandSeed;
int i, numCustomers, numChairs;
int Number[MAX_CUSTOMERS];

// Check to make sure there are the right number of


// command line arguments.
if (argc != 4) {
printf("Use: SleepBarber <Num Customers> <Num Chairs> <rand seed>\n");
exit(-1);
}

// Get the command line arguments and convert them


// into integers.
numCustomers = atoi(argv[1]);
numChairs = atoi(argv[2]);
RandSeed = atol(argv[3]);

// Make sure the number of threads is less than the number of


// customers we can support.
if (numCustomers > MAX_CUSTOMERS) {
printf("The maximum number of Customers is %d.\n", MAX_CUSTOMERS);
exit(-1);
}

printf("\nSleepBarber.c\n\n");
printf("A solution to the sleeping barber problem using semaphores.\n");

// Initialize the random number generator with a new seed.



srand48(RandSeed);

// Initialize the numbers array.


for (i=0; i<MAX_CUSTOMERS; i++) {
Number[i] = i;
}
// Initialize the semaphores with initial values...
sem_init(&waitingRoom, 0, numChairs);
sem_init(&barberChair, 0, 1);
sem_init(&barberPillow, 0, 0);
sem_init(&seatBelt, 0, 0);

// Create the barber.


pthread_create(&btid, NULL, barber, NULL);

// Create the customers.


for (i=0; i<numCustomers; i++) {
pthread_create(&tid[i], NULL, customer, (void *)&Number[i]);
}

// Join each of the threads to wait for them to finish.


for (i=0; i<numCustomers; i++) {
pthread_join(tid[i],NULL);
}

// When all of the customers are finished, kill the


// barber thread.
allDone = 1;
sem_post(&barberPillow); // Wake the barber so he will exit.
pthread_join(btid,NULL);
}

void *customer(void *number) {
    int num = *(int *)number;

    // Leave for the shop and take some random amount of
    // time to arrive.
    printf("Customer %d leaving for barber shop.\n", num);
    randwait(5);
    printf("Customer %d arrived at barber shop.\n", num);

    // Wait for space to open up in the waiting room...
    sem_wait(&waitingRoom);
    printf("Customer %d entering waiting room.\n", num);

    // Wait for the barber chair to become free.
    sem_wait(&barberChair);

    // The chair is free so give up your spot in the
    // waiting room.
    sem_post(&waitingRoom);

    // Wake up the barber...
    printf("Customer %d waking the barber.\n", num);
    sem_post(&barberPillow);

    // Wait for the barber to finish cutting your hair.
    sem_wait(&seatBelt);

    // Give up the chair.
    sem_post(&barberChair);
    printf("Customer %d leaving barber shop.\n", num);
    return NULL;
}

void *barber(void *junk) {
    // While there are still customers to be serviced...
    // Our barber is omniscient and can tell if there are
    // customers still on the way to his shop.
    while (!allDone) {

        // Sleep until someone arrives and wakes you...
        printf("The barber is sleeping\n");
        sem_wait(&barberPillow);

        // Skip this stuff at the end...
        if (!allDone) {

            // Take a random amount of time to cut the
            // customer's hair.
            printf("The barber is cutting hair\n");
            randwait(3);
            printf("The barber has finished cutting hair.\n");

            // Release the customer when done cutting...
            sem_post(&seatBelt);
        }
        else {
            printf("The barber is going home for the day.\n");
        }
    }
    return NULL;
}

void randwait(int secs) {


int len;

// Generate a random number...


len = (int) ((drand48() * secs) + 1);
sleep(len);
}



How to use POSIX semaphores in C language

Semaphores are very useful in process synchronization and multithreading. But how do we use one
in a real program, for example in the C language?
Linux systems provide the POSIX semaphore library. The wait and signal operations must execute
atomically; implementing them by hand with ordinary code would allow a context switch in the
middle of an operation and corrupt the semaphore, so we use the library instead.
To use the POSIX semaphore library, we have to:
1. Include semaphore.h
2. Compile the code by linking with -lpthread -lrt

To lock a semaphore, or wait on it, we use the sem_wait function:
int sem_wait(sem_t *sem);
To release or signal a semaphore, we use the sem_post function:
int sem_post(sem_t *sem);
An unnamed semaphore (for threads, or for related processes sharing memory) is initialised with
sem_init; a named semaphore for IPC between unrelated processes is created with sem_open.
int sem_init(sem_t *sem, int pshared, unsigned int value);
Where,
∙ sem : Specifies the semaphore to be initialized.
∙ pshared : Specifies whether the newly initialized semaphore is shared between processes or
between threads. A non-zero value means the semaphore is shared between processes and a
value of zero means it is shared between the threads of one process.
∙ value : Specifies the initial value to assign to the newly initialized semaphore.
To destroy a semaphore, we can use sem_destroy:
int sem_destroy(sem_t *sem);
To declare a semaphore, the data type is sem_t.



// C program to demonstrate working of semaphores
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>
sem_t mutex;

void* thread(void* arg)
{
    // wait
    sem_wait(&mutex);
    printf("\nEntered..\n");

    // critical section
    sleep(4);

    // signal
    printf("\nJust Exiting...\n");
    sem_post(&mutex);
    return NULL;
}

int main()
{
    sem_init(&mutex, 0, 1);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread, NULL);
    sleep(2);
    pthread_create(&t2, NULL, thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return 0;
}

Compilation should be done with gcc a.c -lpthread -lrt


Explanation –
Two threads are created, the second one 2 seconds after the first.
The first thread sleeps for 4 seconds after acquiring the lock, so the second thread cannot enter
its critical section immediately; it enters roughly 4 - 2 = 2 seconds after it is created, once the
first thread posts the semaphore.
So the output is:
Entered..

Just Exiting...

Entered..

Just Exiting...

but not:
Entered..

Entered..

Just Exiting...

Just Exiting...



Critical Region
A critical region is a set of critical sections that share the same data.
Now assume that ProcessA and ProcessB execute concurrently. Each process has a critical section,
and both sections access the same shared variable (x). Together, the two critical sections form a
critical region. Why is this important? If only the critical section of ProcessA is guarded by mutual
exclusion, you will still get incorrect results in x, because ProcessB does not honour the mutual
exclusion. You need to enforce mutual exclusion over the whole critical region by applying it to
every critical section that makes up the region, as sketched below.
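
As a minimal sketch (assuming a POSIX threads build; the thread names processA/processB and the counter x are illustrative and simply mirror the discussion above), both critical sections lock the same mutex, so the whole critical region over x is protected:

#include <stdio.h>
#include <pthread.h>

int x = 0;                                    /* shared variable forming the critical region */
pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

/* Critical section of "ProcessA": increments x under the lock. */
void *processA(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&x_lock);
        x = x + 1;
        pthread_mutex_unlock(&x_lock);
    }
    return NULL;
}

/* Critical section of "ProcessB": must take the SAME lock, or updates to x are lost. */
void *processB(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&x_lock);
        x = x - 1;
        pthread_mutex_unlock(&x_lock);
    }
    return NULL;
}

int main() {
    pthread_t a, b;
    pthread_create(&a, NULL, processA, NULL);
    pthread_create(&b, NULL, processB, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("x = %d\n", x);    /* always 0 when every section of the region honours the lock */
    return 0;
}

Compile with gcc region.c -lpthread. If processB skipped the lock, the final value of x would vary from run to run.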



Monitors in Process Synchronization

The monitor is one of the ways to achieve process synchronization. Monitors are supported by
programming languages to achieve mutual exclusion between processes; for example, Java
synchronized methods together with the wait() and notify() constructs.
1. It is a collection of condition variables and procedures combined together in a special kind of
module or package.
2. Processes running outside the monitor cannot access its internal variables, but they can call the
procedures of the monitor.
3. Only one process at a time can execute code inside the monitor.
Syntax:
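A generic, textbook-style outline of a monitor (the names are placeholders, not any particular language's keywords):

monitor MonitorName
{
    // shared variable declarations
    condition x, y;                 // condition variables

    procedure P1 (...) { .... }
    procedure P2 (...) { .... }

    initialization code (...) { .... }
}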

Condition Variables:
Two different operations are performed on the condition variables of a monitor:
∙ wait
∙ signal
Let us say we have two condition variables:
condition x, y; // Declaring variables

Wait operation
x.wait(): A process performing the wait operation on a condition variable is suspended. The
suspended process is placed in the block queue of that condition variable.



Note: Each condition variable has its unique block queue.

Signal operation
x.signal(): When a process performs the signal operation on a condition variable, one of the
blocked processes is given a chance to run:
if (block queue of x is empty)
    // ignore the signal
else
    // resume one process from the block queue
A sketch of how this maps onto pthreads is given below.
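
C has no built-in monitor construct, but the wait/signal behaviour described above can be approximated with a pthreads mutex and condition variable. A minimal sketch (the names monitor_take, monitor_put and item_count are illustrative, not a standard API):

#include <stdio.h>
#include <pthread.h>

/* Monitor state: touched only while holding monitor_lock. */
static int item_count = 0;
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t x = PTHREAD_COND_INITIALIZER;     /* condition variable "x" */

/* Monitor procedure: waits until an item exists (x.wait). */
static void monitor_take(void) {
    pthread_mutex_lock(&monitor_lock);         /* only one thread inside the monitor */
    while (item_count == 0)
        pthread_cond_wait(&x, &monitor_lock);  /* x.wait(): suspend and release the lock */
    item_count--;
    printf("took an item, %d left\n", item_count);
    pthread_mutex_unlock(&monitor_lock);
}

/* Monitor procedure: adds an item and wakes one waiter (x.signal). */
static void monitor_put(void) {
    pthread_mutex_lock(&monitor_lock);
    item_count++;
    pthread_cond_signal(&x);   /* x.signal(): resume one blocked thread; ignored if none wait */
    pthread_mutex_unlock(&monitor_lock);
}

static void *consumer(void *arg) { monitor_take(); return NULL; }
static void *producer(void *arg) { monitor_put();  return NULL; }

int main() {
    pthread_t c, p;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(c, NULL);
    pthread_join(p, NULL);
    return 0;
}

Compile with gcc monitor.c -lpthread. The mutex provides the "one process inside the monitor" guarantee; the condition variable provides the block queue.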
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error-prone than
techniques such as semaphores.
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language. The compiler must
generate code for them, which gives the compiler the additional burden of having to know what
operating system facilities are available to control access to critical sections in concurrent
processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada and
Concurrent Euclid.
https://rextester.com/l/c_online_compiler_gcc
https://www.onlinegdb.com



Inter Process Communication

∙ Processes share memory
o data in shared memory
∙ Processes exchange messages
o message passing via sockets
∙ Requires synchronization
o mutex, waiting

Inter Process Communication (IPC) is an OS-supported mechanism for interaction among
processes (coordination and communication):

∙ Message Passing
o e.g. sockets, pipes, messages, queues
∙ Memory-based IPC
o shared memory, memory-mapped files
∙ Higher-level semantics
o files, RPC
∙ Synchronization primitives

Message Passing

∙ Send/Receive messages
∙ OS creates and maintains a channel
o buffer, FIFO queue
∙ OS provides interfaces to processes
o a port
o processes send/write messages to this port
o processes receive/read messages from this port



∙ Kernel required to
o establish communication
o perform each IPC operation
o send: system call + data copy
o receive: system call + data copy
∙ Request-response: 4x user/kernel crossings + 4x data copies

Advantages

∙ simplicity : kernel does channel management and synchronization

Disadvantages

∙ Overheads

Forms of Message Passing IPC

1. Pipes
∙ Carry a byte stream between 2 processes
∙ e.g. connect the output of 1 process to the input of another

2. Message queues
∙ Carry "messages" among processes
∙ OS management includes priorities, scheduling of message delivery
∙ APIs : SysV and POSIX



3. Sockets
∙ send() and recv() : pass message buffers
∙ socket() : create kernel-level socket buffer
∙ associated necessary kernel processing (TCP/IP, ...)
∙ If different machines, channel between processes and network devices
∙ If same machine, bypass full protocol stack (see the sketch below)
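
A minimal same-machine sketch, assuming a UNIX-domain socketpair between a parent and a child process (chosen here only for illustration; peers on different machines would use socket(), bind() and connect() instead):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main() {
    int sv[2];
    /* The kernel creates a connected pair of socket buffers (a same-machine channel). */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }
    if (fork() == 0) {                           /* child: one endpoint */
        close(sv[0]);
        const char *msg = "hello via socket";
        send(sv[1], msg, strlen(msg) + 1, 0);    /* send(): copy message into the kernel buffer */
        close(sv[1]);
        return 0;
    }
    close(sv[1]);                                /* parent: the other endpoint */
    char buf[64];
    recv(sv[0], buf, sizeof(buf), 0);            /* recv(): copy message out of the kernel buffer */
    printf("parent received: %s\n", buf);
    close(sv[0]);
    wait(NULL);
    return 0;
}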



Shared Memory IPC
∙ read and write to shared memory region
∙ OS establishes shared channel between the processes
1. physical pages mapped into virtual address space
2. VA(P1) and VA(P2) map to same physical address
3. VA(P1) != VA(P2)
4. physical memory doesn't need to be contiguous
∙ APIs : SysV, POSIX, memory mapped files, Android ashmem
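
Of the APIs listed above, the memory-mapped route can be sketched as follows (MAP_ANONYMOUS is a Linux extension and is assumed here for brevity; mapping a real file would pass its descriptor instead of -1):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main() {
    /* Anonymous shared mapping: the same physical page stays visible to the child after fork(). */
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (fork() == 0) {                 /* child writes into the shared page */
        strcpy(buf, "written by child");
        return 0;
    }
    wait(NULL);                        /* crude synchronization: wait for the child to exit */
    printf("parent read: %s\n", buf);  /* read from the same physical page, no copying */
    munmap(buf, 4096);
    return 0;
}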



Advantages
∙ System calls are needed only for setup
∙ Data copies are potentially reduced (but not eliminated)

Disadvantages

∙ explicit synchronization
∙ communication protocol, shared buffer management
o programmer's responsibility

Which is better?

Overheads:
1. Message Passing : must perform multiple copies
2. Shared Memory : must establish all mappings between the processes' address spaces and the
shared memory pages

Thus, it depends.

Copy vs Map

The goal of both is to transfer data from one address space into the target address space.



Copy (Message Passing):
∙ CPU cycles to copy data to/from the port
∙ CPU cycles spent copying data into the channel on every transfer

Map (Shared Memory):
∙ CPU cycles to map memory into the address space
∙ If the channel is set up once and used many times, the mapping cost gives a good payoff
∙ For large data, t(Copy) >> t(Map), so mapping can perform well even for one-time use
o e.g. this trade-off is exercised in Windows "Local" Procedure Calls (LPC)

Shared Memory and Synchronization

This is like threads accessing shared state in a single address space, but here the shared state is
used by processes. Synchronization methods:

1. mechanisms supported by the threading library (pthreads), e.g. process-shared mutexes and
semaphores
2. OS-supported IPC for synchronization

Either method must coordinate
∙ the number of concurrent accesses to the shared segment
∙ when data is available and ready for consumption
A sketch of a process-shared semaphore inside a shared segment follows this list.
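
A minimal sketch of the first approach, assuming a POSIX unnamed semaphore placed inside a System V shared memory segment (the struct and variable names are illustrative); with pshared = 1 the parent and child coordinate on when the data is ready:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <semaphore.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

/* Layout of the shared segment: a semaphore plus the data it protects. */
struct shared_area {
    sem_t ready;                     /* posted when data is available for consumption */
    char  data[128];
};

int main() {
    int shmid = shmget(IPC_PRIVATE, sizeof(struct shared_area), IPC_CREAT | 0666);
    struct shared_area *s = (struct shared_area *) shmat(shmid, NULL, 0);

    sem_init(&s->ready, 1, 0);       /* pshared = 1: the semaphore is shared between processes */

    if (fork() == 0) {               /* producer process */
        strcpy(s->data, "data in shared memory");
        sem_post(&s->ready);         /* announce that the data is ready */
        shmdt(s);
        return 0;
    }

    sem_wait(&s->ready);             /* consumer blocks until the data is ready */
    printf("consumer read: %s\n", s->data);
    wait(NULL);

    sem_destroy(&s->ready);
    shmdt(s);
    shmctl(shmid, IPC_RMID, NULL);   /* remove the segment */
    return 0;
}

Compile with gcc shm_sync.c -lpthread (older glibc versions also need -lrt).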

IPC Synchronization

Message Queues:
∙ implement "mutual exclusion" via send/receive

Semaphores:
∙ OS-supported synchronization construct
∙ binary construct (either allow the process to proceed or not)
∙ like a mutex: if value = 0, stop; if value = 1, decrement (lock) and proceed
// C program to demonstrate use of fork() and pipe()
#include <stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/types.h>
#include<string.h>
#include<sys/wait.h>

int main()
{
// We use two pipes
// First pipe to send input string from parent
// Second pipe to send concatenated string from child

int fd1[2]; // Used to store two ends of first pipe


int fd2[2]; // Used to store two ends of second pipe

char fixed_str[] = "GNIT";


char input_str[100];
pid_t p;

if (pipe(fd1)==-1)
{
fprintf(stderr, "Pipe Failed" );
return 1;
}
if (pipe(fd2)==-1)
{
fprintf(stderr, "Pipe Failed" );
return 1;
}

scanf("%s", input_str);
p = fork();

if (p < 0)
{
fprintf(stderr, "fork Failed" );
return 1;
}

// Parent process
else if (p > 0)
{
char concat_str[100];

close(fd1[0]); // Close reading end of first pipe

// Write input string and close writing end of first
// pipe.
write(fd1[1], input_str, strlen(input_str)+1);



close(fd1[1]);

// Wait for child to send a string


wait(NULL);

close(fd2[1]); // Close writing end of second pipe

// Read string from child, print it and close


// reading end.
read(fd2[0], concat_str, 100);
printf("Concatenated string %s\n", concat_str);
close(fd2[0]);
}

// child process
else
{
close(fd1[1]); // Close writing end of first pipe

// Read a string using first pipe


char concat_str[100];
read(fd1[0], concat_str, 100);

// Concatenate a fixed string with it


int k = strlen(concat_str);
int i;
for (i=0; i<strlen(fixed_str); i++)
concat_str[k++] = fixed_str[i];

concat_str[k] = '\0'; // string ends with '\0'

// Close both reading ends


close(fd1[0]);
close(fd2[0]);

// Write concatenated string and close writing end
write(fd2[1], concat_str, strlen(concat_str)+1);
close(fd2[1]);
exit(0);
}
}



/* C program to illustrate
pipe system call in C */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define MSGSIZE 16
char* msg1 = "hello, world #1";
char* msg2 = "hello, world #2";
char* msg3 = "hello, world #3";
int main()
{
char inbuf[MSGSIZE];
int p[2], i;
if (pipe(p) < 0)
exit(1);
/* write pipe */
write(p[1], msg1, MSGSIZE);
write(p[1], msg2, MSGSIZE);
write(p[1], msg3, MSGSIZE);
for (i = 0; i < 3; i++) { /* read pipe */
read(p[0], inbuf, MSGSIZE);
printf("%s\n", inbuf);
}
return 0;
}
/****************
Example program to demonstrate use of FIFOs (named pipes)
****************/
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
void errexit(char *errMsg){
printf("\n About to exit: %s", errMsg);
fflush(stdout);
exit(1);
}
int main()
{
int ret;
pid_t pid;
int value;
char fifoName[]="/tmp/testfifo";
char errMsg[1000];
FILE *cfp;
FILE *pfp;
ret = mknod(fifoName, S_IFIFO | 0600, 0);



/* 0600 gives read, write permissions to user and none to group and world */
if(ret < 0){
sprintf(errMsg,"Unable to create fifo: %s",fifoName);
errexit(errMsg);
}
pid=fork();
if(pid == 0){
/* child -- open the named pipe and write an integer to it */
cfp = fopen(fifoName,"w");
if(cfp == NULL)
errexit("Unable to open fifo for writing");
ret=fprintf(cfp,"%d",9999);
fflush(cfp);
exit(0);
}
else{
/* parent - open the named pipe and read an integer from it */
pfp = fopen(fifoName,"r");
if(pfp == NULL)
errexit("Unable to open fifo for reading");
ret=fscanf(pfp,"%d",&value);
if(ret < 0)
errexit("Error reading from named pipe");
fclose(pfp);
printf("This is the parent. Received value %d from child on fifo \n", value);
unlink(fifoName); /* Delete the created fifo */
exit(0);
}
}
/* Filename: fifoclient.c */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#define FIFO_FILE "MYFIFO"


int main() {
int fd;
int end_process;
int stringlen;
char readbuf[80];
char end_str[5];
printf("FIFO_CLIENT: Send messages, infinitely, to end enter \"end\"\n");
fd = open(FIFO_FILE, O_CREAT|O_WRONLY, 0640); /* a mode argument is required with O_CREAT */
strcpy(end_str, "end");

while (1) {
printf("Enter string: ");
fgets(readbuf, sizeof(readbuf), stdin);
stringlen = strlen(readbuf);
readbuf[stringlen - 1] = '\0';
end_process = strcmp(readbuf, end_str);

//printf("end_process is %d\n", end_process);


if (end_process != 0) {
write(fd, readbuf, strlen(readbuf));
printf("Sent string: \"%s\" and string length is %d\n", readbuf, (int)strlen(readbuf)); }
else {
write(fd, readbuf, strlen(readbuf));
printf("Sent string: \"%s\" and string length is %d\n", readbuf, (int)strlen(readbuf));
close(fd);
break;
}
}
return 0;
}



/* Filename: fifoserver.c */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

#define FIFO_FILE "MYFIFO"


int main() {
int fd;
char readbuf[80];
char end[10];
int to_end;
int read_bytes;

/* Create the FIFO if it does not exist */


mknod(FIFO_FILE, S_IFIFO|0640, 0);
strcpy(end, "end");
while(1) {
fd = open(FIFO_FILE, O_RDONLY);
read_bytes = read(fd, readbuf, sizeof(readbuf));
readbuf[read_bytes] = '\0';
printf("Received string: \"%s\" and length is %d\n", readbuf, (int)strlen(readbuf));
to_end = strcmp(readbuf, end);
if (to_end == 0) {
close(fd);
break;
}
}
return 0;
}



/* Filename: msgq_send.c */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#define PERMS 0644
struct my_msgbuf {
long mtype;
char mtext[200];
};
int main(void) {
struct my_msgbuf buf;
int msqid;
int len;
key_t key;
system("touch msgq.txt");
if ((key = ftok("msgq.txt", 'B')) == -1) {
perror("ftok");
exit(1);
}
if ((msqid = msgget(key, PERMS | IPC_CREAT)) == -1) {
perror("msgget");
exit(1);
}
printf("message queue: ready to send messages.\n");
printf("Enter lines of text, ^D to quit:\n");
buf.mtype = 1; /* we don't really care in this case */
while(fgets(buf.mtext, sizeof buf.mtext, stdin) != NULL) {
len = strlen(buf.mtext);
/* remove newline at end, if it exists */
if (buf.mtext[len-1] == '\n') buf.mtext[len-1] = '\0';
if (msgsnd(msqid, &buf, len+1, 0) == -1) /* +1 for '\0' */
perror("msgsnd");
}
strcpy(buf.mtext, "end");
len = strlen(buf.mtext);
if (msgsnd(msqid, &buf, len+1, 0) == -1) /* +1 for '\0' */
perror("msgsnd");

if (msgctl(msqid, IPC_RMID, NULL) == -1) {


perror("msgctl");
exit(1);
}
printf("message queue: done sending messages.\n");
return 0;
}



/* Filename: msgq_recv.c */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

#define PERMS 0644


struct my_msgbuf {
long mtype;
char mtext[200];
};

int main(void) {
struct my_msgbuf buf;
int msqid;
int toend;
key_t key;

if ((key = ftok("msgq.txt", 'B')) == -1) {


perror("ftok");
exit(1);
}

if ((msqid = msgget(key, PERMS)) == -1) { /* connect to the queue */


perror("msgget");
exit(1);
}
printf("message queue: ready to receive messages.\n");
for(;;) { /* normally receiving never ends, but to let this demo conclude,
the program ends when it receives the string "end" */
if (msgrcv(msqid, &buf, sizeof(buf.mtext), 0, 0) == -1) {
perror("msgrcv");
exit(1);
}
printf("recvd: \"%s\"\n", buf.mtext);
toend = strcmp(buf.mtext,"end");
if (toend == 0)
break;
}
printf("message queue: done receiving messages.\n");
system("rm msgq.txt");
return 0;
}



/* Shared memory program, need to open two terminal windows, one for reading another for
writing */
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>

int main()
{
// ftok to generate unique key
key_t key = ftok("shmfile",65);

// shmget returns an identifier in shmid


int shmid = shmget(key,1024,0666|IPC_CREAT);
// shmat to attach to shared memory
char *str = (char*) shmat(shmid,(void*)0,0);

printf("Write Data : ");


gets(str);

printf("Data written in memory: %s\n",str);

//detach from shared memory


shmdt(str);

return 0;
}

/* Reader program: run in the second terminal window */
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
int main()
{
// ftok to generate unique key
key_t key = ftok("shmfile",65);
// shmget returns an identifier in shmid
int shmid = shmget(key,1024,0666|IPC_CREAT);
// shmat to attach to shared memory
char *str = (char*) shmat(shmid,(void*)0,0);
printf("Data read from memory: %s\n",str);

//detach from shared memory


shmdt(str);
// destroy the shared memory
shmctl(shmid,IPC_RMID,NULL);
return 0;
}

