
UNIT 3

Process Synchronization

Process synchronization is the mechanism by which an operating system manages processes that share the same memory space. It helps maintain the consistency of shared data by using variables or hardware so that only one process can make changes to the shared memory at a time. There are various solutions to this problem, such as semaphores, mutex locks, and synchronization hardware.

On the basis of synchronization, processes are categorized as one of the following two types:

 Independent Process: The execution of one process does not affect the execution of other processes.

 Cooperative Process: A process that can affect or be affected by other processes executing in the system.

The process synchronization problem arises with cooperative processes, because cooperative processes share resources.

How Does Process Synchronization Work in an OS?

Let us take a look at why exactly we need process synchronization. For example, if Process 1 is trying to read the data present at a memory location while Process 2 is trying to change the data present at the same location, there is a high chance that the data read by Process 1 will be incorrect.

Race Condition:

When more than one process is running the same code or modifying the same memory or shared data, the result or value of the shared data may be incorrect, because all the processes try to access and modify the shared resource at once. In effect, all the processes race to have their own result stand as the correct one; this condition is called a race condition. Since many processes use the same data, the results of the processes may depend on the order of their execution.

This situation mostly arises within the critical section: a race condition occurs when the end result of multiple thread executions varies depending on the sequence in which the threads execute.

But how can this race condition be avoided? There is a simple solution:

 treat the critical section as a section that can be accessed by only a single process at a time. This kind of section is called an atomic section.

A race condition typically occurs when two or more threads read, write, and possibly make decisions based on memory that they are accessing concurrently.
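To see why unsynchronized access goes wrong, here is a minimal sketch of a race condition in C using POSIX threads; the thread count, iteration count, and function names are illustrative assumptions, not from the original text.

/* Two threads increment a shared counter with no synchronization,
 * so updates can be lost. Compile with: gcc race.c -pthread */
#include <pthread.h>
#include <stdio.h>

long counter = 0;               /* shared data, unprotected */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but often less: the interleaving of the
     * two threads decides the final value (a race condition). */
    printf("counter = %ld\n", counter);
    return 0;
}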

Sections of a Program

Here are the four essential sections of a program with respect to the critical section:

 Entry Section: The part of the process that decides the entry of a particular process into the critical section.

 Critical Section: The part that allows one process at a time to enter and modify the shared variable.

 Exit Section: The part that allows the other processes waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution leaves through this section.

 Remainder Section: All other parts of the code that are not in the critical, entry, or exit sections are known as the remainder section.
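Putting these four sections together, the general structure of a participating process looks like the following sketch (C-like pseudocode; the section names are placeholders, in the style of the examples later in this unit):

do {
    entry_section();      // request permission to enter
    critical_section();   // access and modify the shared data
    exit_section();       // let a waiting process proceed
    remainder_section();  // everything else
} while (true);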

Critical Section Problem:

A critical section is a code segment that can be accessed by only one process at a time. It contains shared variables that must be synchronized to maintain the consistency of data. The critical section problem, then, is to design a protocol that lets cooperative processes access shared resources without creating data inconsistencies.

 The entry to the critical section is handled by the wait() function, represented as P().

 The exit from the critical section is controlled by the signal() function, represented as V().

Only a single process can execute inside the critical section at a time. Other processes, waiting to execute their critical sections, must wait until the current process completes its execution.
Rules for Critical Section

A solution to the critical section problem must enforce all three of the following rules:

 Mutual Exclusion: No more than one process can execute in its critical section at one time. Mutual exclusion is often enforced with a special binary semaphore (a mutex lock) that controls access to the shared resource and may include a priority inheritance mechanism to avoid extended priority inversion problems.

 Progress: When no process is in the critical section and some process wants to enter, only the processes that are not in their remainder sections may take part in deciding which process enters next, and that decision must be made in finite time.

 Bounded Waiting: After a process has made a request to enter its critical section, there is a bound on the number of times other processes may enter their critical sections before the request is granted.

Solutions To The Critical Section

In process synchronization, the critical section plays the main role, so the critical section problem must be solved.

Here are some widely used methods to solve the critical section problem.

Peterson Solution

Peterson’s solution is a widely used software solution to the critical section problem. The algorithm was developed by the computer scientist Gary Peterson, which is why it is named Peterson’s solution.

In this solution, when one process is executing in its critical section, the other process executes only the rest of its code, and vice versa. This ensures that only a single process runs in the critical section at a specific time.

Example

PROCESS Pi
FLAG[i] = true
while( (turn != i) AND (CS is !free) )
{
    wait;
}
CRITICAL SECTION
FLAG[i] = false
turn = j; // choose another process to go to CS
 Assume there are N processes (P1, P2, … PN), and at some point of time every process requires to enter the critical section.

 A FLAG[] array of size N is maintained, which is false by default. Whenever a process requires entry into the critical section, it sets its flag to true. For example, if Pi wants to enter, it sets FLAG[i] = TRUE.

 Another variable, TURN, indicates the number of the process that is currently waiting to enter the CS.

 The process that enters the critical section changes TURN, while exiting, to another number from the list of ready processes.

 Example: if turn is 2, then P2 enters the critical section; while exiting it sets turn = 3, and therefore P3 breaks out of its wait loop.
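The pseudocode above sketches an N-process generalization. As a concrete illustration, here is a sketch of the classic two-process form of Peterson's algorithm in C; the POSIX thread setup, the iteration count, and the use of C11 atomics (to keep the compiler and CPU from reordering the flag/turn accesses) are assumptions for this example, not part of the original text.

/* Peterson's algorithm for two threads (ids 0 and 1). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_bool flag[2];          /* FLAG[]: who wants to enter */
atomic_int turn;              /* TURN: whose turn it is to yield */
long counter = 0;             /* shared data protected by the algorithm */

void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);   /* entry section: I want in */
        atomic_store(&turn, j);         /* politely let the other go first */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                           /* busy-wait */
        counter++;                      /* critical section */
        atomic_store(&flag[i], false);  /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("counter = %ld\n", counter);  /* 200000 if mutual exclusion held */
    return 0;
}

Compiled with gcc peterson.c -pthread, the final count comes out exact, unlike the unsynchronized race example earlier.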

Synchronization Hardware

Sometimes the critical section problem is also resolved by hardware. Some operating systems offer lock functionality: a process acquires a lock when entering the critical section and releases the lock after leaving it.

When another process tries to enter the critical section while it is locked, it cannot enter; it can only do so once the lock is free, by acquiring the lock itself.
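As an illustration of such hardware support, here is a sketch of a spinlock built on the atomic test-and-set operation, which C11 exposes as atomic_flag; the acquire/release function names are hypothetical.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = lock is free */

void acquire(void) {
    /* test-and-set atomically sets the flag and returns its old value;
     * spin while another process already holds the lock */
    while (atomic_flag_test_and_set(&lock))
        ;
}

void release(void) {
    atomic_flag_clear(&lock);          /* leaving the critical section */
}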

Mutex Locks

Because synchronization hardware is not a simple method for everyone to implement, a strict software method known as mutex locks was also introduced.

In this approach, in the entry section of the code, a LOCK is acquired over the critical resources used inside the critical section. In the exit section, that lock is released.
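A minimal sketch of this pattern with a POSIX mutex (the worker function and shared counter are illustrative assumptions):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;

void *worker(void *arg) {
    pthread_mutex_lock(&m);     /* entry section: obtain the LOCK */
    shared++;                   /* critical section */
    pthread_mutex_unlock(&m);   /* exit section: release the LOCK */
    return NULL;
}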

Semaphore Solution

A semaphore is simply a non-negative variable shared between threads. It is another solution to the critical section problem. It is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread.

It uses two atomic operations, wait and signal, for process synchronization.

Example

WAIT ( S ):
    while ( S <= 0 );
    S = S - 1;

SIGNAL ( S ):
    S = S + 1;

Semaphores in Operating System

Semaphores are integer variables used to solve the critical section problem by means of two atomic operations, wait and signal.

The definitions of wait and signal are as follows −

 Wait

The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, no decrement is performed and the process keeps waiting (in this simple implementation, busy-waiting in the while loop) until S becomes positive.

wait(S)
{
    while (S <= 0);   // busy-wait until S becomes positive
    S--;              // claim one unit of the resource
}

 Signal

The signal operation increments the value of its argument S.

signal(S)
{
    S++;   // release one unit of the resource
}

Types of Semaphores

There are two main types of semaphores, namely counting semaphores and binary semaphores. Details about these are given as follows −

 Counting Semaphores

These are semaphores with an unrestricted integer value domain. They are used to coordinate resource access, where the semaphore count is the number of available resources. When a resource is added, the semaphore count is automatically incremented; when a resource is removed, the count is decremented.

 Binary Semaphores

Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when the semaphore is 1 (and sets it to 0), and the signal operation sets it back to 1. It is sometimes easier to implement binary semaphores than counting semaphores.
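As a sketch of a counting semaphore in practice, the following guards a pool of N identical resources with a POSIX semaphore; the value of N and the helper names are assumptions for illustration.

#include <semaphore.h>

#define N 4                    /* number of interchangeable resources */
sem_t pool;

void setup(void)   { sem_init(&pool, 0, N); }  /* count = free resources */
void acquire(void) { sem_wait(&pool); }        /* blocks when count is 0 */
void release(void) { sem_post(&pool); }        /* returns a resource */

Initializing the count to 1 instead of N would turn this into a binary semaphore.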

Advantages of Semaphores

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some
other methods of synchronization.

 There is no resource wastage due to busy waiting with blocking semaphores, as processor time is not wasted unnecessarily checking whether a condition is fulfilled before a process may enter the critical section. (The simple definition above busy-waits; most operating system implementations instead block the waiting process.)

 Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, and the wait and signal operations must be implemented in the correct order to prevent deadlocks.

 Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because the wait and signal operations prevent the creation of a structured layout for the system.

 Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority processes access it later.

CLASSICAL PROBLEMS OF SYNCHRONIZATION

The classical problems of synchronization in operating systems involve managing and coordinating the activities of multiple processes or threads to ensure that they don't interfere with each other in ways that can lead to unexpected or undesirable behavior. Synchronization is crucial in multi-process or multi-threaded environments to maintain data integrity and prevent race conditions, deadlocks, and other concurrency-related issues. There are several classical synchronization problems, including:

1. The Producer-Consumer Problem: This problem involves two types of processes - producers and consumers. Producers create data or items, while consumers consume them. The challenge is to ensure that producers do not produce items when the buffer is full and that consumers do not consume items when the buffer is empty (see the sketch after this list).
2. The Readers-Writers Problem: In this problem, you have multiple processes
that want to read shared data (readers) or write to it (writers). You need to ensure
that multiple readers can access the data simultaneously, but only one writer can
access it exclusively, and readers and writers do not conflict with each other.
3. The Dining Philosophers Problem: This problem involves a group of
philosophers who sit around a dining table. Each philosopher alternates between
thinking and eating, but they need forks to eat. The challenge is to allocate forks
in a way that prevents deadlocks and ensures fair access to forks for all
philosophers.
4. The Bounded-Buffer Problem: This is the Producer-Consumer problem with a fixed-size buffer, possibly with multiple producers and consumers. The challenge is to ensure that producers do not overwrite unprocessed data in the buffer and that consumers do not try to access data that hasn't been produced yet.
5. The Semaphore Problem: Semaphores are a synchronization primitive used to
control access to resources. Problems related to semaphores include ensuring
proper use of semaphores to prevent race conditions and deadlocks.
6. The Barrier Synchronization Problem: In this problem, a group of processes
needs to synchronize at a common point before proceeding. It's often used in
parallel computing to ensure that all processes reach a particular point before
continuing.
7. The Mutex (Mutual Exclusion) Problem: Mutexes are used to ensure that only
one thread or process can access a critical section of code at a time. The problem
here is to use mutexes correctly to prevent multiple threads from simultaneously
entering a critical section.
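As a concrete illustration of the first (and fourth) problem above, here is a sketch of a bounded-buffer producer-consumer solution using two counting semaphores and a mutex; the buffer size and item counts are illustrative assumptions.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE 8
int buffer[SIZE];
int in = 0, out = 0;

sem_t empty, full;                      /* free slots / filled slots */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);               /* wait for a free slot */
        pthread_mutex_lock(&m);
        buffer[in] = item;              /* critical section */
        in = (in + 1) % SIZE;
        pthread_mutex_unlock(&m);
        sem_post(&full);                /* signal: one more item */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int k = 0; k < 32; k++) {
        sem_wait(&full);                /* wait for an item */
        pthread_mutex_lock(&m);
        int item = buffer[out];         /* critical section */
        out = (out + 1) % SIZE;
        pthread_mutex_unlock(&m);
        sem_post(&empty);               /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, SIZE);          /* all slots start empty */
    sem_init(&full, 0, 0);              /* no items yet */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}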

These classical synchronization problems illustrate the challenges faced in multi-process
or multi-threaded environments and the need for synchronization mechanisms and
techniques to address these challenges. Various synchronization primitives such as
semaphores, mutexes, condition variables, and atomic operations are used to solve
these problems in modern operating systems and concurrent programming
environments.

MONITORS
A monitor is a high-level synchronization construct used in operating systems and concurrent programming to simplify the management of shared resources and enable safe, synchronized access to them. It was introduced by Per Brinch Hansen in 1973 and is often associated with programming languages like Java and Python, where monitor-based constructs are used.

A monitor encapsulates both the data structure (shared resource) and the set of
procedures (methods) that operate on that data structure. It provides a way to ensure
that only one process or thread can access the shared resource at a time, preventing
race conditions and providing a higher level of abstraction for synchronization.

Here are some key characteristics and concepts related to monitors in operating
systems:

1. Mutual Exclusion: Monitors enforce mutual exclusion, which means that only
one process or thread can be active within the monitor at any given time. This
prevents concurrent access to the shared resource and eliminates data
corruption.
2. Condition Variables: Monitors often include condition variables, which are used
to allow processes or threads to wait for certain conditions to be met before they
can proceed. Condition variables are commonly used for tasks like signaling other
threads when data becomes available or waiting for a resource to be released.
3. Synchronization: Monitors are used to synchronize the access to shared
resources. They ensure that only one thread can enter the monitor at a time and
that others must wait until the monitor is available.
4. Abstraction: Monitors provide an abstraction that simplifies the management of
shared resources and makes it easier to reason about concurrency and
synchronization. Programmers can encapsulate complex synchronization logic
within a monitor and expose a clean interface for accessing the resource.
5. Wait and Signal Operations: In many monitor implementations, threads can perform "wait" and "signal" operations on condition variables. The "wait" operation causes a thread to release the monitor and enter a waiting state, and the "signal" operation can be used to wake up one or more waiting threads when a specific condition is met.
6. Priority Inversion: Monitors can suffer from priority inversion, where a higher-
priority thread is delayed by lower-priority threads holding the monitor. To
mitigate this, priority inheritance or priority ceiling protocols are sometimes used.
7. Thread Safety: Monitors help ensure thread safety, as they encapsulate shared
resources and their access procedures. This simplifies concurrent programming
and reduces the likelihood of synchronization bugs.
8. Examples: High-level programming languages like Java and Python provide
monitor-like constructs. In Java, for example, the synchronized keyword is used to
create synchronized methods and blocks that function as monitors.

Monitors provide an effective way to manage concurrency and shared resources in a structured and less error-prone manner. They have been widely adopted in the development of concurrent software and operating systems, making it easier to design and implement complex concurrent systems while minimizing the potential for synchronization issues.
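In languages without built-in monitors, one can be approximated with a mutex (for mutual exclusion) plus a condition variable (for wait/signal). The following hand-rolled monitor sketch in C uses POSIX threads; the deposit/withdraw procedures and the counter they protect are hypothetical examples.

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
int count = 0;                          /* the shared resource */

void deposit(void) {                    /* monitor procedure */
    pthread_mutex_lock(&lock);          /* enter the monitor */
    count++;
    pthread_cond_signal(&nonzero);      /* wake one waiting thread */
    pthread_mutex_unlock(&lock);        /* leave the monitor */
}

void withdraw(void) {                   /* monitor procedure */
    pthread_mutex_lock(&lock);
    while (count == 0)                  /* wait releases the monitor ... */
        pthread_cond_wait(&nonzero, &lock); /* ... and reacquires it */
    count--;
    pthread_mutex_unlock(&lock);
}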

Deadlock in Operating System

What is Deadlock in OS?

All the processes in a system require resources such as the central processing unit (CPU), file storage, input/output devices, etc. to execute. Once the execution is finished, the process releases the resources it was holding. However, when many processes run on a system, they also compete for the resources they require for execution. This may give rise to a deadlock situation.

A deadlock is a situation in which two or more processes are blocked because each is holding a resource while also requiring a resource that is held by another of the processes. As a result, none of the processes gets executed.
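A minimal sketch of how such a deadlock arises in code, using two POSIX mutexes as the two resources; the sleep() calls are only there to make the unlucky interleaving likely.

#include <pthread.h>
#include <unistd.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 1 */
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* Resource 2 */

void *process1(void *arg) {
    pthread_mutex_lock(&r1);    /* holds Resource 1 */
    sleep(1);                   /* let process2 grab Resource 2 */
    pthread_mutex_lock(&r2);    /* blocks forever: process2 holds r2 */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

void *process2(void *arg) {
    pthread_mutex_lock(&r2);    /* holds Resource 2 */
    sleep(1);
    pthread_mutex_lock(&r1);    /* blocks forever: process1 holds r1 */
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}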

Deadlock System model

A deadlock occurs when a set of processes is stalled because each process is holding a resource and waiting for a resource held by another process. For example, Process 1 may hold Resource 1 while requesting Resource 2, while Process 2 holds Resource 2 and is waiting for Resource 1.

System Model:

 For the purposes of deadlock discussion, a system can be modeled as a collection of limited resources that can be divided into different categories and allocated to a variety of processes, each with different requirements.

 Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other
resources are examples of resource categories.

 By definition, all resources within a category are equivalent, and any of the
resources within that category can equally satisfy a request from that
category. If this is not the case (i.e. if there is some difference between the
resources within a category), then that category must be subdivided further.
For example, the term “printers” may need to be subdivided into “laser
printers” and “color inkjet printers.”

 Some categories may only have one resource.


 The kernel keeps track, for all kernel-managed resources, of which resources are free and which are allocated, to which process they are allocated, and a queue of processes waiting for each resource to become available. Application-managed resources (i.e., binary or counting semaphores) can be controlled with mutexes or with wait() and signal() calls.

 When every process in a set is waiting for a resource that is currently assigned to another process in the set, the set is said to be deadlocked.

Operations:
In normal operation, a process must request a resource before using it and release it when finished, as shown below.

1. Request –
If the request cannot be granted immediately, the process must wait until the required resource(s) become available. The system, for example, uses functions such as open(), malloc(), new(), and request().

2. Use –
The process makes use of the resource, such as printing to a printer or reading from a file.

3. Release –
The process relinquishes the resource, allowing it to be used by other processes.

Necessary Conditions for Deadlock (Deadlock Characterization)

The four necessary conditions for a deadlock to arise are as follows.

 Mutual Exclusion: Only one process can use a resource at any given time
i.e. the resources are non-sharable.

 Hold and wait: A process is holding at least one resource while waiting to acquire additional resources held by other processes.

 No preemption: A resource cannot be forcibly taken away from a process; it can be released only voluntarily by the process holding it, after that process has finished execution.

 Circular Wait: A set of processes wait for each other in a circular fashion. For example, suppose there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2, P2 depends on P3, and P3 depends on P0. This creates a circular relation among all these processes, and they have to wait forever to be executed.

Example

Consider two processes and two resources. Process 1 holds "Resource 1" and needs "Resource 2", while Process 2 holds "Resource 2" and requires "Resource 1". This creates a deadlock, because neither of the two processes can be executed. Since the resources are non-shareable, they can only be used by one process at a time (Mutual Exclusion). Each process is holding a resource and waiting for the other process to release the resource it requires (Hold and Wait). Neither process releases its resource before completing its execution (No Preemption), and this creates a circular wait. Therefore, all four conditions are satisfied.

Methods of Handling Deadlocks in Operating System

The first two methods are used to ensure the system never enters a deadlock.

Deadlock Prevention

This is done by restraining the ways a request can be made. Since deadlock occurs
when all the above four conditions are met, we try to prevent any one of them,
thus preventing a deadlock.
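One common way to prevent the circular-wait condition is to impose a single global order in which every process acquires its locks. A minimal sketch, reusing the two POSIX mutexes from the earlier deadlock example:

#include <pthread.h>

pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void *any_process(void *arg) {
    pthread_mutex_lock(&r1);    /* always acquire r1 first ... */
    pthread_mutex_lock(&r2);    /* ... then r2, so no cycle can form */
    /* use both resources */
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}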

Deadlock Avoidance

When a process requests a resource, the deadlock avoidance algorithm examines the resource-allocation state. If allocating that resource would send the system into an unsafe state, the request is not granted.

Therefore, this method requires additional information, such as how many resources of each type each process may require. If the system would enter an unsafe state, it has to step back to avoid deadlock.
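The safety check at the heart of deadlock avoidance is usually presented as the Banker's algorithm. Below is a sketch of its safety test in C; the process and resource counts, and the matrix layout, are assumptions for illustration.

#include <string.h>

#define P 3   /* number of processes */
#define R 2   /* number of resource types */

/* Returns 1 if the current allocation state is safe, 0 if unsafe. */
int is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R], finish[P] = {0};
    memcpy(work, avail, sizeof(work));   /* work = currently free resources */
    for (int done = 0; done < P; ) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;                  /* can process i finish with work? */
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                    /* pretend i runs, then releases */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = 1; done++; progressed = 1;
            }
        }
        if (!progressed) return 0;       /* nobody can finish: unsafe */
    }
    return 1;                            /* a safe sequence exists */
}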

Deadlock Detection and Recovery

We let the system fall into a deadlock, and if it happens, we detect it using a detection algorithm and try to recover.

Some ways of recovery are as follows.

 Aborting all the deadlocked processes.

 Aborting one process at a time until the system recovers from the deadlock.

 Resource Preemption: Resources are taken one by one from a process and
assigned to higher priority processes until the deadlock is resolved.

Deadlock Ignorance

In this method, the system assumes that deadlock never occurs. Since deadlock situations are not frequent, some systems simply ignore them. Operating systems such as UNIX and Windows follow this approach. However, if a deadlock occurs, we can reboot the system and the deadlock is resolved automatically.

Note: The above approach is an example of the Ostrich Algorithm, a strategy of ignoring potential problems on the basis that they are extremely rare.

Difference between Starvation and Deadlock

 Deadlock: A deadlock is a situation in which more than one process is blocked because each is holding a resource and also requires some resource acquired by another process. Starvation: Starvation is a situation in which low-priority processes are postponed indefinitely because the resources they need are never allocated.

 Deadlock: Resources are blocked by a set of processes in a circular fashion. Starvation: Resources are continuously used by high-priority processes.

 Deadlock: It is prevented by avoiding any one of the necessary conditions for a deadlock, or resolved using a recovery algorithm. Starvation: It can be prevented by aging.

 Deadlock: None of the processes gets executed. Starvation: Higher-priority processes execute while lower-priority processes are postponed.

 Deadlock: Deadlock is also called circular wait. Starvation: Starvation is also called livelock.

Advantages of Deadlock Handling Methods

 No preemption is needed to handle deadlocks.

 It is a good method if the state of the resource can be saved and restored
easily.

 It is good for activities that perform a single burst of activity.

 It does not need run-time computations because the problem is solved in
system design.

Disadvantages of Deadlock Handling Methods

 Processes must know in advance the maximum number of resources of each type required for their execution.

 Preemptions are frequently encountered.

 It delays the process initiation.

 There are inherent pre-emption losses.

 It does not support incremental requests for resources.
