
Operating System

UNIT 2: UNDERSTANDING THE SYNCHRONIZATION PROCESS
Race Conditions
 Race conditions occur when two or more processes are allowed to modify the same shared variable at the same time.
 To prevent race conditions, the operating system must perform process synchronization to guarantee that only one process is updating a shared variable at any one time.

SSCR-Canlubang Campus 6-Process Synchronization


Critical Section
 part of the process that contains the instruction or
instructions that access a shared variable or resource



Critical Section
The solution to this problem involves three aspects.
1. Mutual exclusion requirement should always be met in order
to guarantee that only one process may enter its critical
section at a time. If a process is currently executing its critical
section, any process that attempts to enter its own critical
section should wait until the other process leaves its critical
section.
2. Progress requirement. The solution must guarantee that if a
process wants to enter its critical section and no other process
is in its critical section, the process will be able to execute in
its critical section.
3. Bounded waiting requirement. The solution must guarantee that a process will not wait for an indefinite amount of time before it can enter its critical section.
Critical Section
 Software solutions to the critical section problem:
The first solution uses a global or shared variable called TurnToEnter.
If TurnToEnter = 0, P0 can execute its critical section. P0 then sets TurnToEnter to 1 to indicate that it has finished executing its critical section.
If TurnToEnter = 1, P1 can execute its critical section. P1 then sets TurnToEnter to 0 to indicate that it has finished executing its critical section. The figure below illustrates this.
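The scheme above (often called strict alternation) can be sketched with two Python threads; the shared counter standing in for the critical section is an illustrative assumption, and the switch-interval tweak only keeps the busy-wait handoffs fast under the CPython GIL. Note one known drawback: the two processes must take turns strictly, so a process that never wants to enter would block the other.

```python
import sys
import threading

sys.setswitchinterval(5e-4)  # shorten GIL switches so busy-waiting stays fast

turn_to_enter = 0    # 0 -> P0 may enter, 1 -> P1 may enter
counter = 0          # shared variable updated inside the critical section
ROUNDS = 100

def process(pid):
    global turn_to_enter, counter
    for _ in range(ROUNDS):
        while turn_to_enter != pid:   # entry section: busy-wait for our turn
            pass
        counter += 1                  # critical section
        turn_to_enter = 1 - pid       # exit section: hand the turn over

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 200: both processes entered ROUNDS times, strictly alternating
```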



Critical Section
The second solution uses a two-element,
Boolean array called WantToEnter.
If a process wants to enter its critical
section, it sets its variable to true.
If WantToEnter[0] = true, P0 wants to
enter its critical section.
If WantToEnter[1] = true, P1 wants to
enter its critical section. Then before a process enters its critical section, it first checks the WantToEnter variable of the other process.
If the WantToEnter variable of the other
process is also true, it will wait until it
becomes false before proceeding in its
critical section. A process will set its
WantToEnter variable to false once it
exits its critical section.



Critical Section
The third solution is Peterson’s Algorithm, which uses both the TurnToEnter and WantToEnter variables.
Suppose P0 wants to enter its critical
section, it sets WantToEnter[0] = true
and TurnToEnter = 1.
P0 then checks if P1 wants to enter its
critical section (WantToEnter[1] ==
true) and if it is P1’s turn to enter its
critical section (TurnToEnter == 1).
P0 will only proceed to execute its critical section if one of these conditions does not hold.
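Peterson's Algorithm can be sketched as follows; the shared counter is an illustrative stand-in for the critical section, and the switch-interval tweak only speeds up the busy waiting under the CPython GIL. (On real hardware with weak memory ordering, the flag and turn writes would additionally need memory barriers.)

```python
import sys
import threading

sys.setswitchinterval(5e-4)  # shorten GIL switches so busy-waiting stays fast

want_to_enter = [False, False]   # WantToEnter from the text
turn_to_enter = 0                # TurnToEnter from the text
counter = 0
ROUNDS = 200

def process(pid):
    global turn_to_enter, counter
    other = 1 - pid
    for _ in range(ROUNDS):
        want_to_enter[pid] = True      # announce intent to enter
        turn_to_enter = other          # politely give the turn away
        # wait only while the other wants in AND it is the other's turn
        while want_to_enter[other] and turn_to_enter == other:
            pass
        counter += 1                   # critical section
        want_to_enter[pid] = False     # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400: every increment happened under mutual exclusion
```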





Critical Section
The solution to the critical section problem involving several processes is often called the Bakery Algorithm.
It follows the system used by bakeries
in attending to their customers.
Each customer that enters the bakery
gets a number.
As customers enter the bakery, the
numbers increase by one.
The customer that has the lowest
number will be served first.
Once finished, the customer discards
the assigned number. The customer
must get a new number if he or she
wants to be served again.
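The bakery analogy maps onto code as follows; this is a sketch of Lamport's Bakery Algorithm for N processes, with a shared counter assumed as the critical section. choosing[i] guards number selection, and ties between equal numbers are broken by process id.

```python
import sys
import threading

sys.setswitchinterval(5e-4)  # shorten GIL switches so busy-waiting stays fast

N = 3
ROUNDS = 50
choosing = [False] * N   # True while process i is picking its number
number = [0] * N         # 0 means "not waiting"; otherwise the ticket held
counter = 0

def process(pid):
    global counter
    for _ in range(ROUNDS):
        # take a number one higher than any currently held number
        choosing[pid] = True
        number[pid] = 1 + max(number)
        choosing[pid] = False
        for j in range(N):
            while choosing[j]:                 # wait while j is still choosing
                pass
            # wait while j holds a smaller ticket (or an equal one with smaller id)
            while number[j] != 0 and (number[j], j) < (number[pid], pid):
                pass
        counter += 1                           # critical section: being "served"
        number[pid] = 0                        # discard the assigned number

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 150: N processes each entered ROUNDS times
```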



Critical Section
Hardware solutions to the critical section problem:

 Disabling interrupts.
This feature enables a process to disable interrupts before it starts modifying a shared variable. If interrupts are disabled, the CPU will not be switched from one process to another. The operating system will then simply allow the process to complete the execution of its critical section even though its time quantum has expired. Upon exiting the critical section, the process enables interrupts again.

 Special hardware instructions.


There are special machine instructions that will allow a process to
modify a variable or memory location atomically. This means that if
a process executes an atomic operation, no other process will be
able to preempt it.



Critical Section
An example of this special instruction is the test_and_set instruction.
This instruction is illustrated in the figure below.



Critical Section
This algorithm uses a shared
Boolean variable called lock.
The test_and_set instruction is
used to test if lock is true or
false. The variable lock is true
if a process is inside its critical
section. Otherwise, lock is set
to false.



What is Process Synchronization?

• Process Synchronization is the coordination of the execution of multiple processes in a multi-process system to ensure that they access shared resources in a controlled and predictable manner.
• It aims to resolve the problem of race conditions and other synchronization issues in a concurrent system.
Objective of Process Synchronization

• To ensure that multiple processes access shared resources without interfering with each other, and to prevent the possibility of inconsistent data due to concurrent access.
• To achieve this, various synchronization techniques such as critical sections, semaphores, and monitors are used.
Process Synchronization in a Multi-Process System

• To ensure data consistency and integrity, and to avoid the risk of deadlocks and other synchronization problems.
• To ensure the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:

• Independent Process: The execution of one process does not affect the execution of other processes.
• Cooperative Process: A process that can affect or be affected by other processes executing in the system.
• The process synchronization problem arises with cooperative processes because resources are shared among them.
Semaphores
 A semaphore is a tool that can easily be used to solve more complex synchronization problems and can be implemented without busy waiting.
1. wait (S)
This operation waits for the value of the semaphore S to become greater than 0. Once this condition is satisfied, wait decrements its value by 1.
wait(S): while S <= 0 { }; S--;
2. signal (S)
This operation increments the value of semaphore S by 1. signal(S): S++;
The wait operation is atomic: once semaphore S is found to be greater than 0, the test and the decrement cannot be interrupted.



Semaphore provides mutual exclusion
Semaphore mutex; // initialized to 1
do {
wait (mutex);
// Critical Section
signal (mutex);
// remainder section
} while (TRUE);
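The loop above maps directly onto Python's threading.Semaphore: acquire() plays the role of wait(mutex) and release() the role of signal(mutex). The shared counter increment is an assumed stand-in for the critical section.

```python
import threading

mutex = threading.Semaphore(1)   # initialized to 1, as in the slide

counter = 0

def process():
    global counter
    for _ in range(1000):
        mutex.acquire()          # wait (mutex)
        counter += 1             # critical section
        mutex.release()          # signal (mutex)
        # remainder section would go here

threads = [threading.Thread(target=process) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 5000: every increment was protected by the mutex
```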
Monitor
• Monitor is one of the ways to achieve process synchronization.
• Monitors are supported by programming languages to achieve mutual exclusion between processes.
• For example, Java's synchronized methods; Java also provides the wait() and notify() constructs.
1. A monitor is a collection of condition variables and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access the monitor's internal variables but can call the monitor's procedures.
3. Only one process at a time can execute code inside a monitor.
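A monitor can be sketched in Python with a Condition object, which bundles a mutual-exclusion lock with wait()/notify() much like the Java constructs mentioned above. The bounded buffer below is an illustrative assumption: its list and capacity are the monitor's internal variables, reachable only through its procedures.

```python
import threading

class BoundedBuffer:
    def __init__(self, capacity):
        self._items = []                     # internal variable of the monitor
        self._capacity = capacity
        self._cond = threading.Condition()   # lock + condition variable

    def put(self, item):                     # monitor procedure
        with self._cond:                     # only one process inside at a time
            while len(self._items) >= self._capacity:
                self._cond.wait()            # wait until there is room
            self._items.append(item)
            self._cond.notify_all()          # wake any waiting getters

    def get(self):                           # monitor procedure
        with self._cond:
            while not self._items:
                self._cond.wait()            # wait until an item exists
            item = self._items.pop(0)
            self._cond.notify_all()          # wake any waiting putters
            return item

buf = BoundedBuffer(2)
results = []
consumer = threading.Thread(
    target=lambda: results.extend(buf.get() for _ in range(3)))
consumer.start()
for x in (1, 2, 3):
    buf.put(x)       # blocks when the buffer is full
consumer.join()
print(results)  # [1, 2, 3]
```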
Classic Synchronization Problems
The Dining Philosophers Problem

Restrictions:
1. A philosopher cannot start
eating unless he has both
forks.
2. A philosopher cannot pick up
both forks at the same time.
He has to do it one at a time.
3. He cannot get the fork that is
being used by the philosopher
to his right or to his left.



Classic Synchronization Problems
A possible solution is to use a semaphore to represent each fork. The wait operation is used when picking up a fork while the signal operation is used when putting down a fork.

The mutual exclusion requirement is satisfied since each fork is represented by a semaphore, which guarantees that only one philosopher can use a particular fork at a time.
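The fork-per-semaphore solution can be sketched as follows, with 5 philosophers each eating a fixed number of times. To keep the demo deadlock-free, each philosopher here picks up the lower-numbered fork first (a resource-ordering variation, not stated in the text): the plain left-then-right order can deadlock if all five pick up their left fork at once.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per fork
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    first, second = sorted((left, right))    # always grab the lower id first
    for _ in range(10):
        forks[first].acquire()               # wait(fork): pick up one fork
        forks[second].acquire()              # then the other, one at a time
        meals[i] += 1                        # eat (critical section)
        forks[second].release()              # signal(fork): put down
        forks[first].release()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [10, 10, 10, 10, 10]: everyone ate, no deadlock, no starvation
```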



Classic Synchronization Problems
In the readers-writers problem, this solution is often called the readers-preference solution. If it happens that a steady stream of incoming reader processes wants to access the database, this may cause the starvation of writer processes. Writer processes will have to wait for an indefinite period of time before they can access the database.
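A common way to implement readers-preference (a sketch under assumed names, since the slides omit the code) uses a shared read_count protected by one lock and a second semaphore, rw, that writers hold exclusively. The first reader in locks out writers and the last reader out readmits them, which is exactly why a steady stream of readers can starve writers.

```python
import threading

rw = threading.Semaphore(1)        # held by a writer, or by the reader group
read_count_lock = threading.Lock() # protects read_count
read_count = 0
log = []                           # records who accessed the "database"

def reader(name):
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            rw.acquire()           # first reader locks out writers
    log.append(f"{name} reading")  # many readers may be here concurrently
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            rw.release()           # last reader readmits writers

def writer(name):
    rw.acquire()                   # exclusive access to the database
    log.append(f"{name} writing")
    rw.release()

threads = [threading.Thread(target=reader, args=(f"R{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("W0",)))
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))  # ['R0 reading', 'R1 reading', 'R2 reading', 'W0 writing']
```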



Classic Synchronization Problems
A solution that favors the writer
processes is called writers-preference
solution. It still does not grant a writer
access to the database if there are
readers or writers already in the
database. However, a reader is denied
access to the database if a writer is
currently accessing the database or if
there is a waiting writer process (even if
other reader processes are already in the
database). If it happens that a steady
stream of incoming writer processes
wants to access the database, this may
cause the starvation of reader processes.



Advantages of Process Synchronization

• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization

• Adds overhead to the system
• This can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlocks if not implemented properly
What is Deadlock?
• A deadlock is a situation that occurs when a process enters a waiting state because the requested resource is being held by another waiting process, which in turn is waiting for another resource held by yet another waiting process.
• If a process is unable to change its state indefinitely because the resources it has requested are being used by another waiting process, then the system is said to be in a deadlock.
A practical example of Deadlock

• Let's visualize deadlock: you can't get a job without experience, and you can't get experience without a job.
• Two processes competing for two resources in opposite order:
A. A single process goes through.
B. The later process has to wait.
C. A deadlock occurs when the first process locks the first resource at the same time as the second process locks the second resource.
D. The deadlock can be resolved by cancelling and restarting the first process.
Necessary Conditions
• There are four conditions that are necessary for deadlock to occur:
1. Mutual Exclusion - At least one resource must be held in a non-sharable mode; if any other process requests this resource, then that process must wait for the resource to be released.
2. Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.
3. No Preemption - Once a process is holding a resource (i.e., once its request has been granted), that resource cannot be taken away from that process until the process voluntarily releases it.
4. Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[i] is waiting for P[(i + 1) % (N + 1)]. (Note that this condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered separately.)
METHODS FOR HANDLING DEADLOCK:-
• Generally speaking, there are three ways of handling deadlocks:
1. Deadlock prevention or avoidance - Do not allow the
system to get into a deadlocked state.
2. Deadlock detection and recovery - Abort a process or preempt some resources when deadlocks are detected.
3. Ignore the problem altogether - If deadlocks only occur once a year or so, it may be better to simply let them happen and reboot as necessary than to incur the constant overhead and system performance penalties associated with deadlock prevention or detection. This is the approach that both Windows and UNIX take.
Deadlock prevention:-
• One way to handle deadlocks is to ensure that at least one of the
four necessary conditions causing deadlocks is prevented by
design. This is deadlock prevention.
• The deadlock prevention approach is to design a system in such a way that the possibility of deadlock is excluded.
• Deadlocks can be prevented by preventing at least one of the
four required conditions:
• Mutual Exclusion
• Hold and Wait
• No Preemption
• Circular Wait
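The most common of these preventions in practice is breaking circular wait by imposing a global ordering on resources: every process acquires locks in ascending order, so a cycle of waits can never form. The two resources and worker logic below are illustrative assumptions.

```python
import threading

resource_a = threading.Lock()   # global order: resource_a comes first
resource_b = threading.Lock()   # ...then resource_b
events = []

def worker(name):
    # Both workers acquire in the same global order (a before b), so the
    # opposite-order interleaving that produces deadlock cannot happen.
    with resource_a:
        with resource_b:
            events.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("P0", "P1")]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(events))  # ['P0', 'P1']: both finish; no deadlock
```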
Deadlock avoidance:-
• The basic idea of deadlock avoidance is to grant only those requests for available resources which cannot possibly result in deadlock.
• A decision is made dynamically on whether granting the current resource request could lead to deadlock.
• If it cannot, the resource is granted to the requesting process.
• Otherwise, the requesting process is suspended until the time when its pending request can be safely granted.
• The two approaches followed for deadlock avoidance are:-
• Do not start the process if its demand might lead to deadlock.
• Do not grant an incremental resource request to a process if this
allocation might result in deadlock.
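The decision procedure behind these two approaches is a safety check (the idea of the Banker's Algorithm, which the text does not name): a request is granted only if the resulting state is safe, i.e. some order exists in which every process can still finish. The matrices below are an illustrative assumption, not data from the text.

```python
def is_safe(available, allocation, need):
    """Return True if some completion order lets every process finish."""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            # process i can finish if its remaining need fits in work
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # simulate i finishing and releasing everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return all(finish)

# Five processes, three resource types: this state is safe...
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # True

# ...but with almost nothing available, no process can finish: unsafe.
print(is_safe([1, 0, 0], allocation, need))   # False
```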
Deadlock detection and recovery:-
• In this approach, the available resources are granted freely and deadlocks are checked for occasionally.
• Detection means discovering a deadlock. If a deadlock exists, the system must break the deadlock and recover.
• This approach grants resources freely but occasionally examines the system state for deadlock and takes remedial action when required. That is why it is called deadlock detection and recovery.
• This approach involves two steps:-
• First, the deadlocked processes are identified.
• Next, the deadlock is broken so the system can recover.
• The various strategies for recovery of deadlock are:-
1. Abort all deadlocked processes.
2. Backup each deadlocked process to some previously defined checkpoint and
restart.
3. Successively abort deadlocked processes until deadlock no longer exists.
4. Successively preempt resources until deadlock no longer exists.
