
UNIT - II

Process Synchronization
Introduction:

A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes may either directly share a logical address space or be allowed to share data only through files.

Critical Section:

A section of code that only one process at a time can be executing.

Critical Section Problem:

The critical-section problem is to design an algorithm that allows at most one process into the critical section at a time, without deadlock. Each process has a segment of code called a critical section, and critical sections are used to avoid race conditions. When one process is executing in its critical section, no other process is allowed to execute in its critical section; execution of critical sections is mutually exclusive in time. The remaining sections of a process are the entry section, the exit section and the remainder section.

do
{
    entry section
        critical section
    exit section
        remainder section
} while(1);

General structure of a process (Pi)

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion

2. Progress

3. Bounded waiting

Mutual exclusion:

If one process Pi is executing in its critical section, then no other process is allowed to execute in its critical section.

Progress:

If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its critical section next, and this selection cannot be postponed indefinitely.

Bounded waiting:

After a process has requested entry to its critical section, there is a bound on the number of times other processes are allowed to enter their critical sections before the request is granted; the decision that grants access may not be delayed indefinitely.

Solutions to the critical-section problem are as follows:

Two process solutions:

We consider only two processes, P0 and P1, for solving the critical-section problem. The following algorithms address the critical section.

Algorithm 1:

Both processes P0 and P1 share a common integer variable for solving the critical-section problem. We name this variable turn and initialise it to 0 or 1.

If turn == i, then process Pi is allowed to execute in its critical section.

Structure:

do
{
    while (turn != i);
        critical section
    turn = j;
        remainder section
} while(1);
For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section.

Algorithm 2:

Algorithm 1 does not retain sufficient information about the state of each process; it records only which process is allowed to enter its critical section next. To solve this problem, the variable turn is replaced with a new array called flag.

boolean flag[2];

The elements of the array are initially set to false.

If flag[i] is true, then process Pi is ready to enter its critical section.

Structure:

do
{
    flag[i] = true;
    while (flag[j]);
        critical section
    flag[i] = false;
        remainder section
} while(1);

In this algorithm, process Pi first sets flag[i] to true to indicate that it is ready to enter its critical section. Process Pi then checks process Pj: if Pj is ready, Pi waits until flag[j] becomes false and only then enters its critical section. However, if both processes set their flags to true at the same time, each waits for the other indefinitely, so the progress requirement is not satisfied.

Algorithm 3:

It gives the correct solution to the critical-section problem: Algorithm 3 satisfies all three requirements of the critical section. The processes share two variables.

boolean flag[2];

int turn;

The initial condition is

flag[0] = flag[1] = false;

Structure:

do
{
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while(1);

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j. If both processes try to enter at the same time, turn is set to both i and j at roughly the same time, but only one of these assignments lasts; the other is overwritten, and the surviving value decides which process enters its critical section first.
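
A minimal C sketch of Algorithm 3 (Peterson's two-process solution) is given below, assuming a POSIX threads environment. The thread indices, loop count and shared counter are illustrative, and on modern hardware the flag and turn variables would also need atomic operations or memory barriers, which are omitted here to keep the sketch close to the structure above.

#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

/* Shared state for Peterson's algorithm (processes i = 0 and 1). */
volatile bool flag[2] = { false, false };   /* flag[i]: Pi wants to enter  */
volatile int  turn = 0;                     /* which process must yield    */
int counter = 0;                            /* shared data being protected */

void *worker(void *arg)
{
    int i = *(int *)arg;        /* this thread's index */
    int j = 1 - i;              /* the other thread    */
    for (int k = 0; k < 100000; k++) {
        flag[i] = true;                   /* entry section    */
        turn = j;
        while (flag[j] && turn == j);     /* busy wait        */

        counter++;                        /* critical section */

        flag[i] = false;                  /* exit section     */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d\n", counter);    /* 200000 if mutual exclusion held */
    return 0;
}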

Multiple process solutions:

The bakery algorithm is used in the multiple-process solution. It solves the critical-section problem for n processes and was originally developed for distributed environments. The algorithm allows processes to enter the critical section in the order of their token numbers. The bakery algorithm cannot guarantee that two processes do not receive the same number; in this case, the process with the lowest name (index) is served first. If Pi and Pj receive the same number and i < j, then Pi is served first.

Structure:

do
{
    choosing[i] = true;
    number[i] = 1 + max(number[0], number[1], ..., number[n-1]);
    choosing[i] = false;
    for (j = 0; j < n; j++)
    {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
        critical section
    number[i] = 0;
        remainder section
} while(1);
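
A hedged C sketch of the bakery algorithm for N threads follows; the thread count, loop count and shared counter are illustrative, number[i] is taken as 1 + max(...) so that a waiting token is never 0, and the same caveat about memory ordering on real hardware applies.

#include <pthread.h>
#include <stdio.h>
#include <stdbool.h>

#define N 4                      /* number of competing threads */

volatile bool choosing[N];       /* true while Pi is picking a number  */
volatile int  number[N];         /* Pi's token; 0 means not interested */
int counter = 0;                 /* shared data protected by the lock  */

/* Lexicographic comparison (a, b) < (c, d): used to break ties by index. */
static bool lex_less(int a, int b, int c, int d)
{
    return (a < c) || (a == c && b < d);
}

static int max_number(void)
{
    int m = 0;
    for (int k = 0; k < N; k++)
        if (number[k] > m)
            m = number[k];
    return m;
}

void *worker(void *arg)
{
    int i = *(int *)arg;
    for (int k = 0; k < 10000; k++) {
        choosing[i] = true;                     /* entry section */
        number[i] = 1 + max_number();
        choosing[i] = false;
        for (int j = 0; j < N; j++) {
            while (choosing[j]);
            while (number[j] != 0 &&
                   lex_less(number[j], j, number[i], i));
        }

        counter++;                              /* critical section */

        number[i] = 0;                          /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, worker, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("counter = %d\n", counter);          /* N * 10000 expected */
    return 0;
}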

Synchronization Hardware:

Hardware features can make the programming task easier and improve system efficiency. Various synchronization mechanisms are available to provide interprocess coordination and communication. The test-and-set instruction is used for the critical-section problem. In most synchronization schemes, a physical entity may be used to represent the resource. This instruction is executed atomically, and it is used in multiprocessor environments.

If two test-and-set instructions are executed simultaneously, they will be executed sequentially in some arbitrary order.

The test-and-set instruction is defined as follows:

boolean TestAndSet(boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}

The test-and-set instruction is used in implementing mutual exclusion with a shared boolean variable lock, which is initialised to false. The structure is given below.

Structure:

do
{
    while (TestAndSet(lock));
        critical section
    lock = false;
        remainder section
} while(1);

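The C11 atomic_flag type provides an operation, atomic_flag_test_and_set, that behaves like the hardware test-and-set described above, so a spinlock of this kind can be sketched as follows; the lock name, loop count and shared counter are illustrative.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear (false) means unlocked */
int counter = 0;                       /* shared data being protected  */

void *worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        /* entry section: spin until test-and-set returns false */
        while (atomic_flag_test_and_set(&lock));

        counter++;                     /* critical section */

        atomic_flag_clear(&lock);      /* exit section: lock = false */
        /* remainder section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* 200000 expected */
    return 0;
}
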
Semaphores:

Semaphores are used to solve the critical-section problem. A semaphore S is a variable that has an integer value upon which the following three operations are defined:

1. A semaphore may be initialised to a non-negative value.

2. The wait operation decrements the semaphore value. If the value becomes negative, the process executing the wait is blocked.

3. The signal operation increments the semaphore value. If the value is not positive, a process blocked by a wait operation is unblocked.

Pseudo code for wait:

wait(S)
{
    while (S <= 0);
    S = S - 1;
}

Pseudo code for signal:

signal(S)
{
    S = S + 1;
}

Wait and signal operations on semaphores are executed atomically: it is guaranteed that no two processes can execute wait and signal operations on the same semaphore at the same time.
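
POSIX provides counting semaphores whose sem_wait and sem_post calls correspond to the wait and signal operations above. A minimal sketch follows; the initial value of 1 makes it a binary semaphore, and the counter and loop count are illustrative.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t mutex;                 /* binary semaphore guarding the counter */
int counter = 0;

void *worker(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        sem_wait(&mutex);    /* wait(S): blocks while the value is 0 */
        counter++;           /* critical section */
        sem_post(&mutex);    /* signal(S): increments the value */
    }
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);  /* initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);   /* 200000 expected */
    sem_destroy(&mutex);
    return 0;
}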

Binary Semaphore:

It is a semaphore with an integer value that can range only between 0 and 1. In principle, it should be easier to implement the binary semaphore than the general (counting) semaphore.

Semaphores are not provided by hardware, but they have several attractive properties:

1. Semaphores are machine independent.

2. Semaphores are simple to implement.

3. Correctness is easy to determine.

4. A program can have many different critical sections, each guarded by a different semaphore.

5. Semaphores can be used to acquire many resources simultaneously.

A semaphore in which the process that has been blocked the longest is released from the queue first is called a strong semaphore.

A semaphore that does not specify the order in which processes are removed from the queue is called a weak semaphore.

Busy Waiting:

Busy waiting cannot always be avoided altogether. Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock.

Spinlocks are useful in multiprocessor systems, because no context switch is required while a process waits on a lock.

Drawback of semaphore:

They are essentially shared global variables.

Access to semaphores can come from anywhere in a program.

There is no control or guarantee of proper usage.

For the bounded-buffer problem described below, the semaphores are initialised as follows:

1. full is initially 0
2. empty is initially equal to the number of slots in the buffer
3. mutex is initially 1

Classic Problem of Synchronization:

Race conditions and the critical-section problem are solved using various methods. In this section, some classic example problems are discussed.

Producer-Consumer Problem:

One or more producers generate some type of data and place it in a buffer. A single consumer takes items out of the buffer one at a time. The system must be constrained to prevent overlapping buffer operations. The producer can generate items and store them in the buffer at its own pace; each time it does so, an index (in) into the buffer is incremented.

[Figure: buffer slots B[1] B[2] B[3] B[4] B[5] B[6] ..., with out marking the next item to be consumed and in marking the next free slot]
The consumer proceeds in a similar fashion but must make sure that it does not attempt to read from an empty buffer. The buffer itself may be implemented as an array, a linked list, or any other collection of data items.

Bounded buffer:

In a bounded buffer, a producer may produce items only when there are empty buffer slots. A consumer may consume only produced items and must wait when no items are available.

All producers must be kept waiting when the buffer is full, and when the buffer is empty the consumers must wait, for they can never get ahead of the producers.

Buffers are usually implemented in a circular fashion: in points to the next slot available for a produced item, and out points to the place from which the next item is to be consumed.

[Figure: circular bounded buffer B[1] ... B[n] with in and out pointers]

The CPU can generate output data much faster than a line printer can print it. Since this involves a producer and a consumer working at two different speeds, we need a buffer where the producer can temporarily store data that can be retrieved by the consumer at a more appropriate speed.

Structure of consumer process:

do
{
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in nextc
    ...
} while(1);
A solution to the producer-consumer problem must satisfy the following conditions:

1. A producer must not overwrite a full buffer.
2. A consumer must not consume an empty buffer.
3. Producers and consumers must access buffers in a mutually exclusive manner.
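
A hedged C sketch of the bounded-buffer solution with POSIX semaphores is shown below, combining the producer and consumer structures given in this section (the producer structure appears after the readers-writers conditions). The buffer size, item type and item counts are illustrative; empty, full and mutex are initialised as listed earlier.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 8                /* number of buffer slots; illustrative */

int buffer[N];
int in = 0, out = 0;       /* next free slot / next item to consume */

sem_t empty;               /* counts empty slots, initially N          */
sem_t full;                /* counts full slots, initially 0           */
sem_t mutex;               /* protects buffer, in and out; initially 1 */

void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 100; item++) {
        sem_wait(&empty);              /* wait(empty) */
        sem_wait(&mutex);              /* wait(mutex) */
        buffer[in] = item;             /* add nextp to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);              /* signal(mutex) */
        sem_post(&full);               /* signal(full)  */
    }
    return NULL;
}

void *consumer(void *arg)
{
    (void)arg;
    for (int k = 0; k < 100; k++) {
        sem_wait(&full);               /* wait(full)  */
        sem_wait(&mutex);              /* wait(mutex) */
        int item = buffer[out];        /* remove an item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);              /* signal(mutex) */
        sem_post(&empty);              /* signal(empty) */
        printf("consumed %d\n", item); /* consume the item */
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}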
Readers and Writers Problem:

The readers-writers problem is a good example of process synchronization and concurrency mechanisms.

There are a number of processes that only read the data area, called readers, and processes that only write to the data area, called writers. The following conditions must be satisfied:

Any number of readers may simultaneously read the file.

Only one writer at a time may write to the file.

If a writer is writing to the file, no reader may read it.

Structure of the producer process (for the bounded-buffer problem above):

do
{
    ...
    produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);
} while(1);

The readers-writers problem has several variations, all involving priorities: either the readers or the writers are given the higher priority. A sketch of the readers-priority variant follows.
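
The sketch below implements the readers-priority variant with POSIX semaphores; rw_mutex gives writers exclusive access, mutex protects readcount, and the shared data, thread counts and printed output are illustrative.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t rw_mutex;     /* held by a writer, or by the readers as a group */
sem_t mutex;        /* protects readcount */
int readcount = 0;  /* number of readers currently reading */
int shared_data = 0;

void *reader(void *arg)
{
    (void)arg;
    sem_wait(&mutex);
    readcount++;
    if (readcount == 1)          /* first reader locks out writers */
        sem_wait(&rw_mutex);
    sem_post(&mutex);

    printf("read %d\n", shared_data);   /* reading is performed */

    sem_wait(&mutex);
    readcount--;
    if (readcount == 0)          /* last reader lets writers in */
        sem_post(&rw_mutex);
    sem_post(&mutex);
    return NULL;
}

void *writer(void *arg)
{
    (void)arg;
    sem_wait(&rw_mutex);
    shared_data++;               /* writing is performed exclusively */
    sem_post(&rw_mutex);
    return NULL;
}

int main(void)
{
    sem_init(&rw_mutex, 0, 1);
    sem_init(&mutex, 0, 1);
    pthread_t r1, r2, w;
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&w,  NULL, writer, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    pthread_join(w, NULL);
    return 0;
}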
The Dining Philosophers Problem:

Five philosophers sit around a circular table. Each philosopher spends his life alternately thinking and eating. In the centre of the table is a large plate of food.

One fork is placed between each pair of philosophers, and they agree that each will use only the forks to his immediate left and right.

There are five philosopher processes, numbered 1 through 5. Between each pair of philosophers is a fork, also numbered 1 through 5, so that fork number 3 is between philosophers 2 and 3.

Structure of a philosopher:

do
{
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    think
    ...
} while(1);
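
A C sketch of the philosopher structure with POSIX semaphores is given below. Note that if all five philosophers pick up their left chopstick at the same moment, the structure above deadlocks; as one illustrative fix (not part of the structure above), the last philosopher in this sketch picks up the chopsticks in the opposite order.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define N 5

sem_t chopstick[N];               /* one binary semaphore per chopstick */

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    int first  = i;               /* left chopstick  */
    int second = (i + 1) % N;     /* right chopstick */
    if (i == N - 1) {             /* break the circular wait: the last
                                     philosopher reverses the order */
        first  = (i + 1) % N;
        second = i;
    }
    for (int meal = 0; meal < 3; meal++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        printf("philosopher %d eats\n", i);   /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
        /* think */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}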

Critical Regions:

A critical region is a control structure for implementing mutual exclusion over a shared variable. The declaration of a shared variable is given below.

Var mutex: shared T;

The variable mutex of type T is to be shared among many processes. The variable mutex can be accessed only inside a region statement of the following form:

region mutex when B do S;

While statement S is being executed, no other process can access mutex. B is the boolean expression that governs access to the critical region. Critical regions enforce restricted usage of the shared variable and prevent potential errors resulting from improper use of ordinary semaphores.

The critical region is very convenient for mutual exclusion; however, it is less versatile than a semaphore.

Conditional Critical Region:

Conditional critical regions allow us to specify synchronization as well as mutual exclusion. A conditional critical region is similar to a critical region, and the shared variable is declared in the same way.

Conditional critical regions provide the following features:

1. They provide mutual exclusion.

2. They permit a process executing in a conditional critical region to block itself until an arbitrary boolean condition becomes true.

The following code gives the idea of a conditional critical region:

var x: shared T;

begin
    repeat
        ...
        ...
        region x do
        begin
            ...
            await condition
            ...
        end;

Monitors:

A monitor is a programming language construct that provides functionality equivalent to that of semaphores but is easier to control. A monitor consists of procedures, the shared object and administrative data.
Characteristics of a monitor are as follows:

1. Only one process can be active within the monitor at a time.

2. The local data variables are accessible only by the monitor's procedures and not by any external procedure.

3. A process enters the monitor by invoking one of its procedures.

Monitors are a high-level data abstraction tool combining three features:

1. Shared data
2. Operations on the data
3. Synchronization and scheduling

A monitor is characterised by a set of programmer-defined operators. Monitors were devised to simplify the complexity of synchronization problems. Every synchronization problem that can be solved with a monitor can also be solved with semaphores, and vice versa. A monitor is a software module consisting of one or more procedures, an initialization sequence and local data.

Syntax:

monitor monitor-name
{
    declaration of shared variables

    procedure P1 (...)
    {
        procedure body
    }

    procedure P2 (...)
    {
        procedure body
    }

    ...

    procedure Pn (...)
    {
        procedure body
    }

    initialisation code
}

The monitor construct has been implemented in a number of programming languages. Since monitors are a language feature, they are implemented with the help of a compiler.

A monitor supports synchronization by the use of condition variables that are contained within the monitor and accessible only within the monitor.

The two operations on a condition variable X are:

1. X.wait(): suspend execution of the calling process on condition X. The monitor is now available for use by another process.

2. X.signal(): resume execution of some process suspended after an X.wait on the same condition variable. This operation resumes exactly one suspended process.

A condition variable is like a semaphore, with two differences:

1. A semaphore counts the number of excess up (signal) operations, but a signal operation on a condition variable has no effect unless some process is waiting; a wait on a condition variable always blocks the calling process.

2. A wait on a condition variable automatically does an up on the monitor mutex (releases the monitor lock) and blocks the caller.
interface Condition
{
    public void signal();
    public void wait();
}

Bounded buffer problem using monitors:

monitor BoundedBuffer
{
    private Buffer b = new Buffer(20);
    private int count = 0;
    private condition nonfull, nonempty;

    public void add(Object item)
    {
        if (count == 20)
            nonfull.wait();
        b.add(item);
        count++;
        nonempty.signal();
    }

    public Object remove()
    {
        if (count == 0)
            nonempty.wait();
        Object result = b.remove();
        count = count - 1;
        nonfull.signal();
        return result;
    }
}

Each condition variable is associated with some logical condition on the state of the monitor.
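
The same bounded buffer can be approximated in C, with a pthread mutex playing the role of the monitor lock and pthread condition variables playing the roles of nonfull and nonempty. The capacity of 20 follows the pseudocode above; the item type, the while loops (needed because pthread condition variables may wake spuriously, unlike the if used with Hoare-style monitors) and the small driver are illustrative.

#include <pthread.h>

#define CAPACITY 20

static int buf[CAPACITY];
static int count = 0, in = 0, out = 0;

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER; /* monitor entry lock */
static pthread_cond_t  nonfull      = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  nonempty     = PTHREAD_COND_INITIALIZER;

void add(int item)
{
    pthread_mutex_lock(&monitor_lock);                  /* enter the monitor */
    while (count == CAPACITY)
        pthread_cond_wait(&nonfull, &monitor_lock);     /* nonfull.wait()    */
    buf[in] = item;
    in = (in + 1) % CAPACITY;
    count++;
    pthread_cond_signal(&nonempty);                     /* nonempty.signal() */
    pthread_mutex_unlock(&monitor_lock);                /* leave the monitor */
}

int remove_item(void)
{
    pthread_mutex_lock(&monitor_lock);
    while (count == 0)
        pthread_cond_wait(&nonempty, &monitor_lock);    /* nonempty.wait()   */
    int result = buf[out];
    out = (out + 1) % CAPACITY;
    count--;
    pthread_cond_signal(&nonfull);                      /* nonfull.signal()  */
    pthread_mutex_unlock(&monitor_lock);
    return result;
}

int main(void)                                          /* single-threaded smoke test */
{
    add(42);
    return remove_item() == 42 ? 0 : 1;
}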
Drawbacks of monitor:

1. The major weakness of monitors is the absence of concurrency when a monitor encapsulates the resource, since only one process can be active within a monitor at a time.
2. There is the possibility of deadlock in the case of nested monitor calls.
3. Another weakness of the monitor concept is its lack of implementation in most commonly used programming languages.
4. Monitors cannot easily be added if they are not natively supported by the language.

Deadlock:

A deadlock is a situation where a group of processes are permanently blocked as a result of each process having acquired a subset of the resources needed for its completion and waiting for release of the remaining resources held by others in the same group, thus making it impossible for any of the processes to proceed.

A process is said to be in a state of deadlock if it is waiting for a particular event that will not occur.

Deadlock characterization:

A deadlock can arise if the following four conditions hold regarding the way processes use resources:

 Mutual exclusion

 Hold and wait

 No preemption

 Circular wait

Mutual exclusion:

Only one process may use a resource at a time. No other process can use a resource while it is
allocated to a process.

Hold and wait:

A process may hold a resource at the same time it requests another one.

Circular waiting:

Each process holds at least one resource needed by the next process in the chain. There may be more than two processes involved in a circular wait.
No preemption:

No resource can be forcibly removed from a process holding it. Resources can be released only by the explicit action of the process, rather than by the action of an external authority.

A deadlock is possible only if all four of these conditions hold simultaneously in the community of processes. These conditions are necessary for a deadlock to exist.

Resource allocation graph:

It is used to describe deadlocks and is also called a system resource-allocation graph. The graph consists of a set of vertices V and a set of edges E. The set of all active processes in the system is denoted by P = {P1, P2, ..., Pn} and the set of all resource types is denoted by R = {R1, R2, ..., Rm}.

A request edge is an edge from a process to a resource, denoted Pi -> Rj. An assignment edge is an edge from a resource to a process, denoted Rj -> Pi. In the resource-allocation graph a process is shown as a circle and a resource type as a square; each dot within the square represents one instance of the resource. The example system consists of three processes P1, P2 and P3 and four resource types R1, R2, R3 and R4. Resources R1 and R3 have one instance each, R2 has two instances and R4 has three instances.

The sets P, R and E consist of:

P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1->R1, P2->R3, R1->P2, R2->P2, R2->P1, R3->P3}

Resource instances:

R1 - one instance, R2 - two instances, R3 - one instance, R4 - three instances

Process state

P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.

P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of resource type R3.

P3 is holding an instance of R3.

[Figure: resource-allocation graph for the example above]
Suppose P3 requests an instance of R2. Since no instance of R2 is currently available, a request edge P3 -> R2 is added to the graph. Processes P1, P2 and P3 are now deadlocked.

[Figure: resource-allocation graph with a deadlock]

[Figure: resource-allocation graph with a cycle but no deadlock]

P1 -> R1 -> P3 -> R2 -> P1 is a cycle, but there is no deadlock, because process P4 may release its instance of resource type R2; that resource can then be allocated to P3, breaking the cycle.

Methods for Handling Deadlocks


The deadlock problem is handled in the following ways:

1. Protocol: using a protocol, we can prevent or avoid deadlocks, taking care that the system will never enter a deadlocked state.
2. Detect and recover: allow the system to enter a deadlock state, detect it, and recover from the deadlock.
3. Ignore the problem: pretend that deadlocks never occur in the system.

Deadlock prevention and deadlock avoidance are used to ensure that the system never enters a deadlock state. Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made.

Deadlock prevention:

Methods for preventing deadlock are of two classes: indirect methods and direct methods. An indirect method prevents the occurrence of one of the first three necessary conditions (mutual exclusion, hold and wait, or no preemption). A direct method prevents the occurrence of a circular wait.

Mutual exclusion:

The mutual-exclusion condition must hold for non-sharable resources. If access to a resource requires mutual exclusion, then mutual exclusion must be supported by the operating system.

Some resources, such as files, may allow multiple accesses for reads but only exclusive access for writes.
In this case, deadlock can occur if more than one process requires write permission.

Hold and wait:

The hold-and-wait condition can be eliminated by forcing a process to release all resources held by it whenever it requests a resource that is not available. Another approach is to require a process to request all of its resources at the start. For example, consider a process that copies data from a floppy disk to a hard disk, sorts the disk file and then prints the results to a printer.

If all the resources must be requested at the beginning of the process, then the process must initially request the floppy disk, the hard disk and the printer. It will hold the printer for its entire execution, even though it needs the printer only at the end. With these two methods, resource utilization is low and starvation is possible.

No pre-emption:

This condition is also caused by the nature of the resource, and it can be prevented in several ways.

If a process holding certain resources is denied a further request, that process must release its original resources and, if necessary, request them again together with the additional resource.
If a process requests a resource that is currently held by another process, the operating system may
preempt the second process and require it to release its resources.

In general, sequential I/O devices cannot be preempted.

Preemption is possible for certain types of resources, such as the CPU and main memory.

Circular-wait:

One way to prevent the circular-wait condition is by a linear ordering of the different types of system resources. In this scheme, system resources are divided into different classes. If a process has been allocated resources of type R, then it may subsequently request only resource types that follow R in the ordering.

For example, if a process holds a resource of class Ci, it can thereafter request only resources of class Ci+1 or higher.
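
A small C illustration of the resource-ordering idea follows: every thread acquires locks in increasing index order, so a circular wait cannot form. The three-mutex array, the helper functions and the two worker threads are illustrative.

#include <pthread.h>

/* Resources are totally ordered by index; every thread must lock lower
   indices before higher ones, which rules out a circular wait. */
pthread_mutex_t resource[3] = {
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER
};

/* Lock two resources respecting the global ordering, regardless of the
   order in which the caller names them. */
void lock_pair(int a, int b)
{
    int low  = a < b ? a : b;
    int high = a < b ? b : a;
    pthread_mutex_lock(&resource[low]);    /* lower class first   */
    pthread_mutex_lock(&resource[high]);   /* higher class second */
}

void unlock_pair(int a, int b)
{
    pthread_mutex_unlock(&resource[a]);
    pthread_mutex_unlock(&resource[b]);
}

void *worker1(void *arg)
{
    (void)arg;
    lock_pair(0, 2);      /* uses R0 and R2 */
    unlock_pair(0, 2);
    return NULL;
}

void *worker2(void *arg)
{
    (void)arg;
    lock_pair(2, 0);      /* same resources, requested in the other order */
    unlock_pair(0, 2);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}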

Deadlock avoidance:

Deadlock avoidance allows more concurrency than prevention does. Deadlock avoidance requires additional information about how resources are to be requested. With deadlock avoidance, a decision is made dynamically whether the current resource-allocation request could, if granted, potentially lead to deadlock. Two approaches are used to avoid deadlock:

Do not start a process if its demands might lead to deadlock.

Do not grant an incremental resource request to a process if this allocation might lead to deadlock.

[Figure: relation between safe, unsafe and deadlock states - the deadlock states form a subset of the unsafe states]

A safe state is a state in which there is at least one order in which all the processes can be run to completion without resulting in a deadlock. A safe state is not a deadlock state; conversely, a deadlock state is an unsafe state. Not all unsafe states are deadlocks, but an unsafe state may lead to a deadlock. As long as the state is safe, the operating system can avoid unsafe states. In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs.

Disadvantages of deadlock avoidances:

The maximum resource requirement for each process must be stated in advance.

There must be a fixed number of resources to allocate and a fixed number of processes.
The processes under consideration must be independent.

Resource-allocation graph algorithm

In the resource-allocation graph, normally a request edge and an assignment edge are used. In addition to these, a new edge called a claim edge is added; a claim edge Pi -> Rj indicates that process Pi may request resource Rj in the future.

When process Pi requests resource Rj, the claim edge is converted to a request edge.

When resource Rj is released by Pi, the assignment edge Rj -> Pi is reconverted to a claim edge Pi -> Rj.

[Figure: resource-allocation graph for deadlock avoidance, showing an assignment edge, a request edge and a claim edge among processes P1, P2 and resources R1, R2]

Banker’s algorithm:

The banker's algorithm is the best known of the avoidance strategies. The strategy is modelled after the lending policies employed in a banking system.

It is suitable for multiple instances of each resource type: the banker's algorithm applies to a resource-allocation system with multiple instances of each resource type.

Several data structures must be maintained to implement the banker's algorithm. Let n be the number of processes in the system and m be the number of resource types. We need the following data structures:

Available : A vector of length m indicates the number of available resources of each type

Max :An n*m matrix defines the maximum demand of each process.

Allocation :An n*m matrix defines the number of resources of each type currently allocated to each
process.

Need : An n*m matrix indicates the remaining resource need of each process, where Need[i][j] = Max[i][j] - Allocation[i][j].
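
A hedged C sketch of the safety check at the heart of the banker's algorithm, built directly on the data structures above; the process count, resource count and the example matrices in main follow a common textbook illustration and are otherwise arbitrary.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 5   /* n processes; illustrative      */
#define NRES  3   /* m resource types; illustrative */

/* Returns true if the state described by available, max and allocation
   is safe, i.e. some order exists in which all processes can finish. */
bool is_safe(int available[NRES],
             int max[NPROC][NRES],
             int allocation[NPROC][NRES])
{
    int work[NRES];
    bool finish[NPROC] = { false };

    for (int j = 0; j < NRES; j++)
        work[j] = available[j];

    for (int done = 0; done < NPROC; ) {
        bool found = false;
        for (int i = 0; i < NPROC; i++) {
            if (finish[i]) continue;
            /* need[i][j] = max[i][j] - allocation[i][j] */
            bool can_run = true;
            for (int j = 0; j < NRES; j++)
                if (max[i][j] - allocation[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < NRES; j++)
                    work[j] += allocation[i][j];   /* Pi finishes and releases */
                finish[i] = true;
                found = true;
                done++;
            }
        }
        if (!found) return false;   /* no runnable process: unsafe state */
    }
    return true;
}

int main(void)
{
    int available[NRES] = { 3, 3, 2 };
    int max[NPROC][NRES] = {
        {7,5,3}, {3,2,2}, {9,0,2}, {2,2,2}, {4,3,3}
    };
    int allocation[NPROC][NRES] = {
        {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}
    };
    printf("safe: %s\n", is_safe(available, max, allocation) ? "yes" : "no");
    return 0;
}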

Deadlock detection:

If the system does not use any deadlock-avoidance or deadlock-prevention scheme, then a deadlock situation may occur. Deadlock-detection approaches do not limit resource access or restrict process actions; with deadlock detection, requested resources are granted to processes whenever possible.
Single instance of each resource type:

If all resources have only a single instance, then we can use a deadlock-detection algorithm based on a variant of the resource-allocation graph called a wait-for graph. The wait-for graph is obtained from the resource-allocation graph by removing the resource nodes and collapsing the corresponding edges, as shown below:

[Figure: a resource-allocation graph (left) and its corresponding wait-for graph (right)]

An edge from Pi to Pj in a wait-for graph implies that process Pi is waiting for Pj to release a resource that Pi needs. An edge Pi -> Pj exists in a wait-for graph if and only if the corresponding resource-allocation graph contains the two edges Pi -> Rq and Rq -> Pj for some resource Rq. If the wait-for graph contains a cycle, then there is a deadlock. To detect deadlock, the system needs to maintain the wait-for graph and periodically invoke an algorithm that searches for a cycle in the graph.
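
A sketch of cycle detection on a wait-for graph, represented here as an adjacency matrix and searched by depth-first search; the number of processes and the example edges are illustrative.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 5

/* waits_for[i][j] = true means Pi waits for Pj to release a resource. */
bool waits_for[NPROC][NPROC];

static bool dfs(int u, bool visited[], bool on_stack[])
{
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < NPROC; v++) {
        if (!waits_for[u][v]) continue;
        if (on_stack[v]) return true;              /* back edge: cycle found */
        if (!visited[v] && dfs(v, visited, on_stack)) return true;
    }
    on_stack[u] = false;
    return false;
}

/* Returns true if the wait-for graph contains a cycle (deadlock). */
bool has_deadlock(void)
{
    bool visited[NPROC] = { false }, on_stack[NPROC] = { false };
    for (int i = 0; i < NPROC; i++)
        if (!visited[i] && dfs(i, visited, on_stack))
            return true;
    return false;
}

int main(void)
{
    /* Illustrative cycle: P0 -> P1 -> P2 -> P0 */
    waits_for[0][1] = waits_for[1][2] = waits_for[2][0] = true;
    printf("deadlock: %s\n", has_deadlock() ? "yes" : "no");
    return 0;
}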

Several instances of a resource type:

In this deadlock-detection approach, the resource allocator simply grants any request for an available resource. To check whether the system is deadlocked, the following algorithm is used:

1. Initialise the Allocation and Request matrices and the Available vector according to the current system state, and leave all active processes unmarked.

2. Find an unmarked process i such that Request_i <= Available.

3. If such a process is found, mark process i and update Available:

Available := Available + Allocation_i

and repeat step 2. If no such process is found, go to the next step.

4. If all processes are marked, the system is not deadlocked. Otherwise, the system is in a deadlock state and the set of unmarked processes is deadlocked.
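
A hedged C sketch of this marking algorithm; the process and resource counts and the example state in main are illustrative.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 4
#define NRES  3

/* Returns true if the system is deadlocked; deadlocked[i] is set to true
   for every process left unmarked at the end of the algorithm. */
bool detect_deadlock(int available[NRES],
                     int allocation[NPROC][NRES],
                     int request[NPROC][NRES],
                     bool deadlocked[NPROC])
{
    int work[NRES];
    bool marked[NPROC] = { false };

    for (int j = 0; j < NRES; j++)
        work[j] = available[j];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (marked[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < NRES; j++)
                if (request[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < NRES; j++)
                    work[j] += allocation[i][j];  /* reclaim Pi's resources */
                marked[i] = true;
                progress = true;
            }
        }
    }

    bool deadlock = false;
    for (int i = 0; i < NPROC; i++) {
        deadlocked[i] = !marked[i];
        if (deadlocked[i]) deadlock = true;
    }
    return deadlock;
}

int main(void)
{
    int available[NRES] = { 0, 0, 0 };
    int allocation[NPROC][NRES] = { {0,1,0}, {2,0,0}, {3,0,3}, {2,1,1} };
    int request[NPROC][NRES]    = { {0,0,0}, {2,0,2}, {0,0,0}, {1,0,0} };
    bool deadlocked[NPROC];
    printf("deadlock: %s\n",
           detect_deadlock(available, allocation, request, deadlocked) ? "yes" : "no");
    return 0;
}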

Deadlock detection is only a part of the deadlock-handling task. The system must then break the deadlock, to reclaim resources held by blocked processes and to ensure that the affected processes can eventually be completed.

Recovery from deadlock:

Once a deadlock has been detected, some strategy is needed for recovery. The following are solutions for recovering the system from deadlock:

Abort all deadlocked processes. Most operating systems apply this solution.

Abort one process at a time until the deadlock no longer exists. The order in which processes are selected for abortion should be based on some criterion of minimum cost.

Successively preempt resources until the deadlock no longer exists.

Back up each deadlocked process to some previously defined checkpoint and restart all processes. This requires that rollback and restart mechanisms be built into the system.

For aborting a process, the selection criterion could be the process with the:

Least amount of processor time consumed so far.

Least amount of output produced so far.

Most estimated time remaining.

Lowest priority

Least total resources allocated so far.
