
KERALA UNIVERSITY QUESTIONS

SECTION A

1. What is PCB? (2016,2015)


2. How are operating systems designed in general? (2019)
3. What does a time-sharing OS require? (2019)
4. How is a job different from a process? (2019)
5. Why is the short-term scheduler called the CPU scheduler? (2019)

SECTION B

1. What is a process? (2016,2015)


2. What is the use of a scheduler? (2016)
3. How do you create a new process in UNIX? (2016,2015)
4. What is an operating system? (2015)
5. What is a context switch? (2015)
6. Distinguish between real-time OS and parallel OS. (2019)
7. Define degree of multiprogramming. (2019)
8. Define CPU burst and I/O burst. (2019)
9. Define IPC. (2019)
SECTION C

1. Differentiate between batch and time-sharing systems. (2016)
2. Write a note on threads. (2016,2015)
3. What are the scheduling criteria? (2016)
4. Draw a process state diagram in brief. (2016)
5. Brief about the process control block. (2016,2015,2019)
6. Write a note on process management. (2015)
7. Describe the round-robin method of scheduling. (2015)
8. Explain the basic functions of an OS. (2019)
9. Distinguish between preemptive and non-preemptive scheduling. (2019)

SECTION D

1. Explain the scheduling algorithms. (2015,2019)

Don Bosco College, Kottiyam Page 28


MODULE II

Cooperating Process

Concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect, or be affected by, another process executing in the system. Any process that does not share any data with any other process is independent. A process is cooperating if it can affect, or be affected by, another process executing in the system. Any process that shares data with another process is a cooperating process.

Advantages of Cooperating Processes

 Information sharing: Several users may be interested in the same piece of information.
 Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others.
 Modularity: Dividing the system functions into separate processes or threads.
 Convenience: Even an individual user may have many tasks to work on at one time.

INTER-PROCESS COMMUNICATION (IPC)

There are a number of applications where processes need to communicate with each
other. Processes can communicate by passing information to each other via shared
memory or by message passing.

Files
Files are the most obvious way of passing information. One process writes a file, and
another reads it later. It is often used for IPC.

Shared Memory
When processes communicate via shared memory, they do so by entering data into and retrieving data from a single block of physical memory that is designated as shared by all of them. Each process has direct access to this block of memory.

Message Passing
Message passing is a more indirect form of communication. Rather than having direct access to a block of memory, processes communicate by sending and receiving packets of information called messages. These messages may be communicated directly or indirectly. Indirect message passing is done via a mailbox; direct message passing is done via a link between the two communicating processes. In both cases the messages themselves are sent via the operating system, and the processes do not have direct access to any memory used in the message-passing process.

Basic Concepts of Inter-process Communication and Synchronization

Synchronization is often necessary when processes communicate. Processes execute at unpredictable speeds, yet to communicate, one process must perform some action, such as setting the value of a variable or sending a message, that the other detects. This works only if the event of performing the action and the event of detecting it are constrained to happen in that order. Thus one can view synchronization as a set of constraints on the ordering of events. The programmer employs a synchronization mechanism to delay execution of a process in order to satisfy such constraints.

The Critical-Section Problem

Processes that are working together often share some common storage that each can read and write. The shared storage may be in main memory, or it may be a shared file. Each process has a segment of code, called a critical section, in which it accesses the shared memory or files. The key issue involving shared memory or shared files is to find a way to prohibit more than one process from reading and writing the shared data at the same time.
The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. The general structure of a typical process P is shown in Figure.

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion
If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress
If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting
A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Two-Process Solution / Mutual Exclusion Algorithms

Processes may share some common variables to synchronize their actions. These algorithms are applicable to only two processes at a time. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j = 1 − i.
Algorithm 1
This algorithm lets the processes share a common integer variable turn, initialized to 0 or 1. If turn == i, Pi can enter its critical section.
Process Pi

int turn; // turn can have a value of either 0 or 1

do
{
    while (turn != i)
    {
        /* do nothing */
    }
    critical section
    turn = j;
    remainder section
} while (true);

A global variable turn is used to control access to the shared resource. turn can take two values, 0 or 1, to indicate which process may enter the critical section. Each process, before entering the critical section, checks the value of turn to see whether the other process is in the critical section.
For example, if turn == 0, then P1 refrains from entering the critical section until the value of turn becomes 1.
Drawback
Suppose process P0 enters the critical section; after completing it, P0 sets the value of turn to 1 and then continues with its remainder section. During this time P1 enters the critical section; once it completes, it sets turn to 0 and enters its remainder section. If P0 now crashes in its remainder section, it can never enter the critical section again and can never set turn to 1, which prevents P1 from entering the critical section as well. This algorithm satisfies mutual exclusion but not progress.

Algorithm 2
In this algorithm we use an array:
boolean flag[2];
The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is ready to enter the critical section. Pi then checks whether process Pj is in the critical section; if so, Pi waits until flag[j] == false.
Process Pi
do
{
    flag[i] = true;
    while (flag[j])
    {
        /* do nothing */
    }
    critical section
    flag[i] = false;
    remainder section
} while (true);
In this algorithm each process has its own flag variable to indicate that it intends to enter the critical section. For example, if P0 wishes to enter the critical section, it first sets its own flag to true and then tests the flag of P1; it keeps testing until that flag becomes false, at which point P0 enters the critical section.
Drawback
Consider both processes executing in an interleaved manner, and suppose both decide to enter the critical section at the same time. P0 sets flag[0] to true, and before it tests flag[1], P1 sets flag[1] to true. Now each process finds the other's flag true and loops forever, so neither ever enters the critical section. Mutual exclusion is preserved, but the progress requirement is violated.

Algorithm 3

By combining the key ideas of algorithm 1 and algorithm 2, we obtain a correct solution to the critical-section problem, in which all three requirements are met. This solution is also known as Peterson's solution. Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process. Peterson's solution requires two data items to be shared between the two processes:
boolean flag[2];
int turn;
Process Pi
do
{
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
    {
        /* do nothing */
    }
    critical section
    flag[i] = false;
    remainder section
} while (true);

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its critical section first.


This algorithm meets all three requirements and thus solves the critical-section problem for two processes.
Semaphore
A non-computer meaning of the word semaphore is a system or code for sending signals. To overcome the difficulties of the critical-section problem we use a synchronization tool called a semaphore. A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal.

wait(S)
{
    while (S <= 0)
    {
        /* do nothing */
    }
    S = S - 1;
}

signal(S)
{
    S = S + 1;
}

Semaphore Usage
We can use a semaphore with the n-process critical-section problem: the n processes share a semaphore, mutex (standing for mutual exclusion), initialized to 1. We can also use semaphores to solve various other synchronization problems.

For example, consider two concurrently running processes: P1 with a statement S1 and P2 with a statement S2. Suppose we require that S2 be executed only after S1 has completed. We can implement this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0, and inserting the statements
S1;
signal(synch);
in process P1 and the statements
wait(synch);
S2;
in process P2.

Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked signal(synch), which is after S1.

semaphore mutex; // initially mutex = 1
do
{
    wait(mutex);
    critical section
    signal(mutex);
    remainder section
} while (true);

CLASSIC PROBLEMS OF SYNCHRONIZATION

1. The Bounded-Buffer Problem

The bounded-buffer problem is a classic example of concurrent access to shared resources. A bounded buffer lets multiple producers and multiple consumers share a single buffer: producers write data to the buffer, and consumers read data from it.
 A producer must block if the buffer is full.
 A consumer must block if the buffer is empty.

Two counting semaphores are used for this. One semaphore, empty, counts the empty slots in the buffer:
 Initialize the semaphore to N.
 A producer must wait on this semaphore before writing to the buffer.
 A consumer will signal this semaphore after reading from the buffer.

A second semaphore, full, counts the number of data items in the buffer:
 Initialize the semaphore to 0.
 A consumer must wait on this semaphore before reading from the buffer.
 A producer will signal this semaphore after writing to the buffer.

In our problem, the producer and consumer processes share the following data
structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
The pool consists of n buffers, each capable of holding one item. The mutex semaphore
provides mutual exclusion for accesses to the buffer pool and is initialized to the value
1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the
value 0. The symmetry between the producer and the consumer can be interpreted as
the producer producing full buffers for the consumer or as the consumer producing
empty buffers for the producer.


Producer Process
do
{
    // produce an item
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (1);

Consumer Process
do
{
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
} while (1);
2. The Readers–Writers Problem

Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. Obviously, if two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, chaos may ensue. To ensure that these difficulties do not arise, we require that writers have exclusive access to the shared database while writing to it. This synchronization problem is referred to as the readers–writers problem.

There is a shared resource that may be accessed by multiple processes of two types: readers and writers. Any number of readers can read from the shared resource simultaneously, but only one writer can write to it at a time. When a writer is writing data to the resource, no other process can access the resource; a writer cannot write while there is a nonzero number of readers accessing the resource. The readers–writers problem is about managing this synchronization so that the shared resource stays consistent.

For example, if two readers access the shared object at the same time, there is no problem. However, if two writers, or a reader and a writer, access the object at the same time, there may be problems. To solve this, a writer must get exclusive access to the object: while a writer is accessing the object, no reader or other writer may access it, although multiple readers can access the object at the same time. This can be implemented using semaphores.
The semaphores mutex and wrt are initialized to 1; the read count rc is initialized to 0. The semaphore wrt is common to both the reader and writer processes. The mutex semaphore is used to ensure mutual exclusion when the variable rc is updated. The rc variable keeps track of how many processes are currently reading the object. The semaphore wrt functions as a mutual-exclusion semaphore for the writers; it is also used by the first reader that enters and the last reader that exits the critical section.

semaphore mutex = 1;
semaphore wrt = 1;
int rc = 0;

Reader Process

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);
// read the object
wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);

Writer Process

wait(wrt);
// write to the object
signal(wrt);


3. The Dining Philosophers Problem
The dining philosophers problem is another classic synchronization problem, used to evaluate situations where multiple resources must be allocated to multiple processes.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks back in their original places.
From the problem statement, it is clear that a philosopher can think for an indefinite amount of time, but when a philosopher starts eating, he has to stop at some point; the philosopher is in an endless cycle of thinking and eating.
When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up. Then he waits for the right chopstick to be available and picks it up too. After eating, he puts both chopsticks down.
But if all five philosophers are hungry simultaneously, and each of them picks up one chopstick, then a deadlock occurs, because each will wait forever for the other chopstick. The possible solutions for this are:
 A philosopher must be allowed to pick up the chopsticks only if both the left and
right chopsticks are available.
 Allow only four philosophers to sit at the table. That way, if all the four
philosophers pick up four chopsticks, there will be one chopstick left on the table.
So, one philosopher can start eating and eventually, two chopsticks will be
available. In this way, deadlocks can be avoided.
An array of five semaphores, stick[5], is used, one for each of the five chopsticks. The code for each philosopher looks like:
while (true)
{
    wait(stick[i]);
    // mod is used so that after chopstick 4, the next chopstick is 0 (circular dining table)
    wait(stick[(i + 1) % 5]);
    // eat
    signal(stick[i]);
    signal(stick[(i + 1) % 5]);
    // think
}
DEADLOCKS

In a multiprogramming environment, several processes may compete for a finite number


of resources. A process requests resources; if the resources are not available at that
time, the process enters a waiting state. Sometimes, a waiting process is never again
able to change state, because the resources it has requested are held by other waiting
processes. This situation is called a deadlock.
A process must request a resource before using it and must release the resource after
using it. A process may request as many resources as it requires to carry out its
designated task. Obviously, the number of resources requested may not exceed the
total number of resources available in the system. In other words, a process cannot
request three printers if the system has only two.

Under the normal mode of operation, a process may utilize a resource in only the
following sequence:

1. Request: The process requests the resource. If the request cannot be granted immediately (for example, if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.

2. Use: The process can operate on the resource (for example, if the resource is a
printer, the process can print on the printer).

3. Release: The process releases the resource.

The request and release of resources may be system calls. Examples are the request() and release() of a device, the open() and close() of a file, and the allocate() and free() memory system calls. Similarly, the request and release of semaphores can be accomplished through the wait() and signal() operations on semaphores or through acquire() and release() of a mutex lock.

For each use of a kernel-managed resource by a process or thread, the operating system checks to make sure that the process has requested and has been allocated the resource. A system table records whether each resource is free or allocated. For each resource that is allocated, the table also records the process to which it is allocated. If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource. A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set. The events with which we are mainly concerned here are resource acquisition and release. The resources may be either physical resources (for example, printers, tape drives, memory space, and CPU cycles) or logical resources (for example, semaphores, mutex locks, and files). However, other types of events may also result in deadlocks.

Necessary Conditions for Deadlock

A deadlock situation can arise if the following four conditions hold simultaneously in a
system:

1. Mutual exclusion: At least one resource must be held in a non sharable mode; that
is, only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.

2. Hold and wait: A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.

3. No preemption: Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.

4. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2,..., Pn−1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

We emphasize that all four conditions must hold for a deadlock to occur. The
circular-wait condition implies the hold-and-wait condition, so the four conditions are not
completely independent.

Resource-Allocation Graph

Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.

Pictorially, we represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle. Note that a request edge points only to the rectangle Rj, whereas an assignment edge must also designate one of the dots in the rectangle. When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed to an assignment edge. When the process no longer needs access to the resource, it releases the resource; as a result, the assignment edge is deleted. The resource-allocation graph shown in Figure depicts the following situation.

The sets P, R, and E:

 P = {P1, P2, P3}
 R = {R1, R2, R3, R4}
 E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

 Resource instances:
◦ One instance of resource type R1
◦ Two instances of resource type R2
◦ One instance of resource type R3
◦ Three instances of resource type R4
 Process states:
◦ Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
◦ Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
◦ Process P3 is holding an instance of R3.

If the resource-allocation graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist.


If each resource type has several instances, then a cycle does not necessarily imply
that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a
sufficient condition for the existence of deadlock. To illustrate this concept, we return to
the resource-allocation graph depicted in the Figure below. Suppose that process P3
requests an instance of resource type R2. Since no resource instance is currently
available, we add a request edge P3 → R2 to the graph. At this point, two minimal
cycles exist in the system:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.

Now consider the resource-allocation graph in Figure below. In this example, we also
have a cycle: P1 → R1 → P3 → R2 → P1


However, there is no deadlock. Observe that process P4 may release its instance of resource type R2. That resource can then be allocated to P3, breaking the cycle.

METHODS FOR HANDLING DEADLOCKS

We can deal with the deadlock problem in one of three ways:

 We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state.
 We can allow the system to enter a deadlocked state, detect it, and recover.
 We can ignore the problem altogether and pretend that deadlocks never occur in the system.

To ensure that deadlocks never occur, the system can use either a deadlock-
prevention or a deadlock-avoidance scheme.

DEADLOCK PREVENTION

For a deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of
a deadlock.

Mutual Exclusion

The mutual-exclusion condition must hold for nonsharable resources; that is, at least one resource must be nonsharable. Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files are a good example of a sharable resource: if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource.

In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically nonsharable. For example, a mutex lock cannot be simultaneously shared by several processes, and neither can a printer.

Hold and Wait

To ensure that the hold-and-wait condition never occurs in the system, we must guarantee that, whenever a process requests a resource, it does not hold any other resources. One protocol that we can use requires each process to request and be allocated all its resources before it begins execution. An alternative protocol allows a process to request resources only when it has none: before it can request any additional resources, it must release all the resources that it is currently allocated.


To illustrate the difference between these two protocols, we consider a process that
copies data from a DVD drive to a file on disk, sorts the file, and then prints the results
to a printer. If all resources must be requested at the beginning of the process, then the
process must initially request the DVD drive, disk file, and printer. It will hold the printer
for its entire execution, even though it needs the printer only at the end.

The second method allows the process to request initially only the DVD drive and disk
file. It copies from the DVD drive to the disk and then releases both the DVD drive and
the disk file. The process must then request the disk file and the printer. After copying
the disk file to the printer, it releases these two resources and terminates.

Both these protocols have two main disadvantages. First, resource utilization may
be low, since resources may be allocated but unused for a long period. Second,
starvation is possible. A process that needs several popular resources may have to wait
indefinitely, because at least one of the resources that it needs is always allocated to
some other process.

No Preemption
To ensure that this condition does not hold, we can use the following protocol. If a process is holding some resources and requests another resource that cannot be immediately allocated to it, then all resources it currently holds are preempted and added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources as well as the new ones that it is requesting.
Circular Wait
One way to ensure that this condition never holds is to impose a total ordering on all resource types and to require that each process requests resources in an increasing order of enumeration.
Let R = {R1, R2, R3, ..., Rm} be the set of resource types. We assign each resource type a unique integer number F(Ri), which allows us to compare two resources and to determine whether one precedes another in the ordering.
A process can initially request any number of instances of a resource type, say Ri; after that, the process can request instances of resource type Rj if and only if F(Rj) > F(Ri). Alternatively, we can require that a process requesting an instance of resource type Rj must have released any resources Ri such that F(Ri) ≥ F(Rj).
If either of these two protocols is used, then the circular-wait condition cannot hold.

DEADLOCK AVOIDANCE

In the deadlock-avoidance method, the OS must be given in advance additional information about which resources a process will request and use during its lifetime. The system should consider the following for each request:
 Resources that are currently available.
 Resources that are currently allocated.
 Future requests and releases of each process.


Using this information, it is possible to construct an algorithm that ensures that the system will never enter a deadlock state. The following algorithms implement the deadlock-avoidance approach:
I. Resource-Allocation Graph Algorithm
II. Banker's Algorithm

Resource Allocation Graph Algorithm

This algorithm is similar to the resource-allocation graph, but in addition to request edges and assignment edges, we introduce a new type of edge called a claim edge. A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. This edge resembles a request edge in direction but is represented in the graph by a dashed line. When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj → Pi is reconverted to a claim edge Pi → Rj.

If no cycle exists, then the allocation of the resource will leave the system in a safe state. If a cycle is found, then the allocation will put the system in an unsafe state. Before converting a claim edge to a request edge, the system checks whether doing so would create a deadlock cycle; if so, the resource is not assigned.

The resource-allocation-graph algorithm is not applicable to a resource-allocation system with multiple instances of each resource type.

Banker's Algorithm

This algorithm is applicable to resources having multiple instances. The banker's algorithm is less efficient than the resource-allocation-graph scheme.

Several data structures must be maintained to implement the banker's algorithm. These data structures encode the state of the resource-allocation system. We need the following data structures, where n is the number of processes in the system and m is the number of resource types:

• Available: A vector of length m indicates the number of available resources of
each type.



If Available[j] equals k, then k instances of resource type Rj are available.

• Max: An n × m matrix defines the maximum demand of each process.

If Max[i][j] equals k, then process Pi may request at most k instances of resource type
Rj.

• Allocation : An n×m matrix defines the number of resources of each type currently
allocated to each process.

If Allocation[i][j] equals k, then process Pi is currently allocated k instances of
resource type Rj.

• Need: An n × m matrix indicates the remaining resource need of each process.

If Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to
complete its task. Note that Need[i][j] equals Max[i][j] −Allocation[i][j].

Safety Algorithm

This algorithm is used for finding out whether or not a system is in a safe state. This
algorithm can be described as follows:

Step 1: Let Work and Finish be vectors of length m and n, respectively.

Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n−1.

Step 2: Find an index i such that Finish[i] == false and Needi ≤ Work.

If no such i exists, go to step 4.

Step 3: Work = Work + Allocationi; Finish[i] = true. Go to step 2.

Step 4: If Finish[i] == true for all i, then the system is in a safe state.

This algorithm may require an order of m × n² operations to determine whether a
state is safe, whereas the resource-allocation-graph algorithm requires an order of
n² operations.
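The four steps above can be implemented almost verbatim. A minimal sketch (the function name and the elementwise vector comparisons are illustrative):

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety algorithm: return (is_safe, safe_sequence)."""
    n, m = len(allocation), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)           # Step 1: Work = Available
    finish = [False] * n             #         Finish[i] = false
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):           # Step 2: find i with Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):   # Step 3: P_i finishes, reclaim resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence     # Step 4: safe iff all Finish[i] == true
```

With the classic five-process, three-resource-type example commonly used in textbooks, this reports the state safe with sequence ⟨P1, P3, P4, P0, P2⟩ (the lowest eligible index is taken first).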

Resource-Request Algorithm

Let Requesti be the request vector made by process Pi.

If Requesti[j] == k, then process Pi wants k instances of resource type Rj. When a
request for resources is made by process Pi, the following actions are taken:

Step 1: If Requesti ≤ Needi, go to step 2.



Otherwise, raise an error condition, since the process has exceeded its maximum
claim.

Step 2: If Requesti ≤ Available, go to step 3.

Otherwise, Pi must wait, since the resources are not available.

Step 3: The system pretends to have allocated the requested resources to process Pi
by modifying the state as follows:

Available = Available − Requesti;

Allocationi = Allocationi + Requesti;

Needi = Needi − Requesti;

If the resulting resource-allocation state is safe, the transaction is completed and
process Pi is allocated its resources. However, if the new state is unsafe, then Pi
must wait for Requesti, and the old resource-allocation state is restored.
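The three steps can be layered on top of the safety check. In this sketch the safety algorithm is inlined so the fragment stands alone; the function name is illustrative, and it mutates its arguments to commit a safe allocation or roll an unsafe one back:

```python
def request_resources(pid, request, available, max_demand, allocation):
    """Banker's resource-request algorithm.
    Returns True (and commits the allocation) if the request is granted,
    False if P_pid must wait; raises if the maximum claim is exceeded."""
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    # Step 1: the request must not exceed the declared maximum claim.
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: if the resources are not available, the process must wait.
    if any(request[j] > available[j] for j in range(m)):
        return False
    # Step 3: pretend to allocate, then run the safety algorithm.
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    work, finish = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    if all(finish):
        return True                      # new state is safe: keep it
    for j in range(m):                   # unsafe: restore the old state
        available[j] += request[j]
        allocation[pid][j] -= request[j]
    return False
```

On the same textbook example, a request of (1, 0, 2) by P1 passes the safety check and is granted immediately, while a request larger than Available simply makes the caller wait.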

Deadlock Detection

If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur. In this environment, the system may
provide:

• An algorithm that examines the state of the system to determine whether a deadlock
has occurred

• An algorithm to recover from the deadlock

To detect and recover from a deadlock, the system must use a deadlock-detection
algorithm to check whether a deadlock has occurred and between which processes. It
should then use a deadlock-recovery mechanism to recover from the deadlock.

Deadlock detection method for resource having single instance

If all resources have only a single instance, then we can define a deadlock- detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph.
We obtain this graph from the resource-allocation graph by removing the resource
nodes and collapsing the appropriate edges.

More precisely, an edge from Pi to Pj in a wait-for graph implies that process Pi is
waiting for process Pj to release a resource that Pi needs. An edge Pi → Pj exists
in a wait-for graph if and only if the corresponding resource-allocation graph
contains two edges Pi → Rq and Rq → Pj for some resource Rq.



In the figure, a resource-allocation graph and the corresponding wait-for graph are
presented. As before, a deadlock exists in the system if and only if the wait-for
graph contains a cycle. To detect deadlocks, the system needs to maintain the
wait-for graph and periodically invoke an algorithm that searches for a cycle in the
graph. An algorithm to detect a cycle in a graph requires an order of n² operations,
where n is the number of vertices in the graph.
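Collapsing the resource nodes and searching for a cycle can be sketched as follows (the edge encodings and function names are illustrative):

```python
def wait_for_graph(request_edges, assignment_edges):
    """Build a wait-for graph from a single-instance resource-allocation graph.
    request_edges:    {process: [resources it is requesting]}
    assignment_edges: {resource: process currently holding it}
    An edge Pi -> Pj means Pi is waiting for Pj to release a resource."""
    wfg = {p: set() for p in request_edges}
    for p, resources in request_edges.items():
        for r in resources:
            holder = assignment_edges.get(r)
            if holder is not None and holder != p:
                wfg[p].add(holder)
    return wfg

def deadlocked(wfg):
    """A deadlock exists iff the wait-for graph contains a cycle (DFS)."""
    visited, on_stack = set(), set()
    def dfs(p):
        visited.add(p); on_stack.add(p)
        for q in wfg.get(p, ()):
            if q in on_stack or (q not in visited and dfs(q)):
                return True
        on_stack.discard(p)
        return False
    return any(p not in visited and dfs(p) for p in list(wfg))
```

For instance, if P1 requests R1 held by P2 while P2 requests R2 held by P1, the wait-for graph contains the cycle P1 → P2 → P1 and a deadlock is reported.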

Deadlock detection method for resource having multiple instances

The following data structures are used.

• Available: A vector of length m indicates the number of available resources of
each type.

• Allocation: An n×m matrix defines the number of resources of each type currently
allocated to each process.

• Request: An n × m matrix indicates the current request of each process.

If Request[i][j] equals k, then process Pi is requesting k more instances of
resource type Rj.

Algorithm

Let Work and Finish be vectors of length m and n, respectively.

Step 1: Initialize Work = Available.

For i = 0, 1, ..., n−1, if Allocationi != 0, then Finish[i] = false; otherwise,
Finish[i] = true.

Step 2: Find an index i such that both

a. Finish[i] == false

b. Requesti ≤ Work

If no such i exists, go to step 4.

Step 3: Work = Work + Allocationi; Finish[i] = true. Go to step 2.

Step 4: If Finish[i] == false for some i, then the system is in a deadlocked state;
moreover, each process Pi with Finish[i] == false is deadlocked.

Deadlock Recovery

To recover from a deadlock, there are two possibilities:

1. Inform the operators or users of the system that a deadlock has occurred and
let them deal with it manually.
2. Let the system recover from the deadlock automatically.

In both cases, we either abort one or more processes to break the circular wait or
preempt resources from the deadlocked processes.

a) Process Termination

To eliminate deadlocks by aborting processes, we can use one of the following two
methods:

I. Abort all deadlocked processes.
II. Abort one process at a time until the deadlock cycle is eliminated.

b) Resource Preemption

To eliminate deadlocks, the system preempts resources from certain deadlocked
processes and allocates them to other processes so that the deadlock cycle can be
broken. If preemption is used to deal with deadlocks, three issues must be
considered:

1. Selecting a victim

Determine the order of preemption so as to minimize cost. Cost factors include the
number of resources a deadlocked process is holding and its execution time so far.

2. Rollback

Roll the process back to a safe state and restart it from that state.

3. Starvation

In a system where victim selection is based on cost factors, it may happen that the
same process is always picked as the victim. As a result, this process never
completes its designated task, resulting in starvation.



KERALA UNIVERSITY QUESTIONS
SECTION A
1. When is a process said to be in a deadlocked state? (2019)
2. How is a job different from a process?

SECTION B
1. Define deadlock. (2016,2015)
2. Define IPC? (2019)
3. Explain mutual exclusion. (2019)
4. How to detect deadlock when there is single instance of each resource type?
(2019)
5. What are the two factors to depend when we invoke dead lock detection algorithm?
(2019)
SECTION C

1. Brief about deadlock characterization. (2016,2015)

2. Describe resource allocation graph. (2016,2015)

3. Give the importance and contents of Process Control Block. (2019)

4. Describe Peterson's solution to the critical-section problem. (2019)

5. Discuss the importance of the Resource Allocation Graph. (2019)

SECTION D
1. Explain deadlock avoidance algorithms. (2016,2015)
2. Explain Banker’s algorithm to avoid deadlocks. (2019)



1.What is a CPU burst? An I/O burst?

• CPU burst: a time interval when a process uses only the CPU.

• I/O burst: a time interval when a process uses only I/O devices.

2. What does “preemptive” mean?

Cause one process to temporarily halt, in order to run another.

3. What is the “dispatcher”?

The module that gives control of the CPU to the process selected by the short-term
scheduler; it performs the context switch, switches to user mode, and jumps to the
proper location in the user program.

4. What is throughput?

Number of jobs done per time period.

5. List performance criteria we could select to optimize our system.

CPU utilization, throughput, turnaround time, waiting time, response time.

6. What is a Gantt chart? Explain how it is used.

A rectangle marked off horizontally in time units, with a mark at the end of each job
or job segment. It shows the distribution of CPU bursts over time and is used to
determine total and average statistics on the jobs processed under various scheduling
algorithms.
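For instance, under FCFS the waiting time of each job can be read directly off the segment boundaries of the chart. A small sketch (the burst times are made up):

```python
def fcfs_gantt(bursts):
    """Return the (start, finish) intervals of an FCFS Gantt chart."""
    chart, t = [], 0
    for b in bursts:
        chart.append((t, t + b))
        t += b                      # next job starts when this one ends
    return chart

bursts = [24, 3, 3]                 # hypothetical CPU bursts, all arrive at t = 0
chart = fcfs_gantt(bursts)
# Under FCFS, each job's waiting time equals its start time on the chart.
avg_wait = sum(start for start, _ in chart) / len(chart)
```

Here the chart segments are (0, 24), (24, 27), (27, 30), giving an average waiting time of (0 + 24 + 27) / 3 = 17 time units.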

7. What are the advantages of SJF? Disadvantages?

Provably optimal in average waiting time, but there is no way to know the length of
the next CPU burst in advance.

8. What is indefinite blocking? How can it occur?

It is also called starvation: a low-priority process never gets a chance to execute.
It can occur if the CPU is continually busy with higher-priority jobs.

9. What is “aging”?

Gradual increase of priority with age of job, to prevent “starvation.”

10. What is SRTF (Shortest-Remaining-Time-First) scheduling?

A preemptive scheduling algorithm that gives high priority to a job with least amount of CPU
burst left to complete.

11. What is round-robin scheduling?

Each job is given a time quantum (slice) in which to run; if it is not finished
within that interval, the job is suspended and another job runs. After all other
jobs have been given a quantum, the first job gets its turn again.
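The behaviour described above can be simulated in a few lines (a sketch; it assumes all jobs arrive at time 0 and ignores context-switch overhead):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling; return each job's completion time.
    bursts: CPU time required per job (all jobs arrive at t = 0)."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    ready = deque(range(len(bursts)))   # FIFO ready queue
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)             # quantum expired: back of the queue
        else:
            done[i] = t                 # job finished
    return done
```

With bursts of 24, 3, and 3 time units and a quantum of 4, the jobs complete at times 30, 7, and 10 respectively.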



12. What is the critical-section problem?

To design an algorithm that allows at most one process into the critical section at a time, without
deadlock.

13. What is the main advantage of the layered approach to system design?

As in all cases of modular design, designing an operating system in a modular way has several
advantages. The system is easier to debug and modify because changes affect only limited
sections of the system rather than touching all sections of the operating system. Information is
kept only where it is needed and is accessible only within a defined and restricted area, so any
bugs affecting that data must be limited to a specific module or layer.

14. Describe the differences among short-term, medium-term, and long-term scheduling.

Short-term (CPU scheduler) - selects from jobs in memory, those jobs which are ready to
execute, and allocates the CPU to them.

Medium-term - used especially with time-sharing systems as an intermediate
scheduling level. A swapping scheme is implemented to remove partially run programs
from memory and reinstate them later to continue where they left off.

Long-term (job scheduler) - determines which jobs are brought into memory for processing.

15. List types of resources we might consider in deadlock problems on computers.

CPU cycles, memory space, files, I/O devices, tape drives, printers.

16. Define deadlock.

A situation where every process is waiting for an event that can be triggered only by another
process.

17. What are the four necessary conditions needed before deadlock can occur?

a. At least one resource must be held in a nonsharable mode.

b. A process holding at least one resource is waiting for more resources held by other
processes.

c. Resources cannot be preempted.

d. There must be a circular waiting.

18. Give examples of sharable resources.

Read-only files, shared programs and libraries.

19. Give examples of non-sharable resources.

Printer, magnetic tape drive, update-files, card readers.



20. List three overall strategies in handling deadlocks.

a. Ensure system will never enter a deadlock state.

b. Allow deadlocks, but devise schemes to recover from them.

c. Pretend deadlocks don’t happen.

21. What is starvation?

System is not deadlocked, but at least one process is indefinitely postponed.

22. List three options for breaking an existing deadlock.

a. Violate mutual exclusion, risking data.

b. Abort a process.

c. Preempt resources of some process.

23. What three issues must be considered in the case of preemption?

a. Select a victim to be preempted.

b. Determine how far back to rollback the victim.

c. Determine means for preventing that process from being “starved.”

