
Real Time Operating Systems

RV College of Engineering®, Bengaluru-59

(Autonomous Institution Affiliated to VTU)

Department of Electronics and Instrumentation Engineering

18EI6D4

Real Time Operating System

“PROCESS SYNCHRONIZATION AND DEADLOCK”

Name of the student USN

Khushi Garg 1RV18EI028

Kushagra Kumar 1RV18EI029

Mohit Kumar 1RV18EI033

Sparsh Chhattani 1RV18EI056

Sneha Bhat A 1RV18EI054

Rounak Raj 1RV18EI047

Under the guidance of:

Dr. Kendaganna Swamy S

Assistant Professor

Electronics and Instrumentation Department

Electronics and Instrumentation Department Page 1



UNIT 3

Syllabus:

Process Synchronization and Concurrency: Principles of Concurrency, Mutual Exclusion (H/W
Support, Software Approaches), Semaphores and Mutex, Message Passing, Monitors, Classical
Problems of Synchronization: Readers-Writers Problem, Producer-Consumer Problem.

Deadlock: Principles of Deadlock, Deadlock Prevention, Deadlock Avoidance, Deadlock
Detection, An Integrated Deadlock Strategy, Dining Philosophers Problem.

INDEX:

1. Parallelism
2. Concurrency
3. Process synchronisation
4. Race condition
5. Critical section
6. Critical section solution using lock
7. Critical section solution using test and set
8. Turn variable (strict alternation method)
9. Semaphores
10. Message passing
11. Monitors
12. Classical problems of synchronisation
    a) Reader writer problem
    b) Producer consumer problem
13. Deadlock
    a) Resource allocation graphs
    b) Methods of handling deadlock
    c) Deadlock prevention
    d) Deadlock avoidance
    e) Deadlock detection and recovery
14. Dining philosopher problem
15. Test questions


PROCESS SYNCHRONISATION

PARALLELISM:

It is the act of running multiple computations simultaneously.

Analogy – we can sing and dance at the same time. This means we are doing both tasks in
parallel.

EXAMPLE:

Fig 3.1: example of parallelism

Fig 3.2: threads parallely getting executed on different cores


 In fig 3.1, we are performing 3 different tasks on 3 different threads.

 Task1 and task2 run on 2 separate threads and task3 runs on the main thread.
 If we run this code on a quad-core CPU, i.e. a CPU with 4 cores, we can run these 3
tasks in parallel.
 The OS has a scheduler, which is responsible for scheduling threads on the CPU. In
this case it is possible that the main thread runs on core 1 while thread-2 and thread-3
run on core 2 and core 3 respectively (as shown in fig 3.2).

CONCURRENCY:

It is the act of managing and running multiple computations at the same time.

Analogy – we cannot eat and talk at the same time. We first finish our bite and then talk;
then we eat again, and when we finish, we talk.

EXAMPLE:

Fig 3.3: example of concurrency


Fig 3.4: threads executing on a single core

 In fig 3.4, we can see that we have only one core in our CPU, and the scheduler has
to assign threads to that core.
 Since there is only one core, time sharing has to be done among the threads.
 The scheduler assigns some amount of time (say a few milliseconds) to each thread to
execute; this is called interleaving of threads.

CASE 1:

Fig 3.5: when one thread can execute the complete function in the time alloted

 This is the best-case scenario (fig 3.5), where one thread checks the availability, books
the ticket and updates the available tickets within its time slice.
 So when the next thread comes, it will have the correct value of available tickets and
will book tickets accordingly.


 This flow is not guaranteed, however, as we cannot be sure that thread-1 will be able
to execute all the statements within the allocated amount of time.

CASE 2:

Fig 3.6: when one thread can execute only a single line

 In this case, within the allocated time, thread 1 is able to execute only one statement.
In this scenario both thread 1 and thread 2 will check the availability, book the tickets
and update the available tickets.
 If only one ticket is left, both threads will book that one seat.
 This can create problems.

To resolve this problem, the cooperating processes must be coordinated: process
synchronisation is needed.

PROCESS SYNCHRONISATION

Process Synchronization is the task of coordinating the execution of processes in such a way
that no two processes can access the same shared data and resources at the same time.

It is especially needed in a multi-process system, where multiple processes run together and
more than one process tries to gain access to the same shared resource or data at the same
time.


On the basis of synchronization, processes are categorized as one of the following two
types:

 Independent Process : execution of one process does not affect the execution of other
processes.
 Cooperative Process : execution of one process affects the execution of other
processes, because they share variables/code/resources etc.
The process synchronization problem arises in the case of cooperative processes precisely
because resources are shared among them.

Let’s take an example. We have two processes, P1 and P2, running on the same system and
sharing one variable:

int shared = 5;

P1:
int x = shared;  // x = 5
x++;             // x = 6
sleep(1);        // sleep for 1 sec
shared = x;      // shared = 6

P2:
int y = shared;  // y = 5
y--;             // y = 4
sleep(1);        // sleep for 1 sec
shared = y;      // shared = 4

 Consider a uniprocessor system where both programs enter the system together, and
assume that P1 enters first.
 As P1 enters, x first takes the value of the shared variable (i.e. 5), and after the next
statement the value becomes 6.
 In the next statement process P1 goes into sleep mode (becomes paused) for 1 sec.
 During this period the CPU does not sit idle: P1 is preempted and P2 starts executing
(context switching takes place).
 Now P2 executes, and after the first 2 statements of P2, y becomes 4.
 When P2 goes into sleep mode, the CPU switches back to process P1.


 Now P1 executes, the value of ‘shared’ becomes 6 (as x = 6), and P1 terminates.
 The CPU then moves back to P2, and the value of ‘shared’ becomes 4.
 With these simultaneous updates the value of ‘shared’ should have remained 5, as in
one process we are adding one and in the other we are subtracting one.
 But we get the value of ‘shared’ as 4. So we can conclude that the 2 processes here are
parallel and cooperative, but they are not synchronised.
 Therefore, this problem is referred to as a RACE CONDITION.

Synchronization Examples
 Windows XP Synchronization
 Uses interrupt masks to protect access to global resources on uniprocessor
systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects which may act as either mutexes or
semaphores
 Dispatcher objects may also provide events
o An event acts much like a condition variable
 Linux Synchronization
 Disables interrupts to implement short critical sections
 Provides semaphores and spin locks
 Pthreads Synchronization
 Pthreads API is OS-independent and the detailed implementation depends on
the particular OS
 By POSIX, it provides
o mutex locks
o condition variables (monitors)
o read-write locks (for long critical sections)
o spin locks


RACE CONDITION:

A race condition is a situation that may occur inside a critical section. This happens when
the result of multiple thread execution in the critical section differs according to the order in
which the threads execute.

CRITICAL SECTION:

Critical section is the code segment that can be accessed by only one process at a time.
Critical section contains shared variables which need to be synchronised to maintain
consistency of data variables.

We consider two cooperative processes here (P1 and P2).

Fig 3.7: critical section can be accessed only after passing through entry section

 Due to the race condition, while P1 is executing the critical section P2 should not
have access to it, and vice versa.
 To avoid this problem, each program should have an ENTRY SECTION to
synchronise them (fig 3.7).
 A program should be given access to the critical section only after it passes through
the entry code.
 The entry section must be designed in such a way that if one program passes it, the
other cannot; this ensures that only one program at a time can access the critical
section.
 Whenever a program enters the critical section, it must also pass through an exit
section on its way out.


 By doing this we have achieved SYNCHRONISATION.

Any synchronisation method used should follow the following 4 conditions:

 Mutual exclusion:
If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.

 Progress:
If no process is executing in its critical section, a process that wishes to enter
must not be kept waiting indefinitely. As a counterexample, suppose P1 wants to
use the critical section but P2 does not allow it, even though P2 is not using it
itself: P2 is neither progressing nor allowing P1 to progress.

 Bounded wait:
A bound must exist on the number of times other processes may enter the critical
section after a process has requested entry and before that request is granted.
There should be no starvation problem for the processes waiting to enter the
critical section.

 No assumption regarding hardware or speed:

Whatever solution is proposed, it should not be bound to particular hardware or
to processor speed. The solution should be universal.

NOTE: Mutual exclusion and progress are primary conditions (which means these conditions
are mandatory to be achieved) and the other two conditions are secondary conditions.

CRITICAL SECTION SOLUTION USING ‘LOCK’:

In LOCK, we have a shared lock variable which can take either of the two values, 0 or 1.

LOCK = 0 (means critical section is vacant)

LOCK = 1 (means critical section is acquired)


Before entering into the critical section, a process inquires about the lock. If it is locked, it
keeps on waiting until it becomes free and if it is not locked, it takes the lock and executes the
critical section.

do {
    acquire lock
    critical section (CS)
    release lock
} while (true);

Fig 3.8: critical section problem using lock

CASE 1:

 Take two processes P1 and P2 and initialise the LOCK variable to zero (LOCK = 0).
 If process P1 now enters the entry section, it checks the LOCK variable; LOCK = 0
(as we initialised it), which makes the while condition false, so it executes line 2,
which sets the value of LOCK to 1 (LOCK = 1). (Refer fig 3.8.)
 P1 then executes the critical section in line 3 and sets LOCK back to 0 in line 4,
which is the exit section (LOCK = 0).


 If P2 enters now in the entry section, it will check for while condition (since LOCK =
0, while condition will become false).
 It will execute line 2 and make LOCK = 1, which means now the critical section is
acquired by P2. (LOCK = 1)
 P2 will execute the critical section in line 3 and again make LOCK as 0 in line 4.
(LOCK = 0)

Q. Is there any guarantee that there will be mutual exclusion in every case?

CASE 2:

 Take two processes P1 and P2 and initialise the LOCK variable to zero (LOCK =
0).
 P1 enters the entry section and executes line 1 (since the value of LOCK = 0, the
while condition becomes false and it is about to execute line 2).
 But due to some reason P1 gets preempted and the CPU starts executing process
P2.
 P2 also checks the while condition (since LOCK is still 0, the while condition
becomes false and it executes line 2).
 P2 sets the value of LOCK to 1 and starts executing the critical section (LOCK
= 1).
 Now P1 resumes (and since it has already executed line 1, it starts its
execution from line 2) and overwrites the variable LOCK, setting it to 1
again (LOCK = 1).
 Due to this, P1 also enters the critical section. Now P1 and P2 are both in the
critical section together, violating mutual exclusion.

Therefore we can conclude that:

 It executes in user mode; we do not take any support from the kernel or the operating
system.
 It is a multiprocessor solution.
 Mutual exclusion is not achieved.


CRITICAL SECTION SOLUTION USING TEST AND SET:

In the test and set method, testing and setting the LOCK variable are combined (made
atomic) in one function named test_and_set, which tests the value of the LOCK variable and
also sets it.

Fig 3.9:critical section using test and set

 In fig 3.9, we can see that when P1 enters, taking the initial value of LOCK as
false (i.e. 0), the test_and_set function is called.
 The function call is call-by-reference. The variable *target is a pointer which
points to LOCK (i.e. address 1000).
 The value stored at location 1000 is copied into a new variable ‘r’ (i.e. r =
false), and the variable LOCK is set to ‘true’ (LOCK = true).
 Finally we return the value of ‘r’ (return false), which is the deciding
factor for whether a process may enter the critical section or not.
 Since false is returned, the while condition becomes false and P1 starts
executing the critical section.
 While P1 is executing the critical section, if P2 arrives, the test_and_set
function is called again.
 The value of r becomes true (r = true), as the value of LOCK (*target) is true,
and LOCK (*target) is overwritten as true again.


 The value of r (i.e. true) is returned, and P2 is not allowed to access the
critical section.
 This is how mutual exclusion is achieved, and progress is achieved as
well.

TURN VARIABLE (STRICT ALTERNATION METHOD):

Turn Variable or Strict Alternation is a software mechanism implemented in user mode. It is
a busy-waiting solution which can be implemented only for two processes. In this approach a
turn variable is used, which acts as a lock.

Let the two processes be P0 and P1. They share a variable called the turn variable. The
pseudo code of the program can be given as follows.

Fig 3.10: critical section using turn variable

In the entry section, as shown in fig 3.10, process P0 cannot enter the critical section while
the turn variable is 1, and process P1 cannot enter the critical section while the turn variable
is 0.

 Initially, two processes P0 and P1 are available and want to execute into critical
section.


 The turn variable is equal to 0 (turn = 0), hence P0 gets the chance to enter the
critical section and stays there until it completes its part.
 When P0 finishes its critical section it assigns 1 to the turn variable (turn = 1), and P1
gets the chance to enter the critical section. The value of turn remains 1 until P1
finishes its critical section.

We can conclude:

 The strict alternation approach provides mutual exclusion in every case. The
procedure works only for two processes, and the pseudo code is different for
each of them. A process enters only when it sees that the turn variable is
equal to its process ID; hence no process can enter the critical section out of
turn.
 Progress is not guaranteed in this mechanism. If P0 doesn't want to enter
the critical section on its turn, then P1 is blocked indefinitely: P1 has to
wait for its turn, since the turn variable remains 0 until P0 sets it to 1.

Peterson’s Solution:

 Two processes solution


 Assumes that the LOAD and STORE instructions are atomic; that is, they cannot be
interrupted (a strong assumption on real hardware!)
 The two processes share two variables:

o int turn;
o boolean interest[2];

 The variable turn indicates whose turn it is to enter the critical section.
 The interest array is used to indicate if a process is ready to enter the critical section.
 interest[i]==true implies that process Pi is ready (i = 0,1)

do {
    k = (i == 0 ? 1 : 0);               // number of the other process
    interest[i] = TRUE;
    turn = k;
    while (interest[k] && turn == k);   // do nothing = wait
    // execute the CRITICAL SECTION
    interest[i] = FALSE;
    // REMAINDER SECTION
} while (TRUE);

SEMAPHORES:

A semaphore is an integer variable which can be accessed only through two atomic
operations, wait() and signal().

There are 2 types of semaphores:

 Counting semaphore : it can have any value and is not restricted to a certain
domain. It can be used to control access to a resource that has a limit on the
number of simultaneous accesses.

The semaphore can be initialised to the number of instances of the resource.

Whenever a process wants to use that resource, it checks whether the number of
remaining instances is more than zero, i.e. whether an instance is available, as shown
in fig 3.11.

The process can then enter the critical section, decreasing the value of the
counting semaphore by 1. When the process is done using that instance of
the resource, it leaves the critical section, adding 1 back to the number of
available instances of the resource.


Fig 3.11: example of counting semaphore

 Binary semaphore: it can only be either 0 or 1. Binary semaphores are also known as
mutex locks, as they can provide mutual exclusion. All the processes share the same
mutex semaphore, which is initialised to 1 (if it were initialised to zero, all the
processes would be blocked and there would be a deadlock situation). A process then
has to wait until the semaphore value becomes 1, as shown in fig 3.12, upon which it
sets the semaphore to 0 and starts its critical section. When it completes its critical
section, it resets the value of the mutex semaphore to 1, and some other process can
enter the critical section.

Fig 3.12: example of binary semaphore


MESSAGE PASSING:

It refers to means of communication between


- Different thread with in a process.
- Different processes running on same node.
- Different processes running on different node.

In this a sender or a source process send a message to a known receiver or destination


process. Message has a predefined structure and message passing uses two system call:
Send and Receive as shown in fig 3.13.

send(name of destination process, message);

receive(name of source process, message);

Fig 3.13: sending and receiving

In these calls, the sender and receiver processes address each other by name. Communication
between two processes can take place through two methods:

1) Direct Addressing
2) Indirect Addressing

Direct Addressing:

In this type, the two processes need to name each other to communicate. This becomes easy
if they have the same parent.

Example

If process A sends a message to process B, then


send(B, message);
receive(A, message);

By message passing, a link is established between A and B. Here the receiver knows the
identity of the sender of the message, as shown in fig 3.14. This type of arrangement in
direct communication is known as symmetric addressing.

Fig 3.14: symmetric addressing

Another type of addressing, known as asymmetric addressing, is where the receiver does
not know the ID of the sending process in advance, as shown in fig 3.15.

Fig 3.15: asymmetric addressing

Indirect addressing:

In this scheme messages are sent to and received from a mailbox. A mailbox can be viewed
abstractly as an object into which messages may be placed by processes and from which
messages may be removed. The sender and receiver processes must share a mailbox to
communicate, as shown in fig 3.16.


The following types of communication link are possible through a mailbox:

- One-to-one link: one sender wants to communicate with one receiver; a single link
is established.

- Many-to-one link: multiple senders want to communicate with a single receiver.
For example, in a client-server system there are many client processes and one server
process. The mailbox here is known as a PORT.

- One-to-many link: one sender wants to communicate with multiple receivers, that is,
to broadcast a message.

- Many-to-many link: multiple senders want to communicate with multiple receivers.

Fig 3.16: indirect addressing

MONITORS IN PROCESS SYNCHRONISATION:

The monitor is one of the ways to achieve process synchronization. Monitors are supported
by programming languages to achieve mutual exclusion between processes. A monitor is
defined as a programming-language construct which helps in controlling access to shared
data. The monitor is a module or package which encapsulates a shared data structure, the
procedures operating on it, and the synchronization between concurrent invocations of those
procedures. An example is Java's synchronized methods; Java also provides the wait() and
notify() constructs.

1. It is a collection of condition variables and procedures combined together in a
special kind of module or package.


2. Processes running outside the monitor can’t access the internal variables of the
monitor, but they can call the procedures of the monitor.
3. Only one process at a time can execute code inside a monitor.

Characteristics of Monitors
1. Inside the monitors, we can only execute one process at a time.
2. Monitors are the group of procedures, and condition variables that are merged
together in a special type of module.
3. If a process is running outside the monitor, it cannot access the monitor’s internal
variables, but it can call the procedures of the monitor.

4. Monitors offer a high level of synchronization.

5. Monitors were derived to simplify the complexity of synchronization problems.

6. There is only one process that can be active at a time inside the monitor.

Components of Monitor
There are four main components of the monitor:

1. Initialization : comprises the code that is executed exactly once, when the monitor
is created.
2. Private data : comprises all the private data, including private procedures, that can
only be used within the monitor. Private data is not visible outside the monitor.
3. Monitor procedures : the procedures that can be called from outside the
monitor.
4. Monitor entry queue : an essential component of the monitor that includes all the
threads waiting to call one of the monitor's procedures.

Syntax of monitor:


Fig 3.17: syntax of monitor
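Fig 3.17 is not reproduced here; the construct usually takes roughly this form in textbook pseudo code (names are placeholders):

```
monitor monitor_name
{
    // shared variable declarations
    condition x, y;

    procedure P1(...) { ... }
    procedure P2(...) { ... }

    initialization_code(...) { ... }
}
```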

Condition Variables

There are two types of operations that we can perform on the condition variables of the
monitor:

1. Wait : a.wait() : the process that performs a wait operation on a condition variable
is suspended and placed in the blocked queue of that condition variable.
2. Signal : a.signal() : if a signal operation is performed by a process on the
condition variable, then one of the blocked processes is given a chance to resume.

Advantages of Monitor:

Monitors have the advantage of making parallel programming easier and less error-prone
than using techniques such as semaphores.

Disadvantages of Monitor:

Monitors have to be implemented as part of the programming language. The compiler must
generate code for them. This gives the compiler the additional burden of having to know what


operating system facilities are available to control access to critical sections in concurrent
processes. Some languages that do support monitors are Java, C#, Visual Basic, Ada and
Concurrent Euclid.

CLASSICAL PROBLEMS OF SYNCHRONISATION:

Reader Writer Problem :

The readers-writers problem relates to an object such as a file that is shared between multiple
processes. Some of these processes are readers i.e. they only want to read the data from the
object and some of the processes are writers i.e. they want to write into the object.

The readers-writers problem is used to manage synchronization so that there are no problems
with the object data. For example - If two readers access the object at the same time there is
no problem. However if two writers or a reader and writer access the object at the same time,
there may be problems.

To solve this situation, a writer should get exclusive access to an object i.e. when a writer is
accessing the object, no reader or writer may access it. However, multiple readers can access
the object at the same time.

This can be implemented using semaphores. The codes for the reader and writer processes in
the readers-writers problem are given as follows.

Reader Process

The code that defines the reader process is given below.

wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);

// READ THE OBJECT

wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);

In the above code, mutex and wrt are semaphores that are initialized to 1. Also, rc is a
variable that is initialized to 0.

The mutex semaphore ensures mutual exclusion and wrt handles the writing mechanism and
is common to the reader and writer process code.

The variable rc denotes the number of readers accessing the object.

As soon as rc becomes 1, the wait operation is used on wrt.

This means that a writer cannot access the object any more. After the read operation is done,
rc is decremented. When rc becomes 0, the signal operation is used on wrt, so a writer can
access the object now.

Writer Process

The code that defines the writer process is given below:

wait(wrt);

// WRITE INTO THE OBJECT

signal(wrt);

If a writer wants to access the object, wait operation is performed on wrt. After that no other
writer can access the object. When a writer is done writing into the object, signal operation is
performed on wrt.

Producer-Consumer Problem :

In the producer-consumer problem, there is one Producer producing something and one
Consumer consuming the products produced by the Producer. The producer and consumer
share the same fixed-size memory buffer.

The job of the Producer is to generate the data, put it into the buffer, and again start
generating data. While the job of the Consumer is to consume the data from the buffer.

● The producer should produce data only when the buffer is not full. If the buffer is full,
then the producer shouldn't be allowed to put any data into the buffer.
● The consumer should consume data only when the buffer is not empty. If the buffer is
empty, then the consumer shouldn't be allowed to take any data from the buffer.
● The producer and consumer should not access the buffer at the same time.

In the producer-consumer problem, we use three semaphore variables:

 Semaphore S: This semaphore variable is used to achieve mutual exclusion between


processes. By using this variable, either Producer or Consumer will be allowed to use
or access the shared buffer at a particular time. This variable is set to 1 initially.
 Semaphore E: This semaphore variable is used to define the empty space in the
buffer. Initially, it is set to the whole space of the buffer i.e. "n" because the buffer is
initially empty.


 Semaphore F: This semaphore variable is used to define the space that is filled by the
producer. Initially, it is set to "0" because there is no space filled by the producer
initially.

Producer:

int count = 0;

void producer(void)
{
    int itemp;
    while (true) {
        produce_item(&itemp);
        while (count == n);      // wait while the buffer is full
        buffer[in] = itemp;
        in = (in + 1) % n;       // advance the insert index
        count = count + 1;       // load count, increment, store -- not atomic
    }
}

Consumer:

void consumer(void)
{
    int itemc;
    while (true) {
        while (count == 0);      // wait while the buffer is empty
        itemc = buffer[out];
        out = (out + 1) % n;
        count = count - 1;       // also not atomic
        process_item(itemc);
    }
}

SOLUTION:

Pseudo-code for the producer:

void producer() {

    while(T) {

        produce()  //produce data by the producer

        wait(E)    //reduce the value of the semaphore variable "E" by one

        wait(S)    //set the semaphore variable "S" to "0" so that no other process can enter
                   //the critical section

        append()   //append the newly produced data to the buffer

        signal(S)  //set the semaphore variable "S" back to "1"

        signal(F)  //increase the semaphore variable "F" by one
    }
}


Pseudo-code for the consumer:

void consumer() {

    while(T) {

        wait(F)    //decrease the semaphore variable "F" by one

        wait(S)    //set the semaphore variable "S" to "0"

        take()     //take the data from the buffer

        signal(S)  //set "S" back to "1" so that other processes can enter the critical section

        signal(E)  //increase the semaphore variable "E" by one

        use()      //use the data taken from the buffer to do some operation
    }
}

EXTRA INFORMATION

Synchronization Hardware

I. Many systems provide hardware support for critical section code


II. Uniprocessors - could disable interrupts
 Currently running code would execute without preemption
 Dangerous to disable interrupts at application level
o Disabling interrupts is usually unavailable in CPU user mode
 Generally too inefficient on multiprocessor systems
o Operating systems using this are not broadly scalable
III. Modern machines provide special atomic hardware instructions

 Atomic = non-interruptible


 Test memory word and set value


 swap contents of two memory words

TestAndSet Instruction

I. Semantics of the TAS instruction:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

II. Shared boolean variable lock, initialized to FALSE.

III. Use:

do {
    while (TestAndSet(&lock))
        ;   /* do nothing */

    // critical section
    lock = FALSE;

    // remainder section
} while (TRUE);

Swap Instruction

Semantics of the Swap instruction:

void Swap (boolean *a, boolean *b)

boolean temp = *a;

*a = *b;

Electronics and Instrumentation Department Page 30


Real Time Operating Systems

*b = temp:

I. Shared Boolean variable lock initialized to FALSE; each process has a local Boolean

variable key.

 Solution:

do {

key = TRUE;

while (key == TRUE)

Swap (&lock, &key);

// critical section

lock = FALSE;

// remainder section

} while (TRUE);

Spin-lock

I. Spin-lock is a general (counting) semaphore using busy waiting instead of blocking


 Blocking and switching between threads and/or processes may be much more time
demanding than the time waste caused by short-time busy waiting
 One CPU does busy waiting and another CPU executes to clear away the reason for
waiting
II. Used in multiprocessors to implement short critical sections
 Typically inside the OS kernel
III. Used in many multiprocessor operating systems
 Windows 2000/XP, Linux, ...


DEADLOCK

Deadlock is a situation where a set of processes is blocked because each process holds a
resource while waiting for another resource held by some other process.

EXAMPLES:

To illustrate a deadlocked state, consider a system with three CD-RW drives. Suppose each of
three processes holds one of these CD-RW drives. If each process now requests another drive,
the three processes will be in a deadlocked state. Each is waiting for the event "CD-RW is
released," which can be caused only by one of the other waiting processes. This example
illustrates a deadlock involving the same resource type.

Deadlocks may also involve different resource types. For example, consider a system with
one printer and one DVD drive. Suppose that process Pi is holding the DVD and process Pj is
holding the printer. If Pi requests the printer and Pj requests the DVD drive, a deadlock
occurs.

Analogy:

I. When two trains are coming toward each other on the same track and there is only one
track, none of the trains can move once they are in front of each other.
II. Road traffic on a narrow bridge is a real-life example of deadlock. Traffic can flow in
only one direction at a time, and the bridge acts as the resource. If a deadlock arises,
it can be resolved when one car backs up (preempt resources and roll back). If several
cars must back up, starvation becomes possible.

When there are two or more processes that hold some resources and wait for resources held
by other(s), it results in a deadlock. For example, in the below diagram, Process 1 is holding


Resource 1 and waiting for resource 2 which is acquired by process 2, and process 2 is
waiting for resource 1. (fig 3.18)

Fig 3.18: Example of deadlock

Four necessary conditions for deadlock to occur:

A deadlock situation can arise if the following four conditions hold simultaneously in a
system.

I. Mutual exclusion: At least one resource must be held in a nonsharable mode; that is,
only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.
II. Hold and wait: A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.
III. No pre-emption: Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process has completed its task. A
resource cannot be taken from a process unless that process releases it.
IV. Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.


RESOURCE ALLOCATION GRAPH:

Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph. It is used to detect the deadlock in the system. This graph consists
of a set of vertices V and a set of edges E.

I. The set of vertices V is partitioned into two different types of nodes:


 P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and
 R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

II. There are two types of edges:


 Request edge: A directed edge from process Pi to resource type Rj is denoted by
Pi → Rj ; it signifies that process Pi has requested an instance of resource type Rj
and is currently waiting for that resource.
 Assignment Edge: A directed edge from resource type Rj to process Pi is denoted
by Rj → Pi ; it signifies that an instance of resource type Rj has been allocated to
process Pi .

Pictorially, we represent each process Pi as a circle and each resource type Rj as a rectangle.
Since resource type Rj may have more than one instance, we represent each such instance as a
dot within the rectangle.

A request edge points to only the rectangle Rj , whereas an assignment edge must also
designate one of the dots in the rectangle. When process Pi requests an instance of resource
type Rj , a request edge is inserted in the resource-allocation graph.

There are two rules to detect deadlock:

I. RULE 1: Single Instance of Each Resource Type

 If a cycle is being formed then the system is in a deadlock state.


 If no cycle is being formed, then the system is not in a deadlock state.

II. RULE 2: Several Instances of a Resource Type


 If there is cycle formation in the graph, then the system may or may not be in a
deadlock situation. In this case, the Banker’s algorithm is used to detect deadlock.
 If there is no cycle formation in the graph, then we can say no deadlock in the
system.

NOTE: A process in an operating system uses a resource in the following sequence:

I. Request the resource
II. Use the resource
III. Release the resource

EXAMPLE:

Fig 3.19: Resource Allocation Graph

Figure 3.19 gives an example Resource Allocation Graph. Suppose that process P3 requests
an instance of resource type R2. Since no resource instance is currently available, we add a
request edge P3 → R2 to the graph (Figure 3.20). At this point, two minimal cycles exist in
the system:

P1 → R1 → P2 → R3 → P3 → R2 → P1

P2 → R3 → P3 → R2 → P2


Fig 3.20: Resource Allocation Graph with a deadlock

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is
held by process P3. Process P3 is waiting for either process P1 or process P2 to release
resource R2. In addition, process P1 is waiting for process P2 to release resource R1.

Now consider the resource-allocation graph in Figure 3.21. In this example, we also have a
cycle:

P1 → R1 → P3 → R2 → P1

Fig 3.21: Resource-Allocation Graph with a cycle but no deadlock

However, there is no deadlock. Observe that process P4 may release its instance of resource
type R2. That resource can then be allocated to P3, breaking the cycle.

In summary, if a resource-allocation graph does not have a cycle, then the system is not in a
deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state.
This observation is important when we deal with the deadlock problem.
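Rule 1 can be checked mechanically with a depth-first search for a cycle in the graph. The sketch below encodes the edges of Figure 3.20 as described in the text; the dictionary representation and the name find_cycle are assumptions made for illustration:

```python
def find_cycle(edges):
    """DFS cycle detection on a directed resource-allocation graph.

    edges maps each node to the nodes it points to
    (request edges P -> R, assignment edges R -> P)."""
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on stack / done
    color = {n: WHITE for n in edges}

    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, []):
            if color.get(m, WHITE) == GRAY:        # back edge: cycle found
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(visit(n) for n in edges if color[n] == WHITE)

# Edges of Figure 3.20, after the request edge P3 -> R2 is added
# (R2 has two instances, assigned to P1 and P2):
deadlocked = {
    "P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
    "P3": ["R2"], "R2": ["P1", "P2"],
}
print(find_cycle(deadlocked))                      # True: cycle, deadlock
print(find_cycle({"P1": ["R1"], "R1": []}))        # False: no cycle
```

For single-instance resources a cycle means deadlock; with multiple instances per resource (Rule 2), a reported cycle only means a deadlock *may* exist.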


METHODS FOR HANDLING DEADLOCKS:

The deadlock problem can be dealt with in one of following ways:

I. Deadlock ignorance. (Ostrich method)


II. Deadlock prevention.
III. Deadlock avoidance.
IV. Deadlock detection and recovery.

I. Deadlock ignorance. (Ostrich method)

We can ignore the problem altogether and pretend that deadlocks never occur in the system.
This solution is the one used by most operating systems, including Linux and Windows. It is
then up to the application developer to write programs that handle deadlocks. This method is
cheaper than the other approaches. In many systems, deadlocks don’t occur frequently (say,
they occur once per year). So, the extra expense of the other methods may not seem
worthwhile.

Ostrich Algorithm:

The Ostrich Algorithm simply ignores the deadlock problem, on the assumption that deadlocks
occur only rarely. This technique is used when the cost of handling deadlocks is higher than
the cost of simply ignoring them. If a deadlock does occur, the entire system is rebooted.

II. Deadlock prevention.

We can use a protocol to prevent deadlocks, ensuring that the system will never enter a
deadlocked state. Deadlock prevention provides a set of methods to ensure that at least one of
the necessary conditions for deadlock to occur cannot hold. These methods prevent deadlocks
by constraining how requests for resources can be made.


Examining each category individually, prevention is achieved by negating one of the above-
mentioned necessary conditions for deadlock.

III. Deadlock avoidance.

Deadlock avoidance requires that the operating system be given additional information in
advance concerning which resources a process will request and use during its lifetime. With
this additional knowledge, the operating system can decide for each request whether or not
the process should wait. To decide whether the current request can be satisfied or must be
delayed, the system must consider the resources currently available, the resources currently
allocated to each process, and the future requests and releases of each process.

Avoidance is forward-looking in nature: it requires that all information about the resources a
process will need be known before the process executes. The Banker's algorithm (due to
Dijkstra) is used to avoid deadlock.

IV. Deadlock detection and recovery.

We can allow the system to enter a deadlocked state, detect it, and recover. If a system does
not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a deadlock
situation may arise. In this environment, the system can provide an algorithm that examines
the state of the system to determine whether a deadlock has occurred and an algorithm to
recover from the deadlock (if a deadlock has indeed occurred).

Some researchers have argued that none of the basic approaches alone is appropriate for the
entire spectrum of resource-allocation problems in operating systems. The basic approaches
can be combined, however, allowing us to select an optimal approach for each class of
resources in a system.

When there are no algorithms to detect and recover from deadlocks, the undetected deadlock
will cause the system’s performance to deteriorate. The resources are being held by processes
that cannot run and more and more processes, as they make requests for resources, will enter
a deadlocked state. Eventually, the system will stop functioning and will need to be restarted
manually.


DEADLOCK PREVENTION:

For a deadlock to occur, each of the four necessary conditions given earlier must hold. By
making any one of these conditions fail, deadlock can be prevented.

Examining each of these conditions:

I. Mutual Exclusion:

To negate mutual exclusion, all resources would have to be sharable. Read-only files are a
good example of a sharable resource: if several processes attempt to open a read-only file at
the same time, they can be granted simultaneous access to the file. In general, however, we
cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources
are intrinsically nonsharable. For example, a mutex lock cannot be simultaneously shared by
several processes. This condition is therefore difficult to negate in practice.

II. Hold and Wait:

To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources.

One protocol that we can use requires each process to request and be allocated all its
resources before it begins execution. After the process completes its execution the resources
can be given to other processes. We can implement this provision by requiring that system
calls requesting resources for a process precede all other system calls.

An alternative protocol allows a process to request resources only when it has none. A
process may request some resources and use them. Before it can request any additional
resources, it must release all the resources that it is currently allocated.

This results in the following problems:

 The average waiting time increases.


 Starvation problem: A process that needs several popular resources may have to
wait indefinitely, because at least one of the resources that it needs is always
allocated to some other process. Also, lesser priority processes may keep waiting
for resources.
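The two hold-and-wait protocols described above can be sketched as follows. The resources (printer, scanner) and the job functions are hypothetical names chosen for illustration:

```python
import threading

# Two hypothetical resources, each guarded by a lock.
printer = threading.Lock()
scanner = threading.Lock()

def copy_job():
    # Protocol 1: request and be allocated *all* resources before
    # execution begins; never hold one resource while waiting for another.
    with printer, scanner:
        return "copied"              # use both resources together

def scan_then_print(pages):
    # Protocol 2: request resources only while holding none; release
    # everything before asking for anything new.
    with scanner:
        data = f"scan:{pages}"       # use the scanner, then release it
    with printer:                    # only now request the printer
        return f"print:{data}"

print(copy_job())          # copied
print(scan_then_print(3))  # print:scan:3
```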


III. No Preemption:

The third necessary condition for deadlocks is that there be no preemption of resources that
have already been allocated. To ensure that this condition does not hold, we can use the
following protocol:

 If a process is holding some resources and requests another resource that cannot
be immediately allocated to it (that is, the process must wait), then all resources
the process is currently holding are preempted. In other words, these resources are
implicitly released.
 The preempted resources are added to the list of resources for which the process is
waiting. The process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting.

Alternatively, if a process requests some resources, we first check whether they are available.

 If they are, we allocate them.


 If they are not, we check whether they are allocated to some other process that is
waiting for additional resources. If so, we preempt the desired resources from the
waiting process and allocate them to the requesting process.
 If the resources are neither available nor held by a waiting process, the requesting
process must wait. While it is waiting, some of its resources may be preempted,
but only if another process requests them. A process can be restarted only when it
is allocated the new resources it is requesting and recovers any resources that were
preempted while it was waiting.

This protocol is often applied to resources whose state can be easily saved and restored later,
such as CPU registers and memory space. It cannot generally be applied to resources such as
mutex locks and semaphores. In practice, the method is difficult to implement.
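The preemption idea ("if the new request cannot be granted, implicitly release what you hold") can be approximated with non-blocking lock requests. The function transfer and the two locks are names assumed for illustration:

```python
import threading, time, random

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(first, second, attempts=100):
    """Acquire first; if second cannot be granted immediately, release
    first (preempt our own resources) and retry from scratch."""
    for _ in range(attempts):
        first.acquire()
        if second.acquire(blocking=False):    # non-blocking request
            try:
                return True                   # both held: critical section
            finally:
                second.release()
                first.release()
        first.release()                       # implicit release / preemption
        time.sleep(random.uniform(0, 0.001))  # back off before retrying
    return False

# Two threads acquire the same locks in *opposite* orders -- the classic
# deadlock recipe -- yet the release-and-retry protocol lets both finish.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
print(transfer(lock_a, lock_b))  # True: both locks were always released
```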

IV. Circular Wait:


The fourth and final condition for deadlocks is the circular-wait condition. One way to ensure
that this condition never holds is to assign a number to each resource. A process can then
request resources only in increasing order.

We assign to each resource type a unique integer number, which allows us to compare two
resources and to determine whether one precedes another in our ordering. Formally, we
define a one-to-one function F: R → N, where N is the set of natural numbers.

For example, if the set of resource types R includes tape drives, disk drives, and printers, then
the function F might be defined as follows:

F(tape drive) = 1

F(disk drive) = 5

F(printer) = 12

We can now consider the following protocol to prevent deadlocks:

 Each process can request resources only in an increasing order of enumeration.


That is, a process can initially request any number of instances of a resource type
—say, Ri . After that, the process can request instances of resource type R j if and
only if F(Rj) > F(Ri).
 For example, using the function defined previously, a process that wants to use the
tape drive and printer at the same time must first request the tape drive and then
request the printer. Alternatively, we can require that a process requesting an
instance of resource type Rj must have released any resources Ri such that F(Ri) ≥
F(Rj).
 Note also that if several instances of the same resource type are needed, a single
request for all of them must be issued. If these two protocols are used, then the
circular-wait condition cannot hold.

We can demonstrate this fact by assuming that a circular wait exists (proof by contradiction).
Let the set of processes involved in the circular wait be {P0, P1, ..., Pn}, where Pi is waiting
for a resource Ri , which is held by process Pi+1. (Modulo arithmetic is used on the indexes,
so that Pn is waiting for a resource Rn held by P0.) Then, since process Pi+1 is holding
resource Ri while requesting resource Ri+1, we must have F(Ri) < F(Ri+1) for all i. But this
condition means that F(R0) < F(R1) < ... < F(Rn) < F(R0). By transitivity, F(R0) < F(R0),


which is impossible. Therefore, there can be no circular wait. We can accomplish this scheme
in an application program by developing an ordering among all synchronization objects in the
system. All requests for synchronization objects must be made in increasing order.
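The ordering protocol can be sketched as follows, using the numbering F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12 from the example above; the helper names are assumptions made for illustration:

```python
import threading

# Resources with their assigned numbers, as in the function F above.
resources = {"tape drive": 1, "disk drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in resources}

def acquire_in_order(names):
    """Acquire the requested resources strictly by increasing F(R).
    Since every process acquires in the same global order, no circular
    wait can form."""
    ordered = sorted(names, key=lambda n: resources[n])
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    for name in names:
        locks[name].release()

# A process wanting the tape drive and the printer at the same time
# must request the tape drive (F=1) before the printer (F=12):
held = acquire_in_order(["printer", "tape drive"])
print(held)  # ['tape drive', 'printer']
release_all(held)
```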

DEADLOCK AVOIDANCE:

Side effects of preventing deadlocks by ensuring that at least one of the necessary conditions
for deadlock cannot occur:

I. Low device utilization.


II. Reduced system throughput.

An alternative method is to avoid deadlocks. In this method the system must know well in
advance:

I. how many processes there are.
II. how many resources each process requires.
III. the process execution sequence.

For example, in a system with one tape drive and one printer, the system might need to know
that process P will request first the tape drive and then the printer before releasing both
resources, whereas process Q will request first the printer and then the tape drive. With this
knowledge of the complete sequence of requests and releases for each process, the system
can decide for each request whether or not the process should wait in order to avoid a
possible future deadlock.

Each request requires that in making this decision the system consider the resources currently
available, the resources currently allocated to each process, and the future requests and
releases of each process.

The various algorithms that use this approach differ in the amount and type of information
required. The simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need. Given this a priori information,


it is possible to construct an algorithm that ensures that the system will never enter a
deadlocked state.

Safe Execution:

A state is safe if the system can allocate resources to each process (up to its maximum) in
some order and still avoid a deadlock. More formally, a system is in a safe state only if there
exists a safe sequence.

A sequence of processes is a safe sequence for the current allocation state if, for each Pi , the
resource requests that Pi can still make can be satisfied by the currently available resources
plus the resources held by all Pj , with j < i. In this situation, if the resources that Pi needs are
not immediately available, then Pi can wait until all Pj have finished. When they have
finished, Pi can obtain all of its needed resources, complete its designated task, return its
allocated resources, and terminate. When Pi terminates, Pi+1 can obtain its needed resources,
and so on.

Unsafe Execution:

If no such sequence exists, then the system state is said to be unsafe. In an unsafe state, the
operating system cannot prevent processes from requesting resources in such a way that a
deadlock occurs. The behavior of the processes controls unsafe states. The safe, unsafe and
deadlock state spaces are shown in figure 3.22.

Fig 3.22: Safe, unsafe and deadlock state spaces


Deadlock-avoidance algorithm - Banker’s Algorithm :

The name Banker’s Algorithm was chosen because the algorithm could be used in a banking
system to ensure that the bank never allocated its available cash in such a way that it could no
longer satisfy the needs of all its customers.

Several data structures must be maintained to implement the banker’s algorithm. These data
structures encode the state of the resource-allocation system.

We need the following data structures, where n is the number of processes in the system and
m is the number of resource types:

I. Available: A vector of length m indicates the number of available resources of each


type. If Available[j] equals k, then k instances of resource type Rj are available.
II. Max: An n × m matrix defines the maximum demand of each process. If Max[i][j]
equals k, then process Pi may request at most k instances of resource type Rj .
III. Allocation: An n × m matrix defines the number of resources of each type currently
allocated to each process. If Allocation[i][j] equals k, then process P i is currently
allocated k instances of resource type Rj .
IV. Need: An n × m matrix indicates the remaining resource need of each process. If
Need[i][j] equals k, then process Pi may need k more instances of resource type Rj to
complete its task. Note that Need[i][j] equals Max[i][j] − Allocation[i][j].

These data structures vary over time in both size and value.

Some notations:

I. X and Y  Vectors of length n.


II. X ≤ Y  X[i] ≤ Y[i] for all i = 1, 2, ..., n.
III. Each row in the matrices Allocation and Need are treated as vectors ( referred as
Allocationi and Needi )
IV. Allocationi  The resources currently allocated to process Pi
V. Needi  The additional resources that process Pi may still request to complete its
task.

 Safety Algorithm:


The algorithm for finding out whether or not a system is in a safe state can be described as
follows:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work =
Available and Finish[i] = false for i = 0, 1, ..., n − 1.

2. Find an index i such that both

a. Finish[i] == false

b. Needi ≤ Work

If no such i exists, go to step 4.

3. Work = Work + Allocationi

Finish[i] = true

Go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state. Moreover,
if Finish[i]==false the process Pi is deadlocked.

This algorithm may require on the order of m × n² operations to determine whether a state is safe.
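The safety algorithm translates directly into Python. The snapshot used below is the classic five-process, three-resource example and is assumed here to match Table 3.1 (the table itself appears only as a figure):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: return a safe sequence, or None if unsafe."""
    n, m = len(allocation), len(available)
    work = available[:]                    # Step 1: Work = Available
    finish = [False] * n                   #         Finish[i] = false
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # Step 2: find i with Finish[i] == false and Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):         # Step 3: Work += Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                    # Step 4: some Finish[i] false
    return sequence                        # all Finish[i] true: safe

# Classic snapshot at time t0 (assumed to match Table 3.1):
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

print(is_safe(available, allocation, need))
# [1, 3, 4, 0, 2] -- the safe sequence <P1, P3, P4, P0, P2>
```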

 Resource-Request Algorithm:

This is the algorithm for determining whether requests can be safely granted.

Let Requesti be the request vector for process Pi . If Requesti [j] == k, then process Pi wants
k instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:

1. If Requesti ≤ Needi , go to step 2. Otherwise, raise an error condition, since the


process has exceeded its maximum claim.

2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources


are not available.

3. Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:

Available = Available–Requesti ;


Allocationi = Allocationi + Requesti ;

Needi = Needi –Requesti;

If the resulting resource-allocation state is safe, the transaction is completed, and process Pi is
allocated its resources. However, if the new state is unsafe, then Pi must wait for Requesti ,
and the old resource-allocation state is restored.
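The resource-request algorithm can be sketched as below; a compact safety check is included so the sketch is self-contained. The snapshot and the request (1, 0, 2) for P1 are assumptions chosen to match the Question 3 scenario:

```python
def request_resources(i, request, available, allocation, need):
    """Resource-request algorithm for process Pi. Mutates the state and
    returns True if granted safely, False if Pi must wait."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):     # Step 1
        raise ValueError("process has exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):   # Step 2
        return False                      # resources not available: wait
    for j in range(m):                    # Step 3: pretend to allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if safe(available, allocation, need):
        return True                       # safe: transaction completed
    for j in range(m):                    # unsafe: restore the old state
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

def safe(available, allocation, need):
    work, finish = available[:], [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = changed = True
    return all(finish)

# Snapshot at t0 (assumed to match Table 3.1); P1 requests (1, 0, 2):
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
need       = [[7,4,3],[1,2,2],[6,0,0],[0,1,1],[4,3,1]]
available  = [3,3,2]
print(request_resources(1, [1,0,2], available, allocation, need))  # True
print(available)  # [2, 3, 0]: the request was granted
```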

EXAMPLE:

Consider a system with five processes P0 through P4 and three resource types A, B, and C.
Resource type A has 10 instances, B has 5 instances, and C has 7 instances. Suppose the
following snapshot of the system is taken at time t0 (Table 3.1):

Table 3.1: System information at time t0

Question1. What will be the content of the Need matrix?


Need [i, j] = Max [i, j] – Allocation [i, j]

So, the content of Need Matrix is (Table 3.2):

Table 3.2: Need Matrix

Question2. Is the system in a safe state? If Yes, then what is the safe sequence?


Applying the Safety algorithm on the given system (fig 3.23),

Fig 3.23: Steps for Q2

Question3. What will happen if process P1 requests one additional instance of resource
type A and two instances of resource type C?


Fig 3.24: Steps for Resource Request Algorithm

We must determine whether this new system state is safe. To do so, we again execute
Safety algorithm on the above data structures.(Figure 3.25)

Fig 3.25: Steps for Safety Algorithm

Hence the new system state is safe, so we can immediately grant the request for process P1.

DEADLOCK DETECTION AND RECOVERY:

If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm,


then a deadlock situation may occur. In this environment, the system may provide:

I. An algorithm that examines the state of the system to determine whether a deadlock
has occurred.
II. An algorithm to recover from the deadlock.


A detection-and-recovery scheme requires overhead that includes not only the run-time costs
of maintaining the necessary information and executing the detection algorithm but also the
potential losses inherent in recovering from a deadlock. The Banker's algorithm includes a
Safety Algorithm (given earlier), which also serves as the deadlock-detection algorithm.

RECOVERY FROM DEADLOCK:

When a detection algorithm determines that a deadlock exists, several alternatives are
available. One possibility is to inform the operator that a deadlock has occurred and to let the
operator deal with the deadlock manually. Another possibility is to let the system recover
from the deadlock automatically. There are two options for breaking a deadlock. One is
simply to abort one or more processes to break the circular wait. The other is to preempt
some resources from one or more of the deadlocked processes.

I. Pessimistic Approach or Process Termination:

To eliminate deadlocks by aborting a process, we use one of two methods. In both methods,
the system reclaims all resources allocated to the terminated processes.

 Abort or kill all deadlocked processes: This method clearly will break the deadlock
cycle, but at great expense. The deadlocked processes may have computed for a long
time, and the results of these partial computations must be discarded and probably
will have to be recomputed later. It also places a heavy burden on performance.
 Abort one process at a time until the deadlock cycle is eliminated: This method
incurs considerable overhead, since after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.

Many factors may affect which process is chosen, including:

 What the priority of the process is


 How long the process has computed and how much longer the process will compute
before completing its designated task
 How many and what types of resources the process has used (for example, whether
the resources are simple to preempt)
 How many more resources the process needs in order to complete
 How many processes will need to be terminated
 Whether the process is interactive or batch


II. Optimistic Approach or Resource Preemption:

To eliminate deadlocks using resource preemption, we successively preempt some resources


from processes and give these resources to other processes until the deadlock cycle is broken.
If preemption is required to deal with deadlocks, then three issues need to be addressed:

 Selecting a victim: Select a process based on the above given points and make it
victim.
 Starvation: In a system where victim selection is based primarily on cost factors, it
may happen that the same process is always picked as a victim. As a result, this
process never completes its designated task, a starvation situation any practical system
must address. Clearly, we must ensure that a process can be picked as a victim only a
(small) finite number of times. The most common solution is to include the number of
rollbacks in the cost factor.
 Rollback: The pre-empted process must roll back to some safe state and must be
restarted from that state. This can be done in two ways:
o Total rollback: The pre-empted process, when executed again, starts from the
beginning.
o Partial rollback: The pre-empted process, when restarted, resumes from where it
stopped.

Deadlock Modeling:

Holt (1972) showed how these four conditions can be modeled using directed graphs. The
graphs have two kinds of nodes: processes, shown as circles, and resources, shown as
squares. An arc from a resource node (square) to a process node (circle) means that the
resource has previously been requested by, granted to, and is currently held by that process.
In Fig. 3.26(a), resource R is currently assigned to process A.


Figure 3.26: Resource allocation graphs. (a) Holding a resource. (b) Requesting a resource.
(c) Deadlock.

An arc from a process to a resource means that the process is currently blocked waiting for
that resource. In Fig. 3.26(b), process B is waiting for resource S. In Fig. 3.26(c) we see a
deadlock: process C is waiting for resource T, which is currently held by process D. Process
D is not about to release resource T because it is waiting for resource U, held by C. Both
processes will wait forever. A cycle in the graph means that there is a deadlock involving the
processes and resources in the cycle (assuming that there is one resource of each kind). In this
example, the cycle is C-T-D-U-C.

Now let us look at an example of how resource graphs can be used. Imagine that we have
three processes, A, B, and C, and three resources, R, S, and T. The operating system is free to
run any unblocked process at any instant, so it could decide to run A until A finished all its
work, then run B to completion, and finally run C.

This ordering does not lead to any deadlocks (because there is no competition for resources)
but it also has no parallelism at all. In addition to requesting and releasing resources,
processes compute and do I/O. When the processes are run sequentially, there is no
possibility that while one process is waiting for I/O, another can use the CPU. Thus running
the processes strictly sequentially may not be optimal. On the other hand, if none of the
processes do any I/O at all, shortest job first is better than round robin, so under some
circumstances running all processes sequentially may be the best way.

Let us now suppose that the processes do both I/O and computing, so that round robin is a
reasonable scheduling algorithm. However, as we have already mentioned, the operating


system is not required to run the processes in any special order. In particular, if granting a
particular request might lead to deadlock, the operating system can simply suspend the
process without granting the request (i.e., just not schedule the process) until it is safe.

The point to understand is that resource graphs are a tool that let us see if a given
request/release sequence leads to deadlock. We just carry out the requests and releases step
by step, and after every step check the graph to see if it contains any cycles. If so, we have a
deadlock; if not, there is no deadlock. Although our treatment of resource graphs has been for
the case of a single resource of each type, resource graphs can also be generalized to handle
multiple resources of the same type (Holt, 1972).
THE DINING-PHILOSOPHERS PROBLEM

SITUATION:

Consider five philosophers who spend their lives thinking and eating. The philosophers share
a circular table surrounded by five chairs, each belonging to one philosopher. In the center of
the table is a bowl of rice, and the table is laid with five single chopsticks (Figure 3.27).

Fig 3.27: The situation of the dining philosophers.

When a philosopher thinks, she does not interact with her colleagues. From time to time, a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the
chopsticks that are between her and her left and right neighbors). A philosopher may pick up
only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the
hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she


eats without releasing the chopsticks. When she is finished eating, she puts down both
chopsticks and starts thinking again.

SOLUTION:

The dining-philosophers problem is considered a classic synchronization problem because it is an example of a large class of concurrency-control problems. It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner. One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore; she releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are

semaphore chopstick[5];

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    /* eat for a while */
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    /* think for a while */
    ...
} while (true);


Fig 3.28: The structure of philosopher i.

where all the elements of chopstick are initialized to 1. The structure of philosopher i is
shown in Figure 3.28.

Although this solution guarantees that no two neighbors are eating simultaneously, it nevertheless must be rejected because it could create a deadlock. Suppose that all five philosophers become hungry at the same time and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever.

Several possible remedies to the deadlock problem are the following:

I. Allow at most four philosophers to be sitting simultaneously at the table.
II. Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
III. Use an asymmetric solution—that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.

Dining Philosophers with Monitors

monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        /* test left and right neighbors */
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

Q1) A system has n resources R0, …, Rn-1, and k processes P0, …, Pk-1. The implementation of the resource request logic of each process Pi is as follows:

if (i % 2 == 0) {
    if (i < n) request Ri;
    if (i + 2 < n) request Ri+2;
} else {
    if (i < n) request Rn-i;
    if (i + 2 < n) request Rn-i-2;
}

In which one of the following situations is a deadlock possible?

(A) n=40, k=26


(B) n=21, k=12

(C) n=20, k=10

(D) n=41, k=19

Answer: Option (B).

No. of resources, n = 21

No. of processes, k = 12

Processes {P0, P1, …, P11} make the following first requests:

{R0, R20, R2, R18, R4, R16, R6, R14, R8, R12, R10, R10}

For example, P0 first requests R0 (0 % 2 == 0 and 0 < n = 21). Similarly, P10 first requests R10, and P11 also requests R10, since n - i = 21 - 11 = 10. More importantly, pairs of processes request the same two resources in opposite orders: P9 requests R12 and then R10, while P10 requests R10 and then R12. If P9 holds R12 while P10 holds R10, each waits forever for the other's resource, so a circular wait, and hence a deadlock, is possible.


TEST QUESTIONS

1. Define process synchronization. Why is process synchronisation needed in OS?


2. Define Parallelism and Concurrency.
3. Differentiate between Parallelism and Concurrency.
4. What are the types of process Synchronization?
5. Define race condition.
6. State the reason for a race condition and the methods to overcome it.
7. Brief about the Producer and Consumer problem (Bounded buffer problem).
9. What’s a critical section?
10. What are the rules to be followed while achieving process
Synchronization?
11. Brief about the critical section solution using lock.
12. Explain about turn variable (Strict alternation method).
13. Explain about Semaphores in Operating System.
14. Describe atomic process.
15. Discuss about the 2 types of semaphore.
16. What are the advantages and disadvantages of semaphore?
17. What’s test_and_set in lock method of process synchronisation?
18. Explain about the reader writer problem.
19. Explain monitors in process synchronisation.
20. What is Message passing?
21. What are the types of message passing?
22. What are the types of direct communication? (Differentiate between symmetric
direct communication and asymmetric direct communication.)
23. What are the types of communication under the Mailbox under indirect
communication addressing?
24. What is a deadlock in OS?
25. Mention four necessary conditions for deadlock to happen.
26. What is a resource allocation graph and state the 2 rules in RAG to detect the
deadlock.


27. Mention the deadlock handling methods.


28. Explain in detail about the deadlock prevention methods.
29. Discuss Bankers algorithm for deadlock avoidance
30. Example on Banker’s Algorithm
31. What’s a safe sequence?
32. Discuss deadlock detection and recovery.
33. What’s an optimistic approach under deadlock recovery?
34. Discuss the dining philosopher’s process synchronization problems with a suitable
example.
