
Operating System

Shubham Kumaram

April 29, 2017


Contents

I Introduction to Operating System

1 Introduction
  1.1 Operating System
  1.2 Computer System
  1.3 Viewpoints of operating system’s role
  1.4 Goals of an OS
  1.5 Functions/Roles/Operations of an OS

2 Classification of Operating Systems
  2.1 Network Operating System (NOS)
  2.2 Distributed Operating System (DOS)
  2.3 Batch Processing Systems
      2.3.1 Performance of batch processing system
  2.4 Multiprogramming System
      2.4.1 Performance of Multiprogramming System
  2.5 Real Time Application
      2.5.1 Features of a Real Time Operating System

3 Computer System Architecture
  3.1 Introduction
  3.2 Multiprocessor Systems
      3.2.1 SMP and ASMP

4 Operating System Services
  4.1 Introduction
  4.2 User Operating System Interface
  4.3 System Calls
      4.3.1 Types of System Calls
      4.3.2 Application Programming Interface (API)
      4.3.3 System Call Interface
  4.4 System Components
  4.5 Booting
  4.6 Kernel

5 Operating System Structure
  5.1 Monolithic Structure
  5.2 Layered Approach
  5.3 Microkernels

II Process

6 Introduction to Process
  6.1 Process
  6.2 Process State
  6.3 Process Control Block
  6.4 Process Scheduling
      6.4.1 Scheduling Queues
      6.4.2 Schedulers

7 Interprocess Communication
  7.1 Types of Processes
      7.1.1 Independent Process
      7.1.2 Cooperating Process
  7.2 Shared Memory System
  7.3 Message Passing System
      7.3.1 Direct or Indirect Communication
      7.3.2 Synchronous or Asynchronous Communication
      7.3.3 Automatic or Explicit Buffering

8 Threads
  8.1 Introduction
  8.2 Advantages of Threads
  8.3 Multi-threading Models
      8.3.1 Many-to-One Model
      8.3.2 One-to-One Model
      8.3.3 Many-to-Many Model
  8.4 Thread Libraries

9 Process Scheduling
  9.1 Preemptive and Non-preemptive Scheduling
  9.2 Dispatcher
  9.3 Scheduling Criteria
  9.4 Scheduling Algorithms
      9.4.1 First Come First Served scheduling
      9.4.2 Shortest Job First scheduling
      9.4.3 Priority Scheduling
      9.4.4 Round Robin Scheduling
      9.4.5 Multilevel Queue Scheduling
      9.4.6 Multilevel Feedback Queue Scheduling

10 Process Synchronization
  10.1 Important Terms
      10.1.1 Race Condition
      10.1.2 Critical Section Problem
      10.1.3 The Problem of Busy Wait
  10.2 Classical Process Synchronization Problems
      10.2.1 Producers-Consumers with bounded buffers
      10.2.2 Dining Philosophers Problem
  10.3 Approaches to Implement Critical Sections
      10.3.1 Algorithmic Approach
      10.3.2 Semaphores
      10.3.3 Test-and-Set (TS) Instruction

11 Deadlocks
  11.1 System Model
  11.2 Deadlock Characterization
      11.2.1 Necessary Conditions
      11.2.2 Resource Allocation Graph
  11.3 Methods for Handling Deadlocks
      11.3.1 Deadlock Prevention
      11.3.2 Deadlock Avoidance
      11.3.3 Deadlock Detection
      11.3.4 Recovery from Deadlock

List of Algorithms

1  Solution outline for a single buffer Producers-Consumers system using signalling
2  Individual Operations for the Producers-Consumers problem
3  An outline of a Dining Philosopher process
4  Dekker’s Algorithm
5  Peterson’s Algorithm
6  Semantics of wait and signal operations on a semaphore
7  Mutual Exclusion
8  Bounded Concurrency using Semaphores
9  Signalling using semaphores
10 Producers-Consumers using Semaphores

Part I

Introduction to Operating System

Chapter 1

Introduction

1.1 Operating System


An Operating System is a program that manages the computer hardware. It also
provides a basis for application programs and acts as an intermediary between
the computer user and the computer hardware. The purpose of an Operating
System is to provide an environment in which a user can execute programs in a
convenient and efficient manner.
Some Operating Systems are designed to be convenient, others to be efficient,
and others some combination of the two.

1.2 Computer System


A computer system can be divided roughly into four components:
1. Hardware
2. Operating System
3. Application Programs

4. Users

The Hardware (CPU, memory and I/O devices) provides the basic computing
resources for the system.

The Application Programs, such as word processors, spreadsheets, compilers
and web browsers, define the ways in which these resources are used to solve
users’ computing problems.

The Operating System controls and coordinates the use of the hardware
among the various application programs for the various users.
Next we describe the basic computer architecture that makes it possible to
write a functional operating system.

Figure 1.1: Abstract view of the components of a computer system. Users 1 to n
sit at the top; below them are the system and application programs (compiler,
assembler, text editor, database system); these rest on the operating system,
which in turn rests on the computer hardware.

1.3 Viewpoints of operating system’s role


User view Personal computers, mainframes or minicomputers, handheld
computers, embedded computers

System view In this context, we can view an operating system as a resource
allocator. A slightly different view is that it is a control program.

The common functions of controlling and allocating resources are brought
together into one piece of software: the Operating System.

1.4 Goals of an OS
1. Efficient use of a computer system
2. User convenience

1.5 Functions/Roles/Operations of an OS
An Operating System implements computational requirements of its users with
the help of resources of the computer system. Its key concerns are described as
follows:

Concern      OS responsibility/Function
Programs     Initiation and termination of programs; providing convenient
             methods so that several programs can work towards a common goal
Resources    Ensuring availability of resources in the system and allocating
             them to programs
Scheduling   Deciding when, and for how long, to devote the CPU to a program
Protection   Protecting data and programs against interference from other
             users and their programs

Chapter 2

Classification of Operating Systems

OS Class          Period   Prime Concern          Key Concepts
Batch Processing  1960s    CPU idle time          Spooling, command processor
Multiprogramming  1970s    Resource utilization   Program priorities, preemption
Time Sharing      1970s    Good response time     Time slice, round-robin scheduling
Real Time         1980s    Meet the deadline      Real-time scheduling
Distributed       1990s    Resource sharing       Transparency, distributed control

2.1 Network Operating System (NOS)


An OS which includes software to communicate with other computers via a
network is called a NOS. It allows resources such as files, application programs
and printers to be shared between computers. Such OSes are specialized to
provide networking services.
NOS examples are BSD (Berkeley Software Distribution) UNIX, Novell
NetWare and Windows NT. A NOS can be specialized to serve as a peer-to-peer
OS or as a client/server Operating System.

2.2 Distributed Operating System (DOS)


A large central computer with a number of remote terminals connected to
it is sometimes conceived of as a distributed processing environment. Resource
sharing, increased throughput, communication and reliability are a few reasons
to have distributed processing systems. The job of a DOS is to allow users to
access remote resources in the same manner as they access local resources.
Alpha kernel, Amoeba, Angel, Chorus and Mach are a few examples of DOS.

Figure 2.1: Efficiency and user convenience in different OS classes. The figure
plots the OS classes against two axes, efficiency and user convenience (necessity,
good service, resource sharing): batch processing and multiprogramming lie
toward the efficiency axis, while time sharing, real time and distributed OSes lie
progressively further along the user convenience axis.

2.3 Batch Processing Systems


Batch Processing was introduced to avoid CPU time wastage. A batch is a
sequence of user jobs formed for the purpose of processing by a batch processing
operating system. The primary function of the batch processing system is to
service the jobs in a batch one after another without requiring operator
intervention.

2.3.1 Performance of batch processing system


The notion of turn-around time is used to quantify the performance of a batch
processing system. Due to spooling, the turn-around time of a job job_i
processed in a batch processing system includes the following time intervals:
1. Time until a batch is formed (i.e. time until the jobs job_{i+1}, . . ., job_n are
submitted).
2. Time spent in executing all jobs of the batch.
3. Time spent in printing and sorting the results belonging to different jobs.

Response Time The response time provided to a subrequest is the time be-
tween the submission of the subrequest by the user and the formulation
of the process’s response to it.
Turn Around Time The turn around time of a job, program or process
is the time since its submission for processing to the time its results
become available to the user.

Figure 2.2: Turn-around time in a batch processing system. The timeline runs
from job submission through batch formation, batch execution and result
printing, until the results are returned to the user; the turn-around time spans
the whole interval.

2.4 Multiprogramming System


2.4.1 Performance of Multiprogramming System
An appropriate measure of performance of a multiprogramming Operating
System is throughput, which is the ratio of the number of programs processed
and the total time taken to process them.
The throughput of a multiprogramming OS that processes n programs over the
period of time that starts at t_0 and ends at t_f is n / (t_f − t_0).
To optimize the throughput, a multiprogramming system uses the concepts
and techniques described below —

i) Degree of multiprogramming (program mix) The number of user pro-
grams that the OS keeps in memory at any time.
The kernel keeps a mix of CPU-bound and I/O-bound programs in memory,
where
• A CPU-bound program is a program involving a lot of computation
and very little I/O
• An I/O-bound program involves very little computation and a lot of
I/O. It uses the CPU in small bursts
ii) Priority-based and pre-emptive scheduling Every program is assigned
a priority. The CPU is always allocated to the highest priority program
that wishes to use it.
A low priority program executing on the CPU is pre-empted, if a higher
priority program wishes to use the CPU.

Priority Priority is a tie-breaking notion used by a scheduler to decide which
request should be scheduled on the server when many requests await
service.
Preemption Preemption is the forced deallocation of the CPU from a program.

Time slice The notion of a time slice is used to prevent monopolization of the
CPU by a program. The time slice is the largest amount of CPU time any
program can consume when scheduled to execute on the CPU.

Swapping The technique of swapping provides an alternative whereby a com-
puter system can support a large number of users without having to possess
a large memory.
Swapping is the technique of temporarily removing inactive programs from
the memory of a computer system.

2.5 Real Time Application


A real time application is a program that responds to activities in an external
system within a maximum time determined by the external system.

2.5.1 Features of a Real Time Operating System


1. Permits creation of multiple processes within an application
2. Permits priorities to be assigned to processes
3. Permits a programmer to define interrupts and interrupt processing routines
4. Uses priority driven or deadline oriented scheduling

5. Provides fault tolerance and graceful degradation

Examples of real time OS are : Harmony, Maruti, OS-9 and RTEMs etc.

Chapter 3

Computer System Architecture

3.1 Introduction
A computer system may be organized in a number of different ways, which we
can categorize roughly according to the number of general purpose processors
used:
Single-processor systems There is one main CPU capable of executing a
general purpose instruction set, including instructions for user processes.
Almost all systems have other special-purpose processors as well. They
may come in the form of device-specific processors, such as disk, keyboard
and graphics controllers, or, on mainframes, in the form of I/O processors.
Multiprocessor Systems The ability to continue providing service propor-
tional to the level of surviving hardware is called graceful degradation.
Some systems go beyond graceful degradation and are called fault tolerant,
because they can suffer a failure of any single component and still continue
operation.

3.2 Multiprocessor Systems


Although single processor systems are most common, multiprocessor systems
(also known as parallel systems or tightly-coupled systems) are growing in
importance. Such systems have two or more processors in close communication
sharing the computer bus and sometimes the clock, memory and peripheral
devices.
Multiprocessor Systems have the following main advantages:
1. Increased throughput
2. Economy of scale
3. Increased reliability, which is very much required in mission-critical
applications such as online services and real time applications
4. Graceful degradation
5. Fault tolerance

Types of Multiprocessor System


The multiple-processor systems in use today are of two types —
1. Symmetric Multiprocessor (SMP)
2. Asymmetric Multiprocessor (ASMP)

3.2.1 SMP and ASMP


The difference between symmetric and asymmetric multiprocessing may result
from either hardware or software. For instance, Sun’s operating system SunOS
Version 4 provided asymmetric multiprocessing, whereas version 5 (Solaris) is
symmetric on the same hardware.

Chapter 4

Operating System Services

4.1 Introduction
An OS provides an environment for the execution of programs. It provides certain
services to programs and to the users of these programs. These OS services are
provided for the convenience of the programmers to make the programming task
easier.
One set of Operating System services provides functions that are helpful to
the users
• User Interface
• I/O operations
• Communications
• Program execution
• File system manipulation
• Error detection
Another set of Operating System functions exist not for helping the user but
rather for ensuring the efficient operation of the system itself.
• Resource Allocation
• Accounting
• Protection and security

4.2 User Operating System Interface


There are two fundamental approaches for users to interface with the Operating
System.

i) Command Interpreter Command interpreter or command line interface


allows users to directly enter commands that are to be performed by the
Operating System.

ii) Graphical User Interface A GUI allows the user to interact with the oper-
ating system through graphical elements such as windows, icons and menus.

4.3 System Calls


System calls provide an interface to the services made available by an Operating
System. These calls are generally available as routines written in C and C++,
although certain low-level tasks (for example, tasks where hardware must be
accessed directly) may need to be written using assembly-language instructions.
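
As a concrete illustration, the following C sketch copies one file to another using
the POSIX system calls open(), read(), write() and close(). The file names are
hypothetical placeholders chosen for this example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Hypothetical file names, used only for illustration. */
    int in  = open("input.txt", O_RDONLY);
    int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    char buf[4096];
    ssize_t n;
    /* Each read()/write() pair crosses the user/kernel boundary. */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);
    close(in);
    close(out);
    return 0;
}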

4.3.1 Types of System Calls


System calls can be grouped roughly into five major categories:

• Process Control
  – end, abort
  – load, execute
  – create process, terminate process
  – get process attributes, set process attributes
  – wait for time
  – wait event, signal event
  – allocate and free memory
• File Management
  – create file, delete file
  – open, close
  – read, write, reposition
  – get file attributes, set file attributes
• Device Management
  – request device, release device
  – read, write, reposition
  – get device attributes, set device attributes
  – logically attach or detach device
• Information Maintenance
  – get time or date, set time or date
  – get system data, set system data
  – get process, file, or device attributes
  – set process, file, or device attributes

Figure 4.1: Relationship between an API, the system call interface, and the
operating system. A user application invokes open() in user mode; the system
call interface passes control to the kernel-mode implementation of the open()
system call, which executes and returns to the application.

• Communications
– create, delete communication connection
– send, receive messages
– transfer status information
– attach or detach remote devices

4.3.2 Application Programming Interface (API)


Most programmers never see the details of system calls. Typically, application
developers design programs according to an API. The API specifies a set of
functions that are available to an application programmer. Three of the most
common APIs available to application programmers are the Win32 API for
Windows systems, the POSIX API for UNIX, Linux and Mac OS X, and the
Java API for designing programs that run on the Java Virtual Machine.

4.3.3 System Call Interface


The run-time support system (a set of functions built into libraries included with
a compiler) for most programming languages provides a system-call interface
that serves as the link to system calls made available by the operating system.

4.4 System Components


• Process management
• Main Memory management
• Secondary storage management

• I/O system management
• File Management

• Protection system
• Networking
• Command-interpreter system

4.5 Booting
1. Determine the configuration of the system
2. Load the OS programs constituting the kernel into memory

3. Initialize data structures of the OS


4. Pass control to the OS

4.6 Kernel
The kernel provides basic services for all other parts of the operating system,
typically including memory management, process management, file man-
agement and I/O management (i.e. accessing the peripheral devices).
These services are requested by other parts of the Operating System or by
application programs through a specified set of program interfaces referred
to as system calls.
The kernel performs its tasks, such as executing processes and handling
interrupts, in kernel space, whereas everything a user normally does, such
as writing text in a text editor or running programs in a GUI, is done in
user space.

Chapter 5

Operating System Structure

Figure 5.1: User and OS programs in memory. The system area contains the
resident area of the OS and a transient area for the non-resident part of the
OS; the user area holds user programs, with inactive programs swapped out.

5.1 Monolithic Structure


Early Operating Systems had a monolithic structure; that is, the OS code did
not consist of a set of modules with closely defined interfaces, but rather of a
single module.

User interface    User program

OS Layer

Bare Machine

Figure 5.2: Monolithic OS

5.2 Layered Approach
In the layered approach, the operating system is broken up into a number of
layers. The bottom layer (layer 0) is the hardware, and the highest layer (layer
N) is the user interface. The main advantage of this approach is simplicity of
construction and debugging. The layers are selected so that each uses functions
(operations) and services of only lower-level layers.

Layer N
User Interface

......
Layer 1

Layer 0
Hardware

Figure 5.3: A layered Operating System

5.3 Microkernels
This method structures the Operating System by removing all non-essential
components from the kernel and implementing them as system and user-level
programs. The result is a smaller kernel. Typically, however, microkernels provide
minimal process and memory management, in addition to a communication
facility.

Part II

Process

Chapter 6

Introduction to Process

6.1 Process
A process is a program in execution. A process is more than the program code,
which is sometimes known as the text section.
max

stack

heap

data

text

Figure 6.1: Process in memory
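
To make the distinction between a program and a process concrete, the following
C sketch (assuming a POSIX system) creates a new process with fork(); the child
replaces its text, data, heap and stack with a new program via execlp(), while
the parent waits for it to terminate.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* Child: load a new program image into this process. */
        execlp("/bin/ls", "ls", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(1);
    } else {
        wait(NULL);                  /* parent: wait for the child to exit */
        printf("child complete\n");
    }
    return 0;
}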

6.2 Process State


The state of a process is defined in part by the current activity of that process.
Each process may be in one of the following states:
New The process is being created
Running Instructions are being executed

Figure 6.2: Process state diagram. A new process is admitted to the ready
state; the scheduler dispatches it to running; an interrupt returns it to ready;
an I/O or event wait moves it from running to waiting, and I/O or event
completion moves it back to ready; on exit the process is terminated.

Waiting The process is waiting for some event to occur (such as an I/O com-
pletion)

Ready The process is waiting to be assigned to a processor


Terminated The process has finished execution

6.3 Process Control Block


Each process is represented in the Operating System by a process control block
(PCB). A PCB contains many pieces of information associated with a specific
process including these:
• Process state

• Program counter
• CPU registers
• CPU scheduling info

• Memory management information


• Accounting information
• I/O status information

6.4 Process Scheduling


To meet the objective of multiprogramming and time sharing, the process
scheduler selects an available process (possibly from a set of several available
processes) for program execution on the CPU.

Process state

Process number

Program counter

Registers

Memory limits

list of opened files

...

Figure 6.3: Process Control Block

6.4.1 Scheduling Queues


As processes enter the system, they are put into a job queue, which consists of
all processes in the system. The processes that are residing in main memory and
are ready and waiting to execute are kept on a list called the ready queue. This
queue is generally stored as a linked list. Each device has its own device queue.

Figure 6.4: Queueing diagram representation of process scheduling. Processes
move from the ready queue to the CPU; a process re-enters the ready queue
when its time slice expires, when an I/O request completes via an I/O queue,
when it forks a child, or when an awaited interrupt occurs.

6.4.2 Schedulers
A process migrates among the various scheduling queues throughout its lifetime.
The Operating System must select, for scheduling purposes, processes from these
queues in some fashion. The selection process is carried out by the appropriate
scheduler.

Long-term Scheduler/Job Scheduler
The long term scheduler or job scheduler selects processes from the job pool
(typically on a disk) and loads them into memory for execution.

Short-term Scheduler/CPU Scheduler


The short-term scheduler or CPU scheduler selects from among the processes
that are ready to execute and allocates the CPU to one of them.

Chapter 7

Interprocess Communication

7.1 Types of Processes


7.1.1 Independent Process
A process is independent if it cannot affect or be affected by other processes
executing in the system. Any process that does not share data with any other
process is independent.

7.1.2 Cooperating Process


A process is cooperating if it can affect or be affected by the other processes
executing in the system. Clearly, any process that shares data with other
processes is a cooperating process.
There are several reasons for providing an environment that allows process
cooperation:
• Information sharing

• Computation speedup
• Modularity
• Convenience
Cooperating processes require an IPC mechanism that will allow them to
exchange data and information.
There are two fundamental models of Inter-process Communication (IPC):
1. Shared Memory Systems
2. Message Passing System

Figure 7.1: Interprocess communication methods. (a) Message passing:
processes A and B exchange a message M through the kernel. (b) Shared
memory: a region of memory shared between processes A and B, with the
kernel involved only in setting up the region.

7.2 Shared Memory System


In the shared memory model, a region of memory that is shared by co-operating
processes is established. Processes can then exchange information by reading
and writing data to the shared region.
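
A minimal sketch of this model, assuming a POSIX system: the parent
establishes a shared region with mmap() before forking, the child writes a
message into it, and the parent reads it back. Here wait() stands in for proper
synchronization, which Chapter 10 treats in detail.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* A region shared between parent and child across fork(). */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (fork() == 0) {                       /* child: the producer */
        strcpy(region, "hello from producer");
        return 0;
    }
    wait(NULL);                              /* crude synchronization */
    printf("consumer read: %s\n", region);   /* parent: the consumer */
    return 0;
}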

7.3 Message Passing System


In the Message Passing model, communication takes place by means of messages
exchanged between the cooperating processes. There are several methods for
logically implementing a link and the send()/receive() operations.

7.3.1 Direct or Indirect Communication


Under direct communication, each process that wants to communicate must
explicitly name the recipient or sender of the communication. In this scheme,
the send() and receive() primitives are defined as:
send(P, message) send a message to process P

receive(Q, message) receive a message from process Q


This scheme exhibits symmetry in addressing; that is, both the sender process
and the receiver process must name the other to communicate. A variant of
this scheme employs asymmetry in addressing. Here, only the sender names the
recipient; the recipient is not required to name the sender.
With indirect communication, the messages are sent to and received from
mailboxes. A mailbox is a repository for interprocess messages. It has a unique
identity. The owner of a mailbox is typically the process that created it. Only
the owner process can receive messages from a mailbox. Any process that knows
the identity of a mailbox can send messages to it. These processes are called
users of a mailbox.

Advantages of Mailboxes
i) Anonymity of Receiver A process sending a message to a mailbox need
not know the identity of the receiver process. If an OS permits the receiver
of a mailbox to be changed dynamically, a process can take over the
functionality of another process.
ii) Classification of Messages A process may create several mailboxes, and
use each mailbox to receive messages of a specific kind. This arrangement
permits easy classification of messages.

7.3.2 Synchronous or Asynchronous Communication


Message passing may be either blocking or non-blocking — also known as
synchronous and asynchronous.
Blocking Send The sending process is blocked until the message is received
by the receiving process or by the mailbox.
Non-blocking Send The sending process sends the message and resumes op-
eration

Blocking Receive The receiver blocks until a message is available


Non-blocking Receive The receiver receives either a valid message or a null.

7.3.3 Automatic or Explicit Buffering


Whether communication is direct or indirect, messages exchanged by commu-
nicating processes reside in a temporary queue. Basically, such queues can be
implemented in three ways.
i) Zero Capacity The queue has a maximum length of zero. Thus the link
cannot have any messages waiting in it. In this case, the sender must block
until the recipient receives the message.
ii) Bounded Capacity The queue has finite length n, thus at most n messages
can reside in it. If the link is full, the sender must block until space is
available in the queue.

iii) Unbounded Capacity The queue’s length is potentially infinite, thus any
number of messages can wait in it; the sender never blocks.
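
An ordinary UNIX pipe is a familiar example of a bounded-capacity link:
write() blocks when the pipe's buffer is full, and read() blocks until a message
is available. A minimal C sketch, assuming a POSIX system:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                     /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {            /* child: the sender */
        close(fd[0]);
        const char *msg = "ping";
        write(fd[1], msg, strlen(msg) + 1);   /* blocks if the pipe is full */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                 /* parent: the receiver */
    char buf[64];
    read(fd[0], buf, sizeof buf); /* blocks until a message arrives */
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}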

Chapter 8

Threads

8.1 Introduction
Use of processes to provide concurrency within an application incurs high process
switching overhead. Threads provide a low cost method of implementing
concurrency that is suitable for certain kinds of applications.
Process switching overhead has two components:
• Execution-related overhead
• Resource-use related overhead

A thread is a program in execution that uses the resources of a
process. “A thread is a basic unit of CPU utilization.” Threads are also called
“lightweight processes”.

8.2 Advantages of Threads


i) Low overhead or Economy Thread states consist only of the state of a
computation. Resource allocation state and communication state is not a
part of thread state, which leads to low switching overhead.
ii) Utilization of Multiprocessor Architecture/Speedup Concurrency within
a process can be realized by creating many threads in it.
iii) Efficient Communication Threads of a process can communicate with
one another through shared data space thus avoiding the overhead of
system calls for communication.

iv) Responsiveness Multithreading an interactive application may allow a


program to continue running even if a part of it is blocked or is performing
a lengthy operation.
v) Resource Sharing By default, threads share the memory and the resources
of the process to which they belong.

8.3 Multi-threading Models
Support for threads may be provided either at the user level, for user threads, or
by the kernel, for kernel threads. User threads are supported above the kernel
and are managed without kernel support, whereas kernel threads are supported
and managed directly by the operating system.
Ultimately, there must exist a relationship between user threads and kernel
threads. There are three common ways of establishing this relationship.

8.3.1 Many-to-One Model


The many-to-one model maps many user-level threads to one kernel thread.
Thread management is done by the thread library in user space, so it is efficient,
but the entire process will block if a thread makes a blocking system call. Also,
because only one thread can access the kernel at a time, multiple threads are
unable to run in parallel on multiprocessors.

user thread

K kernel thread

Figure 8.1: Many-to-One Model

8.3.2 One-to-One Model


The one-to-one model maps each user thread to a kernel thread. It provides
more concurrency than the many-to-one model by allowing another thread to run
when a thread makes a blocking system call. It also allows multiple threads to
run in parallel on multiprocessors. Linux, along with the Windows family,
implements this model.

8.3.3 Many-to-Many Model


The many-to-many model multiplexes many user level threads to a smaller or
equal number of kernel threads.

8.4 Thread Libraries


A thread library provides the programmer an API for creating and managing
threads. There are two primary ways of implementing a thread library. The first
approach is to provide a library entirely in user space with no kernel support.

user thread

K K K K kernel thread

Figure 8.2: One-to-One Model

user thread

K K K kernel thread

Figure 8.3: Many-to-Many Model

The second approach is to implement a kernel-level library supported directly


by the operating system.
Three main thread libraries are in use today —
1. POSIX pthreads
2. Win32
3. Java

Pthreads, the threads extension of the POSIX standard, may be provided as
either a user- or kernel-level library. The Win32 thread library is a kernel-level
library available on Windows systems. The Java thread API allows thread
creation and management directly in Java programs.
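
As a small illustration of the Pthreads API, the following C sketch (compile
with -pthread) creates four threads and then joins them; worker() is an
arbitrary thread function chosen for this example.

#include <pthread.h>
#include <stdio.h>

/* Thread function: runs concurrently with main(). */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tids[4];
    int ids[4];
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&tids[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);   /* wait for each thread to finish */
    return 0;
}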

Chapter 9

Process Scheduling

CPU scheduling is the basis of multiprogrammed operating systems. By switching
the CPU among processes, the operating system can make the computer more
productive.
CPU scheduling decisions may take place under the following four circum-
stances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state

4. When a process terminates


For situations 1 and 4, there is no choice in terms of scheduling. A new
process (if one exists in the ready queue) must be selected for execution. There
is a choice, however, for situations 2 and 3.

9.1 Preemptive and Non-preemptive Scheduling


When scheduling takes place only under circumstances 1 and 4, we say that the
scheduling scheme is non-preemptive or co-operative, otherwise it is preemptive.
Under non-preemptive scheduling, once the CPU has been allocated for a
process, the process keeps the CPU until it releases the CPU either by terminating
or by switching to the waiting state. This (non-preemptive) scheduling was used
by Microsoft Windows 3.x; Windows 95 introduced preemptive scheduling, and
all subsequent versions of the Windows operating system have used preemptive
scheduling.

9.2 Dispatcher
Another component involved in the CPU scheduling function is the dispatcher.
The dispatcher is the module that gives control of the CPU to the process
selected by the short-term scheduler. This function involves the following:

• Switching context

• Switching to the user mode
• Jumping to the proper location in the user program to restart that program
The time it takes for the dispatcher to stop one process and start another
running is known as the dispatch latency.

9.3 Scheduling Criteria


The criteria include the following:
• CPU Utilization
• Throughput
• Turn around time
• Waiting time
• Response time

9.4 Scheduling Algorithms


CPU scheduling deals with the problem of deciding which of the processes in
the ready queue is to be allocated the CPU. There are many different CPU
scheduling algorithms.

9.4.1 First Come First Served scheduling


With this scheme, the process that requests the CPU first is allocated the CPU
first. The implementation of the FCFS policy is easily managed with a FIFO
queue.
The average waiting time under the FCFS policy, however, is often quite long.
Consider the following set of processes that arrive at time 0 with the length of
the CPU burst given in milliseconds.

Process Burst Time


P1 24
P2 3
P3 3

If the processes arrive in the order P1 , P2 , P3 and are served in FCFS order,
we get the Gantt chart as shown in Figure 9.1 .

P1 P2 P3
0 24 27 30

Figure 9.1: Gantt chart

Average waiting time = (0 + 24 + 27)/3 = 17 milliseconds

P2 P3 P1
0 3 6 30

Figure 9.2: Gantt chart

If the processes arrive in the order P2 , P3 , P1 , then we get the Gantt chart
in Figure 9.2.
Average waiting time = (6 + 0 + 3)/3 = 3 milliseconds

This reduction is substantial. Thus, the average waiting time under an FCFS
policy is generally not minimal and may vary substantially if the processes’
CPU burst times vary greatly. The FCFS scheduling algorithm is non-preemptive.
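
The FCFS computation above is straightforward to express in code. The
following C sketch, assuming all processes arrive at time 0 in the given order,
reproduces the 17-millisecond average for the first ordering:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* P1, P2, P3 from the example */
    int n = 3, wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;          /* process i waits for all earlier bursts */
        wait  += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);   /* 17.00 */
    return 0;
}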

9.4.2 Shortest Job First scheduling


This algorithm associates with each process the length of the process’s next CPU
burst. When the CPU is available, it is assigned to the process that has the
smallest next CPU burst. If the next CPU bursts of two processes are the same,
FCFS scheduling is used to break the tie. Note that a more appropriate term
for this scheduling method would be the “Shortest Next CPU burst” algorithm,
because scheduling depends on the length of the next CPU burst of a process,
rather than its total length.
Consider the following set of processes, with the length of the CPU burst in
milliseconds.
Process Burst Time
P1 6
P2 8
P3 7
P4 3

P4 P1 P3 P2

0 3 9 16 24

Figure 9.3: Gantt chart

Average waiting time = (3 + 16 + 9 + 0)/4 = 7 milliseconds

By comparison, if we were using the FCFS scheduling scheme, the average


waiting time would be 10.25 milliseconds.

9.4.3 Priority Scheduling


In the Shortest Job First scheme, the length of the CPU burst is used in
computing priorities. External priorities are set by criteria outside the operating system,
such as the importance of the process, the type and amount of funds being

paid for computer use, the department sponsoring the work, and other political
factors.
Priority scheduling can be either pre-emptive or non-preemptive.
A major problem with priority scheduling algorithms is indefinite blocking
or starvation. A priority scheduling algorithm can leave some low priority
processes waiting indefinitely. A solution to the problem of indefinite blockage
of low-priority processes is aging. Aging is a technique of gradually increasing
the priority of processes that wait in the system for a long time.

9.4.4 Round Robin Scheduling


The Round Robin (RR) scheduling algorithm is designed especially for time
sharing systems. It is similar to FCFS scheduling, but pre-emption is added
to switch between processes.
A small unit of time, called a time quantum or time slice, is defined. The
CPU scheduler goes around the ready queue, allocating the CPU to each process
for a time interval of up to 1 time quantum.
The average waiting time under the RR policy is often long. Consider the
following set of processes that arrive at time 0:
Process Burst Time
P1 24
P2 3
P3 3
The resulting RR schedule, with a time quantum of 4 milliseconds, is —

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

Figure 9.4: Gantt chart

P1 waits 10 − 4 = 6 milliseconds, P2 waits 4, and P3 waits 7, so the
average waiting time = (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds.

The RR scheduling algorithm is thus pre-emptive.


If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time units.
Each process must wait no longer than (n − 1) × q time units until its next
time quantum. For example, with five processes and a time quantum of 20
milliseconds, each process will get up to 20 milliseconds every 100 milliseconds.
The performance of the RR algorithm depends heavily on the size of the time
quantum. At one extreme, if the time quantum is extremely large, the RR policy
is the same as the FCFS policy. If the time quantum is extremely small, the RR
approach is called processor sharing and (in theory) creates the appearance
that each of n processes has its own processor running at 1/n the speed of the
real processor.
In practice, we also need to consider the effect of context switching on the
performance of RR scheduling. Although the time quantum should be large
compared with the context-switch time, it should not be too large. If the time

quantum is too large, RR scheduling degenerates to the FCFS policy. A rule
of thumb is that 80% of the CPU bursts should be shorter than the
time quantum.
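
The RR example above can be checked with a small simulation. The following
C sketch assumes all processes arrive at time 0 and a quantum of 4 milliseconds;
with waiting time computed as completion time minus burst time, it reproduces
the 17/3 average (printed as 5.67 after rounding):

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                  /* P1, P2, P3 from the example */
    int n = 3, q = 4;
    int rem[3], done_at[3], time = 0, left = n;
    for (int i = 0; i < n; i++) rem[i] = burst[i];
    while (left > 0) {
        for (int i = 0; i < n; i++) {          /* cycle through the ready queue */
            if (rem[i] == 0) continue;
            int run = rem[i] < q ? rem[i] : q; /* run for at most one quantum */
            time   += run;
            rem[i] -= run;
            if (rem[i] == 0) { done_at[i] = time; left--; }
        }
    }
    int total = 0;
    for (int i = 0; i < n; i++) total += done_at[i] - burst[i];
    printf("average waiting time = %.2f ms\n", (double)total / n);
    return 0;
}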

9.4.5 Multilevel Queue Scheduling


Multilevel scheduling provides a hybrid solution to the problem of providing
efficiency and good user service simultaneously. To exploit the features of
multilevel scheduling, the scheduler differentiates between interactive and non-
interactive processes.
A multilevel queue scheduling algorithm partitions the ready queue into
several separate queues. The processes are permanently assigned to one queue,
generally based on some property of the process, such as memory size, process
priority, or process type. Each queue has its own scheduling algorithm. In
addition, there must be scheduling among the queues.
Multilevel queue scheduling is commonly implemented as fixed-priority pre-
emptive scheduling. Another possibility to implement this scheduling is to
time-slice among the queues.

9.4.6 Multilevel Feedback Queue Scheduling


The multilevel feedback queue scheduling algorithm allows a process to move
between queues. The idea is to separate processes according to the characteristics
of their CPU bursts. If a process uses too much CPU time, it will be moved to a
lower-priority queue. In addition, a process that waits too long in a lower-priority
queue may be moved to a higher-priority queue. This form of aging prevents
starvation. For example, consider a multilevel feedback queue scheduler with
three queues, numbered from 0 to 2.

quantum=8

quantum=16

FCFS

Figure 9.5: Multilevel Feedback Queues

Chapter 10

Process Synchronization

10.1 Important Terms


10.1.1 Race Condition
A situation where several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the particular order
in which the accesses take place, is called a race condition.
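
The following C sketch (assuming Pthreads; compile with -pthread) demonstrates
a race condition: two threads increment a shared counter without synchronization,
and lost updates typically make the final value fall short of the expected 2000000,
varying from run to run.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared data, no synchronization */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* outcome depends on interleaving */
    return 0;
}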

10.1.2 Critical Section Problem


Code executed by a process can be grouped into sections, some of which require
access to shared resources, and others that do not. The sections of code which
require access to shared resources are called critical sections. To avoid race
conditions, a mechanism is needed to appropriately synchronize the execution
within critical sections.
Consider a system consisting of n processes (P0, P1, . . ., Pn−1). Each process
has a segment of code, called a critical section, in which the process may be
changing variables, updating a table, writing a file, and so on. The important
feature of the system is that when one process is executing in its critical section,
no other process is to be allowed to execute in its critical section, that is, no two
processes are executing in their critical sections at the same time.
A solution to the critical section problem must satisfy the following three
requirements:
i) Mutual Exclusion If a process Pi is executing in its critical section, then
no other process can be executing in its critical section.
ii) Progress If no process is executing in its critical section and some processes
wish to enter their critical sections, then only those processes that are
not executing in their remainder sections can participate in the decision
on which will enter its critical section next, and this selection can not be
postponed indefinitely.
iii) Bounded waiting There exists a bound, or limit on the number of times
that other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted.

Producers Consumers

Buffer Pool

Figure 10.1: Producers and Consumers

10.1.3 The Problem of Busy Wait


A busy wait is a situation in which a process repeatedly checks if a condition
that would enable it to get past a synchronization point is satisfied. It ends
only when the condition is satisfied. Thus a busy wait keeps the CPU busy
executing a process even while the process accomplishes nothing.

10.2 Classical Process Synchronization Problems


10.2.1 Producers-Consumers with bounded buffers
A producers-consumers system with bounded buffers consists of an unspecified
number of producer and consumer processes and a finite pool of buffers.
Each buffer is capable of holding one record of information: it is said to
become full when a producer writes into it, and empty when a consumer copies
out a record contained in it. Every buffer is empty to start with. A producer process
produces one record at a time and writes it into the buffer. A consumer process
consumes information one record at a time.
A solution to the producer consumer problem must satisfy the following
conditions:
1. A producer must not overwrite a full buffer
2. A consumer must not consume an empty buffer

3. Producers and consumers must access buffers in a mutually exclusive


manner

10.2.2 Dining Philosophers Problem


Five philosophers sit around a table pondering philosophical issues. A bowl of
rice is kept in front of each philosopher, and a fork is placed between each pair of
philosophers. To eat, a philosopher must pick up the two forks placed between
him and his immediate neighbours on either side, one at a time. The problem is
to design processes to represent the philosophers such that each philosopher can
eat when hungry and none dies of hunger.

Algorithm 1 Solution outline for a single buffer Producers-Consumers system
using signalling

var
  buffer : . . . ;
  buffer_full : boolean;
  producer_blocked, consumer_blocked : boolean;

Begin
  buffer_full := false;
  producer_blocked := false;
  consumer_blocked := false;

Producer                        Consumer
Parbegin                        Parbegin
  repeat                          repeat
    check_b_empty;                  check_b_full;
    {Produce in the buffer}         {Consume from the buffer}
    post_b_full;                    post_b_empty;
    {Remainder of the cycle}        {Remainder of the cycle}
  until forever                   until forever
ParEnd                          ParEnd

End

The dining philosophers problem is considered a classic synchronization prob-
lem neither because of its practical importance nor because computer scientists
dislike philosophers, but because it is an example of a large class of concurrency-
control problems. It is a simple representation of the need to allo-
cate several resources among several processes in a deadlock-free and
starvation-free manner.

10.3 Approaches to Implement Critical Sections


10.3.1 Algorithmic Approach
This consists of Dekker’s Algorithm (see 4) and Peterson’s Algorithm (see 5).

10.3.2 Semaphores
A semaphore is a shared integer variable with non-negative values that
can be subjected only to the following operations:

1. Initialization (specified as part of its declaration)


2. The indivisible operations wait and signal

Algorithm 2 Individual Operations for the Producers-Consumers problem

Operations of Producer                 Operations of Consumer

procedure check_b_empty                procedure check_b_full
Begin                                  Begin
  if buffer_full = true then             if buffer_full = false then
    producer_blocked := true;              consumer_blocked := true;
    block(Producer);                       block(Consumer);
  end if                                 end if
End                                    End
end procedure                          end procedure

procedure post_b_full                  procedure post_b_empty
Begin                                  Begin
  buffer_full := true;                   buffer_full := false;
  if consumer_blocked = true then        if producer_blocked = true then
    consumer_blocked := false;             producer_blocked := false;
    activate(Consumer);                    activate(Producer);
  end if                                 end if
End                                    End
end procedure                          end procedure

Uses of semaphores in Concurrent Systems


• Mutual Exclusion
• Bounded Concurrency
• Signalling between processes

Binary Semaphores
A binary semaphore is a special form of a semaphore used for implementing
mutual exclusion. Hence it is often called a mutex. A binary semaphore is
initialized to 1 and takes only the values 0 and 1 during execution of a program.
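
A minimal sketch of a binary semaphore used as a mutex, assuming POSIX
unnamed semaphores (available on Linux, for example; compile with -pthread).
It repairs the race condition demonstrated in Section 10.1.1 by bracketing the
critical section with sem_wait (the wait operation) and sem_post (the signal
operation):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                  /* binary semaphore guarding the counter */
static long counter = 0;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        sem_wait(&mutex);            /* enter the critical section */
        counter++;
        sem_post(&mutex);            /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);          /* initialized to 1, i.e. a mutex */
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* now reliably 2000000 */
    sem_destroy(&mutex);
    return 0;
}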

Bounded Concurrency
Algorithm 8 illustrates how a set of concurrent processes share five printers.

Counting semaphores Counting semaphores can be used to control access


to a given resource consisting of a finite number of instances. The semaphore is
initialized to the number of resources available.

Signalling between processes


A semaphore can be used to achieve this synchronization as shown in Algorithm
9.

Figure 10.2: Dining Philosophers. Five philosophers (P) sit around a bowl of
rice, with a fork placed between each pair of neighbours.

10.3.3 Test-and-Set(TS) Instruction


The important characteristic of this instruction is that it is executed
atomically. Thus, if two Test-and-Set instructions are executed simul-
taneously (each on a different CPU), they will be executed sequentially in some
arbitrary order.
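
Modern C exposes a test-and-set primitive through C11's atomic_flag:
atomic_flag_test_and_set() atomically sets the flag and returns its previous
value. The following sketch builds a busy-waiting spinlock on it; acquire() and
release() are hypothetical helper names chosen for this example.

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void acquire(void) {
    /* Spin (busy wait) until test-and-set finds the flag clear. */
    while (atomic_flag_test_and_set(&lock))
        ;
}

static void release(void) {
    atomic_flag_clear(&lock);
}

int main(void) {
    acquire();
    puts("in critical section");
    release();
    return 0;
}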

Algorithm 3 An outline of a Dining Philosopher process

repeat
    successful := false;
    while not successful do
        if both forks are available then
            lift the forks one at a time;
            successful := true;
        end if
        if successful = false then
            block(Pi);
        end if
    end while
    {eat}
    put down both forks;
    if left neighbour is waiting for his right fork then
        activate(left neighbour);
    end if
    if right neighbour is waiting for his left fork then
        activate(right neighbour);
    end if
    {think}
until forever

Algorithm 4 Dekker’s Algorithm
var
turn : 1. . . 2;
c1,c2 : 0. . . 1;

Begin
c1 := 1;
c2 := 1;
turn := 1;

Process P1 Process P2
Parbegin Parbegin
repeat repeat
c1 := 0; c2 := 0;
while c2 = 0 do while c1 = 0 do
if turn = 2 then if turn = 1 then
c1 := 1; c2 := 1;
while turn=2 do while turn=1 do
{nothing}; {nothing};
end while end while
c1 := 0; c2 := 0;
end if end if
end while end while
{critical section} {critical section}
turn := 2; turn := 1;
c1 := 1; c2 := 1;
{Remainder of the cycle} {Remainder of the cycle}
until forever; until forever
ParEnd ParEnd

End

Algorithm 5 Peterson’s Algorithm
var
flag : array[0. . . 1] of boolean;
turn : 0. . . 1;

Begin
flag[0] := false;
flag[1] := false;

Process P0 Process P1
Parbegin Parbegin
repeat repeat
flag[0] := true; flag[1] := true;
turn := 1; turn := 0;
while flag[1] & turn=1 do while flag[0] & turn=0 do
{nothing}; {nothing};
end while end while
{critical section} {critical section}
flag[0] := false; flag[1] := false;
{Remainder of the cycle} {Remainder of the cycle}
until forever; until forever
ParEnd ParEnd

End

Algorithm 6 Semantics of wait and signal operations on a semaphore


procedure wait(S)
Begin
if S > 0 then
S := S − 1;
else
block the process on S;
end if
End
end procedure
procedure signal(S)
Begin
if some processes are blocked on S then
activate one blocked process;
else
S := S + 1;
end if
End
end procedure

Algorithm 7 Mutual Exclusion
Begin
var
sem_cs : semaphore := 1;

Process Pi Process Pj
Parbegin Parbegin
repeat repeat
wait(sem_cs); wait(sem_cs);
{critical section} {critical section}
signal(sem_cs); signal(sem_cs);
{Remainder of the cycle} {Remainder of the cycle}
until forever until forever
ParEnd ParEnd

End

Algorithm 8 Bounded Concurrency using Semaphores


Begin
var
printers : semaphore := 5;

Process P1 . . . . . . Process Pn
Parbegin Parbegin
repeat repeat
wait(printers); wait(printers);
{use a printer} {use a printer}
signal(printers); signal(printers);
{Remainder of the cycle} {Remainder of the cycle}
until forever until forever
ParEnd ParEnd

End

Algorithm 9 Signalling using semaphores
Begin
var
sync : semaphore := 0;

Parbegin

Process Pi Process Pj
... ...
wait(sync); Perform action aj
Perform action ai signal(sync);

ParEnd
End

Algorithm 10 Producers-Consumers using Semaphores


Type item = . . . ;
var
full : semaphore := 0; {Initialization}
empty: semaphore := 1;
buffer : array[0] of item;

Begin

Producer Consumer
Parbegin Parbegin
repeat repeat
wait(empty); wait(full);
buffer[0] := . . . ; {produce} x := buffer[0]; {consume}
signal(full); signal(empty);
{Remainder of the cycle} {Remainder of the cycle}
until forever; until forever
ParEnd ParEnd

End

Chapter 11

Deadlocks

A deadlock is a situation in which some processes wait for each other’s actions
indefinitely.

11.1 System Model


Under the normal mode of operation, a process may utilize a resource in only
the following sequence —
1. Request

2. Use
3. Release

11.2 Deadlock Characterization


This section describes the features that characterize deadlocks.

11.2.1 Necessary Conditions


A deadlock situation can arise if the following four conditions hold simultaneously
in a system:
1. Mutual Exclusion
2. Hold and Wait
3. No Pre-emption

4. Circular Wait

11.2.2 Resource Allocation Graph


Deadlocks can be described more precisely in terms of a directed graph called a
system resource allocation graph.
In Figure 11.2, two minimal cycles exist:

R1 R3

P1 P2 P3

R2 R4

Figure 11.1: Resource Allocation Graph

1. P1 → R1 → P2 → R3 → P3 → R2 → P1
2. P2 → R3 → P3 → R2 → P2

R1 R3

P1 P2 P3

R2 R4

Figure 11.2: Resource Allocation Graph with Deadlock

In Figure 11.3, one cycle exists — P1 → R1 → P3 → R2 → P1 ; however,


there is no deadlock.
In summary, if a resource allocation graph does not have a cycle, then the
system is not in a deadlocked state. If there is a cycle, then the system may or
may not be in a deadlocked state.

11.3 Methods for Handling Deadlocks


Generally speaking, we can deal with the deadlock problem in one of three
ways:

R1

P2

P1 P3

R2 P4

Figure 11.3: Resource Allocation graph with a cycle but no deadlock

1. We can use a protocol to prevent or avoid deadlocks


2. We can allow the system to enter a deadlock state, detect it and recover
3. We can ignore the problem altogether and pretend that deadlocks never
occur in the system.

11.3.1 Deadlock Prevention


By ensuring that at least one of the necessary conditions for deadlock cannot
hold, we can prevent the occurrence of a deadlock.

11.3.2 Deadlock Avoidance


Deadlock avoidance differs from deadlock prevention in one vital respect. It does
not try to prevent any of the conditions for deadlock. However, the avoidance
approach uses a resource allocation policy that grants a resource only if the
kernel can establish that granting the request cannot lead to a deadlock either
immediately or in future.

Resource Allocation Graph Algorithm (RAG)


Suppose that Process P2 requests Resource R2 . Although R2 is currently free,
we cannot allocate it to P2 , since this action will create a cycle in the Graph. A
cycle indicates that the system is in an unsafe state.

Banker’s Algorithm
Banker’s Algorithm uses two tests – a feasibility test and a safety test when a
process makes a request.

R1

P1 P2

R2

Figure 11.4: RAG for deadlock avoidance

a) Initial state

        Max Need         Allocated        Requested
        R1 R2 R3 R4      R1 R2 R3 R4      R1 R2 R3 R4
P1       2  1  2  1       1  1  1  1       0  0  0  0
P2       2  4  3  2       2  0  1  0       0  1  1  0
P3       5  4  2  2       2  0  2  2       0  0  0  0
P4       0  3  4  1       0  2  1  1       0  0  0  0

Total allotted (R1 R2 R3 R4): 5 3 5 4
Total existing (R1 R2 R3 R4): 6 4 8 5
Active = {P1, P2, P3, P4}

b) Before while loop

        Max Need         Allocated        Requested
        R1 R2 R3 R4      R1 R2 R3 R4      R1 R2 R3 R4
P1       2  1  2  1       1  1  1  1       0  0  0  0
P2       2  4  3  2       2  1  2  0       0  1  1  0
P3       5  4  2  2       2  0  2  2       0  0  0  0
P4       0  3  4  1       0  2  1  1       0  0  0  0

Simulated allotted (R1 R2 R3 R4): 5 4 6 4
Total existing (R1 R2 R3 R4): 6 4 8 5
Active = {P1, P2, P3, P4}

c) After simulating completion of Process P1

        Max Need         Allocated        Requested
        R1 R2 R3 R4      R1 R2 R3 R4      R1 R2 R3 R4
P1       2  1  2  1       −  −  −  −       0  0  0  0
P2       2  4  3  2       2  1  2  0       0  1  1  0
P3       5  4  2  2       2  0  2  2       0  0  0  0
P4       0  3  4  1       0  2  1  1       0  0  0  0

Simulated allotted (R1 R2 R3 R4): 4 3 5 3
Total existing (R1 R2 R3 R4): 6 4 8 5
Active = {P2, P3, P4}
Figure 11.5: An unsafe state in a RAG, with an assignment edge from R1 to
P1, a request edge from P2 to R1, and a claim edge from P1 to R2.

d) After simulating completion of Process P4

        Max Need         Allocated        Requested
        R1 R2 R3 R4      R1 R2 R3 R4      R1 R2 R3 R4
P1       2  1  2  1       −  −  −  −       0  0  0  0
P2       2  4  3  2       2  1  2  0       0  1  1  0
P3       5  4  2  2       2  0  2  2       0  0  0  0
P4       0  3  4  1       −  −  −  −       0  0  0  0

Simulated allotted (R1 R2 R3 R4): 4 1 4 2
Total existing (R1 R2 R3 R4): 6 4 8 5
Active = {P2, P3}

e) After simulating completion of Process P2

        Max Need         Allocated        Requested
        R1 R2 R3 R4      R1 R2 R3 R4      R1 R2 R3 R4
P1       2  1  2  1       −  −  −  −       0  0  0  0
P2       2  4  3  2       −  −  −  −       0  1  1  0
P3       5  4  2  2       2  0  2  2       0  0  0  0
P4       0  3  4  1       −  −  −  −       0  0  0  0

Simulated allotted (R1 R2 R3 R4): 2 0 2 2
Total existing (R1 R2 R3 R4): 6 4 8 5
Active = {P3}

In this example, process P2 has made the request (0, 1, 1, 0). The sequence
P1, P4, P2, P3 is a safe sequence; hence the request made by P2 is safe and
can be granted.
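
The safety test used here can be coded directly. The following C sketch uses
the matrices of state (b) above, with Available computed as total existing
minus simulated allotted, i.e. (1, 0, 2, 1); is_safe() is a hypothetical helper
name chosen for this example.

#include <stdbool.h>
#include <stdio.h>

#define P 4   /* processes */
#define R 4   /* resource types */

/* Safety test of the Banker's algorithm: repeatedly pick an active process
   whose remaining need (max - allocated) fits in the free resources and
   simulate its completion, releasing its allocation. */
static bool is_safe(int max[P][R], int alloc[P][R], int avail[R]) {
    int work[R];
    bool finished[P] = {false};
    for (int j = 0; j < R; j++) work[j] = avail[j];
    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (max[i][j] - alloc[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finished[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;   /* no process can finish: unsafe */
    }
    return true;                       /* a safe sequence exists */
}

int main(void) {
    int max[P][R]   = {{2,1,2,1}, {2,4,3,2}, {5,4,2,2}, {0,3,4,1}};
    int alloc[P][R] = {{1,1,1,1}, {2,1,2,0}, {2,0,2,2}, {0,2,1,1}};
    int avail[R]    = {1, 0, 2, 1};  /* existing (6,4,8,5) minus allotted (5,4,6,4) */
    printf("request is %s\n", is_safe(max, alloc, avail) ? "safe" : "unsafe");
    return 0;
}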

11.3.3 Deadlock Detection


If a system does not employ either a deadlock prevention or a deadlock avoidance
algorithm, then a deadlock situation may occur. In this environment, the system
must provide:

(a) A resource allocation graph over processes P1 to P5 and resources R1 to
R5; (b) the corresponding wait-for graph, obtained by removing the resource
nodes and collapsing the corresponding edges.

• An algorithm that examines the state of the system to determine whether
a deadlock has occurred
• An algorithm to recover from the deadlock

Single Instance of Each Resource Type


If all resources have only a single instance, then we can define a deadlock detection
algorithm that uses a variant of the RAG, called a wait-for graph.
As before, a deadlock exists in the system if and only if the wait-for graph
contains a cycle. To detect deadlock, the system needs to maintain the wait-for
graph and periodically invoke an algorithm that searches for a cycle in the graph.

Multiple Instances of Each Resource Type


The wait-for graph scheme is not applicable to a resource allocation system with
multiple instances of each resource type. We turn now to a deadlock detection
algorithm that is applicable to such a system.
To illustrate this algorithm, we consider a system with five processes P0
through P4 and three resource types A, B and C. Resource type A has seven
instances, resource type B has two instances, and resource type C has six
instances. Suppose that at time T0, we have the following resource allocation
state:

      Allocation    Request    Available
      A B C         A B C      A B C
P0    0 1 0         0 0 0      0 0 0
P1    2 0 0         2 0 2
P2    3 0 3         0 0 0
P3    2 1 1         1 0 0
P4    0 0 2         0 0 2
We claim that the system is not in a deadlock state, because the sequence
< P0, P2, P3, P1, P4 > allows every process to complete.

Suppose now that process P2 makes one additional request for an instance of
Type C. The request matrix is modified as follows:

Request
A B C
P0 0 0 0
P1 2 0 2
P2 0 0 1
P3 1 0 0
P4 0 0 2

We claim that the system is now deadlocked. Although we can reclaim the
resources held by process P0, the number of available resources is not sufficient
to fulfill the requests of the other processes. Thus, a deadlock exists, consisting
of processes P1, P2, P3 and P4.

11.3.4 Recovery from Deadlock


There are two options for breaking a deadlock. One is simply to pre-empt some
resources from one or more of the deadlocked processes. The other is to abort
one or more processes to break the circular wait.

1. Process Termination

(a) Abort all deadlocked processes
(b) Abort one process at a time until the deadlock cycle is eliminated

2. Resource Pre-emption If pre-emption is required to deal with deadlocks,
then three issues need to be addressed:
(a) Selecting a victim
(b) Rollback
(c) Starvation

