
Operating System

By:Dr. P.S.Tanwar
O.S.
Definition

Goals of Operating System

Functions of Operating System

Efficiency Measures of O.S.

Types of O.S.

By:Dr. P.S.Tanwar
Definition
O.S. is system software that
manages the computer H/W.

It also provides a basis for
application programs and acts as
an intermediary between the user
and the computer hardware.
By:Dr. P.S.Tanwar
Components of OS
Hardware

OS

Application S/W

User

By:Dr. P.S.Tanwar
Components of OS
End User

Application
Program

O.S.

H/W

By:Dr. P.S.Tanwar
OS

By:Dr. P.S.Tanwar
Goals of O.S.
convenient to user and

efficient for hardware.

By:Dr. P.S.Tanwar
Definition
O.S. is system software that acts
as an interface between the user
and the computer hardware, so that
the system is convenient for the user
and the hardware is used efficiently.

By:Dr. P.S.Tanwar
O.S.
Always resides in the memory.

O.S. is a control program.

Control program controls the


execution of user program to
prevent errors and improper use of
the computer.
By:Dr. P.S.Tanwar
O.S.
O.S. is a resource manager.

It manages CPU time, memory


space, file storage space, I/P-O/P
Devices.

By:Dr. P.S.Tanwar
Bootstrap Program
For a computer to start running it needs an initial program to
run; that initial program is known as the bootstrap program.

It is stored in ROM or EEPROM.

It initializes all aspects of the system, from CPU registers to
device controllers to memory contents.

It knows how to load the OS and how to start executing it.

The OS then starts executing the first process, such as “init”, and
waits for some event to occur.

By:Dr. P.S.Tanwar
Functions of Operating
System
1. Process Management

2. Memory Management

3. I/P and O/P Management

4. Security & Protection

5. File Management

6. Job priority System

By:Dr. P.S.Tanwar
Functions of Operating
System
7. Job switching

8. Understand the meaning of commands and
instructions

9. Provide easier user interface

10. Resource allocation

11. Error Detection:


• Generation of traces, error messages and other
debugging and error detecting facilities.
By:Dr. P.S.Tanwar
Early Systems (Without OS)
hardware is very expensive;

no operating system exists


One user at console
• One function at a time (computation, I/O, user
think/response)

• Program loaded via card deck

• Libraries of device drivers (for I/O)

• User debugs at console

By:Dr. P.S.Tanwar
Early Systems (Without
batch processing)

By:Dr. P.S.Tanwar
Early Systems
Drawbacks
Low throughput

Longer turnaround time

By:Dr. P.S.Tanwar
Batch Processing OS
hardware is expensive,

humans are cheap

Simple batch processing: load program, run, print results, dump,


repeat
• User gives program (cards or tape) to the operator, who schedules
the jobs

• Resident monitor automatically loads, runs, dumps user jobs

• Requires memory management (relocation) and protection

• More efficient use of hardware, but debugging is more difficult (from


dumps)

By:Dr. P.S.Tanwar
Batch Processing OS
Jobs with similar requirements were
batched together and run through the
computer as a group.

By:Dr. P.S.Tanwar
Batch Processing OS

By:Dr. P.S.Tanwar
Batch Processing OS
Job Cards

Job Control Language (JCL)


$COB-Execute the Cobol Compiler

$JOB-First Card of Job

$END-Last Card of Job

$LOAD-Load the prog

$RUN-Execute the user Prog.


By:Dr. P.S.Tanwar
Batch Processing OS
Resident Monitor
It is always resident in the memory

It automatically arranges similar jobs in a
sequence

Loader

Job Sequencing

Control Card Interpreter

By:Dr. P.S.Tanwar
Batch Processing OS
Main Memory

Resident Monitor
• Loader
• Job sequencing
• Control card interpreter

User program area

By:Dr. P.S.Tanwar
Batch Processing OS
Drawbacks
Inefficient use of CPU
• Speed mismatch between Fast CPU and slow I/O devices

Not user friendly

If an error occurs while a job is in execution, it can be found
only after the execution of all jobs

Resource utilization is not optimum

Strict Job Sequencing is required

By:Dr. P.S.Tanwar
Efficiency Measures of O.S.
Throughput
num. of processes per unit time

CPU Burst time


Actual Time req. to complete the process by CPU

Turn around time


Time interval between job submission and job completion by CPU

Waiting time
Total Waiting time in the Ready Queue to be executed by CPU

Response time
Time interval between Job submission and First Response time given
by CPU
By:Dr. P.S.Tanwar
Types of O.S.
Batch processing O.S.

Multiprogramming O.S.

Multitasking O.S.( Time Sharing OS)

Multiprocessing O.S.( Parallel Systems)

Real time O.S.

Distributed O.S.

Network O.S.

By:Dr. P.S.Tanwar
SPOOLing
Simultaneous Peripheral Operation
On-Line

It is the technique by which more
than one input or output device
can interact with the disk
simultaneously, with the disk acting
as a buffer between the slow devices
and the CPU.
By:Dr. P.S.Tanwar
SPOOLing

By:Dr. P.S.Tanwar
Multiprogramming
More than one job resides in the
memory at the same time.

But the CPU executes only one job at a time.

The SPOOLing concept is used.

During an I/O operation there is no need for
the CPU to wait for that job; the CPU can take
up the next job and start executing it.
By:Dr. P.S.Tanwar
Multiprogramming

OS
Job1(wait or
blocked)

Job2 (running)

Job3 (ready)

By:Dr. P.S.Tanwar
Multiprogramming
States of Process
exit
Ready(queue)

Running

Block(Wait)

New job

By:Dr. P.S.Tanwar
Multiprogramming OS
CPU Scheduling

Job Scheduling

Memory Mgmt.

By:Dr. P.S.Tanwar
Multiprogramming
Advantages
Increased Throughput

Reduced Turnaround time

Reduced Waiting time

Reduced Response time

Increased CPU utilization

By:Dr. P.S.Tanwar
Time Sharing OS
MultiTasking OS

Time Slice Concept is used

It is a logical extension of Multiprogramming.

Multiple jobs are executed by the CPU


switching between them.

but the switches occur so frequently that the


user may interact with each program while it is
running
By:Dr. P.S.Tanwar
Time Sharing OS
A time slice is given to each process; after
expiry of the time slice, the process is taken
off the CPU and moved to the ready queue,
and the next scheduled process is given the
CPU for execution.

P1 P2 P3 P1 P2 P3 P1 P3 P3 P3
(t) (t) (t) (t) (t) (t) (t) (t) (t) (t)

By:Dr. P.S.Tanwar
TimeSharing
States of Process
New

Ready

Running

Block(Waiting)

terminated

By:Dr. P.S.Tanwar
Time Sharing OS
Advantages
Increased Throughput

Reduced Turnaround time

Reduced Response time

Reduced waiting time

Increased CPU utilization

By:Dr. P.S.Tanwar
Parallel Systems
Tightly coupled Systems
More than one processor in close communication,
sharing the computer bus, the clock and sometimes the
memory and peripheral devices.

n Processors

Speed increases, but by less than n times

If one processor fails it will not halt the system

By:Dr. P.S.Tanwar
Parallel Systems
Types of Parallel System
Symmetric multiprocessing OS

Asymmetric multiprocessing OS

By:Dr. P.S.Tanwar
Symmetric Multiprocessing
Systems
Each processor runs an identical copy
of the OS and these copies
communicate with one another as
needed.

Ex Sun OS ver. 5(Solaris 2)

By:Dr. P.S.Tanwar
Asymmetric
Multiprocessing Systems
Each processor is assigned a specific task.

A master processor controls the system.

The master processor schedules and allocates work
to the slave processors.

Ex Sun OS ver. 4

By:Dr. P.S.Tanwar
Distributed OS
Loosely coupled Systems
The processors are not in close
communication.

They do not share the clock or the memory.

Each processor has its own local memory.

The Processors communicate with each other by high


speed buses or telephone lines.

By:Dr. P.S.Tanwar
Distributed OS
Loosely coupled Systems
Processors may vary in size and function.

They may include microprocessors, workstations,
minicomputers and large general-purpose computers.

These processors are referred to as sites, nodes,
computers, etc.

By:Dr. P.S.Tanwar
Distributed OS

Node3
Node1

Network

Node2
Node4

By:Dr. P.S.Tanwar
Distributed OS
Functions
Resource Sharing

Computation Speedup

Reliability

Communication

By:Dr. P.S.Tanwar
Real time OS
Rigid time requirement on the operation
of a processor or the flow of data

Real time OS has well defined, fixed


time constraints.

Processing must be done within the
defined constraints or the system will
fail.
By:Dr. P.S.Tanwar
Real time OS
Use
Scientific Experiment

Medical imaging system

Industrial control system

Home appliance controller

Weapon system

Flight simulators

By:Dr. P.S.Tanwar
Real time OS
Types
Soft Real time O.S.:
• Less restrictive type of OS

• Use: Multimedia, VR, scientific projects

Hard Real time O.S.


• Guarantees that critical tasks complete on time.

• ROM is used; there is no virtual memory

• Use: Flight simulation

By:Dr. P.S.Tanwar
Computer System

By:Dr. P.S.Tanwar
Storage device Hierarchy

By:Dr. P.S.Tanwar
Dual Mode Operation
To protect the OS from malfunction,
hardware mode protection is
provided by many OSs.

Two types of mode


Supervisor Mode or Monitor Mode or
System mode or Privileged mode

User Mode
By:Dr. P.S.Tanwar
Dual Mode Operation
Mode bit is added to the hardware of the
computer to indicate the mode
Monitor mode(0)
• Executing on behalf of OS

User mode(1)
• Executing on behalf of user

By:Dr. P.S.Tanwar
Dual Mode Operation
At boot time hardware starts in monitor
mode

then OS loaded

Then OS starts the user processes in


user mode

By:Dr. P.S.Tanwar
Dual Mode Operation

By:Dr. P.S.Tanwar
Dual Mode Operation

By:Dr. P.S.Tanwar
Dual Mode Operation

By:Dr. P.S.Tanwar
System Calls

By: Dr. P.S.Tanwar


System Calls
System Calls
System Calls provide the interface between a
process and the operating System.

Process
Program in execution is called Process

By: Dr. P.S.Tanwar


System Calls
They are generally available as
assembly language instructions.

Some systems allow system calls to be
made directly from higher-level languages
like C, Bliss, BCPL and PL/360

By: Dr. P.S.Tanwar


Q/A
What is a system call?
A. It is an interface between process and OS.

B. It is an interface between user and OS.

C. It is an interface between user and


hardware.

D. It is an interface between user and System.

By: Dr. P.S.Tanwar


Mode of Process
Modes
User Mode
• No direct access to resources like memory
or hardware

Kernel Mode
• Direct access to resources like memory or
hardware
By: Dr. P.S.Tanwar
System Call

Process
(Program in Execution)

System Calls

OS

Kernel

Hardware

By: Dr. P.S.Tanwar


System Call

User Mode
Call the Return from
Process
System Call System Call

Kernel Mode
Execute the
System Call

Process =Program in execution

By: Dr. P.S.Tanwar


Q/A
In which mode system calls are
executed?
A. User Mode.

B. Kernel Mode.

C. User and Kernel Mode Both

D. None of the above.

By: Dr. P.S.Tanwar


System Calls
Example

By: Dr. P.S.Tanwar


Types of System Call
1. Process control

2. File manipulation

3. Device manipulation

4. Information Maintenance

5. Communication

By: Dr. P.S.Tanwar


Types of System Call
1. Process control
end/abort

load/execute

Create process /terminate process

Get process attributes /set process attributes

wait for specified time

wait event, signal event

Allocate and free memory

By: Dr. P.S.Tanwar


Types of System Call
2. File Manipulation
Create file/ delete file

Open / close

Read/ Write /Reposition

Get file attributes /set file attributes

By: Dr. P.S.Tanwar


Types of System Call
3. Device manipulation
Request devices/release devices

Logically attach or detach devices

Read/ Write /Reposition

Get device attributes /set device attributes

By: Dr. P.S.Tanwar


Types of System Call
4. Information Maintenance
Get time or date /set time or date

Get system data/set system data

Get process/file/device attributes

Set process/file/device attributes

By: Dr. P.S.Tanwar


Types of System Call
5. Communication
Create or delete communication connection

Send, receive messages

Transfer status information

Attach or detach remote devices

By: Dr. P.S.Tanwar


Q/A
Specify the category of ‘wait for
specified time’ system call ?
A. Process control

B. File manipulation

C. Device manipulation

D. Information Maintenance

By: Dr. P.S.Tanwar


POSIX and Win32 Calls
Comparison
POSIX Win32 Description
fork CreateProcess Create a new process
wait WaitForSingleObject The parent process may wait for the child to finish
execve -- CreateProcess = fork + execve
exit ExitProcess Terminate process
open CreateFile Create a new file or open an existing file
close CloseHandle Close a file
read ReadFile Read data from an open file
write WriteFile Write data into an open file
lseek SetFilePointer Move read/write offset in a file (file pointer)
stat GetFileAttributesEx Get information on a file
mkdir CreateDirectory Create a file directory
rmdir RemoveDirectory Remove a file directory
link -- Win32 does not support “links” in the file system
unlink DeleteFile Delete an existing file
chdir SetCurrentDirectory Change working directory
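
The following is a minimal sketch, in C, of the POSIX file-manipulation calls from the table above (open, read, write, close); the file names are only examples and error handling is kept to a bare minimum.

/* copy.c - copy input.txt to copy.txt using POSIX system calls */
#include <fcntl.h>     /* open               */
#include <unistd.h>    /* read, write, close */
#include <stdio.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    int in  = open("input.txt", O_RDONLY);                /* open an existing file   */
    int out = open("copy.txt", O_WRONLY | O_CREAT, 0644); /* create/open output file */
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    /* copy the file one block at a time */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;
}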
System Programs
System programs exist between the OS and
application programs.

System programs provide a more convenient
environment for program development and
execution.

Some of them are simply user interfaces for


System calls

By: Dr. P.S.Tanwar


System Programs
End User

Application Program

System program

O.S.

H/W

By: Dr. P.S.Tanwar


System Programs
File Manipulation

Status Information

File Modification

Programming Language Support

Program loading and execution

Communication

Application Programs

By: Dr. P.S.Tanwar


System Programs
File Manipulation: These programs …
Create files and directories

Delete files and directories

Copy files and directories

Rename files and directories

Print files and directories

Dump files and directories

List files and directories

By: Dr. P.S.Tanwar


System Programs
Status Information: These Programs
Ask for system date and time

Amount of available memory or disk space

Number of users or similar status information

By: Dr. P.S.Tanwar


System Programs
File Modification: These programs
Are text editors which are available to create and modify
the contents of files stored on disk or tape.

By: Dr. P.S.Tanwar


System Programs
Programming Language Support
For common programming languages like C, Pascal, COBOL,
etc.

Compilers

Assemblers

Interpreters

are provided to the user with the OS; many of these
programs are now priced and provided separately.

By: Dr. P.S.Tanwar


System Programs
Program loading and execution
Once a program is assembled or compiled or interpreted
it must be loaded into memory to be executed.

The system may provide


• absolute Loaders

• Relocatable loaders

• Linkage editors

• Overlay editors

• Debugging system for High level lang.

By: Dr. P.S.Tanwar


System Programs
Communication
Provide virtual connections among processes, users and
different computer systems.

They allow users


• to send messages to each other’s screens.

• to Send large messages like email

• to log in remotely (use another computer remotely)

By: Dr. P.S.Tanwar


System Programs
Application Programs
Most OS provides programs useful to solve common
problems or to perform common operations.
• Compiler compilers

• Text formatters

• Plotting packages

• Database systems

• spreadsheets

• Statistical analysis packages

• Games

By: Dr. P.S.Tanwar


System Programs
The command interpreter is the most important system
program for an OS

By: Dr. P.S.Tanwar


Processes

By: Dr. P.S.Tanwar


Processes
Process Concept

Process Scheduling

Operations on Processes

Interprocess Communication

Examples of IPC Systems

Communication in Client-Server Systems

By: Dr. P.S.Tanwar


Process Concept
An operating system executes a variety of
programs:
Batch system –jobs

Time-shared systems –user programs or tasks

Process: A program in execution is called


process.

process execution must progress in sequential


fashion
By: Dr. P.S.Tanwar
Process Concept
A process includes:
Program counter

Stack ( containing temp data i.e. subroutine parameters,


return addresses and temp variables)

Data section (global variables)

Program – Passive entity(Text Code)

Process –Active Entity

By: Dr. P.S.Tanwar


Q/A
What is Process?
A. Process is a program.

B. Process is a program in execution.

C. Process is a method

D. All of these

By: Dr. P.S.Tanwar


Process in Memory

By: Dr. P.S.Tanwar


Process States
As a process executes, it changes state
new: The process is being created

running: Instructions are being executed

waiting: The process is waiting for some event to occur

ready: The process is waiting to be assigned to a


processor

terminated: The process has finished execution

By: Dr. P.S.Tanwar


Dinner
P1 P2 P3 P4 P5

By: Dr. P.S.Tanwar




Process States

State-transition diagram (shown over several slides as p1, p2 and p3 move through the system):

new --admitted--> ready --scheduler dispatch--> running --exit--> terminated
running --interrupt--> ready
running --I/O or event wait--> waiting
waiting --I/O or event completion--> ready

Processes waiting for the CPU sit in the Ready Queue; processes waiting for a device sit in an I/O Queue.

By: Dr. P.S.Tanwar


Process Control Block(PCB)
Information associated with each process
Process state

Program counter

CPU registers

CPU scheduling information

Memory-management information

Accounting information

I/O status information

By: Dr. P.S.Tanwar




CPU switch from process to process

By: Dr. P.S.Tanwar


Process Scheduling
Queues

By: Dr. P.S.Tanwar


Process Scheduling

By: Dr. P.S.Tanwar


Schedulers
Long-term scheduler(or job scheduler) –
selects which processes should be brought
into the ready queue

Short-term scheduler(or CPU scheduler) –


selects which process should be executed
next and allocates CPU

Medium-term scheduler

By: Dr. P.S.Tanwar


Schedulers

By: Dr. P.S.Tanwar


Schedulers
Short-term scheduler is invoked very frequently (milliseconds)
(must be fast)

Long-term scheduler is invoked very infrequently (seconds,


minutes) (may be slow)

The long-term scheduler controls the degree of


multiprogramming

Processes can be described as either:


I/O-bound process–spends more time doing I/O than computations, many short
CPU bursts

CPU-bound process–spends more time doing computations; few very long CPU
bursts

By: Dr. P.S.Tanwar


Q/A
Which Process scheduler takes
less time?
A. Long Term Scheduler

B. Medium Term Scheduler

C. Short Term Scheduler

D. None of these

By: Dr. P.S.Tanwar


Q/A
Partially Completed Processes are
considered in which type of process
scheduler?
A. Long Term Scheduler

B. Medium Term Scheduler

C. Short Term Scheduler

D. None of these

By: Dr. P.S.Tanwar


Context Switch
When CPU switches to another process, the system
must save the state of the old process and load the
saved state for the new process via a context switch

Context of a process represented in the PCB

Context-switch time is overhead; the system does no


useful work while switching

Time dependent on hardware support

By: Dr. P.S.Tanwar


Q/A
What is the full form of PCB?
A. Process Common Block

B. Program Common Block

C. Process Control Block

D. Program Control Block

By: Dr. P.S.Tanwar


Q/A
In Multiprogramming more than
one process are executing by CPU
at the same time.
True

False

By: Dr. P.S.Tanwar


Process Creation
Parent process create children processes, which, in turn
create other processes, forming a tree of processes

Generally, process identified and managed via a process


identifier (pid)

Resource sharing
Parent and children share all resources

Children share subset of parent’s resources

Parent and child share no resources

Execution
Parent and children execute concurrently

Parent waits until children terminate


By: Dr. P.S.Tanwar
Process Creation
Address space
Child duplicate of parent

Child has a program loaded into it

UNIX examples
fork system call creates new process

exec system call used after a fork to replace the process’ memory space with a
new program

By: Dr. P.S.Tanwar
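
A small C sketch of the UNIX calls named above: fork() creates the child, an exec call replaces the child's memory image with a new program, and the parent waits for the child to finish. Running "/bin/ls" is just an example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>      /* fork, execlp */
#include <sys/wait.h>    /* wait         */

int main(void)
{
    pid_t pid = fork();                 /* create a new process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child process */
        execlp("/bin/ls", "ls", NULL);  /* replace child's memory with a new program */
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {                            /* parent process */
        wait(NULL);                     /* parent waits until the child terminates */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}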


Process Creation

By: Dr. P.S.Tanwar


Process Termination
Process executes last statement and asks the operating
system to delete it (exit)
Output data from child to parent (via wait)

Process’ resources are deallocated by operating system

Parent may terminate execution of children processes (abort)


Child has exceeded allocated resources

Task assigned to child is no longer required

If parent is exiting

Some operating systems do not allow a child to continue if its parent
terminates
• All children terminated - cascading termination

By: Dr. P.S.Tanwar


Inter-process
Communication
Processes within a system may be independent or cooperating

Cooperating process can affect or be affected by other processes,


including sharing data

Reasons for cooperating processes:


Information sharing

Computation speedup

Modularity

Convenience

Cooperating processes need interprocess communication (IPC)

Two models of IPC


Shared memory

Message passing
Communications Models
(a) Message passing. (b) shared memory.
Cooperating Processes
Independent process cannot affect or be affected by the
execution of another process

Cooperating process can affect or be affected by the


execution of another process

Advantages of process cooperation


Information sharing

Computation speed-up

Modularity

Convenience
Interprocess Communication –
Shared Memory
An area of memory shared among the
processes that wish to communicate

The communication is under the control of the
user processes, not the operating system.

The major issue is to provide a mechanism that will
allow the user processes to synchronize their
actions when they access shared memory.
Interprocess Communication –
Message Passing
Mechanism for processes to communicate and to
synchronize their actions

Message system – processes communicate with each


other without resorting to shared variables

IPC facility provides two operations:


send(message)

receive(message)

The message size is either fixed or variable


Message Passing (Cont.)

If processes P and Q wish to communicate, they need to:


Establish a communication link between them
Exchange messages via send/receive
Implementation issues:
How are links established?

Can a link be associated with more than two processes?

How many links can there be between every pair of communicating


processes?

What is the capacity of a link?

Is the size of a message that the link can accommodate fixed or


variable?

Is a link unidirectional or bi-directional?


Threads

By:Dr. Prakash
SinghTanwar
Overview
Threads are lightweight processes

Processes are heavyweight processes (a process with
one thread)

Thread is a fundamental unit of CPU utilization and


consists of a
Program counter

A register set and

A stack

that forms the basis of multithreaded computer systems

By:Dr. Prakash SinghTanwar


Threads

By:Dr. Prakash SinghTanwar


Advantages of Threads
Responsiveness

Resource Sharing

Economy

Scalability

By:Dr. Prakash SinghTanwar


Multicore
Programming
Multicore systems putting pressure on
programmers, challenges include
Dividing activities

Balance

Data splitting

Data dependency

Testing and debugging

By:Dr. Prakash SinghTanwar


Multithreaded Server
Architecture

By:Dr. Prakash SinghTanwar


Concurrent Execution on
a Single-core System

By:Dr. Prakash SinghTanwar


Parallel Execution on a
Multicore System

By:Dr. Prakash SinghTanwar


Q/A
In MultiProgramming
A. More than one program are running at the
same time

B. More than one program resides in the


memory at the same time

C. A and B Both

D. None of these

By:Dr. Prakash SinghTanwar


Q/A
In Multi tasking
A. More than one processes are executed by
CPU simultaneously at different time slots.

B. More than one program resides in the


memory at the same time

C. Time Sharing concept (time slices) exists

D. All of the above

By:Dr. Prakash SinghTanwar


Q/A
In Multi Threading
A. More than one threads are executed by CPU
simultaneously at different time slots.

B. More than one program resides in the


memory at the same time

C. Multitasking concept also exists

D. All of the above

By:Dr. Prakash SinghTanwar


Q/A
In a multithreaded system
A. A separate register set and stack and,

common data and files.

B. A separate register set and data and,

common stack and files.

C. A separate files and data and,

common stack and set of registers.

D. A separate stack and data and,

common register set and files.


By:Dr. Prakash SinghTanwar
User Threads
Thread management done by user-
level threads library

Three primary thread libraries:


POSIX Pthreads

Win32 threads

Java threads
By:Dr. Prakash SinghTanwar
Kernel Threads
Supported by the Kernel

Examples
Windows XP/2000

Solaris

Linux

Tru64 UNIX

Mac OS X
By:Dr. Prakash SinghTanwar
Multithreading Models
Many-to-One

One-to-One

Many-to-Many

By:Dr. Prakash SinghTanwar


Many-to-One Model

Many user-level threads mapped
to a single kernel thread

Examples:
Solaris Green Threads

GNU Portable Threads

By:Dr. Prakash SinghTanwar


Many-to-One Model

By:Dr. Prakash SinghTanwar


One-to-One Model

One user-level thread maps to
one kernel thread

Examples
Windows NT/XP/2000

Linux

Solaris 9 and later

By:Dr. Prakash SinghTanwar


One-to-One Model

By:Dr. Prakash SinghTanwar


Many-to-Many Model
Allows many user-level threads to be mapped
to many kernel threads

Allows the operating system to create a
sufficient number of kernel threads

Solaris prior to version 9

Windows NT/2000 with the ThreadFiber


package

By:Dr. Prakash SinghTanwar


Many-to-Many Model

By:Dr. Prakash SinghTanwar


Two-level Model
Similar to M:M, except that it allows a user
thread to be

bound to kernel thread

Examples
IRIX

HP-UX

Tru64 UNIX

Solaris 8 and earlier

By:Dr. Prakash SinghTanwar


Two-level Model

By:Dr. Prakash SinghTanwar


Q/A
In Many to One Model
A. Many Kernel threads are mapped with One user
thread

B. Many user threads are mapped with One Kernel


thread

C. One user thread is mapped with One Kernel thread

D. Many user threads are mapped with Many Kernel


thread

By:Dr. Prakash SinghTanwar


Thread Libraries
Thread library provides programmer
with API for creating and managing
threads

Two primary ways of implementing


Library entirely in user space

Kernel-level library supported by the OS

By:Dr. Prakash SinghTanwar


Pthreads
May be provided either as user-level or kernel-
level
A POSIX standard (IEEE 1003.1c) API for thread
creation and synchronization

API specifies the behaviour of the thread library;
the implementation is up to the developers of the library

Common in UNIX operating systems (Solaris, Linux,


Mac OS X)

By:Dr. Prakash SinghTanwar
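
A minimal Pthreads sketch of the API described above: the main thread creates one worker thread, passes it an argument, and joins it. The summing loop is only placeholder work; compile with: gcc prog.c -lpthread

#include <pthread.h>
#include <stdio.h>

void *runner(void *param)            /* the new thread starts execution here */
{
    int n = *(int *)param;
    int sum = 0;
    for (int i = 1; i <= n; i++)     /* sum 1..n just to have some work */
        sum += i;
    printf("sum = %d\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int n = 10;

    pthread_create(&tid, NULL, runner, &n);  /* create the worker thread */
    pthread_join(tid, NULL);                 /* wait for it to finish    */
    return 0;
}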


Java Threads
Java threads are managed by the JVM

Typically implemented using the


threads model provided by underlying
OS

Java threads may be created by


Extending Thread class

Implementing the Runnable interface


By:Dr. Prakash SinghTanwar
Threading Issues
Semantics of fork() and exec() system calls

Thread cancellation of target thread


Asynchronous or deferred

Signal handling

Thread pools

Thread-specific data

Scheduler activations

By:Dr. Prakash SinghTanwar


Threading Issues
Semantics of fork() and exec() system calls
Does fork() duplicate only the calling thread or all
threads?

By:Dr. Prakash SinghTanwar


Threading Issues
Thread cancellation of target thread
Terminating a thread before it has finished

Two general approaches:


• Asynchronous cancellation terminates the target thread
immediately

• Deferred cancellation allows the target thread to periodically check


if it should be cancelled

By:Dr. Prakash SinghTanwar


Threading Issues
Signal handling
Signals are used in UNIX systems to notify a process that a
particular event has occurred

A signal handler is used to process signals


• 1. Signal is generated by particular event

• 2. Signal is delivered to a process

• 3. Signal is handled

Options:
• Deliver the signal to the thread to which the signal applies

• Deliver the signal to every thread in the process

• Deliver the signal to certain threads in the process

• Assign a specific thread to receive all signals for the process

By:Dr. Prakash SinghTanwar


Threading Issues
Thread pools
Create a number of threads in a pool where they await
work

Advantages:
• Usually slightly faster to service a request with an existing thread
than create a new thread

• Allows the number of threads in the application(s) to be bound to


the size of the pool

By:Dr. Prakash SinghTanwar


Threading Issues
Thread Specific Data
Allows each thread to have its own copy of data

Useful when you do not have control over the


thread creation process (i.e., when using a
thread pool)

By:Dr. Prakash SinghTanwar


Threading Issues
Scheduler activations
Both M:M and Two-level models require communication to
maintain the appropriate number of kernel threads allocated to
the application

Scheduler activations provide upcalls - a communication


mechanism from the kernel to the thread library

This communication allows an application to maintain the
correct number of kernel threads

By:Dr. Prakash SinghTanwar


Operating System
Examples
Windows XP Threads

Linux Threads

By:Dr. Prakash SinghTanwar


Windows XP Threads
Implements the one-to-one mapping, kernel-level

Each thread contains


A thread id

Register set

Separate user and kernel stacks

Private data storage area

The register set, stacks, and private storage area are known as
the context of the threads
The primary data structures of a thread include:
ETHREAD (executive thread block)

KTHREAD (kernel thread block)

TEB (thread environment block)

By:Dr. Prakash SinghTanwar


Linux Threads
Linux refers to them as tasks rather than
threads

Thread creation is done through clone()


system call

clone() allows a child task to share the address


space of the parent task (process)

By:Dr. Prakash SinghTanwar


Q/A
Which of the following is 2 level
Model in threading concept
A. One to Many and One to One

B. One to One and Many to Many

C. Many to Many and One to Many

D. Many to One and One to Many

By:Dr. Prakash SinghTanwar


CPU Scheduling

By: Dr. Prakash Singh Tanwar


CPU Burst & I/O Burst

By: Dr. Prakash Singh Tanwar


Efficiency Measures
CPU Utilization
Keeps the CPU as busy as possible

Throughput
number of processes per unit time

CPU Burst time


The actual time req. to complete the process by CPU

Turn around time


The time interval between job submission and job completion by CPU

Waiting time
Total Waiting time in the Ready Queue to be executed by CPU

Response time
The time interval between Job submission and first response time given by CPU

By: Dr. Prakash Singh Tanwar


Scheduling Criteria
CPU utilization – keep the CPU as busy as possible

Throughput – number of processes that complete their


execution per time unit

Turnaround time – amount of time to execute a particular


process

Waiting time – amount of time a process has been waiting in


the ready queue

Response time – amount of time it takes from when a request


was submitted until the first response is produced, not output
(for time-sharing environment)

By: Dr. Prakash Singh Tanwar


Q/A
We want to keep the CPU as busy as possible
it is related to which performance measure
A. CPU utilization

B. Throughput

C. Turnaround time

D. Response time

E. Waiting time

By: Dr. Prakash Singh Tanwar




Q/A
Number of processes per unit of time is
A. CPU utilization

B. Throughput

C. Turnaround time

D. Response time

E. Waiting time

By: Dr. Prakash Singh Tanwar




CPU Scheduling
Pre-emptive Scheduling
The OS can forcibly move a process out of the
running state

Non Pre-emptive Scheduling

The OS cannot forcibly move a process out of the
running state.

By: Dr. Prakash Singh Tanwar


CPU Scheduler
Selects from among the processes in memory that are ready to
execute, and allocates the CPU to one of them

CPU scheduling decisions may take place when a process:

1. Switches from running to waiting state

2. Switches from running to ready state

3. Switches from waiting to ready

4. Terminates

Scheduling under 1 and 4 is non-preemptive

All other scheduling is preemptive – implications for data


sharing between threads/processes

By: Dr. Prakash Singh Tanwar




Dispatcher
Dispatcher module gives control of the CPU to the
process selected by the scheduler; this involves:

switching context

switching to user mode

jumping to the proper location in the user program to


restart that program

Dispatch latency – time it takes for the dispatcher to


stop one process and start another running

By: Dr. Prakash Singh Tanwar


Scheduling Algorithm Optimization
Criteria

Max CPU utilization

Max throughput

Min turnaround time

Min waiting time

Min response time

By: Dr. Prakash Singh Tanwar


Q/A
It is desirable to
maximize the turnaround time and minimize the waiting
time and response time

maximize the waiting time and minimize the turnaround


time and response time

maximize the response time, turnaround time and


waiting time

minimize the waiting time, turnaround time and


response time

By: Dr. Prakash Singh Tanwar




Scheduling Algorithms
First Come First Serve

Shortest Job First

Priority scheduling

Round robin scheduling

Multilevel queue

Multilevel feedback queue.

By: Dr. Prakash Singh Tanwar


FCFS
First Come First Serve
The process comes first in the ready queue will
be served first.

By: Dr. Prakash Singh Tanwar


First-Come, First-Served (FCFS)
Scheduling

Ex1: Suppose that the processes arrive in


the order: P1 , P2 , P3.
Process Burst Time
Arrival time=0
P1 24
P2 3
P3 3

Gantt Chart
P1 P2 P3

0 24 27 30

By: Dr. Prakash Singh Tanwar


First-Come, First-Served (FCFS)
Scheduling

Process Burst Gantt Chart


Time
P1 24 P1 P2 P3
P2 3
P3 3 0 24 27 30

a) Throughput
Throughput = number of processes / total time = 3/30 = 0.10

By: Dr. Prakash Singh Tanwar


First-Come, First-Served (FCFS)
Scheduling

Process Burst Gantt Chart


Time P1 P2 P3
P1 24
P2 3 0 24 27 30
P3 3

b)Turnaround Time
Turnaround time for P1= 24
Turnaround time for P2= 27
Turnaround time for P3= 30
Average Turnaround time = (24+27+30)/3 = 81/3 = 27

By: Dr. Prakash Singh Tanwar


First-Come, First-Served (FCFS)
Scheduling

Process Burst Gantt Chart


Time P1 P2 P3
P1 24
P2 3 0 24 27 30
P3 3

c)Waiting Time
Waiting time for P1= 0
Waiting time for P2= 24
Waiting time for P3= 27
Average Waiting time = (0+24+27)/3 = 51/3 = 17

By: Dr. Prakash Singh Tanwar


First-Come, First-Served (FCFS)
Scheduling

Process Burst Gantt Chart


Time P1 P2 P3
P1 24
P2 3 0 24 27 30
P3 3

d)Response Time
Response time for P1= 0
Response time for P2= 24
Response time for P3= 27

Average Response time = (0+24+27)/3 = 51/3 = 17

By: Dr. Prakash Singh Tanwar
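
The following is a small C sketch that reproduces the FCFS figures above (burst times 24, 3, 3, all arriving at time 0): it computes waiting, turnaround and response time for each process and the averages.

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};                 /* P1, P2, P3 */
    int n = 3, clock = 0;
    double tat_sum = 0, wait_sum = 0;

    for (int i = 0; i < n; i++) {
        int waiting    = clock;               /* time spent in the ready queue  */
        int turnaround = clock + burst[i];    /* completion time (arrival is 0) */
        /* with FCFS, response time equals waiting time */
        printf("P%d: waiting=%2d turnaround=%2d response=%2d\n",
               i + 1, waiting, turnaround, waiting);
        wait_sum += waiting;
        tat_sum  += turnaround;
        clock    += burst[i];                 /* next process starts when this one ends */
    }
    printf("Average waiting    = %.2f\n", wait_sum / n);   /* 17.00 */
    printf("Average turnaround = %.2f\n", tat_sum / n);    /* 27.00 */
    return 0;
}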


First-Come, First-Served (FCFS)
Scheduling

Ex2:
Suppose that the processes arrive in the
order: P2 , P3 , P1.
Process Burst Time
P1 24 Arrival time=0
P2 3
P3 3

P2 P3 P1

0 3 6 30
By: Dr. Prakash Singh Tanwar
First-Come, First-Served (FCFS)
Scheduling

Process Burst Gantt Chart


Time P2 P3 P1
P1 24
P2 3 0 3 6 30
P3 3
Turnaround Time
Turnaround time for P1 =30-0=30

Turnaround time for P2 =3-0=3

Turnaround time for P3 =6-0=6

Average T.a.t.=(30+3+6)/3=39/3=13
By: Dr. Prakash Singh Tanwar
First-Come, First-Served (FCFS)
Scheduling

Process Burst
Gantt Chart
Time P2 P3 P1
P1 24
P2 3 0 3 6 30
P3 3 Waiting Time
Waiting time for P1 =6

Waiting time for P2 =0

Waiting time for P3 =3

Average waiting time

=(6+0+3)/3= 9/3 = 3 unit time


By: Dr. Prakash Singh Tanwar
First-Come, First-Served (FCFS)
Scheduling

Process Burst
Gantt Chart
Time P2 P3 P1
P1 24
P2 3 0 3 6 30
P3 3 Response Time
Response time for P1 =6

Response time for P2 =0

Response time for P3 =3

Average Response time=(6+0+3)/3=9/3

=3 unit time
By: Dr. Prakash Singh Tanwar
Shortest-Job-First (SJF)
Scheduling
Associate with each process the length of its
next CPU burst. Use these lengths to schedule
the process with the shortest time.

SJF is optimal – gives minimum average


waiting time for a given set of processes

The difficulty is knowing the length of the next


CPU request.

By: Dr. Prakash Singh Tanwar
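
A sketch of non-preemptive SJF in C for jobs that all arrive at time 0: sort by burst time, then run the jobs in that order while accumulating waiting and turnaround times. The burst times 6, 8, 7, 3 match the example that follows.

#include <stdio.h>

struct job { int id; int burst; };

int main(void)
{
    struct job jobs[] = {{1, 6}, {2, 8}, {3, 7}, {4, 3}};
    int n = 4;

    /* simple selection sort on burst time (shortest first) */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (jobs[j].burst < jobs[i].burst) {
                struct job t = jobs[i]; jobs[i] = jobs[j]; jobs[j] = t;
            }

    int clock = 0;
    double wait_sum = 0, tat_sum = 0;
    for (int i = 0; i < n; i++) {
        printf("P%d: waiting=%2d turnaround=%2d\n",
               jobs[i].id, clock, clock + jobs[i].burst);
        wait_sum += clock;
        tat_sum  += clock + jobs[i].burst;
        clock    += jobs[i].burst;
    }
    printf("Average waiting = %.2f, average turnaround = %.2f\n",
           wait_sum / n, tat_sum / n);     /* 7.00 and 13.00 */
    return 0;
}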


Example of SJF
SJF scheduling chart
Process  Burst Time  Arrival Time
P1       6           0
P2       8           0
P3       7           0
P4       3           0
Gantt Chart

P4 P1 P3 P2

0 3 9 16 24

By: Dr. Prakash Singh Tanwar


Example of SJF
(Non-preemptive)

Process  Burst Time  Arrival Time
P1       6           0
P2       8           0
P3       7           0
P4       3           0

Gantt Chart
P4 | P1 | P3 | P2
0    3    9    16    24

a). Throughput = 4/24 = 0.16
b). Turnaround Time: P1 = 9, P2 = 24, P3 = 16, P4 = 3
    Average Turnaround time = (9+24+16+3)/4 = 52/4 = 13
c). Waiting Time: P1 = 3, P2 = 16, P3 = 9, P4 = 0
    Average Waiting time = (3+16+9+0)/4 = 28/4 = 7
d). Response Time: P1 = 3, P2 = 16, P3 = 9, P4 = 0
    Average Response time = (3+16+9+0)/4 = 28/4 = 7

By: Dr. Prakash Singh Tanwar


SJF
P4 P1 P3 P2

0 3 9 16 24
Turnaround Time
=Job Completion Time - Job Submission Time

Turnaround time for P1 =9-0=9

Turnaround time for P2 =24-0=24

Turnaround time for P3 =16-0=16

Turnaround time for P4 =3-0=3

Average T.a.t.=(9+24+16+3)/4=13 unit time

By: Dr. Prakash Singh Tanwar


Q/A
Turnaround time of P2 (SJF)
3
9
16
24

(Process / Burst Time / Arrival Time: P1 6 0, P2 8 0, P3 7 0, P4 3 0)

By: Dr. Prakash Singh Tanwar


Q/A
Turnaround time of P3 (SJF)
A. 3
B. 9
C. 16
D. 24

(Process / Burst Time / Arrival Time: P1 6 0, P2 8 0, P3 7 0, P4 3 0)

By: Dr. Prakash Singh Tanwar


Q/A
Turnaround time of P4 (SJF)
A. 3
B. 9
C. 16
D. 24

(Process / Burst Time / Arrival Time: P1 6 0, P2 8 0, P3 7 0, P4 3 0)

By: Dr. Prakash Singh Tanwar


SJF (Pre-emptive)
Shortest Remaining Time First
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Remaining burst times as new processes arrive:
At 0: P1=8 | At 1: P1=7, P2=4 | At 2: P1=7, P2=3, P3=9 | At 3: P1=7, P2=2, P3=9, P4=5
At 5: P1=7, P3=9, P4=5 | At 10: P1=7, P3=9 | At 17: P3=9

Gantt Chart (time marks at 0, 1, 2, 3, 5, 10, 17, 26)
P1: 0-1 | P2: 1-5 | P4: 5-10 | P1: 10-17 | P3: 17-26
By: Dr. Prakash Singh Tanwar
SJF Pre-emptive
P1 P2 P4 P1 P3

0 1 2 3 5 10 17 26

Turnaround Time(Completion time-Arrival time)


Turnaround time for P1 =17-0=17 Process Arrival Burst
Time Time
Turnaround time for P2 =5-1=4 P1 0 8
P2 1 4
Turnaround time for P3 =26-2=24 P3 2 9
P4 3 5
Turnaround time for P4 =10-3=7

Average T.a.t.=(17+4+24+7)/4=52/4=13 unit time

By: Dr. Prakash Singh Tanwar


SJF Pre-emptive
P1 P2 P4 P1 P3

0 1 2 3 5 10 17 26
Process Arrival Burst
Time Time

b). Turnaround Time P1 0 8


P2 1 4
Turnaround time for P1= 17-0=17
P3 2 9
Turnaround time for P2= 5-1=4 P4 3 5
Turnaround time for P3= 26-2=24
Turnaround time for P4= 10-3=7

Average Turnaround time = (17+4+24+7)/4 = 52/4 = 13

By: Dr. Prakash Singh Tanwar


SJF Pre-emptive
P1 P2 P4 P1 P3

0 1 2 3 5 10 17 26
Process Arrival Burst
Time Time

c). Waiting Time P1 0 8


P2 1 4
Waiting time for P1= (0-0)+(10-1)=9
P2 arrived at 1 and P3 2 9
started at 1 Waiting time for P2= (1-1)=0 P4 3 5
Waiting time for P3= 17-2=15
Waiting time for P4= (5-3)=2

Average Waiting time = (9+0+15+2)/4 = 26/4 = 6.5

By: Dr. Prakash Singh Tanwar


SJF Pre-emptive
P1 P2 P4 P1 P3

0 1 2 3 5 10 17 26
Process Arrival Burst
Time Time

d). Response Time P1 0 8


P2 1 4
Response time for P1= (0-0)=0
P2 arrived at 1 and P3 2 9
started at 1 Response time for P2= (1-1)=0 P4 3 5
Response time for P3= 17-2=15
Response time for P4= (5-3)=2

Average Response time = (0+0+15+2)/4 = 17/4 = 4.25

By: Dr. Prakash Singh Tanwar


Pre-emptive SJF

Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

Gantt Chart (time marks at 0, 1, 2, 3, 5, 10, 17, 26)
P1: 0-1 | P2: 1-5 | P4: 5-10 | P1: 10-17 | P3: 17-26

a). Throughput = 4/26 = 0.15
b). Turnaround Time: P1 = 17, P2 = 4, P3 = 24, P4 = 7
    Average Turnaround time = (17+4+24+7)/4 = 52/4 = 13
c). Waiting Time: P1 = (0-0)+(10-1) = 9, P2 = (1-1) = 0, P3 = (17-2) = 15, P4 = (5-3) = 2
    Average Waiting time = (9+0+15+2)/4 = 26/4 = 6.5
d). Response Time: P1 = 0, P2 = 0, P3 = 15, P4 = 2
    Average Response time = (0+0+15+2)/4 = 17/4 = 4.25

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P2 in pre-emptive
P1 P2 P4 P1 P3
SJF? 0 1 2 3 5 10 17 26

A. 0
Process Arrival Burst
Time Time

B. 1 P1 0 8
P2 1 4
P3 2 9
C. 2 P4 3 5

D. 3

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P3 in pre-emptive
P1 P2 P4 P1 P3
SJF? 0 1 2 3 5 10 17 26

A. 17
Process Arrival Burst
Time Time

B. 10 P1 0 8
P2 1 4
P3 2 9
C. 15 P4 3 5

D. 26

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P4 in pre-emptive
P1 P2 P4 P1 P3
SJF? 0 1 2 3 5 10 17 26

A. 5
Process Arrival Burst
Time Time

B. 2 P1 0 8
P2 1 4
P3 2 9
C. 3 P4 3 5

D. 6

By: Dr. Prakash Singh Tanwar


Q/A
Response time for P1 pre-emptive
P1 P2 P4 P1 P3
SJF 0 1 2 3 5 10 17 26

A. 0
Process Arrival Burst
Time Time

B. 1 P1 0 8
P2 1 4
P3 2 9
C. 2 P4 3 5

D. 10

By: Dr. Prakash Singh Tanwar


Priority Scheduling
A priority number (integer) is associated with each process

The CPU is allocated to the process with the highest priority (smallest
integer = highest priority)

Preemptive

Non-preemptive

Note that SJF is a priority scheduling where priority is the predicted


next CPU burst time

Problem: Starvation – low-priority processes may never execute

Solution: Aging – as time progresses, increase the priority of the
process (see the sketch below)

By: Dr. Prakash Singh Tanwar
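
A small C sketch of the aging idea mentioned above: every time unit a process waits in the ready queue its priority number is lowered (smaller number = higher priority), so a starving low-priority process eventually gets the CPU. The decrement rate (one step per 10 waited ticks) and the starting priorities are arbitrary choices for illustration.

#include <stdio.h>

struct proc { int id; int priority; int waited; };

static void age(struct proc *ready, int n)
{
    for (int i = 0; i < n; i++) {
        ready[i].waited++;
        if (ready[i].waited % 10 == 0 && ready[i].priority > 0)
            ready[i].priority--;          /* raise priority of long waiters */
    }
}

int main(void)
{
    struct proc ready[] = {{1, 3, 0}, {2, 127, 0}};   /* P2 starts with very low priority */
    for (int t = 0; t < 1000; t++)
        age(ready, 2);
    printf("after 1000 ticks: P1 priority=%d, P2 priority=%d\n",
           ready[0].priority, ready[1].priority);
    return 0;
}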


Priority Scheduling
Ex Process Burst Priority
Time

P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Gantt Chart
P2 P5 P1 P3 P4

0 1 6 16 18 19

By: Dr. Prakash Singh Tanwar


Priority Scheduling

Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

Gantt Chart
P2 | P5 | P1 | P3 | P4
0    1    6    16   18   19

a). Throughput = 5/19 = 0.26
b). Turnaround Time: P1 = 16, P2 = 1, P3 = 18, P4 = 19, P5 = 6
    Average Turnaround time = (16+1+18+19+6)/5 = 60/5 = 12
c). Waiting Time: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
    Average Waiting time = (6+0+16+18+1)/5 = 41/5 = 8.2
d). Response Time: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
    Average Response time = (6+0+16+18+1)/5 = 41/5 = 8.2
By: Dr. Prakash Singh Tanwar
Q/A
Waiting time for P1 (according to
Priority Scheduling) Process Burst Priority
Time
P1 10 3
A. 0
P2 1 1
P3 2 4
B. 1
P4 1 5
P5 5 2
C. 6

D. 3 P2 P5 P1 P3 P4

0 1 6 16 18 19

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P2 (according to
Priority Scheduling) Process Burst Priority
Time
P1 10 3
A. 0
P2 1 1
P3 2 4
B. 1
P4 1 5
P5 5 2
C. 3

D. 19 P2 P5 P1 P3 P4

0 1 6 16 18 19

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P3 (according to
Priority Scheduling) Process Burst Priority
Time
P1 10 3
A. 16
P2 1 1
P3 2 4
B. 2
P4 1 5
P5 5 2
C. 15

D. 4 P2 P5 P1 P3 P4

0 1 6 16 18 19

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P4 (according to
Priority Scheduling) Process Burst Priority
Time
A. 5 P1 10 3
P2 1 1
B. 18 P3 2 4
P4 1 5
C. 19 P5 5 2

D. 4 P2 P5 P1 P3 P4

0 1 6 16 18 19

By: Dr. Prakash Singh Tanwar


Q/A
Waiting time for P5 (according to
Priority Scheduling) Process Burst Priority
Time
A. 0 P1 10 3
P2 1 1
B. 1 P3 2 4
P4 1 5
C. 6 P5 5 2

D. 14 P2 P5 P1 P3 P4

0 1 6 16 18 19

By: Dr. Prakash Singh Tanwar


Priority Scheduling
P2 P5 P1 P3 P4

0 1 6 16 18 19
Turnaround Time
Process Burst Priority
Turnaround time for P1 =(16-0)=16 Time
P1 10 3
Turnaround time for P2 =(1-0)=1
P2 1 1
Turnaround time for P3 =(18-0)=18 P3 2 4
P4 1 5
Turnaround time for P4 =(19-0)=19
P5 5 2
Turnaround time for P5 =(6-0)=6

Average Turnaround =
=(16+1+18+19+6)/5=60/5=12 unit time
By: Dr. Prakash Singh Tanwar
Priority Scheduling
P2 P5 P1 P3 P4

0 1 6 16 18 19
Waiting Time
Process Burst Priority
Waiting time for P1 =(6-0)=6 Time
P1 10 3
Waiting time for P2 =(0-0)=0
P2 1 1
Waiting time for P3 =(16-0)=16 P3 2 4
P4 1 5
Waiting time for P4 =(18-0)=18
P5 5 2
Waiting time for P5= =(1-0)=1

Average waiting.t= =(6+0+16+18+1)/5=41/5=8.2 unit time

By: Dr. Prakash Singh Tanwar


Priority Scheduling
P2 P5 P1 P3 P4

0 1 6 16 18 19
Response Time
Process Burst Priority
Response time for P1 =(6-0)=6 Time
P1 10 3
Response time for P2 =(0-0)=0
P2 1 1
Response time for P3 =(16-0)=16 P3 2 4
P4 1 5
Response time for P4 =(18-0)=18 P5 5 2
Response time for P5 =(1-0)=1
Average Response time=(6+0+16+18+1)/5
=41/5=8.2 unit time

By: Dr. Prakash Singh Tanwar


Round Robin (RR)
Each process gets a small unit of CPU time
time quantum or time slice

Time slice=10-100 milliseconds.

After this time has elapsed, the process is


preempted and added to the end of the ready
queue.

By: Dr. Prakash Singh Tanwar


Round Robin (RR)
We can predict wait time:
If there are n processes in the ready queue and the time quantum is q, then each
process gets 1/n of the CPU time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.

Performance

q (time slice) large → behaves like FCFS

q (time slice) small → overhead is too high

In this scheduling context switches occur, so q must be sufficiently large
with respect to the context-switch time, otherwise the overhead is too high

By: Dr. Prakash Singh Tanwar


Example of RR
Time Quantum = 4 Process Burst Time
P1 24
Gantt Chart P2 3
P3 3

P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


Q/A
What is waiting time for P1?
0 Process Burst Time
P1 24
6 P2 3
30 P3 3

10
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


Q/A
What is waiting time for P2?
0 Process Burst Time
P1 24
6 P2 3
4 P3 3

7
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


Q/A
What is waiting time for P3?
0 Process Burst Time
P1 24
6 P2 3
4 P3 3

7
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


Q/A
What is turn around time for P1?
0 Process Burst Time
P1 24
6 P2 3
30 P3 3

10
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


Q/A
What is turn around time for P2?
0 Process Burst Time
P1 24
6 P2 3
7 P3 3

4
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


Q/A
What is turn around time for P3?
0 Process Burst Time
P1 24
6 P2 3
7 P3 3

10
P1 P2 P3 P1 P1 P1 P1 P1

0 4 7 10 14 18 22 26 30

By: Dr. Prakash Singh Tanwar


RR Scheduling
Process Burst Time
P1 P2 P3 P1 P1 P1 P1 P1
P1 24
0 4 7 10 14 18 22 26 30
P2 3
Waiting Time
P3 3
Waiting time for P1 =(10-4)=6

Waiting time for P2 =(4-0)=4


Waiting time for P3 =(7-0)=7

Average waiting.t=
=(6+4+7)/3=17/3=5.66 unit time

By: Dr. Prakash Singh Tanwar


RR Scheduling
Process Burst Time
P1 P2 P3 P1 P1 P1 P1 P1 P1 24
0 4 7 10 14 18 22 26 30 P2 3
Turn around Time P3 3
Turn around time for P1 =(30-0)=30

Turn around time for P2 =(7-0)=7


Turn around time for P3 =(10-0)=10

Average Turn around =


=(30+7+10)/3=47/3=15.66 unit time

By: Dr. Prakash Singh Tanwar


RR Scheduling
Process Burst Time
P1 P2 P3 P1 P1 P1 P1 P1 P1 24
0 4 7 10 14 18 22 26 30 P2 3
Response Time P3 3
Response time for P1 =(0-0)=0

Response time for P2 =(4-0)=4


Response time for P3 =(7-0)=7

Average Response time=


=(0+4+7)/3=11/3=3.66 unit time

By: Dr. Prakash Singh Tanwar


RR Scheduling (Time Quantum = 4)

Process  Burst Time
P1       24
P2       3
P3       3

Gantt Chart
P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30

a). Throughput = 3/30 = 0.1
b). Turnaround Time: P1 = 30, P2 = 7, P3 = 10
    Average Turnaround time = (30+7+10)/3 = 47/3 = 15.66
c). Waiting Time: P1 = (10-4) = 6, P2 = 4, P3 = 7
    Average Waiting time = (6+4+7)/3 = 17/3 = 5.66
d). Response Time: P1 = 0, P2 = 4, P3 = 7
    Average Response time = (0+4+7)/3 = 11/3 = 3.66

By: Dr. Prakash Singh Tanwar


Assignment of RR
Example Process Burst Time
P1 24
Time Quantum = 1 P2 3
P3 3
Gantt Chart

P1 P2 P3 P1 P2 P3 P1 P2 P3 P1 P1 P1 P1 P1 P1 P1

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1 P1

16 17 18 19 20 21 22 23 24 25 26 27 28 29 30

By: Dr. Prakash Singh Tanwar


Assignment
Find the average turnaround time, average waiting
time, and average response time for RR
scheduling (for quantum = 1, 2 and 3); a simulation sketch follows.

Process Burst Time


P1 24
P2 3
P3 3

By: Dr. Prakash Singh Tanwar
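
A C sketch that simulates round robin for the assignment above (burst times 24, 3, 3, all arriving at time 0). Change QUANTUM to 1, 2 or 3 to get the three requested cases. Cycling over the array in fixed order behaves like a true RR ready queue only because every job is present from time 0.

#include <stdio.h>

#define QUANTUM 1

int main(void)
{
    int burst[3]     = {24, 3, 3};      /* P1, P2, P3 */
    int remaining[3] = {24, 3, 3};
    int finish[3]    = {0, 0, 0};
    int first_run[3] = {-1, -1, -1};    /* when each process first got the CPU */
    int n = 3, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {          /* one round of the ready queue */
            if (remaining[i] == 0) continue;
            if (first_run[i] < 0) first_run[i] = clock;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            clock += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) { finish[i] = clock; done++; }
        }
    }
    double tat = 0, wt = 0, rt = 0;
    for (int i = 0; i < n; i++) {              /* arrival time is 0 for all jobs */
        tat += finish[i];
        wt  += finish[i] - burst[i];
        rt  += first_run[i];
    }
    printf("quantum=%d: avg turnaround=%.2f avg waiting=%.2f avg response=%.2f\n",
           QUANTUM, tat / n, wt / n, rt / n);
    return 0;
}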


Time Quantum and Context Switch
Time

By: Dr. Prakash Singh Tanwar


Turnaround Time Varies With
The Time Quantum

By: Dr. Prakash Singh Tanwar


Multilevel Queue
In Multilevel Queue Scheduling multiple queues are created
from ready queue.

Ready queue is partitioned into separate queues: for example

foreground (interactive)

background (batch)

Each queue has its own scheduling algorithm:

foreground – RR

background – FCFS

By: Dr. Prakash Singh Tanwar


Multilevel Queue
Scheduling must be done between the queues:

Fixed priority scheduling; (i.e., serve all from


foreground then from background). Possibility
of starvation.

Time slice – each queue gets a certain amount


of CPU time which it can schedule amongst its
processes; i.e., 80% to foreground in RR

20% to background in FCFS


By: Dr. Prakash Singh Tanwar
Multilevel Queue
Scheduling

By: Dr. Prakash Singh Tanwar


Multilevel Feedback
Queue
A process can move between the various queues; aging
can be implemented this way.

Multilevel-feedback-queue scheduler defined by the


following parameters:
number of queues

scheduling algorithms for each queue

method used to determine when to upgrade a process

method used to determine when to demote a process

method used to determine which queue a process will enter


when that process needs service
By: Dr. Prakash Singh Tanwar
Example of Multilevel
Feedback Queue
Three queues:

Q0 – RR with time quantum 8 milliseconds

Q1 – RR time quantum 16 milliseconds

Q2 – FCFS

Scheduling

A new job enters queue Q0 which is served FCFS. When it gains CPU,
job receives 8 milliseconds. If it does not finish in 8 milliseconds, job is
moved to queue Q1.

At Q1 job is again served FCFS and receives 16 additional milliseconds.


If it still does not complete, it is preempted and moved to queue Q2.

By: Dr. Prakash Singh Tanwar


Multilevel Feedback
Queues

By: Dr. Prakash Singh Tanwar


Thread Scheduling
Distinction between user-level and kernel-level threads

Many-to-one and many-to-many models, thread library


schedules user-level threads to run on LWP
Known as process-contention scope (PCS) since scheduling
competition is within the process

Kernel thread scheduled onto available CPU is system-


contention scope (SCS) – competition among all threads
in system

By: Dr. Prakash Singh Tanwar


Pthread Scheduling
API allows specifying either PCS or SCS
during thread creation
PTHREAD_SCOPE_PROCESS schedules
threads using PCS scheduling

PTHREAD_SCOPE_SYSTEM schedules
threads using SCS scheduling.

By: Dr. Prakash Singh Tanwar


Multiple-Processor
Scheduling
CPU scheduling more complex when multiple CPUs are available

Homogeneous processors within a multiprocessor

Asymmetric multiprocessing – only one processor accesses the


system data structures, alleviating the need for data sharing

Symmetric multiprocessing (SMP) – each processor is self-scheduling,


all processes in common ready queue, or each has its own private
queue of ready processes

Processor affinity – process has affinity for processor on which it is


currently running
soft affinity

hard affinity

By: Dr. Prakash Singh Tanwar


NUMA and CPU
Scheduling
Non-Uniform Memory Access

By: Dr. Prakash Singh Tanwar


Multicore Processors
Recent trend to place multiple
processor cores on same physical chip

Faster and consume less power

Multiple threads per core also growing


Takes advantage of memory stall to make
progress on another thread while memory
retrieve happens
By: Dr. Prakash Singh Tanwar
Multithreaded Multicore
System

By: Dr. Prakash Singh Tanwar


Operating System
Examples

Solaris scheduling

Windows XP scheduling

Linux scheduling

By: Dr. Prakash Singh Tanwar


Solaris Dispatch Table

By: Dr. Prakash Singh Tanwar


Solaris Scheduling

By: Dr. Prakash Singh Tanwar


Windows XP Priorities

By: Dr. Prakash Singh Tanwar


Linux Scheduling
Constant order O(1) scheduling time
Two priority ranges:
time-sharing and real-time
Real-time : range from 0 to 99 and
Other tasks(time sharing) from 100 to 140

By: Dr. Prakash Singh Tanwar


List of Tasks Indexed
According to Priorities

By: Dr. Prakash Singh Tanwar


Algorithm Evaluation
Deterministic modeling – takes a
particular predetermined workload
and defines the performance of
each algorithm for that workload
Queuing models

Implementation
By: Dr. Prakash Singh Tanwar
Evaluation of CPU Schedulers
by Simulation

By: Dr. Prakash Singh Tanwar


Q/A
With round robin scheduling algorithm in a time
shared system ____________
using very large time slices converts it into First come
First served scheduling algorithm

using very small time slices converts it into First come


First served scheduling algorithm

using extremely small time slices increases performance

using very small time slices converts it into Shortest Job


First algorithm
By: Dr. Prakash Singh Tanwar
Chapter 6: Process
Synchronization

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: Process Synchronization

 Background
 The Critical-Section Problem
 Peterson’s Solution
 Synchronization Hardware
 Mutex Locks
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
 Alternative Approaches

Operating System Concepts – 9th Edition 5.2 Silberschatz, Galvin and Gagne ©2013
Objectives
 To introduce the critical-section problem, whose solutions can be used to
ensure the consistency of shared data

 To present both software and hardware solutions of the critical-section problem

 To examine several classical process-synchronization problems

 To explore several tools that are used to solve process synchronization


problems

Operating System Concepts – 9th Edition 5.3 Silberschatz, Galvin and Gagne ©2013
Background

 Processes can execute concurrently


 May be interrupted at any time, partially completing execution

 Concurrent access to shared data may result in data inconsistency

 Maintaining data consistency requires mechanisms to ensure the orderly


execution of cooperating processes

 Illustration of the problem:


Suppose that we wanted to provide a solution to the consumer-producer
problem that fills all the buffers. We can do so by having an integer counter
that keeps track of the number of full buffers. Initially, counter is set to 0. It
is incremented by the producer after it produces a new buffer and is
decremented by the consumer after it consumes a buffer.

Operating System Concepts – 9th Edition 5.4 Silberschatz, Galvin and Gagne ©2013
Producer

while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Operating System Concepts – 9th Edition 5.5 Silberschatz, Galvin and Gagne ©2013
Consumer
while (true) {
    while (counter == 0)
        ;  /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}

Operating System Concepts – 9th Edition 5.6 Silberschatz, Galvin and Gagne ©2013
Race Condition
 counter++ could be implemented as

register1 = counter
register1 = register1 + 1
counter = register1

 counter-- could be implemented as

register2 = counter
register2 = register2 - 1
counter = register2

 Consider this execution interleaving with “counter = 5” initially:


S0: producer execute register1 = counter {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = counter {register2 = 5}
S3: consumer execute register2 = register2 – 1 {register2 = 4}
S4: producer execute counter = register1 {counter = 6 }
S5: consumer execute counter = register2 {counter = 4}

Operating System Concepts – 9th Edition 5.7 Silberschatz, Galvin and Gagne ©2013
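A minimal POSIX-threads sketch (not from the slides) that reproduces the lost-update interleaving above; the thread names, iteration count, and printf are illustrative:

/* Hypothetical demo: two threads updating a shared counter with no
 * synchronization, showing the race condition described above. */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int counter = 0;                  /* shared data, unprotected */

void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                /* load, add 1, store: not atomic */
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;                /* load, subtract 1, store: not atomic */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    /* Expected 0, but interleavings like S0..S5 above usually leave
     * some other value. */
    printf("counter = %d\n", counter);
    return 0;
}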
Critical Section Problem
 Consider system of n processes {p0, p1, … pn-1}

 Each process has critical section segment of code


 Process may be changing common variables, updating table, writing file, etc
 When one process in critical section, no other may be in its critical section

 Critical section problem is to design protocol to solve this

 Each process must ask permission to enter critical section in entry section, may
follow critical section with exit section, then remainder section

Operating System Concepts – 9th Edition 5.8 Silberschatz, Galvin and Gagne ©2013
Critical Section
 General structure of process pi is

Operating System Concepts – 9th Edition 5.9 Silberschatz, Galvin and Gagne ©2013
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can
be executing in their critical sections

2. Progress - If no process is executing in its critical section and there exist some processes that
wish to enter their critical section, then the selection of the processes that will enter the critical
section next cannot be postponed indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes

 Two approaches depending on if kernel is preemptive or non-preemptive


 Preemptive – allows preemption of process when running in kernel mode
 Non-preemptive – runs until exits kernel mode, blocks, or voluntarily yields CPU
Essentially free of race conditions in kernel mode

Operating System Concepts – 9th Edition 5.10 Silberschatz, Galvin and Gagne ©2013
Peterson’s Solution
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store instructions are atomic; that is, cannot be
interrupted

 The two processes share two variables:


 int turn;
 Boolean flag[2]

 The variable turn indicates whose turn it is to enter the critical section

 The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!

Operating System Concepts – 9th Edition 5.11 Silberschatz, Galvin and Gagne ©2013
Algorithm for Process Pi
do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j);
critical section
flag[i] = false;
remainder section
} while (true);

 Provable that
1. Mutual exclusion is preserved
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met

Operating System Concepts – 9th Edition 5.12 Silberschatz, Galvin and Gagne ©2013
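A two-thread C sketch of the algorithm above, keeping the slide's flag/turn names. It assumes loads and stores are atomic and sequentially consistent, as the slide does; on modern hardware a real implementation would need atomics or memory barriers:

/* Sketch of Peterson's algorithm for two threads with ids 0 and 1. */
#include <stdbool.h>

volatile bool flag[2] = { false, false };
volatile int turn = 0;

void enter_section(int i) {
    int j = 1 - i;            /* the other process */
    flag[i] = true;           /* I am ready */
    turn = j;                 /* but let the other one go first */
    while (flag[j] && turn == j)
        ;                     /* busy wait */
}

void leave_section(int i) {
    flag[i] = false;          /* exit section */
}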
Synchronization Hardware
 Many systems provide hardware support for critical section code

 All solutions below based on idea of locking


 Protecting critical regions via locks

 Uniprocessors – could disable interrupts


 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
 Operating systems using this not broadly scalable

 Modern machines provide special atomic hardware instructions


 Atomic = non-interruptible
 Either test memory word and set value
 Or swap contents of two memory words

Operating System Concepts – 9th Edition 5.13 Silberschatz, Galvin and Gagne ©2013
Solution to Critical-section Problem Using Locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

Operating System Concepts – 9th Edition 5.14 Silberschatz, Galvin and Gagne ©2013
test_and_set Instruction

 Definition:

boolean test_and_set(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Operating System Concepts – 9th Edition 5.15 Silberschatz, Galvin and Gagne ©2013
Solution using test_and_set()

 Shared boolean variable lock, initialized to FALSE


 Solution:

do {
    while (test_and_set(&lock))
        ; /* do nothing */

    /* critical section */

    lock = false;

    /* remainder section */
} while (true);

Operating System Concepts – 9th Edition 5.16 Silberschatz, Galvin and Gagne ©2013
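C11's atomic_flag_test_and_set() behaves like the test_and_set() instruction above, so the same spinlock can be written portably; a hedged sketch (the lock variable name mirrors the slide):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* initially clear ("false") */

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;  /* do nothing: old value was true, lock is held */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);          /* lock = false */
}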
compare_and_swap Instruction

 Definition:

int compare_and_swap(int *value, int expected, int new_value)
{
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

Operating System Concepts – 9th Edition 5.17 Silberschatz, Galvin and Gagne ©2013
Solution using compare_and_swap
 Shared Boolean variable lock initialized to FALSE; Each process has a local
Boolean variable key
 Solution:

do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */

    /* critical section */

    lock = 0;

    /* remainder section */
} while (true);

Operating System Concepts – 9th Edition 5.18 Silberschatz, Galvin and Gagne ©2013
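C11 also exposes compare-and-swap as atomic_compare_exchange_strong(); it takes the expected value by pointer and reports success or failure rather than returning the old value, but the lock built from it is equivalent to the slide's. A sketch:

#include <stdatomic.h>

atomic_int lock = 0;                   /* 0 = free, 1 = held */

void cas_lock(void) {
    int expected = 0;
    /* Try to change lock from 0 to 1; on failure 'expected' is
     * overwritten with the current value, so reset it and retry. */
    while (!atomic_compare_exchange_strong(&lock, &expected, 1))
        expected = 0;                  /* busy wait */
}

void cas_unlock(void) {
    atomic_store(&lock, 0);
}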
Mutex Locks
 Previous solutions are complicated and generally inaccessible to application
programmers
 OS designers build software tools to solve critical section problem
 Simplest is mutex lock
 Protect critical regions with it: first acquire() the lock, then release() it
 Boolean variable indicating if lock is available or not

 Calls to acquire() and release() must be atomic


 Usually implemented via hardware atomic instructions

 But this solution requires busy waiting


 This lock therefore called a spinlock

Operating System Concepts – 9th Edition 5.19 Silberschatz, Galvin and Gagne ©2013
acquire() and release()
acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

release() {
    available = true;
}

do {
acquire lock
critical section
release lock
remainder section
} while (true);

Operating System Concepts – 9th Edition 5.20 Silberschatz, Galvin and Gagne ©2013
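In application code the acquire()/release() pair above is normally provided by the threading library; a minimal POSIX sketch (the worker function is illustrative):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void worker(void) {
    pthread_mutex_lock(&m);        /* acquire lock */
    /* critical section */
    pthread_mutex_unlock(&m);      /* release lock */
    /* remainder section */
}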
Semaphore
 Synchronization tool that does not require busy waiting
 Semaphore S – integer variable
 Two standard operations modify S: wait() and signal()
 Originally called P() and V()
 Can only be accessed via two indivisible (atomic) operations
 Original definitions of wait() and signal() proposed by Dijkstra
 Busy waiting version

wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}

signal(S) {
    S++;
}

Operating System Concepts – 9th Edition 5.21 Silberschatz, Galvin and Gagne ©2013
Semaphore Usage
 Counting semaphore – integer value can range over an unrestricted domain
 Binary semaphore – integer value can range only between 0 and 1
 Same as a mutex lock
 Can implement a counting semaphore S as a binary semaphore
 Can solve various synchronization problems
 Consider P1 and P2 that require S1 to happen before S2
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;

Operating System Concepts – 9th Edition 5.22 Silberschatz, Galvin and Gagne ©2013
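The S1-before-S2 ordering above maps directly onto POSIX unnamed semaphores; a minimal sketch, with synch initialized to 0 and the statements replaced by illustrative prints:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t synch;                      /* initialized to 0 in main() */

void *p1(void *arg) {
    printf("S1\n");               /* statement S1 */
    sem_post(&synch);             /* signal(synch) */
    return NULL;
}

void *p2(void *arg) {
    sem_wait(&synch);             /* wait(synch): blocks until S1 is done */
    printf("S2\n");               /* statement S2 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);       /* value 0: S2 must wait */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}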
Semaphore Implementation
 Must guarantee that no two processes can execute wait () and signal ()
on the same semaphore at the same time

 Thus, implementation becomes the critical section problem where the wait and
signal code are placed in the critical section
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied

 Note that applications may spend lots of time in critical sections and therefore
this is not a good solution

Operating System Concepts – 9th Edition 5.23 Silberschatz, Galvin and Gagne ©2013
Semaphore Implementation
with no Busy waiting

 With each semaphore there is an associated waiting queue


 Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list

 Two operations:
 block – place the process invoking the operation on the appropriate
waiting queue
 wakeup – remove one of processes in the waiting queue and place it in
the ready queue

Operating System Concepts – 9th Edition 5.24 Silberschatz, Galvin and Gagne ©2013
Semaphore Implementation with
no Busy waiting (Cont.)
typedef struct{
int value;
struct process *list;
} semaphore;

wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
Operating System Concepts – 9th Edition 5.25 Silberschatz, Galvin and Gagne ©2013
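The block()/wakeup() structure above can be approximated in user space with a mutex and a condition variable; a hedged sketch of a counting semaphore in which the condition variable stands in for the explicit waiting list (the ksem_t name is illustrative, and the value never goes negative as it does in the slide's version):

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  cond;          /* plays the role of the waiting list */
} ksem_t;

void ksem_init(ksem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void ksem_wait(ksem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)
        pthread_cond_wait(&s->cond, &s->lock);   /* block() */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void ksem_signal(ksem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);               /* wakeup(P) */
    pthread_mutex_unlock(&s->lock);
}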
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that can be
caused by only one of the waiting processes
 Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
. .
signal(S); signal(Q);
signal(Q); signal(S);

 Starvation – indefinite blocking


 A process may never be removed from the semaphore queue in which it is
suspended
 Priority Inversion – Scheduling problem when lower-priority process holds a lock
needed by higher-priority process
 Solved via priority-inheritance protocol

Operating System Concepts – 9th Edition 5.26 Silberschatz, Galvin and Gagne ©2013
Classical Problems of Synchronization
 Classical problems used to test newly-proposed synchronization schemes

 Bounded-Buffer Problem

 Readers and Writers Problem

 Dining-Philosophers Problem

Operating System Concepts – 9th Edition 5.27 Silberschatz, Galvin and Gagne ©2013
Bounded-Buffer Problem
 n buffers, each can hold one item

 Semaphore mutex initialized to the value 1

 Semaphore full initialized to the value 0

 Semaphore empty initialized to the value n

Operating System Concepts – 9th Edition 5.28 Silberschatz, Galvin and Gagne ©2013
Bounded Buffer Problem (Cont.)
 The structure of the producer process

do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);

Operating System Concepts – 9th Edition 5.29 Silberschatz, Galvin and Gagne ©2013
Bounded Buffer Problem (Cont.)
 The structure of the consumer process

do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);

Operating System Concepts – 9th Edition 5.30 Silberschatz, Galvin and Gagne ©2013
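Putting the producer and consumer structures together with POSIX semaphores and a mutex gives a runnable sketch of the bounded-buffer solution; the buffer size, item count, and printf are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty, full;                      /* counting semaphores */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty);               /* wait(empty) */
        pthread_mutex_lock(&mutex);     /* wait(mutex) */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);   /* signal(mutex) */
        sem_post(&full);                /* signal(full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full);                /* wait(full) */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);               /* signal(empty) */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, BUFFER_SIZE);   /* n free slots */
    sem_init(&full, 0, 0);              /* no full slots yet */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}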
Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write
 Problem – allow multiple readers to read at the same time
 Only one single writer can access the shared data at the same time
 Several variations of how readers and writers are treated – all involve priorities

 Shared Data
 Data set
 Semaphore rw_mutex initialized to 1
 Semaphore mutex initialized to 1
 Integer read_count initialized to 0

Operating System Concepts – 9th Edition 5.31 Silberschatz, Galvin and Gagne ©2013
Readers-Writers Problem (Cont.)
 The structure of a writer process

do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);

Operating System Concepts – 9th Edition 5.32 Silberschatz, Galvin and Gagne ©2013
Readers-Writers Problem (Cont.)
 The structure of a reader process

do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);

Operating System Concepts – 9th Edition 5.33 Silberschatz, Galvin and Gagne ©2013
Readers-Writers Problem Variations
 First variation – no reader kept waiting unless writer has permission to use shared
object

 Second variation – once writer is ready, it performs write asap

 Both may have starvation leading to even more variations

 Problem is solved on some systems by kernel providing reader-writer locks

Operating System Concepts – 9th Edition 5.34 Silberschatz, Galvin and Gagne ©2013
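POSIX exposes the kernel-provided reader-writer lock mentioned above as pthread_rwlock_t; a minimal sketch (shared_data is illustrative):

#include <pthread.h>

pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
int shared_data = 0;

int reader(void) {
    pthread_rwlock_rdlock(&rw);    /* many readers may hold this at once */
    int v = shared_data;           /* reading is performed */
    pthread_rwlock_unlock(&rw);
    return v;
}

void writer(int v) {
    pthread_rwlock_wrlock(&rw);    /* exclusive: no readers or writers */
    shared_data = v;               /* writing is performed */
    pthread_rwlock_unlock(&rw);
}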
Dining-Philosophers Problem

 Philosophers spend their lives thinking and eating


 Don’t interact with their neighbors, occasionally try to pick up 2 chopsticks (one at a
time) to eat from bowl
 Need both to eat, then release both when done
 In the case of 5 philosophers
 Shared data
 Bowl of rice (data set)
 Semaphore chopstick [5] initialized to 1

Operating System Concepts – 9th Edition 5.35 Silberschatz, Galvin and Gagne ©2013
Dining-Philosophers Problem Algorithm
 The structure of Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think

} while (TRUE);

 What is the problem with this algorithm?

Operating System Concepts – 9th Edition 5.36 Silberschatz, Galvin and Gagne ©2013
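The problem with the algorithm above is deadlock: if all five philosophers pick up their left chopstick at the same moment, each waits forever for the right one. One common fix, sketched here with POSIX semaphores, is to break the circular wait by having odd-numbered philosophers pick up the chopsticks in the opposite order; the thread body is illustrative:

#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t chopstick[N];                /* each initialized to 1 */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    if (i % 2 == 1) {              /* odd philosophers reverse the order */
        first  = (i + 1) % N;
        second = i;
    }
    for (;;) {
        /* think */
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
    }
    return NULL;
}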
Problems with Semaphores
 Incorrect use of semaphore operations:

 signal (mutex) …. wait (mutex)

 wait (mutex) … wait (mutex)

 Omitting of wait (mutex) or signal (mutex) (or both)

 Deadlock and starvation

Operating System Concepts – 9th Edition 5.37 Silberschatz, Galvin and Gagne ©2013
Monitors
 A high-level abstraction that provides a convenient and effective mechanism for
process synchronization
 Abstract data type, internal variables only accessible by code within the procedure
 Only one process may be active within the monitor at a time
 But not powerful enough to model some synchronization schemes

monitor monitor-name
{
    // shared variable declarations

    procedure P1 (…) { …. }

    procedure Pn (…) { …… }

    initialization code (…) { … }
}

Operating System Concepts – 9th Edition 5.38 Silberschatz, Galvin and Gagne ©2013
Schematic view of a Monitor

Operating System Concepts – 9th Edition 5.39 Silberschatz, Galvin and Gagne ©2013
Condition Variables

 condition x, y;

 Two operations on a condition variable:


 x.wait () – a process that invokes the operation is suspended until x.signal ()
 x.signal () – resumes one of processes (if any) that invoked x.wait ()
 If no x.wait () on the variable, then it has no effect on the variable

Operating System Concepts – 9th Edition 5.40 Silberschatz, Galvin and Gagne ©2013
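Monitor condition variables map closely onto pthread_cond_t used under the monitor's mutex; a hedged sketch of x.wait() and x.signal() (the ok_to_proceed state variable is illustrative, and the wait is re-checked in a loop because pthreads follow signal-and-continue semantics):

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  x = PTHREAD_COND_INITIALIZER;   /* condition x */
bool ok_to_proceed = false;                     /* monitor state */

void waiter(void) {
    pthread_mutex_lock(&monitor_lock);          /* enter monitor */
    while (!ok_to_proceed)                      /* x.wait() */
        pthread_cond_wait(&x, &monitor_lock);
    /* ... proceed while holding the monitor lock ... */
    pthread_mutex_unlock(&monitor_lock);        /* leave monitor */
}

void signaler(void) {
    pthread_mutex_lock(&monitor_lock);
    ok_to_proceed = true;
    pthread_cond_signal(&x);                    /* x.signal() */
    pthread_mutex_unlock(&monitor_lock);
}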
Monitor with Condition Variables

Operating System Concepts – 9th Edition 5.41 Silberschatz, Galvin and Gagne ©2013
Condition Variables Choices

 If process P invokes x.signal (), with Q in x.wait () state, what should happen
next?
 If Q is resumed, then P must wait

 Options include
 Signal and wait – P waits until Q leaves monitor or waits for another
condition
 Signal and continue – Q waits until P leaves the monitor or waits for
another condition
 Both have pros and cons – language implementer can decide
 Monitors implemented in Concurrent Pascal compromise
 P executing signal immediately leaves the monitor, Q is resumed
 Implemented in other languages including Mesa, C#, Java

Operating System Concepts – 9th Edition 5.42 Silberschatz, Galvin and Gagne ©2013
Solution to Dining Philosophers
monitor DiningPhilosophers
{
enum {THINKING, HUNGRY, EATING} state[5];
condition self[5];

void pickup (int i) {


state[i] = HUNGRY;
test(i);
if (state[i] != EATING) self[i].wait();
}

void putdown (int i) {


state[i] = THINKING;
// test left and right neighbors
test((i + 4) % 5);
test((i + 1) % 5);
}

Operating System Concepts – 9th Edition 5.43 Silberschatz, Galvin and Gagne ©2013
Solution to Dining Philosophers (Cont.)

void test (int i) {


if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}

initialization_code() {
for (int i = 0; i < 5; i++)
state[i] = THINKING;
}
}

Operating System Concepts – 9th Edition 5.44 Silberschatz, Galvin and Gagne ©2013
Solution to Dining Philosophers (Cont.)

 Each philosopher i invokes the operations pickup() and putdown() in the following
sequence:

DiningPhilosophers.pickup (i);

EAT

DiningPhilosophers.putdown (i);

 No deadlock, but starvation is possible

Operating System Concepts – 9th Edition 5.45 Silberschatz, Galvin and Gagne ©2013
Monitor Implementation Using Semaphores

 Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next_count = 0;

 Each procedure F will be replaced by

wait(mutex);

body of F;


if (next_count > 0)
    signal(next);
else
    signal(mutex);

 Mutual exclusion within a monitor is ensured

Operating System Concepts – 9th Edition 5.46 Silberschatz, Galvin and Gagne ©2013
Resuming Processes within a Monitor
 If several processes queued on condition x, and x.signal() executed, which should be
resumed?

 FCFS frequently not adequate

 conditional-wait construct of the form x.wait(c)


 Where c is priority number
 Process with lowest number (highest priority) is scheduled next

Operating System Concepts – 9th Edition 5.47 Silberschatz, Galvin and Gagne ©2013
A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
boolean busy;
condition x;
void acquire(int time) {
if (busy)
x.wait(time);
busy = TRUE;
}

void release() {
busy = FALSE;
x.signal();
}

initialization code() {
busy = FALSE;
}
}

Operating System Concepts – 9th Edition 5.48 Silberschatz, Galvin and Gagne ©2013

Chapter 7: Deadlocks!
Chapter 7: Deadlocks!

■  The Deadlock Problem"


■  System Model"
■  Deadlock Characterization"
■  Methods for Handling Deadlocks"
■  Deadlock Prevention"
■  Deadlock Avoidance"
■  Deadlock Detection "
■  Recovery from Deadlock "

Operating System Concepts! 7.2! Silberschatz, Galvin and Gagne ©2005!


Chapter Objectives!

■  To develop a description of deadlocks, which prevent


sets of concurrent processes from completing their tasks"
■  To present a number of different methods for preventing
or avoiding deadlocks in a computer system."
"

Operating System Concepts! 7.3! Silberschatz, Galvin and Gagne ©2005!


The Deadlock Problem!

■  A set of blocked processes each holding a resource and waiting to


acquire a resource held by another process in the set."
■  Example "
●  System has 2 tape drives."
●  P0 and P1 each hold one tape drive and each needs another
one."
■  Example "

●  semaphores A and B, initialized to 1"

        P0              P1
        wait(A);        wait(B);
        wait(B);        wait(A);

Operating System Concepts! 7.4! Silberschatz, Galvin and Gagne ©2005!


Bridge Crossing Example!

■  Traffic only in one direction."


■  Each section of a bridge can be viewed as a resource."
■  If a deadlock occurs, it can be resolved if one car backs up
(preempt resources and rollback)."
■  Several cars may have to be backed up if a deadlock
occurs."
■  Starvation is possible."

Operating System Concepts! 7.5! Silberschatz, Galvin and Gagne ©2005!


System Model!

■  Resource types R1, R2, . . ., Rm"


CPU cycles, memory space, I/O devices!
■  Each resource type Ri has Wi instances."
■  Each process utilizes a resource as follows:"
●  request "
●  use "
●  release"

Operating System Concepts! 7.6! Silberschatz, Galvin and Gagne ©2005!


Deadlock Characterization!

Deadlock can arise if four conditions hold simultaneously."


■  Mutual exclusion: only one process at a time can use a
resource."
■  Hold and wait: a process holding at least one resource is
waiting to acquire additional resources held by other
processes."
■  No preemption: a resource can be released only
voluntarily by the process holding it, after that process has
completed its task."
■  Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held
by P1, P1 is waiting for a resource that is held by
P2, …, Pn–1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.

Operating System Concepts! 7.7! Silberschatz, Galvin and Gagne ©2005!


Resource-Allocation Graph!

A set of vertices V and a set of edges E."

■  V is partitioned into two types:"


●  P = {P1, P2, …, Pn}, the set consisting of all the
processes in the system.

"
●  R = {R1, R2, …, Rm}, the set consisting of all resource
types in the system."
■  request edge – directed edge Pi → Rj
■  assignment edge – directed edge Rj → Pi

Operating System Concepts! 7.8! Silberschatz, Galvin and Gagne ©2005!


Resource-Allocation Graph (Cont.)!

■  Process Pi
■  Resource type Rj with 4 instances
■  Pi requests an instance of Rj: request edge Pi → Rj
■  Pi is holding an instance of Rj: assignment edge Rj → Pi

Operating System Concepts! 7.9! Silberschatz, Galvin and Gagne ©2005!


Example of a Resource Allocation Graph!

Operating System Concepts! 7.10! Silberschatz, Galvin and Gagne ©2005!


Will there be a deadlock here?!

Operating System Concepts! 7.11! Silberschatz, Galvin and Gagne ©2005!


Resource Allocation Graph With A Cycle But No Deadlock!

Operating System Concepts! 7.12! Silberschatz, Galvin and Gagne ©2005!


Basic Facts!

■  If graph contains no cycles ⇒ no deadlock.



"
■  If graph contains a cycle ⇒"
●  if only one instance per resource type, then deadlock."
●  if several instances per resource type, possibility of
deadlock."

Operating System Concepts! 7.13! Silberschatz, Galvin and Gagne ©2005!


Methods for Handling Deadlocks!

■  Ensure that the system will never enter a deadlock state.



"
■  Allow the system to enter a deadlock state and then
recover.

"
■  Ignore the problem and pretend that deadlocks never occur
in the system; used by most operating systems, including
UNIX."

Operating System Concepts! 7.14! Silberschatz, Galvin and Gagne ©2005!


The Methods (continued)!

■  Deadlock Prevention"
■  Deadlock Avoidance"
■  Deadlock Detection"

Operating System Concepts! 7.15! Silberschatz, Galvin and Gagne ©2005!


Deadlock Prevention!

Restrain the ways request can be made."

■  Mutual Exclusion – not required for sharable resources;


must hold for nonsharable resources.

"
■  Hold and Wait – must guarantee that whenever a process
requests a resource, it does not hold any other resources."
●  Require process to request and be allocated all its
resources before it begins execution, or allow process
to request resources only when the process has none."
●  Low resource utilization; starvation possible."

Operating System Concepts! 7.16! Silberschatz, Galvin and Gagne ©2005!


Deadlock Prevention (Cont.)!

■  No Preemption –"
●  If a process that is holding some resources requests
another resource that cannot be immediately allocated to
it, then all resources currently being held are released."
●  Preempted resources are added to the list of resources
for which the process is waiting."
●  Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.

"
■  Circular Wait – impose a total ordering of all resource types,
and require that each process requests resources in an
increasing order of enumeration."

Operating System Concepts! 7.17! Silberschatz, Galvin and Gagne ©2005!
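The total-ordering rule above is easy to illustrate with two locks: if every thread always requests lock A before lock B, the circular wait seen in the earlier semaphore example (P0: S then Q, P1: Q then S) cannot form. A sketch (lock names are illustrative):

#include <pthread.h>

/* Resource ordering: A (rank 1) must always be taken before B (rank 2). */
pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;

void thread_one(void) {
    pthread_mutex_lock(&lock_A);   /* lower-numbered resource first */
    pthread_mutex_lock(&lock_B);
    /* use both resources */
    pthread_mutex_unlock(&lock_B);
    pthread_mutex_unlock(&lock_A);
}

void thread_two(void) {
    /* Even if this thread "wants B first", it must still request the
     * resources in increasing order, so no circular wait can form. */
    pthread_mutex_lock(&lock_A);
    pthread_mutex_lock(&lock_B);
    /* use both resources */
    pthread_mutex_unlock(&lock_B);
    pthread_mutex_unlock(&lock_A);
}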


Deadlock Avoidance!

Requires that the system has some additional a priori information 



available."

■  Simplest and most useful model requires that each process


declare the maximum number of resources of each type
that it may need.

"
■  The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that there can never
be a circular-wait condition.

"
■  Resource-allocation state is defined by the number of
available and allocated resources, and the maximum
demands of the processes."

Operating System Concepts! 7.18! Silberschatz, Galvin and Gagne ©2005!


Deadlock Detection!

■  Allow system to enter deadlock state 



"
■  Detection algorithm

"
■  Recovery scheme"

Operating System Concepts! 7.19! Silberschatz, Galvin and Gagne ©2005!


Safe State!

■  When a process requests an available resource, system must


decide if immediate allocation leaves the system in a safe state.

"
■  System is in safe state if there exists a safe sequence of all
processes. 

"
■  Sequence <P1, P2, …, Pn> is safe if for each Pi, the resources that
Pi can still request can be satisfied by currently available resources
+ resources held by all the Pj, with j < i.
●  If Pi resource needs are not immediately available, then Pi can
wait until all Pj have finished."
●  When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate. "
●  When Pi terminates, Pi+1 can obtain its needed resources, and
so on. "

Operating System Concepts! 7.20! Silberschatz, Galvin and Gagne ©2005!


Basic Facts!

■  If a system is in safe state ⇒ no deadlocks.



"
■  If a system is in unsafe state ⇒ possibility of deadlock.

"
■  Avoidance ⇒ ensure that a system will never enter an
unsafe state. "

Operating System Concepts! 7.21! Silberschatz, Galvin and Gagne ©2005!


Safe, Unsafe , Deadlock State!

Operating System Concepts! 7.22! Silberschatz, Galvin and Gagne ©2005!


Resource-Allocation Graph Algorithm!

■  Claim edge Pi → Rj indicates that process Pi may request
resource Rj; represented by a dashed line.

"
■  Claim edge converts to request edge when a process
requests a resource.

"
■  When a resource is released by a process, assignment edge
reconverts to a claim edge.

"
■  Resources must be claimed a priori in the system."

Operating System Concepts! 7.23! Silberschatz, Galvin and Gagne ©2005!


Resource-Allocation Graph For Deadlock Avoidance!

Assignment Edge Request Edge

Claim Edge
Claim Edge

Operating System Concepts! 7.24! Silberschatz, Galvin and Gagne ©2005!


Unsafe State In Resource-Allocation Graph!

Assignment Edge Request Edge

Claim Edge Claim Edge

Operating System Concepts! 7.25! Silberschatz, Galvin and Gagne ©2005!


Example formal algorithms!

■  Banker’s Algorithm"
■  Resource-Request Algorithm"
■  Safety Algorithm"

Operating System Concepts! 7.26! Silberschatz, Galvin and Gagne ©2005!


Stop Here!

Operating System Concepts! 7.27! Silberschatz, Galvin and Gagne ©2005!


Banker’s Algorithm!

■  Multiple instances.

"
■  Each process must a priori claim maximum use.

"
■  When a process requests a resource it may have to wait. 

"
■  When a process gets all its resources it must return them in
a finite amount of time."

Operating System Concepts! 7.28! Silberschatz, Galvin and Gagne ©2005!


Data Structures for the Banker’s Algorithm!

Let n = number of processes, and m = number of resources types. "

■  Available: Vector of length m. If available [j] = k, there are k


instances of resource type Rj available."
■  Max: n x m matrix. If Max [i,j] = k, then process Pi may
request at most k instances of resource type Rj."
■  Allocation: n x m matrix. If Allocation[i,j] = k then Pi is
currently allocated k instances of Rj."
■  Need: n x m matrix. If Need[i,j] = k, then Pi may need k
more instances of Rj to complete its task."

Need [i,j] = Max[i,j] – Allocation [i,j]."

Operating System Concepts! 7.29! Silberschatz, Galvin and Gagne ©2005!


Safety Algorithm!

1.  Let Work and Finish be vectors of length m and n,
    respectively. Initialize:
        Work = Available
        Finish[i] = false for i = 1, 2, 3, ..., n.
2.  Find an i such that both:
        (a) Finish[i] == false
        (b) Needi ≤ Work
    If no such i exists, go to step 4.
3.  Work = Work + Allocationi
    Finish[i] = true
    go to step 2.
4.  If Finish[i] == true for all i, then the system is in a safe
    state.

Operating System Concepts! 7.30! Silberschatz, Galvin and Gagne ©2005!
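A compact C sketch of the safety algorithm above, using the 5-process, 3-resource snapshot from the example slides that follow as test data (the matrices are copied from those slides; everything else is illustrative). It prints one safe order that it finds, which may differ from the order quoted on the slide:

#include <stdbool.h>
#include <stdio.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int available[M]     = {3, 3, 2};
int max_need[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};

bool is_safe(void) {
    int need[N][M], work[M];
    bool finish[N] = {false};

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max_need[i][j] - allocation[i][j];
    for (int j = 0; j < M; j++)
        work[j] = available[j];                  /* Work = Available */

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;                    /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j]; /* Work += Allocation_i */
                finish[i] = true;
                printf("P%d can finish\n", i);
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* no such i exists */
    }
    return true;                                 /* all Finish[i] == true */
}

int main(void) {
    printf(is_safe() ? "safe state\n" : "unsafe state\n");
    return 0;
}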


Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[j] = k then
process Pi wants k instances of resource type Rj.
1.  If Requesti ≤ Needi, go to step 2. Otherwise, raise an error
    condition, since the process has exceeded its maximum claim.
2.  If Requesti ≤ Available, go to step 3. Otherwise Pi must
    wait, since the resources are not available.
3.  Pretend to allocate the requested resources to Pi by modifying
    the state as follows:
        Available = Available – Requesti;
        Allocationi = Allocationi + Requesti;
        Needi = Needi – Requesti;
    ●  If safe ⇒ the resources are allocated to Pi.
    ●  If unsafe ⇒ Pi must wait, and the old resource-allocation
       state is restored.

Operating System Concepts! 7.31! Silberschatz, Galvin and Gagne ©2005!


Example of Banker’s Algorithm!

■  5 processes P0 through P4; 3 resource types:
   A (10 instances), B (5 instances), and C (7 instances).
■  Snapshot at time T0:

                Allocation    Max      Available
                A B C         A B C    A B C
        P0      0 1 0         7 5 3    3 3 2
        P1      2 0 0         3 2 2
        P2      3 0 2         9 0 2
        P3      2 1 1         2 2 2
        P4      0 0 2         4 3 3
Operating System Concepts! 7.32! Silberschatz, Galvin and Gagne ©2005!


Example (Cont.)!

■  The content of the matrix Need is defined to be Max – Allocation.

                Need
                A B C
        P0      7 4 3
        P1      1 2 2
        P2      6 0 0
        P3      0 1 1
        P4      4 3 1

■  The system is in a safe state since the sequence <P1, P3, P4, P2,
   P0> satisfies the safety criteria.

Operating System Concepts! 7.33! Silberschatz, Galvin and Gagne ©2005!


Example P1 Request (1,0,2) (Cont.)!

■  Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true.

                Allocation    Need     Available
                A B C         A B C    A B C
        P0      0 1 0         7 4 3    2 3 0
        P1      3 0 2         0 2 0
        P2      3 0 2         6 0 0
        P3      2 1 1         0 1 1
        P4      0 0 2         4 3 1

■  Executing the safety algorithm shows that the sequence <P1, P3, P4,
   P0, P2> satisfies the safety requirement.
■  Can a request for (3,3,0) by P4 be granted?
■  Can a request for (0,2,0) by P0 be granted?

Operating System Concepts! 7.34! Silberschatz, Galvin and Gagne ©2005!


Deadlock Detection!

■  Allow system to enter deadlock state 



"
■  Detection algorithm

"
■  Recovery scheme"

Operating System Concepts! 7.35! Silberschatz, Galvin and Gagne ©2005!


Single Instance of Each Resource Type!

■  Maintain wait-for graph"


●  Nodes are processes."
●  Pi → Pj if Pi is waiting for Pj.

!
■  Periodically invoke an algorithm that searches for a cycle in
the graph.

"
■  An algorithm to detect a cycle in a graph requires an order
of n2 operations, where n is the number of vertices in the
graph."

Operating System Concepts! 7.36! Silberschatz, Galvin and Gagne ©2005!


Resource-Allocation Graph and Wait-for Graph!

Resource-Allocation Graph" Corresponding wait-for graph"

Operating System Concepts! 7.37! Silberschatz, Galvin and Gagne ©2005!


Several Instances of a Resource Type!

■  Available: A vector of length m indicates the number of


available resources of each type.

"
■  Allocation: An n x m matrix defines the number of
resources of each type currently allocated to each process.

"
■  Request: An n x m matrix indicates the current request of
each process. If Request[i,j] = k, then process Pi is
requesting k more instances of resource type Rj.

Operating System Concepts! 7.38! Silberschatz, Galvin and Gagne ©2005!


Detection Algorithm!

1.  Let Work and Finish be vectors of length m and n, respectively.
    Initialize:
        (a) Work = Available
        (b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
            Finish[i] = false; otherwise, Finish[i] = true.
2.  Find an index i such that both:
        (a) Finish[i] == false
        (b) Requesti ≤ Work
    If no such i exists, go to step 4.

Operating System Concepts! 7.39! Silberschatz, Galvin and Gagne ©2005!


Detection Algorithm (Cont.)!

3. "Work = Work + Allocationi



Finish[i] = true

go to step 2.

"
4. "If Finish[i] == false, for some i, 1 ≤ i ≤ n, then the system is in
deadlock state. Moreover, if Finish[i] == false, then Pi is
deadlocked."
""
Algorithm requires an order of O(m x n2) operations to detect whether the
system is in deadlocked state. "
"

Operating System Concepts! 7.40! Silberschatz, Galvin and Gagne ©2005!
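The detection algorithm differs from the safety algorithm only in its initialization (Finish[i] starts true for a process holding nothing) and in comparing Requesti rather than Needi against Work. A hedged C sketch using the first snapshot from the example slide that follows (matrices copied from the slide, everything else illustrative):

#include <stdbool.h>
#include <stdio.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int available[M]     = {0, 0, 0};
int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,3},{2,1,1},{0,0,2}};
int request[N][M]    = {{0,0,0},{2,0,2},{0,0,0},{1,0,0},{0,0,2}};

void detect(void) {
    int work[M];
    bool finish[N];

    for (int j = 0; j < M; j++) work[j] = available[j];
    for (int i = 0; i < N; i++) {
        bool holds_nothing = true;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0) holds_nothing = false;
        finish[i] = holds_nothing;          /* Allocation_i == 0 */
    }

    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool fits = true;               /* Request_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) printf("P%d is deadlocked\n", i);
}

int main(void) { detect(); return 0; }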


Example of Detection Algorithm!

■  Five processes P0 through P4; three resource types:
   A (7 instances), B (2 instances), and C (6 instances).
■  Snapshot at time T0:

                Allocation    Request    Available
                A B C         A B C      A B C
        P0      0 1 0         0 0 0      0 0 0
        P1      2 0 0         2 0 2
        P2      3 0 3         0 0 0
        P3      2 1 1         1 0 0
        P4      0 0 2         0 0 2

■  Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.

Operating System Concepts! 7.41! Silberschatz, Galvin and Gagne ©2005!


Example (Cont.)!

■  P2 requests an additional instance of type C.

                Request
                A B C
        P0      0 0 0
        P1      2 0 1
        P2      0 0 1
        P3      1 0 0
        P4      0 0 2

■  State of system?
   ●  Can reclaim resources held by process P0, but there are insufficient
      resources to fulfill the other processes' requests.
   ●  Deadlock exists, consisting of processes P1, P2, P3, and P4.

Operating System Concepts! 7.42! Silberschatz, Galvin and Gagne ©2005!


Detection-Algorithm Usage!

■  When, and how often, to invoke depends on:"


●  How often a deadlock is likely to occur?"
●  How many processes will need to be rolled back?"
!  one for each disjoint cycle

"
■  If detection algorithm is invoked arbitrarily, there may be many
cycles in the resource graph and so we would not be able to tell
which of the many deadlocked processes “caused” the deadlock."

Operating System Concepts! 7.43! Silberschatz, Galvin and Gagne ©2005!



Recovery from Deadlock: Process Termination!

■  Abort all deadlocked processes.



"
■  Abort one process at a time until the deadlock cycle is eliminated.

"
■  In which order should we choose to abort?"
●  Priority of the process."
●  How long process has computed, and how much longer to
completion."
●  Resources the process has used."
●  Resources process needs to complete."
●  How many processes will need to be terminated. "
●  Is process interactive or batch?"

Operating System Concepts! 7.44! Silberschatz, Galvin and Gagne ©2005!


Recovery from Deadlock: Resource Preemption!

■  Selecting a victim – minimize cost.



"
■  Rollback – return to some safe state, restart process for that state.

"
■  Starvation – same process may always be picked as victim,
include number of rollback in cost factor."

Operating System Concepts! 7.45! Silberschatz, Galvin and Gagne ©2005!



End of Chapter 7!
Deadlock
Banker’s Algorithm

By: Dr. P.S.Tanwar


Q/A
The circular wait condition can be
prevented by ____________
a) defining a linear ordering of resource types

b) using thread

c) using pipes

d) all of the mentioned

By: Dr. P.S.Tanwar


Deadlock
Deadlock Avoidance
Banker’s Algorithm
• The banker’s algorithm is a resource allocation and
deadlock avoidance algorithm

• It tests for safety by simulating the allocation for


predetermined maximum possible amounts of all resources,

• then makes an “s-state” check to test for possible activities,


before deciding whether allocation should be allowed to
continue.

By: Dr. P.S.Tanwar


Deadlock
Deadlock Avoidance
Banker’s Algorithm
• This name is given because it is used in banking system.

• Bank checks whether loan can be sanctioned to a person or


not.
– Suppose there are n account holders in a bank and the total sum of
their money is S. When a person applies for a loan, the bank first
subtracts the loan amount from the total cash it holds; the loan is
sanctioned only if the remaining amount is still greater than S,
because then the bank can still pay out even if all the account
holders come to withdraw their money at once.

By: Dr. P.S.Tanwar


Data Structure of
Banker’s Algorithm

By: Dr. P.S.Tanwar


Banker’s Algorithm
Safety Algorithm

By: Dr. P.S.Tanwar


Banker’s Algorithm
Resource Request Algorithm

By: Dr. P.S.Tanwar


Banker’s Algorithm
Example
5 processes: P0 through P4;
3 resource types: A (10 instances), B (5 instances), and C (7 instances).

Need[i,j] = Max[i,j] – Allocation[i,j]

                Allocation    Max      Available    Need
                A B C         A B C    A B C        A B C
        P0      0 1 0         7 5 3    3 3 2        7 4 3
        P1      2 0 0         3 2 2                 1 2 2
        P2      3 0 2         9 0 2                 6 0 0
        P3      2 1 1         2 2 2                 0 1 1
        P4      0 0 2         4 3 3                 4 3 1

By: Dr. P.S.Tanwar


Banker’s Algorithm
Example – checking the sequence P1, P3, P4, ...

                Allocation    Max      Available    Need
                A B C         A B C    A B C        A B C
        P0      0 1 0         7 5 3    3 3 2        7 4 3
        P1      2 0 0         3 2 2                 1 2 2
        P2      3 0 2         9 0 2                 6 0 0
        P3      2 1 1         2 2 2                 0 1 1
        P4      0 0 2         4 3 3                 4 3 1

Available (Work) after each process finishes and releases its allocation:
        Start:          (3, 3, 2)
        P1 finishes:    (3+2, 3+0, 2+0) = (5, 3, 2)
        P3 finishes:    (5+2, 3+1, 2+1) = (7, 4, 3)
        P4 finishes:    (7+0, 4+0, 3+2) = (7, 4, 5)

By: Dr. P.S.Tanwar


Banker’s Algorithm

We claim that the system is in a safe state because the sequence
<P1, P3, P4, P2, P0> satisfies the safety criteria: each process in
that order can finish with the resources then available.

                Allocation    Max      Available    Need
                A B C         A B C    A B C        A B C
        P0      0 1 0         7 5 3    3 3 2        7 4 3
        P1      2 0 0         3 2 2                 1 2 2
        P2      3 0 2         9 0 2                 6 0 0
        P3      2 1 1         2 2 2                 0 1 1
        P4      0 0 2         4 3 3                 4 3 1

Available (Work) as the sequence completes:
        Start:          (3, 3, 2)
        P1 finishes:    (5, 3, 2)
        P3 finishes:    (7, 4, 3)
        P4 finishes:    (7, 4, 5)
        P2 finishes:    (10, 4, 7)
        P0 finishes:    (10, 5, 7)

The current state is a safe state.
By: Dr. P.S.Tanwar
Q/A
Which of the following is a safe sequence?

                Currently Allocated    Max Required    Need
        P0      5                      10              5
        P1      2                      4               2
        P2      2                      9               7

Currently Available Resources: 3

a) P0, P1, P2
b) P1, P0, P2
c) P2, P0, P1
d) P1, P2, P0

By: Dr. P.S.Tanwar


Q/A
Banker’s Algorithm is
Deadlock prevention algorithm

Deadlock Avoidance algorithm

Deadlock Recovery algorithm

Deadlock Ignorance algorithm

By: Dr. P.S.Tanwar


Banker’s Algorithm
If P1 Requests (1,0,2)
Request means P1 requests A=1, B=0, and C=2.
Check Request1 ≤ Available: (1,0,2) ≤ (3,3,2), so the request can be considered.

Pretend to allocate the requested resources:
        Available   = (3,3,2) – (1,0,2) = (2,3,0)
        Allocation1 = (2,0,0) + (1,0,2) = (3,0,2)
        Need1       = (1,2,2) – (1,0,2) = (0,2,0)

The resulting state is shown on the next slide.

By: Dr. P.S.Tanwar


Banker’s Algorithm
If P1 Requests (1,0,2)
Request means P1 requests A=1, B=0, and C=2.
Check Request1 ≤ Available: (1,0,2) ≤ (3,3,2).

                Allocation    Max      Available    Need
                A B C         A B C    A B C        A B C
        P0      0 1 0         7 5 3    2 3 0        7 4 3
        P1      3 0 2         3 2 2                 0 2 0
        P2      3 0 2         9 0 2                 6 0 0
        P3      2 1 1         2 2 2                 0 1 1
        P4      0 0 2         4 3 3                 4 3 1

By: Dr. P.S.Tanwar


Banker’s Algorithm
If P1 Requests (1,0,2)
Request Means P1 requests for A=1,B=0, and C=2

By: Dr. P.S.Tanwar


RAG and Wait-for Graph
RAG Wait for Graph

By: Dr. P.S.Tanwar


Deadlock Detection
Algorithm

By: Dr. P.S.Tanwar


Q/A
A deadlock avoidance algorithm dynamically
examines the __________ to ensure that a
circular wait condition can never exist.
resource allocation state

system storage state

operating system

resources

By: Dr. P.S.Tanwar


Q/A
A system is in the safe state if ____________
a) the system can allocate resources to each process in
some order and still avoid a deadlock

b) there exist a safe sequence

c) all of the mentioned

d) none of the mentioned

By: Dr. P.S.Tanwar
