Osy Chapter 4
4.1 Scheduling Types – Scheduling objectives, concept, CPU and I/O burst cycles, Pre-emptive and Non-pre-emptive
scheduling, Scheduling criteria.
4.2 Types of Scheduling algorithms - First Come First Served (FCFS), Shortest Job First (SJF), Shortest Remaining
Time (SRTN), Round Robin (RR), Priority scheduling, multilevel queue scheduling
4.3 Deadlock – System Models, Necessary Conditions leading to Deadlocks, Deadlock Handling - Preventions,
avoidance.
PROCESS SCHEDULING:
Objectives:
The main features of an OS are multitasking and multithreading, which allow more than one
process to execute at a time. This requires switching the CPU from one process to another
so that it stays busy for as much time as possible. Proper CPU scheduling must therefore
be provided to achieve maximum utilization.
BASIC CONCEPTS
In multiprogramming, several processes are kept running at all times to maximize CPU utilization.
While executing, a process may request the use of an I/O device.
In a simple computer system, when such a situation occurs, the CPU just sits idle and waits
for the process to complete its I/O operation.
But in a multiprogramming system, when one process has to complete an I/O operation,
another can use the CPU.
The part of the operating system that makes the choice among the processes to allocate the
CPU first is called the scheduler, and the algorithm it uses is called the scheduling
algorithm.
SCHEDULING CRITERIA
The scheduling criteria that are used while scheduling a process are:
1. Fairness
2. Good throughput
3. Good CPU utilization
4. Low Turnaround time
5. Low Waiting time
6. Good Response time
1. Fairness
Fairness is important under all circumstances.
A scheduler makes sure that each process gets its fair share of the CPU and no process
should suffer indefinite postponement or waiting.
2. Good throughput
The number of processes completed per unit time is called as throughput.
By this measure, an algorithm that gives higher throughput is a better scheduling
algorithm.
Throughput and fairness are conflicting goals, so a good policy increases throughput
without becoming unfair.
3. Good CPU utilization:
The scheduler should keep the CPU as busy as possible, so that as little CPU time as
possible is wasted idling.
4. Low Turnaround time:
Turnaround time is the interval from the submission of a process to its completion.
It should be low.
5. Waiting Time:
It is the time, a job or process spends waiting in the ready queue.
It should be low.
6. Response Time:
It is a time interval since the process is submitted till the first response.
It depends upon the degree of multi-programming, the efficiency of hardware and
operating system and the scheme of operating system to allocate resources.
It should be low.
7. Efficiency:
The scheduler should keep the CPU and all the input/output devices running all the
time so that more work gets done per second.
1. Non-Preemptive Scheduling
Once the CPU has been allocated to a process, the process keeps it until it terminates or
switches to the waiting state (for example, to perform I/O).
2. Preemptive Scheduling
This method allows a higher-priority process to replace the currently running process even
if its time slice is not completed or it has not requested any I/O.
Following are some characteristics of preemptive scheduling:
i) This method involves more context switching, so it gives less throughput.
ii) It is good for online and real-time processing.
SCHEDULING ALGORITHMS
First Come First Served (FCFS)
The process that requests the CPU first is allocated the CPU first. Example: processes
P1, P2, P3 arrive at time 0 with CPU burst times 24, 3 and 3 ms (as implied by the chart).
If they arrive in the order P1, P2, P3, the Gantt chart is:
| P1 | P2 | P3 |
0        24   27   30
Waiting time for P1 = 0; P2 = 24; P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17
Suppose instead that the processes arrive in the order P2, P3, P1; the Gantt chart for the schedule is:
| P2 | P3 | P1 |
0    3    6        30
Waiting time for P1 = 6; P2 = 0; P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3
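The two FCFS schedules can be checked with a few lines of code. This is a minimal sketch (the function name is my own), assuming all processes arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Return each process's waiting time, given bursts in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier arrivals finish
        clock += burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3):
print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27] -> average 17
# Order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))   # [0, 3, 6] -> average 3
```

The large gap between the two averages shows why FCFS suffers when a long burst arrives first (the "convoy effect").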
A second example, with processes P0–P3 arriving at times 0, 1, 2, 3 and burst times 5, 3, 8, 6 ms:
Gantt Chart:
| P0 | P1 | P2 | P3 |
0    5    8        16     22
Wait time of each process (start time minus arrival time) is as follows:
Process   Wait Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13
Round Robin (RR)
In any event, the average waiting time under round-robin scheduling is often quite long.
1. Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready queue.
2. If there are n processes in the ready queue and the time quantum is q, then each process gets
1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)
q time units.
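The two rules above can be sketched as a simulation. This is a minimal sketch, assuming all processes arrive at time 0 (the burst values and quantum are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at t = 0.
    Returns (waiting_times, turnaround_times) indexed by process."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])  # run for at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: go to the back of the queue
        else:
            finish[i] = clock             # process completes
    turnaround = finish                   # arrival time is 0 for every process
    waiting = [turnaround[i] - bursts[i] for i in range(len(bursts))]
    return waiting, turnaround

# Bursts 24, 3, 3 with quantum 4:
waiting, _ = round_robin([24, 3, 3], quantum=4)
print(waiting)   # [6, 4, 7] -> average waiting time 17/3, about 5.67
```

Note that the average here (about 5.67) sits between the best and worst FCFS averages for the same bursts, which is typical of RR.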
Shortest Job First (SJF)
SJF associates with each process the length of its next CPU burst and allocates the CPU to
the process with the smallest next burst. Two types:
1. Non preemptive – Once CPU given to the process it cannot be preempted until it
completes its CPU burst.
2. Preemptive – If a new process arrives with CPU burst length less than
remaining time of current executing process, preempt. This is also known as the Shortest
Remaining Time First (SRTF or SRTN).
Example: processes P1, P2, P3, P4 arrive at times 0, 2, 4, 5 with CPU burst times 7, 4, 1, 4 ms.
Using non-preemptive SJF:
| P1 | P3 | P2 | P4 |
0       7    8      12    16
Using preemptive SJF (SRTF):
| P1 | P2 | P3 | P2 | P4 | P1 |
0    2    4    5    7     11      16
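The preemptive schedule above can be reproduced with a small simulator. This is a minimal sketch (the unit-time stepping and function name are my own, not from the text):

```python
def srtf(arrivals, bursts):
    """Preemptive SJF (shortest remaining time first).
    Returns the waiting time of each process."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    clock, done = 0, 0
    while done < n:
        # among arrived, unfinished processes pick the smallest remaining time
        ready = [i for i in range(n) if arrivals[i] <= clock and remaining[i] > 0]
        if not ready:
            clock += 1            # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1         # run the chosen process for one time unit
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
    # waiting time = finish - arrival - burst
    return [finish[i] - arrivals[i] - bursts[i] for i in range(n)]

# P1..P4: arrivals 0, 2, 4, 5 and bursts 7, 4, 1, 4 (the example above):
print(srtf([0, 2, 4, 5], [7, 4, 1, 4]))   # [9, 1, 0, 2] -> average waiting time 3
```

Re-evaluating the shortest remaining time at every time unit is inefficient but makes the preemption rule explicit; a real scheduler would only re-evaluate on arrivals and completions.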
Another example, with processes P0–P3:
| P0 | P2 | P1 | P3 |
0    5        13    16     22
Average waiting time = (0 + (5 - 3) + (13 - 2) + (16 - 4)) / 4 = 6.25
Average turnaround time = ((5 - 0) + (13 - 3) + (16 - 2) + (22 - 4)) / 4 = 11.75
Multilevel Queue Scheduling
In this algorithm, the ready queue is partitioned into several separate queues as shown in
fig.
Based on some property of the process, such as memory size, process priority, or process
type, the processes are permanently assigned to one queue.
Possibility I
If each queue has absolute priority over lower-priority queues, then no process in a
lower-priority queue can run unless all higher-priority queues are empty. For example, in
the above figure no process in the batch queue could run unless the queues for system
processes, interactive processes, and interactive editing processes were all empty.
Possibility II
If there is a time slice between the queues then each queue gets a certain amount of CPU
times, which it can then schedule among the processes in its queue. For instance;
80% of the CPU time to foreground queue using RR.
20% of the CPU time to background queue using FCFS.
Since processes do not move between queues, this policy has the advantage of low scheduling
overhead, but it is inflexible.
Example:-Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
A new job enters queue Q0, which is served in FCFS manner. When it gains the CPU, the
job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to
queue Q1.
At Q1 the job is again served in FCFS manner and receives 16 additional milliseconds. If it
still does not complete, it is preempted and moved to queue Q2.
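The three-queue example can be sketched in code. This is a simplified model, assuming all jobs arrive at time 0 (so each queue drains completely before the next runs); in a real scheduler, new arrivals in Q0 would preempt work in Q1 and Q2. The burst values in the usage line are hypothetical:

```python
from collections import deque

def mlfq(bursts):
    """Three-queue feedback sketch: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
    All jobs arrive at t = 0; returns each job's finish time."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    q0 = deque(range(len(bursts)))
    q1, q2 = deque(), deque()
    clock = 0
    for queue, quantum in ((q0, 8), (q1, 16), (q2, None)):
        while queue:
            i = queue.popleft()
            # Q2 (quantum None) runs a job to completion, FCFS style
            run = remaining[i] if quantum is None else min(quantum, remaining[i])
            clock += run
            remaining[i] -= run
            if remaining[i] == 0:
                finish[i] = clock
            elif quantum == 8:
                q1.append(i)      # did not finish in Q0: demote to Q1
            else:
                q2.append(i)      # did not finish in Q1: demote to Q2
    return finish

# Hypothetical jobs with bursts 5, 20 and 40 ms:
print(mlfq([5, 20, 40]))   # [5, 33, 65]
```

The short job finishes entirely inside Q0, while the long job sinks to Q2, which is exactly the behaviour the scheme is designed to produce.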
DEADLOCK
In a multiprogramming environment, several processes may compete for a finite number
of resources.
A process requests resources; if the resources are not available at that time, the process
enters a wait state. It may happen that waiting processes will never again change state,
because the resources they have requested are held by other waiting processes. This
situation is called deadlock.
If a process requests an instance of a resource type, the allocation of any instance of the
type will satisfy the request. If it will not, then the instances are not identical, and the
resource type classes have not been defined properly.
A process must request a resource before using it, and must release the resource after
using it. A process may request as many resources as it requires to carry out its designated
task.
Deadlock Characterization
In deadlock, processes never finish execution and system resources are tied up, preventing other
jobs from ever starting.
1. Mutual exclusion:
Resources must be allocated to processes in an exclusive manner, not on a shared
basis.
A disk drive can be shared by two processes simultaneously, but printers and tape drives
have to be allocated to a process in an exclusive manner until that process finishes its
work with them.
2. Hold and wait :
There must exist a process that is holding at least one resource and is waiting to acquire
additional resources that are currently being held by other processes.
This can cause deadlock.
3. No preemption:
Resources cannot be preempted; that is, a resource can be released only voluntarily by the
process holding it, after that process has completed its task.
4. Circular wait:
There must exist a set {P0, P1, ..., Pn } of waiting processes such that P0 is waiting for a
resource that is held by P1, P1 is waiting for a resource that is held by P2, …., Pn-1 is
waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by
P0.
Thus, there must be a circular chain of two or more processes each of which is waiting for a
resource held by the next process.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at
least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
1. Mutual Exclusion:
The mutual-exclusion condition must hold for non-sharable resources. For example, a
printer cannot be simultaneously shared by several processes. Sharable resources, on the
other hand, do not require mutually exclusive access, and thus cannot be involved in a
deadlock.
2. Hold and Wait:
To ensure that the hold-and-wait condition never occurs, we must guarantee that whenever
a process requests a resource, it does not hold any other resources. One protocol requires
each process to request and be allocated all its resources before it begins execution.
3. No Preemption:
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources it currently holds are preempted; that is,
these resources are implicitly released. The preempted resources are added to the list of
resources for which the process is waiting. The process will be restarted only when it can
regain all the resources, i.e. its old resources as well as the new ones that it is
requesting.
4. Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a total
ordering of all resource types, and to require that each process requests resources in an
increasing order of enumeration.
Let R = {R1, R2, ..., Rn} be the set of resource types. We assign to each resource type a
unique integer number, which allows us to compare two resources and to determine
whether one precedes another in our ordering. Formally, we define a one-to-one function
F: R → N, where N is the set of natural numbers.
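The resource-ordering idea can be illustrated with locks. This is a minimal sketch, assuming two hypothetical resources with the ordering F(tape_drive) = 1 < F(printer) = 2 (the names and helper functions are my own):

```python
import threading

# Hypothetical resource types with a fixed global ordering F.
ORDER = {"tape_drive": 1, "printer": 2}
LOCKS = {name: threading.Lock() for name in ORDER}

def acquire_in_order(*names):
    """Acquire the requested resources in increasing order of F.
    Because every process locks in the same global order, no circular
    chain of waiting processes can ever form."""
    acquired = sorted(names, key=ORDER.get)
    for name in acquired:
        LOCKS[name].acquire()
    return acquired

def release(names):
    for name in names:
        LOCKS[name].release()

# Both process P and process Q end up locking tape_drive before printer,
# no matter which order they name the resources in:
held = acquire_in_order("printer", "tape_drive")
release(held)
```

The `sorted(..., key=ORDER.get)` line is the whole trick: it makes it impossible for one process to hold the printer while waiting for the tape drive at the same time another holds the tape drive while waiting for the printer.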
Deadlock Avoidance
Deadlock prevention restrains the way in which resource requests can be made. The
restraints ensure that at least one of the necessary conditions for deadlock cannot occur
and, hence, that deadlocks cannot hold. Possible side effects of preventing deadlocks by
this method, however, are low device utilization and reduced system throughput.
An alternative method for avoiding deadlocks is to require additional information about
how resources are to be requested. For example, in a system with one tape drive and one
printer, we might be told that process P will request first the tape drive, and later the
printer, before releasing both resources. Process Q on the other hand, will request first the
printer, and then the tape drive. With this knowledge of the complete sequence of requests
and releases for each process we can decide for each request whether or not the process
should wait.
Banker’s Algorithm:
This algorithm is applicable to the system with multiple instances of each resource types.
When a new process enters the system for execution it must declare the maximum number
of resources that it will need.
This number should not exceed the total number of resources in the system.
The system must determine whether the allocation of the resources will leave the
system in a safe state or not.
If the system is in a safe state, resources are allocated; otherwise the process must wait
until the resources are freed by other processes.
Several data structures are used to implement the banker’s algorithm. Let ‘n’ be the
number of processes in the system and ‘m’ be the number of resources types. We need the
following data structures:
i) Available:-A vector of length m indicates the number of available resources.
If Available[j]=k, then k instances of resource type Rj are available.
ii) Max:-An n*m matrix defines the maximum demand of each process.
If Max[i][j]=k, then process Pi may request at most k instances of resource type
Rj.
iii) Allocation:-An n*m matrix defines the number of resources of each type currently
allocated to each process.
If Allocation[i][j]=k, then process Pi is currently allocated k instances of
resource type Rj.
iv) Need:-An n*m matrix indicates the remaining resources need of each process.
If Need[i][j]=k, then process Pi may need k more instances of resource type
Rj to complete its task. So,
Need[i][j]=Max[i][j] - Allocation[i][j]
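The data structures above translate directly into a safety-check routine. This is a minimal sketch of the Banker's safety algorithm; the sample values are a commonly used five-process, three-resource illustration, assumed here for demonstration:

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety check: return True if the state is safe.
    available: length-m vector; max_demand, allocation: n x m matrices."""
    n, m = len(max_demand), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        # find a process whose remaining need can be met with Work
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend Pi runs to completion and releases its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                break
        else:
            # no runnable candidate remains: safe iff every process finished
            return all(finished)

available  = [3, 3, 2]
max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_demand, allocation))   # True (a safe sequence exists)
```

A request is granted only if pretending to allocate it still leaves `is_safe` returning True; otherwise the requesting process waits.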
Race Conditions
In operating systems, processes that are working together share some common storage
(main memory, a file, etc.) that each process can read and write.