
Swami Keshvanand Institute of Technology,

Management & Gramothan


Department of Information Technology

Process Scheduling
5IT4 – 03
Credits – 03

MEHUL MAHRISHI
ASSOCIATE PROFESSOR
DEPARTMENT OF INFORMATION TECHNOLOGY
Why Scheduling?
► In a single-processor system, only one process can run at a time.
► The objective of multiprogramming is to have some process running at all
times, to maximize CPU utilization.
► A process is executed until it must wait, typically for the completion of some
I/O request.
► In a simple computer system, the CPU then just sits idle. All this waiting time
is wasted; no useful work is accomplished.
► With multiprogramming, we try to use this time productively. Several
processes are kept in memory at one time. When one process has to wait,
the operating system takes the CPU away from that process and gives the
CPU to another process.
► This activity is also called CPU scheduling or process scheduling.
CPU – I/O Burst Cycle
► A process alternates between CPU execution and I/O wait.
► The time for which a process uses the CPU is called a CPU burst; the time it spends waiting for I/O is called an I/O burst.
► Process execution begins with a CPU burst. That is followed by an I/O
burst, which is followed by another CPU burst, then another I/O burst,
and so on.
► Eventually, the final CPU burst ends with a system request to terminate
execution.
► Under preemptive scheduling, the CPU can be taken away from a running process (for example, when a shorter job arrives or a time quantum expires). Under non-preemptive scheduling, a process keeps the CPU until it terminates or switches to the waiting state.
First Come First Serve
► The simplest CPU-scheduling algorithm is the first-come, first-served (FCFS)
scheduling algorithm. With this scheme, the process that requests the CPU first is
allocated the CPU first.
► The implementation of the FCFS policy is easily managed with a FIFO queue.
► When a process enters the ready queue, its PCB is linked onto the tail of the
queue. When the CPU is free, it is allocated to the process at the head of the
queue.

Suppose the processes arrive at time 0 in the order P1, P2, P3, with burst times of 24 ms, 3 ms, and 3 ms. The waiting time is then 0 ms for P1, 24 ms for P2, and 27 ms for P3.
Thus, the average waiting time is
(0 + 24 + 27)/3 = 17 milliseconds.
If instead the same processes arrive at different times:

Process ID   Burst Time   Arrival Time
P1           24           2
P2           3            0
P3           3            1

Gantt chart:

| P2 | P3 |       P1       |
0    3    6               30

Waiting time of P2 = 0 − 0 = 0 ms
Waiting time of P3 = 3 − 1 = 2 ms
Waiting time of P1 = 6 − 2 = 4 ms
Average waiting time = (0 + 2 + 4)/3 = 6/3 = 2 ms
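The FCFS calculation above can be reproduced with a short Python sketch (the tuple layout and function name are my own, purely for illustration):

```python
# Minimal FCFS simulation: sort by arrival time, run each process to completion.
# (process_id, burst_time, arrival_time) -- values from the example above.
processes = [("P1", 24, 2), ("P2", 3, 0), ("P3", 3, 1)]

def fcfs_waiting_times(procs):
    """Return {pid: waiting_time} under first-come, first-served."""
    waiting = {}
    clock = 0
    for pid, burst, arrival in sorted(procs, key=lambda p: p[2]):
        clock = max(clock, arrival)      # CPU may sit idle until the process arrives
        waiting[pid] = clock - arrival   # time spent in the ready queue
        clock += burst                   # run to completion (non-preemptive)
    return waiting

w = fcfs_waiting_times(processes)
print(w)                          # {'P2': 0, 'P3': 2, 'P1': 4}
print(sum(w.values()) / len(w))   # 2.0
```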
Shortest Job First
► The algorithm allocates the CPU to the process that has the smallest
next CPU burst.

For four processes arriving at time 0 with burst times of 6, 8, 7, and 3 ms, the waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3,
and 0 milliseconds for process P4. Thus, the average waiting time is
(3 + 16 + 9 + 0)/4 = 7 milliseconds.
Do it yourself: apply non-preemptive SJF (all processes arrive at time 0).

Process   Burst Time
P1        21
P2        3
P3        6
P4        2

Gantt chart:

| P4 | P2 | P3 |      P1      |
0    2    5    11            32

Waiting time of P1 = 11 ms
Waiting time of P2 = 2 ms
Waiting time of P3 = 5 ms
Waiting time of P4 = 0 ms
Average waiting time = (11 + 2 + 5 + 0)/4 = 18/4 = 4.5 ms
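The non-preemptive SJF answer can be checked with a minimal Python sketch (assuming, as in the exercise, that every process arrives at time 0):

```python
# Non-preemptive SJF: when the CPU frees up, pick the shortest job available.
# (process_id, burst_time) -- values from the exercise above.
processes = [("P1", 21), ("P2", 3), ("P3", 6), ("P4", 2)]

def sjf_waiting_times(procs):
    """Return {pid: waiting_time} under non-preemptive SJF (all arrivals at t=0)."""
    waiting = {}
    clock = 0
    for pid, burst in sorted(procs, key=lambda p: p[1]):  # shortest burst first
        waiting[pid] = clock      # everything before it on the CPU is waiting time
        clock += burst
    return waiting

w = sjf_waiting_times(processes)
print(w)                          # {'P4': 0, 'P2': 2, 'P3': 5, 'P1': 11}
print(sum(w.values()) / len(w))   # 4.5
```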
Do it yourself: apply preemptive SJF (shortest remaining time first).

Process   Burst Time   Arrival Time
P1        21           0
P2        3            1
P3        6            2
P4        2            3

Gantt chart:

| P1 | P2 | P4 |  P3  |     P1     |
0    1    4    6      12          32

Note: at t = 3, P4 (burst 2 ms) arrives, but the running process P2 has only 1 ms remaining, so P2 keeps the CPU and completes at t = 4.

Waiting time of P1 = 12 − 1 = 11 ms
Waiting time of P2 = 1 − 1 = 0 ms
Waiting time of P3 = 6 − 2 = 4 ms
Waiting time of P4 = 4 − 3 = 1 ms
Average waiting time = (11 + 0 + 4 + 1)/4 = 16/4 = 4 ms
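Preemptive SJF (shortest-remaining-time-first) can be simulated one time unit at a time; this sketch is my own illustration of the rule, with the key point that an arriving process preempts only if its burst is strictly shorter than the running process's *remaining* time:

```python
# SRTF simulation, one millisecond per loop iteration.
# (pid, burst, arrival) -- values from the exercise above.
processes = [("P1", 21, 0), ("P2", 3, 1), ("P3", 6, 2), ("P4", 2, 3)]

def srtf_waiting_times(procs):
    """Return {pid: waiting_time}; waiting = completion - arrival - burst."""
    remaining = {pid: b for pid, b, _ in procs}
    arrival = {pid: a for pid, _, a in procs}
    burst = {pid: b for pid, b, _ in procs}
    completion = {}
    clock = 0
    while remaining:
        ready = [pid for pid in remaining if arrival[pid] <= clock]
        if not ready:
            clock += 1                                    # CPU idles until an arrival
            continue
        pid = min(ready, key=lambda p: remaining[p])      # shortest remaining time wins
        remaining[pid] -= 1
        clock += 1
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = clock
    return {pid: completion[pid] - arrival[pid] - burst[pid] for pid in completion}

w = srtf_waiting_times(processes)
print(w)                          # {'P2': 0, 'P4': 1, 'P3': 4, 'P1': 11}
print(sum(w.values()) / len(w))   # 4.0
```

At t = 3 the running process P2 has 1 ms left, which beats P4's 2 ms burst, so P2 is not preempted; the true SRTF average waiting time for this workload is 4 ms.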
Priority Scheduling
► A priority is associated with each process, and the CPU is allocated to the
process with the highest priority.
► An SJF algorithm is simply a priority algorithm where the priority is the
inverse of the (predicted) next CPU burst. The larger the CPU burst, the
lower the priority, and vice versa.

For example, consider five processes, all arriving at time 0, with burst times of 10, 1, 2, 1, 5 ms and priorities 3, 1, 4, 5, 2 respectively (a lower number means a higher priority). They run in the order P2, P5, P1, P3, P4, and the average waiting time is (6 + 0 + 16 + 18 + 1)/5 = 8.2 ms.
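The 8.2 ms figure can be verified with a small Python sketch (burst times and priorities follow the classic textbook example; the function name is my own):

```python
# Non-preemptive priority scheduling, all arrivals at t=0.
# (pid, burst_time, priority) -- a lower priority number means higher priority.
processes = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

def priority_waiting_times(procs):
    """Return {pid: waiting_time}; processes run in ascending priority-number order."""
    waiting = {}
    clock = 0
    for pid, burst, _prio in sorted(procs, key=lambda p: p[2]):
        waiting[pid] = clock
        clock += burst
    return waiting

w = priority_waiting_times(processes)
print(w)                          # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(w.values()) / len(w))   # 8.2
```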


► Starvation (indefinite blocking) is a problem associated with priority
scheduling: a process that is ready to run can wait indefinitely because
of its low priority.
► In a heavily loaded system, a steady stream of higher-priority
processes can prevent a low-priority process from ever getting the CPU.
► Solution to Starvation : Aging
► Aging is a technique of gradually increasing the priority of processes that
wait in the system for a long time.
► For example, if priorities range from 127 (low) to 0 (high), we could increase
the priority of a waiting process by 1 every 15 minutes. Eventually, even a
process with an initial priority of 127 would age to a priority-0 process,
taking no more than 127 × 15 minutes ≈ 32 hours.
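The aging arithmetic above is a one-liner to check:

```python
# Aging example: priorities 127 (low) .. 0 (high), raised by 1 every 15 minutes.
steps = 127              # increments needed to climb from priority 127 to 0
minutes = steps * 15     # total time spent waiting, in minutes
hours = minutes / 60
print(hours)             # 31.75 -- i.e. "no more than 32 hours"
```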
► A related pitfall : Priority Inversion
► Priority inversion is a scheduling scenario in which a high-priority task is
indirectly preempted by a lower-priority task, effectively inverting the
relative priorities of the two tasks (typically because the lower-priority
task holds a resource the high-priority task needs).
Multilevel Queue Scheduling
► We can categorize processes as Foreground (interactive) and
Background (Batch).
► Interactive processes interact directly with the user and generally have
a higher priority than batch processes.
► This type of scheduling partitions the ready queue into multiple queues.
► Processes are permanently assigned to a queue on the basis of their
properties, such as memory requirements, priority, or process type.
► Each queue can then be scheduled by a different scheduling
algorithm. For example, the system-process queue may use Round Robin
scheduling while the batch queue uses FCFS.
Multilevel Feedback Queue Scheduling
► We saw that in multilevel queue scheduling, processes are permanently
assigned to a queue, i.e. they do not move from one queue to another; their
classification never changes.
► To make scheduling more flexible, and to schedule processes on the basis of
their CPU burst behaviour rather than a fixed property, multilevel feedback
queues are used.
► Such queues allow processes to move between queues.
► A process that consumes too much CPU time is moved to a lower-priority queue,
and vice versa.
► To deal with starvation, a process that has been waiting in a low-priority
queue for a very long time can be gradually moved to a higher-priority queue.
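One possible multilevel-feedback-queue structure, sketched in Python; the three levels, their quanta, and the demotion rule are illustrative assumptions, not a specific OS's parameters:

```python
from collections import deque

# Level 0: round robin with quantum 2; level 1: quantum 4; level 2: FCFS.
QUANTA = [2, 4, None]

def mlfq(procs):
    """procs: list of (pid, burst). Returns pids in order of completion."""
    queues = [deque(), deque(), deque()]
    for pid, burst in procs:
        queues[0].append([pid, burst])    # every process starts in the top queue
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        pid, rem = queues[level].popleft()
        quantum = QUANTA[level]
        run = rem if quantum is None else min(quantum, rem)
        rem -= run
        if rem == 0:
            finished.append(pid)
        else:
            # Used its whole quantum without finishing -> demote one level.
            queues[min(level + 1, 2)].append([pid, rem])
    return finished

print(mlfq([("A", 3), ("B", 10), ("C", 1)]))   # ['C', 'A', 'B']
```

Short jobs like C finish in the top queue, while the CPU-hungry B sinks to the FCFS level, which is exactly the behaviour described above.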
Fair Share Scheduling
► Fair-share scheduling is a scheduling algorithm that was first designed by
Judy Kay and Piers Lauder at Sydney University in the 1980s.
► The scheduling algorithm dynamically distributes the time quanta
“equally” to users.
► Unlike RR scheduling, where the time quantum is handed to processes in a
circular manner, fair share distributes CPU time per user or per group.
► The algorithm distributes processor time equally among its users. For
instance, if there are 5 users (A, B, C, D, E), each simultaneously
executing a process, the scheduler divides the CPU cycles so that every
user gets the same share: 100%/5 = 20%.
► If there were 3 groups with a different number of users in each
group, the algorithm would still give each group the same share,
100%/3 ≈ 33.33%; within each group, that 33.33% is then shared equally
among the users present in the group.
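The group arithmetic above can be sketched as follows (group names and the function are my own illustration):

```python
# Fair-share arithmetic: CPU is split equally among groups, then each
# group's share is split equally among that group's users.
def user_share(groups):
    """groups: {group_name: [users]}. Returns {user: fraction of total CPU}."""
    shares = {}
    group_share = 1.0 / len(groups)          # e.g. 3 groups -> 1/3 each
    for members in groups.values():
        for user in members:
            shares[user] = group_share / len(members)
    return shares

# Three groups of different sizes: each group still gets exactly 1/3.
s = user_share({"g1": ["A", "B"], "g2": ["C"], "g3": ["D", "E", "F"]})
print(round(s["C"], 4))   # 0.3333 -- sole member keeps the whole group share
print(round(s["A"], 4))   # 0.1667 -- half of g1's one-third
```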
