
CPU Scheduling

Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Multiple-Processor Scheduling
Operating Systems Examples
Algorithm Evaluation
Basic Concepts
• Maximum CPU utilization obtained with multiprogramming
• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait
• CPU burst distribution
• Scheduling algorithm goals
CPU Scheduler
• Selects from among the processes in memory that are
ready to execute, and allocates the CPU to one of them
• CPU scheduling decisions may take place when a
process:
1.Switches from running to waiting state
2.Switches from running to ready state
3.Switches from waiting to ready
4.Terminates
• Scheduling under 1 and 4 is nonpreemptive
• All other scheduling is preemptive i.e. CPU is taken
forcibly from current process.
Dispatcher
• Dispatcher module gives control of the CPU to the process selected by the
  short-term scheduler; this involves:
  1. switching context
  2. switching to user mode
  3. jumping to the proper location in the user program to restart that program
• Dispatch latency – time it takes for the dispatcher to stop one process and
  start another running
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – # of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time it takes from when a request was submitted
  until the first response is produced, not output (for time-sharing environment)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
First-Come, First-Served (FCFS) Scheduling
Process Burst time
P1 24
P2 3
P3 3

Suppose that the processes arrive in the order: P1 , P2 , P3


The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
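The waiting-time arithmetic above is easy to replay in code. The following is a minimal C sketch (not part of the original slides), assuming the same bursts P1 = 24, P2 = 3, P3 = 3, all arriving at time 0: under FCFS each process waits for the sum of the bursts ahead of it.

#include <stdio.h>

/* FCFS waiting-time sketch for the example above:
 * P1=24, P2=3, P3=3, all arriving at time 0 in that order. */
int main(void) {
    int burst[] = {24, 3, 3};          /* bursts in arrival order */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];              /* the next process also waits for this burst */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

Running it prints waiting times 0, 24, 27 and the average 17, matching the chart.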
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order
P2 , P3 , P1
• The Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect – short process behind long process
FCFS Scheduling (Cont.)
• Advantages –
– It is simple and easy to understand.

• Disadvantages –
– Processes with shorter execution times suffer, i.e., their waiting time is often
  quite long.

– Favors CPU-bound processes over I/O-bound processes.

– The first process gets the CPU first; the other processes can get the CPU only
  after the current process has finished its execution. If the first process has a
  large burst time while the remaining processes have short bursts, they must wait
  unnecessarily long, which increases the average waiting time, i.e., the convoy
  effect.

– This effect results in lower CPU and device utilization.

– FCFS is particularly troublesome for time-sharing systems, where it is
  important that each user get a share of the CPU at regular intervals.
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its
next CPU burst. Use these lengths to schedule
the process with the shortest time
• Two schemes:
– nonpreemptive – once CPU given to the
process it cannot be preempted until
completes its CPU burst
– preemptive – if a new process arrives with CPU
burst length less than remaining time of
current executing process, preempt. This
scheme is known as the
Shortest-Remaining-Time-First (SRTF)
• SJF is optimal – gives minimum average waiting
time for a given set of processes
Example of Non-Preemptive SJF
PROCESS ARRIVAL TIME BURST TIME

P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

• SJF (non-preemptive)
• Average waiting time = (0 + 6 + 3 + 7)/4 = 4

P1 P3 P2 P4

0 7 8 12 16
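This schedule can be reproduced with a short C sketch (illustrative, not from the slides): at every scheduling point, pick the arrived, unfinished process with the shortest burst and run it to completion. The arrival and burst values are the ones from the table above; ties are broken by lower process index.

#include <stdio.h>

/* Non-preemptive SJF sketch for the example above. */
int main(void) {
    double arrival[] = {0.0, 2.0, 4.0, 5.0};
    int    burst[]   = {7, 4, 1, 4};
    int    done[4]   = {0};
    int n = 4, finished = 0;
    double t = 0.0, total_wait = 0.0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)            /* shortest arrived, unfinished job */
            if (!done[i] && arrival[i] <= t &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { t += 1.0; continue; }  /* CPU idle until the next arrival */
        double wait = t - arrival[pick];
        printf("P%d starts at %.0f, waited %.0f\n", pick + 1, t, wait);
        total_wait += wait;
        t += burst[pick];                      /* runs to completion (non-preemptive) */
        done[pick] = 1;
        finished++;
    }
    printf("Average waiting time = %.2f\n", total_wait / n);
    return 0;
}

The output order P1, P3, P2, P4 and the average waiting time 4 match the Gantt chart.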
Example of Preemptive SJF (SRTF/SRTN)

PROCESS ARRIVAL TIME BURST TIME

P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

• SJF (preemptive)
• Average waiting time = (9 + 1 + 0 +2)/4 = 3

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16
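The preemptive variant can be replayed by advancing time one unit at a time and always running the arrived process with the least remaining work. A minimal C sketch, assuming the same arrival/burst data as above:

#include <stdio.h>

/* SRTF (preemptive SJF) sketch: at each unit time step the arrived process
 * with the least remaining burst runs.
 * Waiting time = completion time - arrival time - burst time. */
int main(void) {
    int arrival[] = {0, 2, 4, 5};
    int burst[]   = {7, 4, 1, 4};
    int remain[]  = {7, 4, 1, 4};
    int finish[4] = {0};
    int n = 4, left = n, t = 0;

    while (left > 0) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (remain[i] > 0 && arrival[i] <= t &&
                (pick < 0 || remain[i] < remain[pick]))
                pick = i;
        if (pick < 0) { t++; continue; }       /* idle */
        remain[pick]--;
        t++;
        if (remain[pick] == 0) { finish[pick] = t; left--; }
    }
    double total_wait = 0.0;
    for (int i = 0; i < n; i++) {
        int wait = finish[i] - arrival[i] - burst[i];
        printf("P%d: waiting time %d\n", i + 1, wait);
        total_wait += wait;
    }
    printf("Average waiting time = %.2f\n", total_wait / n);
    return 0;
}

This yields waiting times 9, 1, 0, 2 and the average 3, as in the example.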
Advantages and Disadvantages of SJF (Preemptive and Non-Preemptive)
• Advantages –
– Shortest jobs are favored.
– It is provably optimal, in that it gives the minimum
average waiting time for a given set of processes.
• Disadvantages –
– SJF may cause starvation, if shorter processes keep
coming. This problem is solved by aging.
– It cannot be implemented at the level of short
term CPU scheduling.
Determining Length of Next CPU Burst
• Although SJF is optimal, it cannot be applied at the level of short-term CPU
scheduling.
• There is no way to know the length of the next CPU burst.
• One approach is to approximate SJF scheduling.
• We may not be able to know next CPU burst but we may be able to
predict its value.
• We expect that the next CPU burst will be similar in length to the
previous ones.
• Let tn be the length of the nth CPU burst and τn+1 our predicted value for the
next CPU burst. Then, for α with 0 ≤ α ≤ 1, define:

• τn+1 = α tn + (1 − α) τn      This is the EXPONENTIAL AVERAGE


Examples of Exponential Averaging
• tn contains the most recent information
• τn stores past history
• α = 0:
  τn+1 = τn
  Recent history does not count
• α = 1:
  τn+1 = tn
  Only the actual last CPU burst counts
• If we expand the formula, we get:
  τn+1 = α tn + (1 − α) α tn−1 + … + (1 − α)^j α tn−j + … + (1 − α)^(n+1) τ0
• Since both α and (1 − α) are less than or equal to 1, each successive term has
  less weight than its predecessor
Prediction of the Length of the Next CPU Burst

Figure shows the exponential average with α = 1/2 and τ0 = 10
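The recurrence is straightforward to compute. A small C sketch, assuming α = 1/2 and τ0 = 10 as in the figure; the observed burst lengths here are illustrative values, not measurements from the slides:

#include <stdio.h>

/* Exponential-average sketch: tau(n+1) = alpha*t(n) + (1-alpha)*tau(n),
 * with alpha = 0.5 and tau0 = 10. The observed bursts are illustrative. */
int main(void) {
    double alpha = 0.5, tau = 10.0;            /* tau0 = 10 */
    double observed[] = {6, 4, 6, 4, 13, 13, 13};
    int n = sizeof observed / sizeof observed[0];

    for (int i = 0; i < n; i++) {
        printf("burst %d: observed t = %.0f, predicted tau = %.2f\n",
               i + 1, observed[i], tau);
        tau = alpha * observed[i] + (1.0 - alpha) * tau;   /* next prediction */
    }
    printf("next prediction tau = %.2f\n", tau);
    return 0;
}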


Priority Scheduling
• A priority number (integer) is associated with each
process
• The CPU is allocated to the process with the highest
priority (smallest integer ≡ highest priority)
– Preemptive
– nonpreemptive
• SJF is a priority scheduling where priority is the
predicted next CPU burst time
• Problem ≡ Starvation – low-priority processes may never
execute
• Solution ≡ Aging – as time progresses, increase the
priority of the process
Priority Scheduling
• The priority of a process, when internally defined, can
be decided based on memory requirements, time
limits, number of open files, ratio of I/O burst to
CPU burst, etc.

• External priorities, in contrast, are set based on
criteria outside the operating system, such as the
importance of the process, the funds paid for
computer resource use, market factors, etc.
Priority Scheduling
• Preemptive Priority Scheduling: If a new process arriving at
the ready queue has a higher priority than the currently
running process, the CPU is preempted: the processing of the
current process is stopped and the incoming process with the
higher priority gets the CPU for its execution.

• Non-Preemptive Priority Scheduling: If a new process arrives
with a higher priority than the currently running process, the
incoming process is put at the head of the ready queue, which
means it will be processed after the execution of the current
process completes.
Priority Scheduling
PRIORITY PREEMPTIVE SCHEDULING
• If a process with a higher priority than the process currently being executed
  arrives, the CPU is preempted and given to the higher-priority process.
• Preemptive scheduling is more flexible.
• The waiting time for the process having the highest priority will always be zero.
• It is more expensive and difficult to implement; a lot of time is also wasted in
  switching.
• It is useful in applications where high-priority processes cannot be kept waiting.

PRIORITY NON-PREEMPTIVE SCHEDULING
• Once resources are allocated to a process, the process holds them till it
  completes its burst time, even if a process with a higher priority is added to
  the queue.
• Non-preemptive scheduling is rigid.
• The waiting time for the process having the highest priority may not be zero.
• It is cheaper to implement and faster, as less switching is required.
• It can be used in various hardware applications where waiting will not cause
  any serious issues.
Example of NON-PREEMPTIVE Priority Scheduling

[The example process table and the resulting Gantt chart appear as a figure.]
Example of PREEMPTIVE Priority Scheduling

[The example process table and the resulting Gantt chart appear as a figure.]
Problem with Priority Scheduling
• In the priority scheduling algorithm, there is a chance of
indefinite blocking, or starvation.

• A process is considered blocked when it is ready to run
but has to wait for the CPU because some other process is
currently running.

• In priority scheduling, if new higher-priority processes
keep coming into the ready queue, the lower-priority
processes waiting in the ready queue may have to wait for
long durations before getting the CPU for execution.
Solution for Priority Scheduling
• To prevent starvation of any process, we can use the
concept of aging, where we keep increasing the priority of a
low-priority process based on its waiting time.
• For example, suppose the aging factor is 0.5 for each day
of waiting and a process with priority 20 (a comparatively
low priority) enters the ready queue. After one day of
waiting, its priority is increased to 19.5, and so on.
• Doing so, we can ensure that no process will have to wait
indefinitely to get CPU time for processing.
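A tiny C sketch of this aging rule, for illustration only: the factor 0.5 per day and the starting priority 20 come from the example above, and smaller numbers mean higher priority.

#include <stdio.h>

/* Aging sketch: a low-priority process (priority 20) gains 0.5 of
 * priority for each day it waits in the ready queue. */
int main(void) {
    double priority = 20.0;            /* starting (low) priority from the example */
    double aging_per_day = 0.5;

    for (int day = 1; day <= 5; day++) {
        priority -= aging_per_day;     /* smaller number = higher priority */
        printf("after day %d: priority %.1f\n", day, priority);
    }
    return 0;
}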
ROUND ROBIN Scheduling
Advantages:
1. There is fairness, since every process gets an equal share of the CPU.
2. The newly created process is added to the end of the ready queue.
3. A round-robin scheduler generally employs time-sharing, giving each job a
   time slot or quantum.
4. A particular time quantum is allotted to each job.
5. Each process gets a chance to be rescheduled after a particular quantum of
   time.

Disadvantages:
1. Waiting time and response time are larger.
2. Throughput is low.
3. Context switches add overhead.
4. The Gantt chart becomes very large if the time quantum is small (for
   example, 1 ms for a long schedule).
5. Scheduling is time-consuming for small quanta.
ROUND ROBIN Scheduling Example

[The example process tables and the resulting Gantt charts appear as figures.]
Round Robin (RR)

• Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the
end of the ready queue.
• If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU
time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
• Performance
– q large ⇒ behaves like FIFO
– q small ⇒ q must be large with respect to the context-switch time,
otherwise overhead is too high
Example of RR with Time Quantum = 20

Process Burst Time


P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

• Typically, higher average turnaround than SJF, but better response
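The schedule above can be replayed with a short round-robin simulation. A C sketch, assuming q = 20 and the burst values from this example, with all processes arriving at time 0 (so cycling the processes in index order matches the FIFO ready queue):

#include <stdio.h>

/* Round-robin sketch for the example above: q = 20,
 * bursts P1=53, P2=17, P3=68, P4=24, all arriving at time 0. */
int main(void) {
    int remain[] = {53, 17, 68, 24};
    int n = 4, q = 20, t = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {          /* cycle the ready processes in order */
            if (remain[i] == 0) continue;
            int slice = remain[i] < q ? remain[i] : q;
            printf("t=%3d: P%d runs for %d\n", t, i + 1, slice);
            t += slice;
            remain[i] -= slice;
            if (remain[i] == 0) left--;
        }
    }
    printf("all processes done at t=%d\n", t);
    return 0;
}

The printed sequence P1, P2, P3, P4, P1, P3, P4, P1, P3, P3 ending at t = 162 matches the Gantt chart.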


Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
Multilevel Queue
• Ready queue is partitioned into
separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling
algorithm
– foreground – RR
– background – FCFS
• Scheduling must be done between
the queues
– Fixed priority scheduling; (i.e., serve
all from foreground then from
background). Possibility of
starvation.
– Time slice – each queue gets a
certain amount of CPU time which it
can schedule amongst its processes;
i.e., 80% to foreground in RR and
20% to background in FCFS
Multilevel Feedback Queue
• A process can move between the various queues; aging can be
  implemented this way
• A multilevel-feedback-queue scheduler is defined by the following
  parameters:
  – number of queues
  – scheduling algorithms for each queue
  – method used to determine when to upgrade a process
  – method used to determine when to demote a process
  – method used to determine which queue a process will enter when
    that process needs service

Example of Multilevel Feedback Queue
• Three queues:
  – Q0 – RR with time quantum 8 milliseconds
  – Q1 – RR with time quantum 16 milliseconds
  – Q2 – FCFS
• Scheduling
  – A new job enters queue Q0, which is served FCFS. When it gains the
    CPU, the job receives 8 milliseconds. If it does not finish in 8
    milliseconds, the job is moved to queue Q1.
  – At Q1 the job is again served FCFS and receives 16 additional
    milliseconds. If it still does not complete, it is preempted and
    moved to queue Q2.
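A compact C sketch of this three-queue policy, under simplifying assumptions: all jobs arrive at time 0, a lower queue runs only once the queues above it are empty, and the CPU demands are made-up values chosen just to show the demotion path.

#include <stdio.h>

/* Multilevel-feedback-queue sketch of the three-queue example:
 * Q0 = RR with q=8, Q1 = RR with q=16, Q2 = FCFS.
 * Simplification: all jobs arrive at t=0; bursts are illustrative. */
int main(void) {
    int remain[]  = {5, 20, 40};       /* illustrative CPU demands */
    int quantum[] = {8, 16};           /* Q0 and Q1 time quanta */
    int n = 3, t = 0;

    for (int level = 0; level < 2; level++)        /* Q0, then Q1 */
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0) continue;
            int slice = remain[i] < quantum[level] ? remain[i] : quantum[level];
            printf("t=%3d: Q%d runs P%d for %d\n", t, level, i + 1, slice);
            t += slice;
            remain[i] -= slice;
            if (remain[i] > 0 && slice == quantum[level])
                printf("        P%d demoted to Q%d\n", i + 1, level + 1);
        }
    for (int i = 0; i < n; i++)                    /* Q2: FCFS, run to completion */
        if (remain[i] > 0) {
            printf("t=%3d: Q2 runs P%d for %d (FCFS)\n", t, i + 1, remain[i]);
            t += remain[i];
            remain[i] = 0;
        }
    return 0;
}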
Multiple-Processor Scheduling
• CPU scheduling is more complex when multiple CPUs are available
• Homogeneous processors within a multiprocessor
• Load sharing
• Asymmetric multiprocessing – only one processor accesses the system
  data structures, alleviating the need for data sharing

Real-Time Scheduling
• Hard real-time systems – required to complete a critical task within
  a guaranteed amount of time
• Soft real-time computing – requires that critical processes receive
  priority over less fortunate ones
