Dr.A.GAYATHRI
Assistant Professor(Senior)
SITE,
VIT, VELLORE.
CPU Scheduling
Basic Concepts
Scheduling Criteria
Scheduling Algorithms
Thread Scheduling
Multiple-Processor Scheduling
Real-Time CPU Scheduling
Operating Systems Examples
Algorithm Evaluation
Objectives
To introduce CPU scheduling, which is the
basis for multi-programmed operating
systems.
To describe various CPU-scheduling algorithms.
To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.
Alternating sequence of CPU and I/O
bursts
Basic Concepts
Maximum CPU utilization is obtained with
multiprogramming.
CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.
Histogram of CPU-burst Times
Pre-emptive and Non Pre-emptive
Scheduling
The Scheduling algorithms can be divided into two
categories with respect to how they deal with clock
interrupts.
Non Pre-emptive Scheduling
A scheduling discipline is nonpre-emptive if, once a process
has been given the CPU, the CPU cannot be taken away from
that process.
Pre-emptive Scheduling
A scheduling discipline is pre-emptive if, once a process has been given the CPU, the CPU can be taken away from that process.
Pre-emptive Scheduling
The strategy of allowing processes that are logically
running to be temporarily suspended is called Pre-
emptive Scheduling.
It is in contrast to the "run to completion" method.
Non Pre-emptive Scheduling
In non pre-emptive scheduling systems, short jobs are
made to wait by longer jobs but the overall treatment of
all processes is fair.
In non pre-emptive systems, response times are more predictable, because incoming high-priority jobs cannot displace waiting jobs.
Non Pre-emptive Scheduling
In non pre-emptive scheduling, a scheduler stops the execution of jobs only in the following two situations:
1. When a process switches from the running state to the waiting state.
2. When a process terminates.
CPU Scheduler
Short-term scheduler selects from among the
processes in ready queue, and allocates the
CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when
a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
Scheduling under 1 and 4 is nonpreemptive
CPU Scheduler
All other scheduling is preemptive.
Pre-emptive Scheduling Problems
Consider access to shared data.
Consider preemption while in kernel mode.
Consider interrupts occurring during crucial OS
activities.
Dispatcher
Dispatcher module gives control of the CPU
to the process selected by the short-term
scheduler; this involves:
◦ switching context
◦ switching to user mode
◦ jumping to the proper location in the user program
to restart that program
Dispatch latency – time it takes for the
dispatcher to stop one process and start
another running.
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their
execution per time unit
Turnaround time – amount of time to execute a
particular process
Waiting time – amount of time a process has been
waiting in the ready queue
Response time – amount of time it takes from
when a request was submitted until the first
response is produced, not output (for time-
sharing environment)
Scheduling Algorithm Optimization
Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First- Come, First-Served (FCFS)
Scheduling
Process   Burst Time
P1        24
P2        3
P3        3
Suppose that the processes arrive in the order: P1, P2, P3
The Gantt Chart for the schedule is:
P1 (0-24), P2 (24-27), P3 (27-30)
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2, P3, P1
The Gantt chart for the schedule is:
P2 (0-3), P3 (3-6), P1 (6-30)
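The two orderings above can be checked with a minimal sketch (assuming, as in the slides, that all three processes arrive at time 0):

```python
def fcfs_waits(bursts):
    """Waiting time of each process when run First-Come, First-Served
    in the given order (all processes arrive at time 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # each process waits for all earlier bursts
        clock += b
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3): waits 0, 24, 27 -> average 17
print(sum(fcfs_waits([24, 3, 3])) / 3)   # 17.0
# Order P2, P3, P1: waits 0, 3, 6 -> average 3
print(sum(fcfs_waits([3, 3, 24])) / 3)   # 3.0
```

The large gap between the two averages is the point of the example: under FCFS, one long burst at the head of the queue delays every short job behind it.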
Shortest-Job-First (SJF) Scheduling
Associate with each process the length of its next CPU burst.
◦ Use these lengths to schedule the process with the
shortest time.
SJF is optimal – gives minimum average
waiting time for a given set of processes
◦ The difficulty is knowing the length of the next CPU
request.
◦ Could ask the user.
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
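A minimal sketch of non-preemptive SJF for the table above. (The slide does not show the schedule; this version honors the arrival times, so at each completion the scheduler picks the shortest burst among the processes that have already arrived.)

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns a dict of waiting
    times. Non-preemptive SJF: whenever the CPU is free, run the ready
    process with the shortest next CPU burst."""
    waits, clock = {}, 0
    pending = sorted(procs, key=lambda p: p[1])   # sorted by arrival
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock = pending[0][1]
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = clock - arrival
        clock += burst
        pending.remove((name, arrival, burst))
    return waits

w = sjf_nonpreemptive([("P1", 0, 6), ("P2", 2, 8), ("P3", 4, 7), ("P4", 5, 3)])
print(w)                      # {'P1': 0, 'P4': 1, 'P3': 5, 'P2': 14}
print(sum(w.values()) / 4)    # average waiting time 5.0
```

The resulting order is P1, P4, P3, P2: the long P2 burst runs last even though it arrived second.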
Determining Length of Next CPU Burst
Can only estimate the length – should be similar to
the previous one
◦ Then pick process with shortest predicted next CPU burst
Can be done by using the length of previous CPU bursts, using exponential averaging:
τn+1 = α tn + (1 - α) τn
where tn is the actual length of the nth CPU burst, τn+1 is the predicted value for the next CPU burst, and 0 ≤ α ≤ 1
Commonly, α set to ½
Preemptive version called shortest-remaining-time-first
Prediction of the Length of the Next CPU Burst
Examples of Exponential Averaging
α = 0
◦ τn+1 = τn
◦ Recent history does not count
α = 1
◦ τn+1 = tn
◦ Only the actual last CPU burst counts
If we expand the formula, we get:
τn+1 = α tn + (1 - α) α tn-1 + … + (1 - α)^j α tn-j + … + (1 - α)^(n+1) τ0
Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor
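The recurrence can be run numerically. This sketch assumes τ0 = 10 and α = 1/2 with the observed bursts 6, 4, 6, 4, 13, 13, 13 (the values typically used in the prediction figure referenced above):

```python
def predict_bursts(actual, tau0=10, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1 - alpha)*tau_n.
    Returns the sequence of predictions, starting with the initial guess."""
    taus = [tau0]
    for t in actual:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```

Note how the prediction trails the actual bursts: after the jump to 13, it climbs toward 13 but never overshoots, since each new burst only contributes weight α.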
Example of Shortest-remaining-time-first
Process   Arrival Time   Burst Time
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Preemptive SJF Gantt chart: P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26)
Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)] / 4 = 26/4 = 6.5 ms
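A unit-time simulation reproduces the chart. (The process parameters here are an assumption, taken from the standard shortest-remaining-time-first example: P1 arrives at 0 with burst 8, P2 at 1 with 4, P3 at 2 with 9, P4 at 3 with 5 — consistent with the preemption points 0, 1, 5, 10, 17, 26.)

```python
def srtf_schedule(procs):
    """procs: list of (name, arrival, burst). Simulates one time unit at
    a time, always running the ready process with the shortest remaining
    time, then compresses the per-unit timeline into Gantt segments."""
    rem = {name: burst for name, arrival, burst in procs}
    arrive = {name: arrival for name, arrival, burst in procs}
    clock, timeline = 0, []
    while rem:
        ready = [n for n in rem if arrive[n] <= clock]
        if not ready:
            timeline.append(None)          # CPU idle this time unit
            clock += 1
            continue
        n = min(ready, key=lambda x: (rem[x], arrive[x]))
        timeline.append(n)
        rem[n] -= 1
        if rem[n] == 0:
            del rem[n]
        clock += 1
    segs, start = [], 0                    # compress into (name, start, end)
    for i in range(1, len(timeline) + 1):
        if i == len(timeline) or timeline[i] != timeline[i - 1]:
            segs.append((timeline[start], start, i))
            start = i
    return segs

print(srtf_schedule([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]))
# [('P1', 0, 1), ('P2', 1, 5), ('P4', 5, 10), ('P1', 10, 17), ('P3', 17, 26)]
```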
Priority Scheduling
A priority number (integer) is associated with each process.
The CPU is allocated to the process with the highest priority (in many systems, smallest integer = highest priority).
SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time.
Problem: starvation – low-priority processes may never execute.
Solution: aging – gradually increase the priority of processes that wait in the system for a long time.
Example of Priority Scheduling
Round Robin (RR)
Each process gets a small unit of CPU time (a time quantum q), usually 10-100 milliseconds.
After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then no process waits more than (n - 1)q time units.
Round Robin (Cont..)
Timer interrupts every quantum to schedule
next process.
Performance
◦ q large ⇒ RR behaves the same as FIFO
◦ q small ⇒ many context switches; q must be large with respect to the context-switch time, otherwise the overhead is too high.
Example of RR with Time Quantum = 4
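A minimal sketch of this example, assuming it reuses the FCFS process set (P1 = 24, P2 = 3, P3 = 3, all arriving at time 0) with q = 4:

```python
from collections import deque

def round_robin(bursts, q):
    """bursts: {name: burst}, all arriving at time 0. Returns the
    (name, start, end) segments of a round-robin schedule with quantum q."""
    queue = deque(bursts.items())
    clock, segs = 0, []
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        segs.append((name, clock, clock + run))
        clock += run
        if rem > run:
            queue.append((name, rem - run))  # unfinished: back of the queue
    return segs

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4))
# Starts [('P1', 0, 4), ('P2', 4, 7), ('P3', 7, 10), ('P1', 10, 14), ...];
# P1 then runs alone in 4 ms slices until t = 30
```

Typically, RR gives higher average turnaround than SJF, but better response: P2 and P3 get the CPU within 10 ms instead of waiting behind all of P1.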
Time Quantum and Context Switch Time
Turnaround Time Varies With The Time Quantum
Multilevel Queue
Ready queue is partitioned into separate queues, e.g.:
◦ foreground (interactive)
◦ background (batch)
Processes are permanently assigned to a given queue.
Each queue has its own scheduling
algorithm:
◦ foreground – RR
◦ background – FCFS
Multilevel Queue
Scheduling must be done between the
queues:
◦ Fixed-priority scheduling; i.e., serve all from foreground, then from background. Possibility of starvation.
◦ Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS.
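The fixed-priority option can be sketched with two queues (job names and burst lengths are illustrative; the model assumes all jobs are present at time 0, with no new arrivals):

```python
from collections import deque

def two_level(fg_jobs, bg_jobs, q=2):
    """Fixed-priority multilevel queue: the foreground queue (round-robin,
    quantum q) always runs first; the background queue (FCFS) runs only
    when the foreground queue is empty -- which is how starvation arises."""
    fg, bg = deque(fg_jobs.items()), deque(bg_jobs.items())
    clock, trace = 0, []
    while fg or bg:
        if fg:
            name, rem = fg.popleft()
            run = min(q, rem)
            if rem > run:
                fg.append((name, rem - run))   # unfinished: back of RR queue
        else:
            name, run = bg.popleft()           # FCFS: run to completion
        trace.append((name, clock, clock + run))
        clock += run
    return trace

print(two_level({"I1": 3, "I2": 2}, {"B1": 4}))
# [('I1', 0, 2), ('I2', 2, 4), ('I1', 4, 5), ('B1', 5, 9)]
```

Background job B1 only runs once the foreground queue drains; with a steady stream of interactive jobs it would never run at all.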
Multilevel Queue Scheduling
Two common options are strict priority (no job in a lower-priority queue runs until all higher-priority queues are empty) and round-robin (each queue gets a time slice in turn, possibly of different sizes).
Note that under this algorithm jobs cannot switch from queue to queue – once assigned a queue, that is their queue until they finish.
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way.
A multilevel-feedback-queue scheduler is defined by the following parameters:
◦ number of queues
◦ scheduling algorithm for each queue
◦ method used to determine when to upgrade a process
◦ method used to determine when to demote a process
◦ method used to determine which queue a process will enter when it needs service
Example of Multilevel Feedback Queue
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Example of Multilevel Feedback Queue (Cont.)
Scheduling:
A new job enters queue Q0 which is served
FCFS
When it gains CPU, job receives 8 milliseconds
If it does not finish in 8 milliseconds, job is
moved to queue Q1
At Q1 job is again served FCFS and receives
16 additional milliseconds
If it still does not complete, it is preempted
and moved to queue Q2
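The three-queue example above can be sketched directly. (The job names and burst lengths are illustrative; the model simplifies by assuming all jobs are present at time 0 with no I/O, so preemption of a lower queue by a new Q0 arrival never arises.)

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Three-level feedback queue: Q0 (RR, q=8), Q1 (RR, q=16), Q2 (FCFS).
    New jobs enter Q0; a job that uses its entire quantum is demoted one
    level. Returns (name, queue_level, start, end) segments."""
    queues = [deque(bursts.items()), deque(), deque()]
    clock, trace = 0, []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, rem = queues[level].popleft()
        run = rem if level == 2 else min(quanta[level], rem)
        trace.append((name, level, clock, clock + run))
        clock += run
        if rem > run:                      # used the whole quantum: demote
            queues[level + 1].append((name, rem - run))
    return trace

print(mlfq({"A": 30, "B": 5}))
# [('A', 0, 0, 8), ('B', 0, 8, 13), ('A', 1, 13, 29), ('A', 2, 29, 35)]
```

The long job A drifts down to the FCFS queue, while the short job B finishes entirely inside Q0 — the behavior the slide describes.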
Multiple-Processor Scheduling
CPU scheduling is more complex when multiple CPUs are available.
We assume homogeneous processors within a multiprocessor.
Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing.
Multiple-Processor Scheduling
Symmetric multiprocessing (SMP) – each
processor is self-scheduling, all processes in
common ready queue, or each has its own
private queue of ready processes
◦ Currently, most common
Processor affinity – process has affinity for
processor on which it is currently running
◦ soft affinity
◦ hard affinity
◦ Variations including processor sets
Load Balancing – Multiple-Processor Scheduling
If SMP, need to keep all CPUs loaded for efficiency; load balancing attempts to keep the workload evenly distributed.
◦ Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from overloaded CPUs to idle or less-busy CPUs.
◦ Pull migration – an idle processor pulls a waiting task from a busy processor.
Multi-core Processors
The recent trend is to place multiple processor cores on the same physical chip; multiple hardware threads per core is also growing.
◦ Takes advantage of memory stall to make progress on another thread while the memory retrieval happens.
Multithreaded Multicore System
Multi-core Processors
Traditional Symmetric Multi-Processing required
multiple CPU chips to run multiple kernel
threads concurrently.
Recent trends are to put multiple CPUs ( cores )
onto a single chip, which appear to the system
as multiple processors.
Compute cycles can be blocked by the time needed to access memory whenever the needed data is not already present in the cache (a cache miss). In the figure, as much as half of the CPU cycles are lost to memory stall.
Multi-core Processors
By assigning multiple kernel threads to a
single processor, memory stall can be
avoided ( or reduced ) by running one thread
on the processor while the other thread waits
for memory.
A dual-threaded dual-core system presents four logical processors to the operating system.
Multi-core Processors
There are two ways to multi-thread a
processor:
◦ Coarse-grained multithreading switches between
threads only when one thread blocks, say on a
memory read. Context switching is similar to
process switching, with considerable overhead.
◦ Fine-grained multithreading occurs on smaller
regular intervals, say on the boundary of instruction
cycles. However the architecture is designed to
support thread switching, so the overhead is
relatively minor.
Multi-core Processors
In a multi-threaded multi-core system, there
are two levels of scheduling, at the kernel level:
◦ The OS schedules which kernel thread(s) to assign to which
logical processors, and when to make context switches
using algorithms as described above.
◦ On a lower level, the hardware schedules logical processors
on each physical core using some other algorithm.
The UltraSPARC T1 uses a simple round-robin method to schedule the 4 logical processors (kernel threads) on each physical core.
The Intel Itanium is a dual-core chip which uses a 7-level priority scheme (urgency) to determine which thread to schedule when one of 5 different events occurs.
Real-Time CPU Scheduling
Can present obvious challenges.
Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled.
Hard real-time systems – a task must be serviced by its deadline.
Real-Time CPU Scheduling
Two types of latencies affect performance:
Interrupt latency – time from the arrival of an interrupt to the start of the routine that services it.
Dispatch latency – time for the scheduler to take the current process off the CPU and switch to another.
Priority-based Scheduling
For real-time scheduling, scheduler must
support preemptive, priority-based
scheduling
But only guarantees soft real-time
For hard real-time must also provide ability
to meet deadlines
Virtualization and Scheduling
Virtualization software schedules
multiple guests onto CPU(s).
Each guest does its own scheduling:
◦ not knowing it does not own the CPUs
◦ can result in poor response time
◦ can affect time-of-day clocks in guests
Can undo the good scheduling-algorithm efforts of guests.
Rate-Monotonic Scheduling
A priority is assigned to each task based on the inverse of its period:
shorter period = higher priority;
longer period = lower priority.
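A small sketch of the priority rule, plus the Liu–Layland utilization bound commonly used with rate-monotonic scheduling. (The bound is not on the slide; it is a standard sufficient test. The P1/P2 parameters are an assumption, matching the usual RM worked example: P1 with period 50 and burst 20, P2 with period 100 and burst 35.)

```python
def rm_priorities(periods):
    """periods: {task: period}. Rate-monotonic: shorter period = higher
    priority. Returns task names from highest to lowest priority."""
    return sorted(periods, key=periods.get)

def rm_schedulable(tasks):
    """tasks: {task: (burst, period)}. Liu-Layland sufficient test: if
    total utilization <= n*(2^(1/n) - 1), RM meets all deadlines.
    (Sufficient but not necessary: failing it does not prove a miss.)"""
    n = len(tasks)
    u = sum(c / p for c, p in tasks.values())
    return u <= n * (2 ** (1 / n) - 1)

print(rm_priorities({"P1": 50, "P2": 100}))               # ['P1', 'P2']
print(rm_schedulable({"P1": (20, 50), "P2": (35, 100)}))  # True: U = 0.75 <= 0.828
print(rm_schedulable({"P1": (25, 50), "P2": (35, 80)}))   # False: U = 0.9375
```

The second task set is the kind that produces the missed deadline shown in the next figure: its utilization exceeds the two-task bound of about 0.828.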
Missed Deadlines with Rate Monotonic Scheduling
Earliest-Deadline-First (EDF) Scheduling
Priorities are assigned dynamically according to deadlines:
the earlier the deadline, the higher the priority;
the later the deadline, the lower the priority.
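A unit-time sketch of EDF for periodic tasks (the task set T1 = 2-out-of-5, T2 = 2-out-of-4 is illustrative, not from the slide). Each period releases a job whose absolute deadline is the end of that period; the scheduler always runs the ready job with the earliest deadline:

```python
def edf_simulate(tasks, horizon):
    """tasks: {name: (burst, period)}. Unit-time EDF simulation over
    [0, horizon). Returns the per-unit trace (task name or None if idle)."""
    rem, deadline, trace = {}, {}, []
    for t in range(horizon):
        for name, (c, p) in tasks.items():
            if t % p == 0:                 # new job released this tick
                rem[name], deadline[name] = c, t + p
        ready = [n for n in rem if rem[n] > 0]
        if not ready:
            trace.append(None)             # CPU idle
            continue
        n = min(ready, key=lambda x: deadline[x])
        trace.append(n)
        rem[n] -= 1
    return trace

print(edf_simulate({"T1": (2, 5), "T2": (2, 4)}, 10))
# ['T2', 'T2', 'T1', 'T1', 'T2', 'T2', 'T1', 'T1', 'T2', 'T2']
```

Unlike rate-monotonic scheduling, the priority ordering here changes over time: T2 outranks T1 only while its current deadline is nearer.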
Algorithm Evaluation
How to select CPU-scheduling algorithm
for an OS?
Determine criteria, then evaluate
algorithms
Deterministic modeling – a type of analytic evaluation that takes a particular predetermined workload and computes the performance of each algorithm for that workload.
Algorithm Evaluation
Consider 5 processes arriving at time 0:
Process   Burst Time
P1        10
P2        29
P3        3
P4        7
P5        12
Deterministic Evaluation
For each algorithm, calculate minimum average
waiting time
Simple and fast, but requires exact numbers for
input, applies only to those inputs
FCFS gives an average waiting time of 28 ms.
RR (with a time quantum of 10 ms) gives 23 ms.
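Both figures can be reproduced for the five-process workload above (assuming all processes arrive at time 0; the RR quantum of 10 ms is the value that yields the slide's 23 ms):

```python
from collections import deque

def avg_wait_fcfs(bursts):
    """Average waiting time under FCFS in the given order."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)
        clock += b
    return sum(waits) / len(waits)

def avg_wait_rr(bursts, q):
    """Average waiting time under round-robin with quantum q:
    waiting time = completion time - burst time (arrival at 0)."""
    n = len(bursts)
    queue = deque(enumerate(bursts))
    clock, finish = 0, [0] * n
    while queue:
        i, rem = queue.popleft()
        run = min(q, rem)
        clock += run
        if rem > run:
            queue.append((i, rem - run))
        else:
            finish[i] = clock
    return sum(finish[i] - bursts[i] for i in range(n)) / n

bursts = [10, 29, 3, 7, 12]     # P1..P5, all arriving at time 0
print(avg_wait_fcfs(bursts))    # 28.0
print(avg_wait_rr(bursts, 10))  # 23.0
```

This is deterministic modeling in miniature: exact answers, but only for this one input.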
Queueing Models
Describes the arrival of processes, and
CPU and I/O bursts probabilistically
◦ Commonly exponential, and described by mean
◦ Computes average throughput, utilization,
waiting time, etc
Computer system described as network of
servers, each with queue of waiting
processes
◦ Knowing arrival rates and service rates
◦ Computes utilization, average queue length,
average wait time, etc
Little’s Formula
n = average queue length
W = average waiting time in queue
λ = average arrival rate into queue
Little's law: in steady state, processes leaving the queue must equal processes arriving; thus n = λ × W
◦ Valid for any scheduling algorithm and arrival distribution.
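Because n = λ × W relates three quantities, knowing any two gives the third. A minimal helper (the numbers in the example are illustrative):

```python
def littles_law(arrival_rate=None, avg_wait=None, queue_length=None):
    """n = lambda * W. Pass any two of the three; returns the missing one."""
    if queue_length is None:
        return arrival_rate * avg_wait          # n = lambda * W
    if avg_wait is None:
        return queue_length / arrival_rate      # W = n / lambda
    return queue_length / avg_wait              # lambda = n / W

# If 7 processes arrive per second and each waits 2 seconds on average,
# the queue holds 14 processes in steady state:
print(littles_law(arrival_rate=7, avg_wait=2))        # 14
# Conversely, 14 queued processes at 7 arrivals/sec implies W = 2 sec:
print(littles_law(arrival_rate=7, queue_length=14))   # 2.0
```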
Simulations
Queueing models are limited in accuracy.
Simulations are more accurate.
Evaluation of CPU Schedulers by Simulation
Implementation
Even simulations have limited accuracy; the only completely accurate way to evaluate a scheduling algorithm is to implement it, put it in the operating system, and see how it works.
High cost, high risk; environments vary.
THANK YOU