
Operating Systems

ECEG-5202

CPU Scheduling

Surafel Lemma Abebe (Ph. D.)


Outline
• Basic Concepts
• Scheduling Criteria
• Process Scheduling
• Multiple-Processor Scheduling



Basic concepts
• On a single processor, only one process can run at a time
• What is the objective of multiprogramming?
– Maximize CPU utilization by having some process
running at all times
• Process execution consists of
– A cycle of CPU execution and I/O wait
• Execution begins with a CPU burst, which is followed by an
I/O burst, then another CPU burst, and so on



Basic concepts…
• CPU-I/O burst cycle
– Process execution alternates between
• CPU bursts
• I/O bursts
– An I/O-bound program has many short CPU
bursts
– A CPU-bound program has a few long CPU
bursts
– Best performance could be achieved by
combining I/O-bound and CPU-bound processes



Basic concepts…
• Schedulers
– Short-term scheduler (or CPU scheduler)
• Selects which process should be executed next and allocates the CPU
• Sometimes the only scheduler in a system
• Invoked frequently (in milliseconds) => must be fast
– Long-term scheduler (or job scheduler)
• Selects which processes (from those spooled to mass storage, e.g., in a batch system)
should be brought into the ready queue
• Invoked infrequently (in seconds, minutes) => may be slow
• Controls the degree of multiprogramming (# of processes in memory)
• Absent in time-sharing systems such as UNIX and MS Windows
– Medium-term scheduler
• Can be added if the degree of multiprogramming needs to be decreased by
removing a process from memory
• Remove a process from memory, store it on disk, and later bring it back from
disk to continue execution: swapping



Basic concepts…
• CPU scheduler
– Also called short-term scheduler
– Selects a process from the processes in memory that are
ready to execute and allocates the CPU to it
– The ready queue could be ordered in various ways
– Records in the queue are generally the process control
blocks (PCBs) of the processes



Basic concepts…
• Dispatcher
– Component involved in the CPU scheduling function
– Gives control of the CPU to the process selected by the
short-term scheduler
– This function involves the following
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that
program
– Should be as fast as possible
• Invoked during every process switch
– Dispatch latency
• Time it takes for the dispatcher to stop one process and start
another running



Scheduling criteria
• Used to compare CPU-scheduling algorithms
• The criteria used affect the judgment of which algorithm
is best
• Properties of algorithms
– CPU utilization
• Refers to how heavily the CPU is used, i.e., the amount of work
handled by the CPU
• The percentage of the CPU’s cycles spent on your process
– Throughput
• Refers to number of processes that are completed per time
unit
• One completed process per hour vs ten completed
processes per second



Scheduling criteria…
– Turnaround time
• Refers to the time interval from submission of a process to
the time of completion
– This is from the point of view of a particular process
• Is the sum of the time spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing
I/O
– Waiting time
• Refers to the sum of the periods a process spends waiting in
the ready queue
– Response time
• Refers to the time from the submission of a request until the
start of the first response
– Not the time it takes to output the response
• Limited by the speed of the output device
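A minimal sketch of how these criteria are computed, assuming a non-preemptive FCFS schedule and hypothetical process names, arrival times, and burst lengths; the first response is taken here to be the first time a process gets the CPU.

```python
# Sketch: compute turnaround, waiting, and response time under FCFS.
# Process data (names, arrival times, CPU bursts) are hypothetical.
processes = [("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]  # (name, arrival, burst)

clock = 0
for name, arrival, burst in processes:     # FCFS: dispatch in arrival order
    start = max(clock, arrival)            # CPU may sit idle until the process arrives
    completion = start + burst
    turnaround = completion - arrival      # submission to completion
    waiting = turnaround - burst           # time spent in the ready queue
    response = start - arrival             # submission to first dispatch
    clock = completion
    print(f"{name}: turnaround={turnaround} waiting={waiting} response={response}")
```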
Scheduling criteria…
• Ideally, choose a CPU scheduler that optimizes all criteria
simultaneously
– Maximize CPU utilization and throughput
– Minimize turnaround time, waiting time, and response
time
=> Generally not possible
• Choose a scheduling algorithm based on its ability to satisfy
a policy
– Minimize average response time
• Provide output to the user as quickly as possible
• Process user input as soon as it is received
– Minimize variance of response time
• In interactive systems, predictability may be more important
than a faster average response time
Process Scheduling
• Goals of scheduling
– Fairness
• Each process gets its fair share of the CPU and no process can suffer indefinite
postponement
• Giving equivalent or equal time is not always fair
– E.g., safety control and payroll at a nuclear plant
– Policy enforcement
• The system’s policy must be enforced
– Efficiency
• CPU should be kept busy for almost 100% of the time
– Response time
• Response time for interactive users must be minimized
– Turnaround
• The time batch users must wait for an output should be minimized
– Throughput
• The number of jobs processed per unit time should be maximized



Process Scheduling…
• The OS maintains scheduling queues of processes
• Types of queues
– Job queue – set of all processes in the system
– Ready queue – set of all processes residing in main memory, ready and waiting to
execute
– Device queues – set of processes waiting for an I/O device
• Processes could move among the various queues



Process Scheduling…
• Queuing diagram
– Represents queues, resources, flows



Process Scheduling …
• Scheduling algorithms
– Can be divided into two categories based on
how they deal with clock interrupts
– Non-preemptive scheduling
• Once a process has been given the CPU, the CPU
cannot be taken away from that process
• Characteristics
– Short jobs are made to wait by longer jobs but the overall
treatment of all processes is fair
– Response times are more predictable because incoming high-
priority jobs cannot displace waiting jobs



Process Scheduling…
• Scheduling algorithms…
– Preemptive scheduling
• Once a process has been given the CPU, the CPU can be taken away

• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
• In cases 1 and 4, the scheduling is non-preemptive or cooperative
– Mostly used in older versions of operating systems
– Cooperative scheduling
• Doesn’t require the special hardware (e.g., a timer) needed for preemptive
scheduling
• Scheduling for cases 2 and 3 is preemptive
• Could create a race condition. How?
Process Scheduling…
• Scheduling algorithms…
– First come first served
• Also called First-In-First-Out (FIFO), Run-to-Completion, Run-until-
Done
• Simplest scheduling algorithm
• Processes are dispatched according to their arrival time on the ready
queue
• Once a process has the CPU, it runs to completion
• Advantage
– Fair
– More predictable
– Simple to write and easy to understand
• Disadvantage
– Long jobs make short jobs wait (the convoy effect; see the sketch below)
– Unimportant jobs make important jobs wait
– Not useful in scheduling interactive users, because it cannot guarantee good
response time
– Average waiting time is often quite long
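A minimal sketch of the last two disadvantages, assuming all processes arrive at time 0 and using hypothetical burst lengths: under FCFS, a long job that arrives first drags up the average waiting time of the short jobs queued behind it.

```python
# Sketch: the convoy effect under FCFS (hypothetical CPU bursts).
def fcfs_avg_waiting(bursts):
    waiting, clock = [], 0
    for burst in bursts:          # all processes assumed to arrive at time 0
        waiting.append(clock)     # waiting time = time spent before first dispatch
        clock += burst
    return sum(waiting) / len(waiting)

print(fcfs_avg_waiting([24, 3, 3]))   # long job first   -> average waiting 17.0
print(fcfs_avg_waiting([3, 3, 24]))   # short jobs first -> average waiting 3.0
```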



Process Scheduling…
• Scheduling algorithms…
– Shortest-Job-First (SJF)
• The waiting process with the smallest estimated run-time-to-
completion is run next
• Appropriate for batch jobs for which the run times are known in
advance
• Favors short jobs (or processes) at the expense of longer ones
• Could be preemptive or non-preemptive
– Preemption occurs when a new process arrives in the ready queue with a
predicted burst time shorter than the remaining time of the process
currently running on the CPU (shortest-remaining-time-first)
• Advantage
– Gives the minimum average waiting time for a given set of processes
• Disadvantage
– Requires precise knowledge of how long a job or process will run
» Relies on user estimates of run times
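A minimal sketch of non-preemptive SJF under assumed (hypothetical) arrival times and predicted burst lengths: whenever the CPU becomes free, the arrived process with the shortest predicted burst is run to completion.

```python
# Sketch: non-preemptive SJF using a min-heap keyed on predicted burst time.
import heapq

def sjf(processes):                       # processes: list of (arrival, burst, name)
    processes = sorted(processes)         # by arrival time
    ready, clock, i, order = [], 0, 0, []
    while i < len(processes) or ready:
        while i < len(processes) and processes[i][0] <= clock:
            arrival, burst, name = processes[i]
            heapq.heappush(ready, (burst, arrival, name))
            i += 1
        if not ready:                     # CPU idles until the next arrival
            clock = processes[i][0]
            continue
        burst, arrival, name = heapq.heappop(ready)
        order.append(name)
        clock += burst                    # runs to completion (non-preemptive)
    return order

print(sjf([(0, 7, "P1"), (2, 4, "P2"), (4, 1, "P3"), (5, 4, "P4")]))
# -> ['P1', 'P3', 'P2', 'P4']
```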



Process Scheduling…
• Scheduling algorithms…
– Priority
• Each process is assigned a priority and allowed to run according to its
priority
• Equal-priority processes are scheduled in FCFS order
• Priorities could be defined internally or externally
– Internally defined priorities use measurable quantities
» Time limits, memory requirements, file requirements, CPU vs I/O
requirements
– Externally defined priorities are set by criteria external to the OS
» Importance of the process, amount of funds being paid for computer use, politics
• Can be either preemptive or non-preemptive
– Preemptive
» When process with higher priority arrives
• Problem
– Indefinite blocking or starvation of lower priority processes
• Solution
– Aging
» Gradually increases the priority of processes that have waited in the
system for a long time (see the sketch below)
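A minimal sketch of the selection rule plus aging, assuming the common convention that a lower number means higher priority; the process data and the aging step are hypothetical.

```python
# Sketch: priority selection with aging (lower number = higher priority).
ready = [{"name": "P1", "priority": 3}, {"name": "P2", "priority": 7}]  # hypothetical

def pick_next(ready_queue):
    # Non-preemptive selection: the highest-priority process runs next;
    # FCFS order among equal priorities is preserved (min keeps the first match).
    return min(ready_queue, key=lambda p: p["priority"])

def age(ready_queue, step=1):
    # Aging: gradually raise the priority of every process still waiting,
    # so low-priority processes cannot starve indefinitely.
    for p in ready_queue:
        p["priority"] = max(0, p["priority"] - step)

print(pick_next(ready)["name"])   # -> P1
age(ready)                        # P2 improves from 7 to 6 while it waits
```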
Process Scheduling…
• Scheduling algorithms…
– Round robin
• Processes are dispatched in a FIFO manner but are given a limited
amount of CPU time called a time-slice or a quantum
• One of the oldest, simplest, fairest, and most widely used algorithms
• Preemptive
– Effective in time-sharing environment
– Guarantees reasonable response time for interactive users
• Challenge
– Setting the length of the quantum
» Too short a quantum
• Causes too many context switches and lowers CPU efficiency
» Too long a quantum
• Causes poor response time
• Average waiting time is often quite long
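A minimal sketch of round robin with a fixed quantum; the process names, burst lengths, and the quantum of 4 are hypothetical. A preempted process goes to the back of the FIFO ready queue.

```python
# Sketch: round-robin scheduling with a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())          # FIFO ready queue of (name, remaining)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for one quantum or until completion
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))   # preempted: back of the queue
        else:
            finish[name] = clock
    return finish

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
# -> {'P2': 7, 'P3': 10, 'P1': 30}
```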



Process Scheduling …
• Scheduling algorithms…
– Multilevel queue
• Partitions the ready queue into several separate queues
• Processes are assigned to a queue permanently based on
– Memory size, process priority, process type, …
• Each queue has its own scheduling algorithm/policy
• Scheduling among the queues could be either preemptive or non-preemptive

• Possibility I
– Each queue has absolute priority over lower-priority queues: a lower-priority
queue runs only when all higher-priority queues are empty

• Possibility II
– A time slice is divided among the queues: each queue gets a certain share of
CPU time, which it then schedules among the processes in its queue
– Example:
» 80% of the CPU time to the foreground queue using RR
» 20% of the CPU time to the background queue using FCFS
• Has the advantage of low scheduling overhead but is inflexible
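A minimal sketch of Possibility I with two hypothetical queues: the foreground queue has absolute priority, so the background queue is only consulted when the foreground queue is empty (each queue would then apply its own policy, e.g., RR vs FCFS).

```python
# Sketch: queue selection when the foreground queue has absolute priority.
from collections import deque

foreground = deque(["editor", "shell"])   # interactive processes, scheduled RR (hypothetical)
background = deque(["payroll_batch"])     # batch processes, scheduled FCFS (hypothetical)

def select_queue():
    # Processes never move between queues; a background process runs only
    # when no foreground process is ready.
    if foreground:
        return foreground
    return background if background else None

print(select_queue()[0])   # -> "editor"
```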



Process Scheduling …
• Scheduling algorithms…
– Multilevel feedback queue
• Same as multilevel queue but allows a process to move between
queues
– If the characteristics of a job change between CPU-intensive and
I/O-intensive, it is demoted or promoted accordingly
– If a process waits too long in a lower-priority queue, it may be moved
to a higher-priority queue (aging)
• Is the most flexible as it can be tuned for any situation
• Complex to implement because of all the adjustable parameters
– The number of queues
– The scheduling algorithm for each queue
– The methods used to upgrade or demote
processes from one queue to another
– The method used to determine which queue
a process enters initially
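A minimal sketch of the feedback rule under assumed parameters (three queues with quanta 8 and 16 and FCFS at the bottom): a process that uses its whole quantum looks CPU-intensive and is demoted; one that has waited too long is promoted (aging).

```python
# Sketch: queue movement in a multilevel feedback queue (parameters hypothetical).
QUANTA = [8, 16, None]        # time quantum per level; None = FCFS at the lowest level
AGING_LIMIT = 100             # ticks a process may wait before being promoted

def next_level(level, used_full_quantum, waited):
    if waited > AGING_LIMIT and level > 0:
        return level - 1      # waited too long: move up to a higher-priority queue
    if used_full_quantum and level < len(QUANTA) - 1:
        return level + 1      # CPU-intensive behaviour: demote to a lower-priority queue
    return level              # I/O-intensive behaviour: stay at the same level

print(next_level(0, used_full_quantum=True, waited=0))     # -> 1 (demoted)
print(next_level(2, used_full_quantum=False, waited=150))  # -> 1 (promoted by aging)
```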



Multi-processor scheduling
• Multiple CPUs
=> load sharing
– But scheduling becomes more complex
• Approaches
– Asymmetric multiprocessing
• All scheduling decisions, I/O processing, and other system
activities are handled by a single processor
– Master server
• Other processors execute only user code
• Simple
– Only one processor accesses the system data structures, reducing the
need for data sharing



Multi-processor scheduling…
• Approaches..
– Symmetric multiprocessing (SMP)
• Each processor is self-scheduling
• Each processor could use its own ready queue or a
common ready queue
• Care must be taken to ensure
– Two processors do not choose to schedule the same process
– Processes are not lost from the queue
• Supported by many modern operating systems



Multi-processor scheduling…
• Processor affinity
– Keeps a process on the processor on which it first
executed
– Avoids the high cost of invalidating and repopulating
caches
– Soft affinity
• The OS attempts to keep a process running on the same
processor, but this is not guaranteed
– Hard affinity
• Allows a process to specify a subset of processors on which
it may run
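A minimal sketch of hard affinity using Python's os.sched_setaffinity on Linux (pid 0 means the calling process); the chosen CPU set {0, 1} is an arbitrary assumption.

```python
# Sketch: pin the calling process to a subset of processors (Linux only).
import os

os.sched_setaffinity(0, {0, 1})       # hard affinity: may run only on CPUs 0 and 1
print(os.sched_getaffinity(0))        # -> {0, 1}
```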



Multi-processor scheduling…
• Processor affinity…
– The main-memory architecture of a system can affect processor-
affinity issues
– NUMA (Non-uniform memory access)
• CPU has faster access to some parts of main memory than to other
parts
• Occurs in systems containing combined CPU and memory boards
• If a process has an affinity for a particular CPU, it can be allocated
memory on the board where that CPU resides
– Requires the CPU scheduler and memory-placement algorithm to work
together



Multi-processor scheduling…
• Load balancing
– Attempts to keep the workload evenly distributed
across all processors in an SMP system
– Important only on systems where each processor has
its own private queue of eligible processes to execute
– Approaches
• Push migration
– A specific task periodically checks the load on each processor and, if it
finds an imbalance, pushes processes from overloaded processors to less
busy ones
• Pull migration
– An idle processor pulls a waiting task from a busy processor (see the
sketch below)
– Load balancing sometimes conflicts with processor
affinity. Explain.
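A minimal sketch of pull migration with hypothetical per-processor ready queues: an idle processor steals a waiting task from the most heavily loaded queue (push migration would be the mirror image, driven by a periodic balancer task).

```python
# Sketch: an idle CPU pulls a waiting task from the busiest per-CPU ready queue.
run_queues = {0: ["taskA", "taskB", "taskC"], 1: []}   # hypothetical private queues

def pull_migration(idle_cpu):
    busiest = max(run_queues, key=lambda cpu: len(run_queues[cpu]))
    if busiest != idle_cpu and run_queues[busiest]:
        run_queues[idle_cpu].append(run_queues[busiest].pop())  # migrate one task

pull_migration(1)
print(run_queues)   # -> {0: ['taskA', 'taskB'], 1: ['taskC']}
```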
Multi-processor scheduling…
• Multicore processors
– Processors with multiple processor cores on the same physical chip
– To the OS, each core appears to be a separate physical processor
– Advantage over systems in which each processor has its own physical
chip
• They are faster
• They consume less power
– Problem
• Memory stall
– Memory stall
• A significant amount of time spent by a processor waiting for data to
become available
• Happens when a processor accesses memory (e.g., due to a cache miss)



Multi-processor scheduling…
• Multicore processors…
– Solution
• Multithreaded processor cores
– Two or more hardware threads are assigned to each core
– If one thread stalls while waiting for memory, the core can switch
to another thread
– From the OS perspective, each hardware thread appears as a logical
processor that is available to run a software thread
» In a dual-core, dual-threaded system
• There are four logical processors visible to the OS



Multi-processor scheduling…
• Multicore processors…
– Multithreaded processor cores…
• Require two levels of scheduling
– Which software thread to run on each logical processor
» The OS uses any of the scheduling algorithms discussed
earlier
– How each core decides which hardware thread to run
» Several strategies
• Use a simple round-robin algorithm
• Assign each hardware thread a dynamic urgency
value ranging from 0 to 7 and pick the one with the
highest urgency (see the sketch below)
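A minimal sketch of the second (core-level) strategy with hypothetical urgency values: among the hardware threads ready on a core, pick the one with the highest urgency (0 to 7).

```python
# Sketch: pick the hardware thread with the highest dynamic urgency (0-7).
hw_threads = {"hw0": 3, "hw1": 7, "hw2": 5}           # thread -> urgency (hypothetical)
next_hw_thread = max(hw_threads, key=hw_threads.get)  # -> "hw1"
print(next_hw_thread)
```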

