
Chapter 6: CPU Scheduling

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: CPU Scheduling

 Basic Concepts
 Scheduling Criteria
 Scheduling Algorithms
 Thread Scheduling
 Multiple-Processor Scheduling
 Real-Time CPU Scheduling
 Operating Systems Examples
 Algorithm Evaluation

Objectives

 To introduce CPU scheduling, which is the basis for multiprogrammed operating systems
 To describe various CPU-scheduling algorithms
 To discuss evaluation criteria for selecting a CPU-scheduling
algorithm for a particular system
 To examine the scheduling algorithms of several operating
systems

Basic Concepts

 Maximum CPU utilization is obtained with multiprogramming
 CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and
I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern

Histogram of CPU-burst Times

Top-Level View

Processor Registers
 Faster and smaller than main memory
 Almost all computers load data from a larger memory into registers before operating on it
 User-visible registers
 May be referenced by machine language
 Available to all programs – application programs and system
programs
 Types of registers typically available are:
 Data (often general purpose, though some restrictions may apply)
 Address (index register, segment pointer, stack pointer)

Processor Registers (Cont.)
 Control and status registers
 Used by the processor to control its own operation
 Used by privileged OS routines to control the execution of programs
 Program counter (PC)
 Contains the address of an instruction to be fetched
 Instruction register (IR)
 Contains the instruction most recently fetched
 Program status word (PSW)
 Contains status information
 Condition code registers (flags)
 Bits used to determine whether some instruction should or should
not be executed.

Program counter (PC)
 The Program Counter (PC) is commonly called the instruction pointer (IP), and sometimes the instruction address register (IAR)

 The Program Counter is a special-purpose register that stores the memory address of the next instruction to be executed.
 It supplies the address from which the next instruction is fetched from main memory

 Instructions are usually fetched sequentially from memory
 But control transfer instructions change the sequence of execution.
 These include branches (sometimes called
jumps), subroutine calls, and returns.

Instruction Register (IR)
 The Instruction Register (IR) stores the instruction currently being executed.

 The fetched instruction is always loaded into the instruction register (IR)
 It is then decoded, prepared and ultimately executed
 Instruction contains bits that specify the action the processor is to take.

 Categories
 Processor-memory,
 Data may be transferred from processor to memory or from memory to
processor.
 Processor-I/O,
 Data may be transferred to or from a peripheral device by transferring
between the processor and an I/O module.
 Data processing,
 The processor may perform some arithmetic or logic operation on data.
 Control
 An instruction may specify that the sequence of execution be altered.
Storage Registers
 Registers related to fetching information from RAM
 A collection of storage registers that sit between the processor and main memory:
 Memory address register (MAR)
 MAR holds the memory location of data that needs to be accessed
 Register that holds the memory address from which data will be fetched to
the CPU
 Register that holds the memory address to which data will be sent and
stored.
 Memory data register (MDR)
 Register that holds the data that needs to be accessed
 Register contains the data after a fetch from the computer storage
 Register contains the data to be stored in the computer storage (e.g. RAM).
 Memory buffer register (MBR)
 Register that holds the data ready for use at the next clock cycle
 Data can be either used by the processor or stored in main memory.

Instruction Execution
 A Program consists of a set of instructions stored in memory
 Program counter (PC) holds address of the instruction to be fetched next
 PC is incremented after each fetch
 Simple Instruction Cycle: Two steps
 Fetch : Processor reads (fetches) instructions from memory
 Execute: Processor executes each instruction
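The two-step cycle can be sketched as a toy machine in Python (the instruction set here, LOAD/ADD/STORE/HALT, is invented for illustration and is not from the text):

```python
# Toy fetch-execute loop: PC holds the address of the next instruction,
# IR holds the instruction currently being executed.
def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        ir = memory[pc]                 # fetch: read instruction at PC into IR
        pc += 1                         # PC is incremented after each fetch
        op, arg = ir                    # decode
        if op == "LOAD":                # execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

program = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0),
           3, 2, 0]                     # data lives at addresses 4, 5, 6
print(run(program)[6])                  # 3 + 2 stored at address 6 -> 5
```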

Fetch-Decode-Execute Cycle

Example of Program Execution

Interrupts
 Interrupt the normal sequencing of the processor
 Provided to improve processor utilization
 Most I/O devices are slower than the processor
 Processor must pause to wait for device
 Common Classes of Interrupts
 Program
 As a result of an instruction execution
 Such as:
– Arithmetic overflow
– Division by zero
– Attempt to execute an illegal machine instruction
– Reference outside a user’s allowed memory space
 Timer
 Generated by a timer within the processor. This allows the operating system to perform certain functions on a regular basis
 I/O
 Generated by an I/O controller, to signal normal completion of an operation or to signal a variety of error conditions
 Hardware Failure
 Generated by a failure, such as power failure or memory parity error
Instruction Cycle with Interrupts

Flow of Control

without interrupts / with interrupts

Transfer of Control via Interrupts
 Processor checks for interrupt
 Indicated by an interrupt signal
 If no interrupt, fetch next instruction
 If interrupt pending:
 Suspend execution of current program
 Save context
 Set PC to start address of interrupt handler routine
 Process interrupt
 Restore context and continue interrupted program
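The steps above can be sketched as a loop that checks for a pending interrupt after each instruction (a toy model; the pending-interrupt table and handler names are invented for illustration):

```python
# Instruction cycle with an interrupt check after each instruction.
def run(instructions, interrupts):
    log = []
    for i, instr in enumerate(instructions):
        log.append(f"exec {instr}")                # fetch and execute
        if i in interrupts:                        # interrupt pending?
            log.append("save context")             # suspend current program
            log.append(f"handle {interrupts[i]}")  # run the interrupt handler
            log.append("restore context")          # resume interrupted program
    return log

trace = run(["i0", "i1", "i2"], {1: "I/O done"})
# -> ['exec i0', 'exec i1', 'save context', 'handle I/O done',
#     'restore context', 'exec i2']
```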

Multiple Interrupts
 Suppose an interrupt occurs while another interrupt is being processed.
 E.g., printing while data is being received via a communications line.
 Two approaches:
 Disable interrupts during interrupt processing
 Use a priority scheme.

Sequential Interrupt Processing

Nested Interrupt Processing

Example of Nested Interrupts

Multiple Processes
 Central to the design of modern Operating Systems is managing multiple
processes
 Multiprogramming
 The management of multiple processes within a uni-processor
system.
 Multiprocessing
 The management of multiple processes within a multiprocessor.
 Distributed Processing
 The management of multiple processes executing on multiple,
distributed computer systems.

 The big issue is concurrency
 Several processes are executing simultaneously, and potentially interacting with each other.

Concurrency
Concurrency arises in:
 Multiple applications
 Multiprogramming was invented to allow processing time to be dynamically
shared among a number of active applications
 Structured applications
 An extension of structured programming that results in concurrent processes.
 Operating system structure
 Operating systems are themselves implemented as a set of processes or threads

 Concurrency encompasses a host of design issues, including:
 Sharing of global resources
 Communication among processes,
 Sharing of and competing for resources (such as memory, files, and I/O
access),
 Synchronization of the activities of multiple processes, and
 Allocation of processor time to processes.

Enforce Single Access
 If we enforce a rule that only one process may enter the function at a time, then:
 P1 & P2 run on separate processors

 P1 enters echo first
 P2 tries to enter but is blocked – P2 suspends
 P1 completes execution
 P2 resumes and executes echo
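This behaviour can be sketched with a lock, assuming a minimal echo that copies a character through shared state (the variable names here are ours, not from the text):

```python
import threading

lock = threading.Lock()
out = []

def echo(ch):
    # Only one thread may be inside echo at a time; a second caller
    # blocks (suspends) until the first completes and releases the lock.
    with lock:
        chin = ch          # read the character into a shared "buffer"
        out.append(chin)   # write it back out

t1 = threading.Thread(target=echo, args=("x",))
t2 = threading.Thread(target=echo, args=("y",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(out))         # both characters echoed exactly once -> ['x', 'y']
```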

Process Interaction

CPU Scheduler
 The objective of multiprogramming is to have some process
running at all times, to maximize CPU utilization.
 The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running.
 A process migrates between various scheduling queues
throughout its lifetime.
 The aim of CPU scheduler is to assign processes to the
processor for execution.
 Scheduling affects the performance of the system, because it
determines which process will wait and which will progress.

CPU Scheduler
 Long-term scheduler (or job scheduler) – selects which processes
should be brought into the ready queue – loads processes from disk to
memory
 Short-term scheduler (or CPU scheduler) – selects which process
should be executed next (from the ready queue) and allocates CPU
 Medium-term scheduler can be added if the degree of multiprogramming needs to decrease
 Remove process from memory, store on disk, bring back in from disk
to continue execution: swapping

CPU Scheduler
 CPU scheduling decisions may take place when a process:
 Move from New to Ready State
 Switches from Ready State to Running State
 Switches from Running State to Waiting State
 Switches from Waiting State to Ready State
 Terminates/Exit
 Queue may be ordered in various ways

Dispatcher
 Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to
restart that program
 Dispatch latency – time it takes for the dispatcher to stop
one process and start another running

Scheduling Criteria

 CPU utilization – keep the CPU as busy as possible. It ranges from 0 to 100%; in practice, from 40 to 90%.
 Throughput – the number of processes that complete their execution per time unit
 Turnaround time – how long it takes to execute a particular process; calculated as the interval between the submission of a process and its completion.
 Waiting time – Waiting time is the sum of the time periods
spent in waiting in the ready queue.
 Response time – amount of time it takes from when a
request was submitted until the first response is produced,
not output (for time-sharing environment)
 Fairness – Each process should have a fair share of CPU.
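Given per-process arrival, burst, and completion times, turnaround and waiting time can be computed directly (a sketch; the sample numbers match the FCFS example later in the chapter):

```python
# Turnaround = completion - arrival; waiting = turnaround - burst.
def metrics(procs):
    # procs: list of (arrival, burst, completion) tuples
    turnaround = [c - a for a, b, c in procs]
    waiting = [t - b for (a, b, c), t in zip(procs, turnaround)]
    return turnaround, waiting

t, w = metrics([(0, 24, 24), (0, 3, 27), (0, 3, 30)])
print(sum(w) / len(w))   # average waiting time -> 17.0
```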

CPU Scheduler

 CPU scheduling can be non-preemptive or preemptive
 Consider access to shared data
 Consider preemption while in kernel mode
 Consider interrupts occurring during crucial OS activities

Scheduling Algorithm Optimization Criteria

 This decision has to be made according to a set of criteria.

Scheduling Algorithms
 The list of scheduling algorithms is as follows:

 First-Come, First-Served Scheduling (FCFS) algorithm
 Shortest Job First Scheduling (SJF) algorithm
 Preemptive
 Non-Preemptive
 Priority Scheduling algorithm
 Non-preemptive priority Scheduling algorithm
 Preemptive priority Scheduling algorithm
 Round-Robin Scheduling algorithm
 Multilevel Feedback Queue Scheduling algorithm
 Thread Scheduling algorithm
 Multiple-Processor Scheduling

First- Come, First-Served (FCFS) Scheduling

 This policy assigns the processor to the process which is first in the
READY queue
 It is a non-preemptive algorithm.
 FCFS favors long jobs over short ones.

First- Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3
 Suppose that the processes arrive in the order: P1 , P2 , P3

 P1: 0–24 | P2: 24–27 | P3: 27–30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
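These waiting times can be reproduced in a few lines (a sketch that assumes, as in the example, that all processes arrive at time 0):

```python
def fcfs_waits(bursts):
    # Under FCFS, each process waits for the total burst time of
    # all processes ahead of it in the ready queue.
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

print(fcfs_waits([24, 3, 3]))           # order P1, P2, P3 -> [0, 24, 27]
print(sum(fcfs_waits([3, 3, 24])) / 3)  # order P2, P3, P1 -> 3.0
```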

FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1
 The Gantt chart for the schedule is:

 P2: 0–3 | P3: 3–6 | P1: 6–30

 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case

FCFS Scheduling (Cont.)

 Advantages
 Better for long processes
 Simple method (i.e., minimum overhead on processor)
 No starvation
 Disadvantages
 Convoy effect occurs: even a very short process must wait its turn for the CPU. A short process stuck behind a long process results in lower CPU utilization.

Shortest-Job-First (SJF) Scheduling

 This policy assigns the processor to the process that takes the shortest time to complete
 This is one of the most efficient algorithms
 SJF can be preemptive or non-preemptive
 SJF is optimal – gives minimum average waiting time for a given set of
processes
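Non-preemptive SJF can be sketched as FCFS after sorting by burst length (all processes assumed to arrive at time 0; a toy model):

```python
def sjf_waits(bursts):
    # Shortest job first: run jobs in order of increasing burst length.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

print(sjf_waits([24, 3, 3]))   # -> [6, 0, 3]
```

With the FCFS example's bursts, the average wait drops to 3, versus 17 under FCFS in arrival order, illustrating why SJF is optimal for average waiting time.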

Example of SJF Pre-emptive

Example of SJF Non Pre-emptive

SJF Scheduling (Cont.)

 Advantages
 It gives good turnaround-time performance, because a short job is given immediate preference over a longer one.
 Throughput is high.
 Disadvantages
 Starvation may be possible for the longer processes.

Priority Scheduling

 This is used in general-purpose operating systems
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
 Pre-emptive
 Non pre-emptive
 SJF is priority scheduling where priority is the inverse of predicted
next CPU burst time

 Problem: Starvation – low-priority processes may never execute

 Solution: Aging – as time progresses, increase the priority of the process
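Aging can be sketched as a waiting-time discount on the priority number (a toy model; the rate of one priority level per 10 time units is an arbitrary choice, not from the text):

```python
def pick_next(ready, now):
    # Effective priority improves (the number shrinks) the longer a
    # process waits; smallest integer = highest priority, as above.
    def effective(p):
        name, prio, arrived = p
        return prio - (now - arrived) // 10   # age 1 level per 10 time units
    return min(ready, key=effective)[0]

# (name, priority, arrival time) – invented sample data
ready = [("low", 9, 0), ("high", 2, 95)]
print(pick_next(ready, now=100))   # the long-waiting low-priority job wins
```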

Example of Priority Scheduling

Priority Scheduling (Cont.)

 Advantages
 Good response for the highest priority processes.

 Disadvantages
 Starvation may be possible for the lowest priority processes.

Round Robin (RR)

 This type of scheduling algorithm is designed for time-sharing systems
 This is FCFS with pre-emption
 Each process gets a small unit of CPU time (time quantum q),
usually 10-100 milliseconds. After this time has elapsed, the
process is pre-empted and added to the end of the ready queue.
 Timer interrupts every quantum to schedule next process
 Performance
 q large: RR behaves like FIFO
 q small: q must still be large relative to the context-switch time, otherwise overhead is too high
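Round robin can be sketched with a queue (a toy simulation; all processes assumed to arrive at time 0, and context-switch time ignored). With the FCFS example's bursts and q = 4:

```python
from collections import deque

def rr_completion(bursts, q):
    # Run each process for at most q time units, then requeue it
    # at the tail of the ready queue if work remains.
    queue = deque(enumerate(bursts))
    done, t = {}, 0
    while queue:
        i, left = queue.popleft()
        run = min(q, left)
        t += run
        if left - run > 0:
            queue.append((i, left - run))
        else:
            done[i] = t                 # completion time of process i
    return [done[i] for i in range(len(bursts))]

print(rr_completion([24, 3, 3], q=4))   # -> [30, 7, 10]
```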

Time Quantum and Context Switch Time

Round Robin (Cont.)

 Advantages
 Round-robin is effective in a general-purpose, time-sharing system or transaction-processing system.
 Fair treatment for all the processes.
 Good response time for short processes.

 Disadvantages
 Care must be taken in choosing quantum value.
 Context switching Overhead if time quantum is too small
 Throughput is low if time quantum is too small.

Multilevel Feedback Queue
 This scheme allows processes to move between a number of
different priority queues.
 Each queue has an associated scheduling algorithm
 Scheduling must also be done between the queues

Example of Multilevel Feedback Queue

 Three queues:
 Q0 – RR with time quantum 8
milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS

 Scheduling
 A new job enters queue Q0 which is
served FCFS
 When it gains CPU, job receives 8
milliseconds
 If it does not finish in 8
milliseconds, job is moved to queue
Q1
 At Q1 job is again served FCFS and
receives 16 additional milliseconds
 If it still does not complete, it is
preempted and moved to queue Q2
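The three-queue policy above determines, from a job's total CPU demand, which queue finally runs it to completion (a minimal sketch of just that aspect):

```python
def final_queue(burst):
    # Q0 covers the first 8 ms, Q1 the next 16 ms,
    # and Q2 (FCFS) runs whatever remains.
    if burst <= 8:
        return "Q0"
    if burst <= 8 + 16:
        return "Q1"
    return "Q2"

print([final_queue(b) for b in (5, 20, 40)])   # -> ['Q0', 'Q1', 'Q2']
```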

Multiple-Processor Scheduling
 CPU scheduling more complex when multiple CPUs are
available
 Multiple-Processor Scheduling – Load Balancing
 The OS needs to keep all CPUs loaded for efficiency
 Push migration: the OS pushes tasks from an overloaded CPU to other CPUs
 Pull migration: an idle processor pulls a waiting task from a busy processor

Multi-core Processors

 A multi-core processor is a single computing component with two or more independent central processing units
 Recent trend: place multiple processor cores on the same physical chip / same integrated circuit
 Cores may or may not share caches
 They may implement message passing or shared memory inter-core communication methods
 Faster and consumes less power

Number of cores   Common names
1                 single-core
2                 dual-core
3                 tri-core, triple-core
4                 quad-core
5                 penta-core
6                 hexa-core
7                 hepta-core
8                 octa-core, octo-core
9                 nona-core
10                deca-core
11                hendeca-core
12                dodeca-core
13                trideca-core
14                tetradeca-core
15                pentadeca-core
16                hexadeca-core
17                heptadeca-core
18                octadeca-core
19                enneadeca-core
20                icosa-core

Operating System Examples

 Solaris scheduling
 Windows scheduling
 Linux scheduling

Solaris
 Priority-based scheduling
 Six classes available
 Time sharing (default) (TS)
 Interactive (IA)
 Real time (RT)
 System (SYS)
 Fair Share (FSS)
 Fixed priority (FP)
 Each class has its own scheduling algorithm
 A given thread can be in only one class at a time

Windows Scheduling
 Windows uses priority-based preemptive scheduling
 Highest-priority thread runs next
 Dispatcher is scheduler
 Thread runs until
 (1) blocks
 (2) uses time slice
 (3) preempted by higher-priority thread
 Real-time threads can preempt non-real-time
 32-level priority scheme
 Queue for each priority
 If there is no runnable thread, the idle thread runs

Linux Scheduling Through Version 2.5

 Preemptive, priority based
 Two priority ranges: Real-time and Normal

 Higher priority gets a larger q
 A task is runnable as long as time is left in its time slice (active)
 Worked well, but poor response times for interactive
processes

Deterministic Evaluation

 For each algorithm, calculate the minimum average waiting time
 Simple and fast, but requires exact numbers for input; applies only to those inputs
 FCFS is 28ms
 Non-preemptive SJF is 13ms
 RR is 23ms

Implementation

 Even simulations have limited accuracy
 Just implement the new scheduler and test it in real systems
 High cost, high risk
 Environments vary
 Most flexible schedulers can be modified
 Or APIs to modify priorities
 But again environments vary

End of Chapter 6

