
CPU Scheduling
• CPU scheduling is the process of determining which process gets the
CPU for execution while other processes are on hold.
• The main task of CPU scheduling is to make sure that whenever the CPU
would otherwise sit idle, the OS selects one of the processes available in the
ready queue for execution.
• Maximum CPU utilization is obtained with multiprogramming.

• CPU–I/O Burst Cycle – process execution consists of a cycle of CPU
execution and I/O wait.
• CPU burst: the interval during which the process is executing on the CPU.
• I/O burst: the interval during which the process is waiting for an I/O
operation to complete.
Alternating Sequence of CPU and I/O Bursts
CPU Scheduler
 Selects from among the processes in memory that are ready to execute, and
allocates the CPU to one of them.

CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state.

2. Switches from running to ready state.

3. Switches from waiting to ready.

4. Terminates.

 Scheduling under 1 and 4 is non-preemptive.


 All other scheduling is preemptive.
Preemptive and Non-Preemptive Scheduling
• Preemptive scheduling is used when a process switches from the
running state to the ready state or from the waiting state to the ready
state. The preempted process stays in the ready queue until it gets its
next chance to execute.
• Non-preemptive scheduling is a CPU scheduling technique in which a
process takes the resource (CPU time) and holds it until the process
terminates or moves to the waiting state. No process is interrupted
before it completes; only then does the processor switch to another
process.
Scheduling Criteria

• Max CPU utilization

• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible.
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready
queue.
 Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)
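The criteria above can be computed from a finished schedule. A minimal sketch, using illustrative numbers that are not from the slides (the function name `criteria` is mine):

```python
# Hedged sketch: turnaround and waiting time for one completed process.
# All values are in ms and purely illustrative.

def criteria(arrival, burst, completion):
    """Return (turnaround, waiting) for a process given its timings."""
    turnaround = completion - arrival   # total time spent in the system
    waiting = turnaround - burst        # time spent idle in the ready queue
    return turnaround, waiting

# A process arriving at t=0 that needs 3 ms of CPU and finishes at t=10
# spent 7 ms waiting in the ready queue.
print(criteria(0, 3, 10))   # -> (10, 7)
```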
Scheduling algorithms
• A process scheduler assigns processes to the CPU based on a particular
scheduling algorithm. There are six popular process scheduling
algorithms, which are discussed in this chapter:

First-Come, First-Served (FCFS) Scheduling

Shortest-Job-First (SJF) Scheduling

Priority Scheduling

Round Robin (RR) Scheduling

Multilevel Queue Scheduling

Multilevel Feedback Queue Scheduling


First Come First Serve (FCFS)
 Jobs are executed on first come, first serve basis.

 It is a non-preemptive scheduling algorithm.


 Easy to understand and implement.
 Its implementation is based on FIFO queue.

 Poor in performance as average wait time is high.


First-Come, First-Served (FCFS) Scheduling

Process ID   Burst Time
P1           24
P2           3
P3           3

 Suppose that the processes arrive at time 0 in the order: P1, P2, P3.
The Gantt chart for the schedule is:

|        P1        | P2 | P3 |
0                 24   27   30

 Waiting time for P1 = 0 ms; P2 = 24 ms; P3 = 27 ms
 Average waiting time: (0 + 24 + 27)/3 = 17 ms
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order: P2, P3, P1.
 The Gantt chart for the schedule is:

| P2 | P3 |        P1        |
0    3    6                 30

 Waiting time for P1 = 6 ms; P2 = 0 ms; P3 = 3 ms
 Average waiting time: (6 + 0 + 3)/3 = 3 ms
 Much better than the previous case.
 Convoy effect: short processes get stuck waiting behind one long
process, which inflates the average waiting time.
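The FCFS waiting-time arithmetic from the two orderings above can be sketched directly (the function name is mine; all processes are assumed to arrive at t=0, as in the example):

```python
# Hedged sketch of FCFS: each process waits for everything queued before it.
def fcfs_waiting_times(bursts):
    """Waiting time of each process when served in list order, all arriving at t=0."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # wait until all earlier arrivals finish
        clock += b            # then occupy the CPU for the full burst
    return waits

order1 = fcfs_waiting_times([24, 3, 3])   # order P1, P2, P3 -> [0, 24, 27]
order2 = fcfs_waiting_times([3, 3, 24])   # order P2, P3, P1 -> [0, 3, 6]
print(sum(order1) / 3)   # -> 17.0
print(sum(order2) / 3)   # -> 3.0
```

The two averages reproduce the 17 ms and 3 ms results above, illustrating the convoy effect numerically.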
Shortest Job First (SJF)

• Also known as Shortest Job Next (SJN).

• SJF can be non-preemptive or preemptive; the preemptive variant is
called Shortest-Remaining-Time-First (SRTF).

• Gives the minimum average waiting time for a given set of processes.

• Easy to implement in batch systems where the required CPU time is
known in advance.

• Impossible to implement in interactive systems where the required
CPU time is not known.

• The scheduler must know in advance how much CPU time each process
will take.
SJF Scheduling (Non-Preemptive)
SJF Scheduling (Preemptive)
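For the simple case where all processes arrive at t=0, non-preemptive SJF amounts to serving jobs in increasing burst order. A minimal sketch (function name mine; workload reuses the FCFS example's bursts):

```python
# Hedged sketch of non-preemptive SJF, all processes arriving at t=0:
# serve the shortest remaining job first, accumulating waiting times.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # shorter jobs run first, so they wait less
        clock += bursts[i]
    return waits

# Same workload as the FCFS example: P1=24, P2=3, P3=3.
print(sjf_waiting_times([24, 3, 3]))   # -> [6, 0, 3]
```

The average here is 3 ms, matching the best FCFS ordering; SJF achieves it without needing the processes to arrive in a lucky order.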
Priority Based Scheduling

 Priority scheduling can be non-preemptive or preemptive; the
non-preemptive form is one of the most common scheduling algorithms in
batch systems.

 Each process is assigned a priority. The process with the highest
priority is executed first, and so on.

 Processes with the same priority are executed on a first come, first
served basis.

 Priority can be decided based on memory requirements, time
requirements, or any other resource requirement.

 Problem: Starvation – low-priority processes may never execute.

 Solution: Aging – as time progresses, increase the priority of waiting
processes.
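The starvation/aging interaction can be sketched as a dispatch step that ages every process it passes over. This is an illustrative toy, not a real scheduler; the convention that a lower number means higher priority, and the names `pick_next` and `age_step`, are my assumptions:

```python
# Hedged sketch of priority dispatch with aging (lower number = higher
# priority, an assumed convention). Each pass over the ready queue bumps
# the priority of the processes left behind so none can starve forever.
def pick_next(ready, age_step=1):
    """Pop the highest-priority process and age the rest."""
    ready.sort(key=lambda p: p["priority"])    # stable sort keeps FCFS among equals
    chosen = ready.pop(0)
    for p in ready:                            # aging: raise priority over time
        p["priority"] = max(0, p["priority"] - age_step)
    return chosen

ready = [{"pid": "P1", "priority": 5}, {"pid": "P2", "priority": 1}]
print(pick_next(ready)["pid"])   # -> P2 (higher priority runs first)
print(ready)                     # P1 aged from priority 5 to 4
```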


Priority Based Scheduling
Round Robin Scheduling

• Round Robin is a preemptive process scheduling algorithm.

• Each process is given a fixed amount of time to execute, called
a quantum (or time slice).
• Once a process has executed for its quantum, it is preempted and
another process executes for its quantum.
• Context switching is used to save the states of preempted
processes.
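The quantum-and-requeue cycle described above can be simulated in a few lines. A hedged sketch (function name and workload are mine; all processes assumed to arrive at t=0):

```python
# Hedged sketch of Round Robin with a fixed quantum. Returns each
# process's completion time; waiting time = completion - burst.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque((i, b) for i, b in enumerate(bursts))
    clock, completion = 0, [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)            # run for at most one quantum
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))   # preempted: back of the queue
        else:
            completion[i] = clock                # finished within its quantum
    return completion

done = round_robin([24, 3, 3], quantum=4)
waiting = [done[i] - b for i, b in enumerate([24, 3, 3])]
print(done, waiting)   # -> [30, 7, 10] [6, 4, 7]
```

With a quantum of 4 ms the average waiting time is 17/3 ≈ 5.67 ms, between the FCFS (17 ms) and SJF (3 ms) results for the same workload.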
Round Robin Scheduling
Turnaround Time and Waiting Time
Gantt Chart
Average Time
Multilevel Queue Scheduling
Priority-Based Queues
Multilevel Queue Scheduling Algorithm
Gantt Chart
Turnaround Time
Waiting Time
Average Time
Multilevel Feedback Queue Scheduling
Multilevel Feedback
Multiple Processors Scheduling
• Multiple processor scheduling or multiprocessor scheduling focuses on designing
the system's scheduling function, which consists of more than one processor.
Multiple CPUs share the load (load sharing) in multiprocessor scheduling so that
various processes run simultaneously. In general, multiprocessor scheduling is
complex as compared to single processor scheduling. In the multiprocessor
scheduling, there are many processors, and they are identical, and we can run any
process at any time.

 Approaches to Multiple Processor Scheduling

 Processor Affinity

 Load Balancing

 Multi-core Processors

 Virtualization and Threading


Multiprocessor scheduling with a single
scheduler
Processor Affinity
• Processor affinity means a process has an affinity for the processor on which

it is currently running. When a process runs on a specific processor, its

recently used data populates that processor's cache, so migrating the process

to another processor invalidates this cached data and is costly.

• Soft Affinity: When an operating system has a policy of keeping a process

running on the same processor but not guaranteeing it will do so, this

situation is called soft affinity.

• Hard Affinity: Hard affinity allows a process to specify a subset of

processors on which it may run. Linux implements soft affinity, but it

also provides system calls such as sched_setaffinity() that support

hard affinity.
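On Linux, the sched_setaffinity() call mentioned above is exposed in Python as os.sched_setaffinity. A guarded sketch (Linux-only; the helper name `pin_to_cpu` is mine, and the code assumes CPU 0 exists):

```python
import os

# Hedged sketch: hard affinity via Linux's sched_setaffinity, exposed in
# Python as os.sched_setaffinity. Guarded so it degrades gracefully on
# platforms that lack the call.
def pin_to_cpu(cpu):
    """Restrict the calling process to one CPU; return the old mask."""
    if not hasattr(os, "sched_setaffinity"):
        return None                     # not supported on this platform
    old = os.sched_getaffinity(0)       # pid 0 = the calling process
    os.sched_setaffinity(0, {cpu})      # hard affinity: run only on `cpu`
    return old

old_mask = pin_to_cpu(0)
if old_mask is not None:
    print(os.sched_getaffinity(0))      # now restricted to CPU 0
    os.sched_setaffinity(0, old_mask)   # restore the original mask
```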
Load Balancing
• Load balancing is the technique of keeping the

workload evenly distributed across all processors in an

SMP system. Load balancing is necessary only on

systems where each processor has its own private queue

of processes eligible to execute.

• Push Migration: In push migration, a specific task

periodically checks the load on each processor. If it finds

an imbalance, it evenly distributes the load by moving

processes from overloaded processors to idle or less

busy ones.

• Pull Migration: Pull Migration occurs when an idle

processor pulls a waiting task from a busy processor for

its execution.
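Pull migration can be sketched as work stealing between per-CPU run queues. A toy model (the function name and the list-of-lists representation are mine, not from the slides):

```python
# Hedged sketch of pull migration: an idle processor steals one waiting
# task from the most loaded per-CPU run queue. Queues are plain lists of
# task names; the data is illustrative.
def pull_migrate(queues, idle_cpu):
    """Move one task from the busiest queue to idle_cpu's queue."""
    busiest = max(range(len(queues)), key=lambda c: len(queues[c]))
    if busiest == idle_cpu or not queues[busiest]:
        return None                       # nothing worth pulling
    task = queues[busiest].pop()          # steal from the tail
    queues[idle_cpu].append(task)
    return task

queues = [["t1", "t2", "t3"], []]         # CPU 0 overloaded, CPU 1 idle
print(pull_migrate(queues, idle_cpu=1))   # -> t3
print(queues)                             # -> [['t1', 't2'], ['t3']]
```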
Multi-core Processors
• In multi-core processors, multiple processor cores are placed on the same
physical chip. Each core has a register set to maintain its architectural state
and thus appears to the operating system as a separate physical
processor. SMP systems that use multi-core processors are faster and
consume less power than systems in which each processor has its own
physical chip.
• Coarse-Grained Multithreading: a thread executes on a processor until
a long-latency event, such as a memory stall, occurs.
• Fine-Grained Multithreading: switches between threads at a much finer
granularity, typically at the boundary of an instruction cycle.
Multiprocessor Scheduling: Symmetrical Scheduling
Virtualization and Threading
• In this type of multiple processor scheduling, even a single-CPU
system can act as a multiprocessor system. In a system with
virtualization, the virtualization layer presents one or more virtual
CPUs to each of the virtual machines running on the system, and it then
schedules the use of the physical CPUs among the virtual machines.
