
Processes and CPU Scheduling

By
Surbhi Dongaonkar
Processes and CPU Scheduling:
• Process concept
• Interleaved I/O and CPU burst
• Process states
• Co-operating processes
• Threads: thread libraries
• Multithreaded programming
• Scheduling: scheduling criteria, scheduling algorithms
• Interrupts and Interrupt handling
Process
• A process is a program in execution, i.e., an active program
• Ex- the execution of compiled binary code
• A single program can create multiple processes when it is run multiple
times
• It includes:
• Program code (text)
• Program Counter (A program counter is a register in a computer
processor that contains the address (location) of the instruction
being executed at the current time. )
• Process stack (temporary data)
• Heap(Dynamically allocated memory)
• Data (global data)
Process states
• New: A program which is going to be picked up by the OS into the
main memory is called a new process.
• Ready: Whenever a process is created, it directly enters in the ready
state, in which, it waits for the CPU to be assigned. The processes
which are ready for the execution and reside in the main memory are
called ready state processes. There can be many processes present in
the ready state.
• Running: One of the processes from the ready state will be chosen by
the OS depending upon the scheduling algorithm.
Process states
• Completion/Termination: When a process finishes its execution, it
comes to the termination state. All the context of the process (Process
Control Block) is deleted, and the process is terminated by the
Operating System.
• Wait/Block: From the Running state, a process can make the
transition to the block or wait state depending upon the scheduling
algorithm or the intrinsic behavior of the process.
Process states
• Suspend Ready: A process in the ready state that is moved from main
memory to secondary memory due to a lack of resources (mainly
primary memory) is said to be in the suspend ready state.
• Suspend Wait: Since the process is already waiting for some resource to
become available, it is better for it to wait in secondary memory
and make room for a higher-priority process. These processes
resume their execution once main memory becomes available and
their wait is finished.
Interleaved I/O
• Interleaving is a technique that makes a system more efficient, fast,
and reliable by arranging data in a noncontiguous manner.
• The interrupt facility and special commands are used to tell the
interface to issue an interrupt request signal whenever data is
available from any device.
• In the meantime, the CPU can proceed with the execution of any other
program.
CPU Burst
• The time for which a process uses the CPU before requesting I/O or
completing its execution.
• I/O Burst: While the process is in the running state, it may request I/O;
the process then moves to the block or wait state, where the I/O is
processed, after which it is sent back to the ready state.
• Process execution begins with a CPU burst.
• That is followed by an I/O burst, which is followed by another CPU
burst, then another I/O burst, and so on.
Co-operating processes
• A process can be either an independent or a cooperating process
• A cooperating process shares data with another process.
• Cooperative processing is the splitting of an application into tasks
performed on separate computers.
• Physical connectivity can occur via:
• direct channel connection
• local-area network (LAN)
• peer-to-peer communication link
• primary/secondary link.
Advantages of cooperating Processes
• Information Sharing
• Modularity
• Computation Speedup
• Convenience to user
Cooperation by Communication
Cooperation by sharing
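The two forms of cooperation named above can be sketched with Python's standard multiprocessing module (the function names and message text are invented for this illustration): a pipe models cooperation by communication, and a shared value models cooperation by sharing.

```python
from multiprocessing import Process, Pipe, Value

# Cooperation by communication: processes exchange messages over a pipe.
def producer(conn):
    conn.send("data from producer")
    conn.close()

# Cooperation by sharing: processes update a common shared variable.
def incrementer(counter):
    with counter.get_lock():        # lock guards the shared memory
        counter.value += 1

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=producer, args=(child,))
    p.start()
    print(parent.recv())            # message passed between processes
    p.join()

    counter = Value("i", 0)         # integer placed in shared memory
    workers = [Process(target=incrementer, args=(counter,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)            # 4: every process updated the shared state
```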
Threads:
• A thread is a path of execution within a process.
• A process can contain multiple threads.
• A process can be split into many threads.
• Threads help to achieve parallelism by dividing the work of a process
among them.
• Each thread has its own program counter, stack, and set of
registers. But the threads of a single process might share the same
code and data/file.
• Ex- Multiple tabs open in a single browser window.
Process vs Thread
• A process is any program in execution; a thread is a segment of a process.
• A process takes more time to terminate; a thread takes less time to terminate.
• A process takes more time to create; a thread takes less time to create.
• A process takes more time for context switching; a thread takes less time.
• A process is less efficient in terms of communication; a thread is more
efficient in terms of communication.
• If one process is blocked, it does not affect the execution of other processes;
if a user-level thread is blocked, all other user-level threads of its process
are blocked.
• Multiprogramming holds the concept of multiple processes; multiprogramming
is not needed for multiple threads, because a single process consists of
multiple threads.
• Processes are isolated; threads share memory.
• A process is called a heavyweight process; a thread is lightweight, as each
thread in a process shares code, data, and resources.
• Process switching uses an interface in the operating system; thread
switching does not require calling the operating system or interrupting
the kernel.
Thread Components:
Types of Threads
There are two types of threads:
1. User Level Thread
2. Kernel Level Thread
User Level Thread
• Faster to create and switch context
• Not recognized by the OS
• Ex- Java threads
Kernel Level Thread
• Slower to create and manage than user-level threads
• Recognized by the OS
• Ex- Windows, Solaris threads
Multithreaded Programming
• Multithreaded programming is the execution of multiple threads of a
process at the same time.
• Models:
• Many to Many
• Many to One
• One to One
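As a minimal illustration of multithreaded programming in Python (the data and function names here are invented for the example), two threads of one process each sum a share of the same list and write into a shared results list, since threads of a process share memory:

```python
import threading

# Each thread runs the same function on its own slice of the data,
# storing its result in a list shared by all threads of the process.
def partial_sum(data, results, index):
    results[index] = sum(data)

numbers = list(range(1, 101))
results = [0, 0]
t1 = threading.Thread(target=partial_sum, args=(numbers[:50], results, 0))
t2 = threading.Thread(target=partial_sum, args=(numbers[50:], results, 1))
t1.start(); t2.start()
t1.join(); t2.join()
print(results[0] + results[1])    # 5050: combined result of both threads
```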
CPU Scheduling
• Process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of
another process on the basis of a particular strategy.
• CPU Scheduling allows one process to use the CPU while another
process is delayed (in standby) due to unavailability of any resources
such as I / O etc., thus making full use of the CPU.
• The purpose of CPU Scheduling is to make the system more efficient,
faster, and fairer.
Times to consider for CPU Scheduling
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and
arrival time.
Scheduling Criteria
• CPU utilization: CPU usage can range from 0 to 100. But in a real-time
system, it varies from 40 to 90 percent depending on the system load.
• Throughput: The average CPU performance is the number of
processes performed and completed during each unit. This is called
throughput.
• Turnaround Time
• Waiting Time
• Response Time
Advantages of CPU Scheduling
• Utilization of CPU at maximum level
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Reduced starvation of process/es in the ready queue
• Minimum response time
Scheduling Algorithms
Preemptive vs Non-Preemptive
• Preemptive: resources are allocated to a process for a limited time.
Non-preemptive: resources are used and then held by the process until
it terminates.
• Preemptive: the process can be interrupted even before completion.
Non-preemptive: the process is not interrupted until its life cycle is
complete.
• Preemptive: starvation may be caused by the insertion of higher-priority
processes into the queue. Non-preemptive: starvation can occur when a
process with a large burst time occupies the system.
• Preemptive: maintaining the queue and remaining times needs storage
overhead. Non-preemptive: no such overhead is required.
First Come First Serve (FCFS)
• FCFS is the simplest Scheduling algorithm
• the process that requests the CPU first is allocated the CPU first and is
implemented by using FIFO queue.
• Advantages:
• Easy to implement
• Disadvantages:
• FCFS suffers from the convoy effect (one slow process slows down the
performance of the entire set of processes and leads to wastage of CPU time
and other devices).
• The average waiting time is much higher than in other algorithms.
• FCFS is not very efficient.
Average Waiting Time = (0+19+21)/3 = 13.33
Average Turn Around Time = (20+21+22)/3 = 21


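The process table for the averages above is not reproduced here, so the following FCFS sketch in Python uses made-up arrival and burst times; it applies the definitions given earlier in this deck (turnaround = completion - arrival, waiting = turnaround - burst):

```python
def fcfs(procs):
    """procs: list of (name, arrival, burst) tuples.
    FCFS serves processes in FIFO order of arrival."""
    t, stats = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        t = max(t, arrival) + burst            # CPU idles until arrival if needed
        tat = t - arrival                      # turnaround = completion - arrival
        stats[name] = (t, tat, tat - burst)    # (completion, turnaround, waiting)
    return stats

# Hypothetical data: P1(AT=0, BT=5), P2(AT=1, BT=3), P3(AT=2, BT=8)
print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]))
```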
Priority Scheduling
• It schedules tasks based on priority.
• When higher-priority work arrives while a lower-priority task is
executing, the higher-priority work takes the place of the lower-priority
one.
• The lower the number assigned, the higher the priority level of a process.
• Advantages:
• The average waiting time is less than FCFS
• Less complex
• Disadvantages:
• Starvation Problem(continuous stream of higher-priority processes can prevent a
low-priority process from ever getting the CPU)
Average Waiting Time = (0+5+6+0)/4 = 2.75
Average Turn Around Time = (4+7+9+2)/4 = 5.5
Algorithm:
• First, input the processes with their burst time and priority.
• Sort the processes by priority.
• Now simply apply the FCFS algorithm.
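The three steps above can be sketched in Python. As in the slide's algorithm, this non-preemptive sketch assumes all processes are available at time 0; the process names and priorities are invented:

```python
def priority_schedule(procs):
    """procs: list of (name, burst, priority); a lower number means a
    higher priority. Assumes every process arrives at time 0, matching
    the simple sort-then-FCFS algorithm on the slide."""
    order = sorted(procs, key=lambda p: p[2])   # step 2: sort by priority
    t, stats = 0, {}
    for name, burst, _ in order:                # step 3: FCFS over the sorted list
        stats[name] = (t, t + burst)            # (waiting time, completion time)
        t += burst
    return stats

# Hypothetical data: P2 has the highest priority (1), then P1, then P3
print(priority_schedule([("P1", 3, 2), ("P2", 1, 1), ("P3", 4, 3)]))
```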
Shortest Job First(SJF)
• Selects the waiting process with the smallest execution time to
execute next.
• Each task is associated with the unit of time it needs to complete.
• If two processes have the same burst time then the process is
selected using FCFS
• Advantages:
• minimum average waiting time
• Disadvantages:
• Starvation of longer processes.
PID AT BT
1 0 3
2 1 4
3 2 2
4 5 3
Average Waiting Time = (0+7+1+0)/4 = 2
Average Turn Around Time = (3+11+3+3)/4 = 5
Algorithm:
• Sort all the processes according to arrival time.
• Select the process that has the minimum arrival time and minimum
burst time.
• After that process completes, pool the processes that arrived during
its execution and select from that pool the process with the minimum
burst time.
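A sketch of this non-preemptive SJF algorithm in Python, checked against the example data from this section (PIDs written as P1..P4):

```python
def sjf(procs):
    """Non-preemptive shortest job first.
    procs: list of (name, arrival, burst) tuples; returns completion times."""
    procs = sorted(procs, key=lambda p: (p[1], p[2]))   # sort by arrival time
    pending, t, completion = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]       # pool of arrived processes
        if not ready:                                   # CPU idles until next arrival
            t = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst wins
        t += burst                                      # run to completion
        completion[name] = t
        pending.remove((name, arrival, burst))
    return completion

# Data from the slide: (PID, AT, BT)
print(sjf([("P1", 0, 3), ("P2", 1, 4), ("P3", 2, 2), ("P4", 5, 3)]))
```

With this data the completion times are P1=3, P3=5, P4=8, P2=12, which reproduces the averages AWT = 2 and ATAT = 5 shown above.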
Longest Job First(LJF)
• Scheduling in LJF is exactly opposite of shortest job first (SJF), as the name
suggests.
• The process with the largest burst time is processed first.
• If two processes have the same burst time then the tie is broken
using FCFS
• Advantages:
• No other task can be scheduled until the longest job or process executes completely.
• All the jobs or processes finish at approximately the same time.
• Disadvantages:
• Very high average waiting time and average turnaround time.
• May lead to the convoy effect.
PID AT BT
P1 0 2
P2 1 1
P3 2 6
P4 3 3
Average Waiting Time = (0+10+0+5)/4 = 3.75
Average Turn Around Time = (2+11+6+8)/4 = 6.75
Algorithm:
• Sort all the processes according to arrival time.
• Select the process that has the minimum arrival time and maximum
burst time.
• After that process completes, pool the processes that arrived during
its execution and select from that pool the process with the maximum
burst time.
Preemptive Algorithms
• Preemptive scheduling is a CPU scheduling technique that works by
dividing the CPU's time into slots assigned to processes.
• The given time slot may or may not be enough to complete the whole
process.
• When the remaining burst time of a process is greater than the CPU
time slice/interrupt interval, the process is placed back into the ready
queue and executes in its next turn.
• This scheduling is used when a process switches to the ready state.
Round Robin
• Each process is cyclically assigned a fixed time slot(time quantum).
• It is the preemptive version of FCFS.
• Advantages:
• It is simple, easy to use, and starvation-free, as all processes get a balanced
CPU allocation. Hence it is one of the most widely used scheduling algorithms.
• Disadvantages:
• Comparatively large overhead in context switching, especially if the time slot
is small.
Processes AT BT
P1        0  5
P2        1  4
P3        2  2
P4        4  1

Time Quantum = 2

Processes AT BT CT TAT WT
P1        0  5  12 12  7
P2        1  4  11 10  6
P3        2  2   6  4  2
P4        4  1   9  5  4

Average Waiting Time = (7+6+2+4)/4 = 4.75
Average Turn Around Time = (12+10+4+5)/4 = 7.75
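The example above can be reproduced with a small Round Robin simulator in Python. One convention is made explicit in the code, since textbooks vary on it: when a time slice expires, processes that arrived during the slice enter the ready queue before the preempted process is re-queued.

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst). Returns completion times.
    Convention: processes arriving during a time slice join the ready
    queue before the preempted process is re-queued."""
    procs = sorted(procs, key=lambda p: p[1])          # order by arrival time
    remaining = {name: burst for name, _, burst in procs}
    completion, queue, t, i = {}, deque(), 0, 0
    while len(completion) < len(procs):
        while i < len(procs) and procs[i][1] <= t:     # admit arrived processes
            queue.append(procs[i][0]); i += 1
        if not queue:                                  # CPU idle: jump to next arrival
            t = procs[i][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])            # run for one time slice
        t += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:     # arrivals during the slice
            queue.append(procs[i][0]); i += 1
        if remaining[name] > 0:
            queue.append(name)                         # re-queue unfinished process
        else:
            completion[name] = t
    return completion

print(round_robin([("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 4, 1)], 2))
```

With this data and a quantum of 2, the completion times are P3=6, P4=9, P2=11, P1=12, consistent with the averages AWT = 4.75 and ATAT = 7.75.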


Shortest Remaining Time First (SRTF)
• In SRTF the process with the smallest amount of time remaining until
completion is selected to execute.
• Preemptive version of SJF.
• Advantages:
• In SRTF the short processes are handled very fast.
• The system also requires very little overhead since it only makes a decision
when a process completes or a new process is added.
• Disadvantages:
• Process starvation
Process AT BT CT TAT WT
P1      2  6  15 13  7
P2      5  2   7  2  0
P3      1  8  23 22 14
P4      0  3   3  3  0
P5      4  4  10  6  2

Average Waiting Time = (7+0+14+0+2)/5 = 4.6
Average Turn Around Time = (13+2+22+3+6)/5 = 9.2
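A unit-step SRTF sketch in Python, checked against the table above: time advances one unit at a time, and at each step the arrived process with the least remaining time gets the CPU (ties broken by earliest arrival).

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time first).
    procs: list of (name, arrival, burst) tuples; returns completion times."""
    remaining = {name: burst for name, _, burst in procs}
    completion, t = {}, 0
    while len(completion) < len(procs):
        ready = [(remaining[n], a, n) for n, a, _ in procs
                 if a <= t and remaining[n] > 0]
        if not ready:              # no process has arrived yet: CPU idles
            t += 1
            continue
        _, _, name = min(ready)    # least remaining time; ties by arrival
        remaining[name] -= 1       # run for one time unit
        t += 1
        if remaining[name] == 0:
            completion[name] = t
    return completion

# Data from the slide: (name, AT, BT)
print(srtf([("P1", 2, 6), ("P2", 5, 2), ("P3", 1, 8), ("P4", 0, 3), ("P5", 4, 4)]))
```

This reproduces the completion times in the table: P4=3, P2=7, P5=10, P1=15, P3=23.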


Longest Remaining Time First(LRTF)
• It is a preemptive version of the LJF
• This algorithm schedules those processes first which have the longest
processing time remaining for completion.
• If two processes have the same remaining burst time then the tie is
broken using FCFS.
• Advantage:
• No other process can execute until the longest task executes completely.
• Disadvantage:
• This algorithm gives a very high average waiting time and average
turnaround time, and can lead to the convoy effect.
Processes AT BT CT TAT WT
P1        0  2  17 17 15
P2        1  4  18 17 13
P3        2  6  19 17 11
P4        3  8  20 17  9

Gantt chart (unit slices, ties broken by FCFS):
| P1 | P2 | P3 | P4 | P4 | P4 | P3 | P4 | P3 | P4 | P2 | P3 | P4 | P2 | P3 | P4 | P1 | P2 | P3 | P4 |
0    1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16   17   18   19   20

Average Waiting Time = (15+13+11+9)/4 = 12
Average Turn Around Time = (17+17+17+17)/4 = 17

Highest Response Ratio Next:
• Find the response ratio of all available processes and select the one
with the highest response ratio.
• Response Ratio = (WT + BT) / BT
• Non-preemptive algorithm.
• Reduces the starvation problem.
• HRRN is a modification of Shortest Job Next (SJN): if a process is
currently executing on the CPU and, during its execution, a new process
arrives in memory with a burst time smaller than that of the currently
running process, the running process is still not put back in the ready
queue; it completes its execution without any interruption.
Algorithm:
• Once a process is selected for execution, it runs until completion.
• The first step is to calculate the waiting time of all the available processes.
• Each time a process is to be scheduled, the response ratio of every available
process is computed.
• The process having the highest response ratio is executed first.
• If two processes have the same response ratio, the tie is broken using the
FCFS scheduling algorithm.
Example
Example Explanation
• At time=0 there is no process available in the ready queue, so from 0 to 1
CPU is idle. Thus 0 to 1 is considered as CPU idle time.
• At time=1, only the process P1 is available in the ready queue. So, process
P1 executes till its completion.
• After process P1, at time=4 only process P2 arrived, so the process P2 gets
executed because the operating system did not have any other option.
• At time=10, the processes P3, P4, and P5 were in the ready queue. So in
order to schedule the next process after P2, we need to calculate the
response ratio.
• In this step, we are going to calculate the response ratio for P3, P4, and P5
Example Explanation continued..
• Response Ratio = (WT + BT) / BT
• RR(P3) = [(10-5) + 8]/8 = 1.625
• RR(P4) = [(10-7) + 4]/4 = 1.75
• RR(P5) = [(10-8) + 5]/5 = 1.4
• From the above results, it is clear that process P4 has the highest
response ratio, so process P4 is scheduled after P2.
Example Explanation continued..
• Now in the ready queue, we have two processes P3 and P5, after the
execution of P4(time =14 ) let us calculate the response ratio of P3
and P5
• RR (P3) = [(14-5) +8]/8 =2.125
• RR (P5) = [(14-8) +5]/5 =2.2
• Thus, At t=14, process P5 is executed.
• After the complete execution of P5, P3 is in the ready queue so at
time t=19 P3 gets executed.
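The response-ratio selection used in this example can be written as a short Python helper. The tuple layout (name, arrival, burst) is just a convention chosen for this sketch; in a non-preemptive scheduler the waiting time at decision time `now` is `now - arrival`.

```python
def response_ratio(waiting, burst):
    # HRRN criterion from the slide: RR = (WT + BT) / BT
    return (waiting + burst) / burst

def pick_next(ready, now):
    """ready: list of (name, arrival, burst) processes waiting at time `now`.
    Returns the process with the highest response ratio."""
    return max(ready, key=lambda p: response_ratio(now - p[1], p[2]))

# The decision at t=10 from the example: P3(AT=5,BT=8), P4(AT=7,BT=4), P5(AT=8,BT=5)
print(pick_next([("P3", 5, 8), ("P4", 7, 4), ("P5", 8, 5)], 10)[0])
```

At t=10 this selects P4 (ratio 1.75), and at t=14, with P3 and P5 remaining, it selects P5 (ratio 2.2), matching the walkthrough above.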
Multiple Queue Scheduling:

• Processes in the ready queue can be divided into different classes where each class has its
own scheduling needs.
• For example, a common division is a foreground (interactive) process and a background
(batch) process.
• Highest priority is given to System processes, then interactive and then batch processes.
• Advantages
1. You can use multilevel queue scheduling to apply different scheduling
methods to distinct processes.
2. It has low overhead in terms of scheduling.
• Disadvantages
1. There is a risk of starvation for lower-priority processes.
2. It is rigid in nature.
Summary comparison — for each algorithm (FCFS, Priority, SJF, LJF, SRTF, RR, LRTF), compare:
• Preemptive?
• Starvation possible?
• Convoy effect possible?
• Average waiting time
Interrupts
• It is a signal which requires immediate attention of the processor.
• May be generated by Hardware, software or any process.
• Hardware Interrupts: pressing a keyboard key or moving a mouse triggers
hardware interrupts that cause the processor to read the keystroke or
mouse position.
• Maskable
• Non-Maskable
• Software Interrupts: program execution errors typically
called traps or exceptions.
• Interrupt Service Routine (ISR): the routine that the processor executes
in response to an interrupt request; the interrupt-request line is one of
the bus control lines dedicated to interrupt handling.
Interrupt Handling:
1.Devices raise an IRQ.
2.The processor interrupts the program currently being executed.
3.The device is informed that its request has been recognized and the
device deactivates the request signal.
4.The requested action is performed.
5.Interrupts are re-enabled and the interrupted program is resumed.
Handling Multiple Devices:
• When more than one device raises an interrupt request signal, then
additional information is needed to decide which device to be
considered first.
• methods used to decide which device to select are:
• Polling: first device encountered with the IRQ bit set is the device that is to be
serviced first.
• Vectored Interrupts: a device requesting an interrupt identifies itself directly
by sending a special code to the processor over the bus
• Interrupt Nesting: the I/O devices are organized in a priority structure.
An interrupt request from a higher-priority device is recognized, whereas
a request from a lower-priority device is not. The processor accepts
interrupts only from devices/processes with a higher priority than the one
currently being serviced.
